sentences (sequence) | labels (sequence)
---|---
[
"We propose a transition-based bubble parser to perform coordination structure identification and dependency-based syntactic analysis simultaneously.",
"Bubble representations were proposed in the formal linguistics literature decades ago; they enhance dependency trees by encoding coordination boundaries and internal relationships within coordination structures explicitly.",
"In this paper, we introduce a transition system and neural models for parsing these bubble-enhanced structures.",
"Experimental results on the English Penn Treebank and the English GENIA corpus show that our parsers beat previous state-of-the-art approaches on the task of coordination structure prediction, especially for the subset of sentences with complex coordination structures.",
"1 1 Introduction Coordination structures are prevalent in treebank data (Ficler and Goldberg, 2016a), especially in long sentences (Kurohashi and Nagao, 1994), and they are among the most challenging constructions for NLP models.",
"Difficulties in correctly identifying coordination structures have consistently contributed to a significant portion of errors in state-of-the-art parsers (Collins, 2003; Goldberg and El-hadad, 2010; Ficler and Goldberg, 2017).",
"These errors can further propagate to downstream NLP modules and applications, and limit their performance and utility.",
"For example, Saha et al. (2017) report that missing conjuncts account for two-thirds of the errors in recall made by their open information extraction system.",
"Coordination constructions are particularly challenging for the widely-adopted dependency-based paradigm of syntactic analysis, since the asymmetric definition of head-modifier dependency relations is not directly compatible with the symmetric 1 Code at github.com/tzshi/bubble-parser-acl21 .",
"nature of the relations among the participating conjuncts and coordinators.",
"2 Existing treebanks usually resort to introducing special relations to represent coordination structures.",
"But, there remain theoretical and empirical challenges regarding how to most effectively encode information like modifier sharing relations while still permitting accurate statistical syntactic analysis.",
"In this paper, we explore Kahane's (1997) alternative solution: extend the dependency-tree representation by introducing bubble structures to explicitly encode coordination boundaries.",
"The coheads within a bubble enjoy a symmetric relationship, as befits a model of conjunction.",
"Further, bubble trees support representation of nested coordination, with the scope of shared modifiers identifiable by the attachment sites of bubble arcs.",
"Figure 1 compares a bubble tree against a Universal Dependencies (UD; Nivre et al., 2016, 2020) tree for the same sentence.",
"2 Rambow (2010) comments on other divergences between syntactic representation and syntactic phenomena.",
"of the formalism was not broadly pursued, for reasons unknown to us.",
"Given its appealing and intuitive treatment of coordination phenomena, we revisit the bubble tree formalism, introducing and implementing a transition-based solution for parsing bubble trees.",
"Our transition system, Bubble-Hybrid, extends the Arc-Hybrid transition system (Kuhlmann et al., 2011) with three bubble-specific transitions, each corresponding to opening, expanding, and closing bubbles.",
"We show that our transition system is both sound and complete with respect to projective bubble trees (defined in 2.2).",
"Experiments on the English Penn Treebank (PTB; Marcus et al., 1993) extended with coordination annotation (Ficler and Goldberg, 2016a) and the English GENIA treebank (Kim et al., 2003) demonstrate the effectiveness of our proposed transition-based bubble parsing on the task of coordination structure prediction.",
"Our method achieves state-of-the-art performance on both datasets and improves accuracy on the subset of sentences exhibiting complex coordination structures.",
"A dependency tree encodes syntactic relations via directed bilexical dependency edges.",
"These are natural for representing argument and adjunct modifi-cation, but Popel et al. (2013) point out that depen-dency representation is at a loss when it comes to representing paratactic linguistic phenomena such as coordination, whose nature is symmetric (two or more conjuncts play the same role), as opposed to the head-modifier asymmetry of dependencies (pg. 517).",
"If one nonetheless persists in using dependency relations to annotate all syntactic structures, as is common practice in most dependency treebanks (Hajic et al., 2001; Nivre et al., 2016, inter alia ), then one must introduce special relations to represent coordination structures and promote one element from each coordinated phrase to become the representational head.",
"One choice is to specify one of the conjuncts as the head (Mel'cuk, 1988, 2003; Jrvinen and Tapanainen, 1998; Lombardo and Lesmo, 1998) (e.g., in Figure 1, the visually asymmetric conj relation between coffee and tea is overloaded to admit a symmetric relation-ship), but it is then non-trivial to distinguish shared modifiers from private ones (e.g., in the UD tree at the bottom of Figure 1, it is difficult to tell that hot is private to coffee and tea, which share it, but hot does not modify bun).",
"Another choice is let one of the coordinators dominate the phrase (Hajic et al., 2001, 2020), but the coordinator does not directly capture the syntactic category of the coordinated phrase.",
"Decisions on which of these dependency-based fixes is more workable are further complicated by the interaction between representation styles and their learnability in statistical parsing (Nilsson et al., 2006; Johansson and Nugues, 2007; Rehbein et al., 2017).",
"Enhanced UD A tactic used by many recent releases of UD treebanks is to introduce certain extra edges and non-lexical nodes (Schuster and Manning, 2016; Nivre et al., 2018; Bouma et al., 2020).",
"While some of the theoretical issues still persist in this approach with respect to capturing the symmetric nature of relations between conjuncts, this solution better represents shared modifiers in coordinations, and so is a promising direction.",
"In work concurrent with our own, Grnewald et al. (2021) manually correct the coordination structure annotations in an English treebank under the enhanced UD representation format.",
"We leave it to future work to explore the feasibility of automatic conversion of coordination structure representations between enhanced UD trees and bubble trees , which we discuss next.",
"An alternative solution to the coordination-in-dependency-trees dilemma is to permit certain restricted phrase-inspired constructs for such structures.",
"Indeed, Tesnire's (1959) seminal work on dependency grammar does not describe all syntactic relations in terms of dependencies, but rather reserves a primitive relation for connecting coordinated items.",
"Hudson (1984) further extends this idea by introducing explicit markings of coordination boundaries.",
"In this paper, we revisit bubble trees , a representational device along the same vein introduced by Kahane (1997) for syntactic representation.",
"(Ka-hane credits Gladkij (1968) with a formal",
"study.) Bubbles are used to denote coordinated phrases; otherwise, asymmetric dependency relations are retained.",
"Conjuncts immediately within the bubble may co-head the bubble, and the bubble itself may establish dependencies with its governor and modifiers.",
"Figure 1 depicts an example bubble tree.",
"We now formally define bubble trees and their projective subset, which will become the focus of our transition-based parser in 3.",
"The following formal descriptions are adapted from Kahane (1997), tailored to the presentation of our parser.",
"Formal Definition Given a dependency-relation label set L , we define a bubble tree for a length-n sentence W = w 1 , . . . , w n to be a quadruple ( V, B , , A ) , where V = { RT , w 1 , . . . , w n } is the ground set of nodes ( RT is the dummy root), B is a set of bubbles, the function : B (cid:55) (2 V \\{ } ) gives the content of each bubble as a non-empty 3 subset of V , and A B L B defines a labeled directed tree over B .",
"Given labeled directed tree A , we say 1 2 if and only if ( 1 , l, 2 ) A for some l .",
"We denote the reflexive transitive closure of relation by .",
"Bubble tree ( V, B , , A ) is well-formed if and only if it satisfies the following conditions: 4 No partial overlap: 1 , 2 B , either ( 1 ) ( 2 ) = or ( 1 ) ( 2 ) or ( 2 ) ( 1 ) ; Non-duplication: there exists no non-identical 1 , 2 B such that ( 1 ) = ( 2 ) ; Lexical coverage: for any singleton (i.e., one-element) set s in 2 V , B such that ( ) = s ; Roothood: the root RT appears in exactly one bubble, a singleton that is the root of the tree defined by A .",
"Containment: if 1 , 2 B such that ( 2 ) ( 1 ) , then 1 2 .",
"Projectivity Our parser focuses on the subclass of projective well-formed bubble trees.",
"Visually, a projective bubble tree only contains bubbles covering a consecutive sequence of words (such that we can draw boxes around the span of words to represent them) and can be drawn with all arcs arranged spatially above the sentence where no two arcs or bubble boundaries cross each other.",
"The bubble tree in Figure 1 is projective.",
"Formally, we define the projection ( ) 2 V of a bubble B to be all nodes the bubble and its subtree cover, that is, v ( ) if and only if (cid:48) and v ( (cid:48) ) for some (cid:48) .",
"Then, we can define a well-formed bubble tree to be projective if and only if it additionally satisfies the following: Continuous coverage: for any bubble B , if w i , w j ( ) and i < k < j , then w k ( ) ; 3 Our definition does not allow empty nodes; we leave it to future work to support them for gapping constructions.",
"4 We do not use for bubbles because we reserve the symbol for our parser's buffer.",
"Continuous projections: for any bubble B , if w i , w j ( ) and i < k < j , then w k ( ) ; Contained projections: for 1 , 2 B , if 1 2 , then either ( 2 ) ( 1 ) or ( 2 ) ( 1 ) = .",
"Although, as we have seen, bubble trees have theoretical benefits in representing coordination structures that interface with an overall dependency-based analysis, there has been a lack of parser implementations capable of handling such representations.",
"In this section, we fill this gap by introducing a transition system that can incrementally build projective bubble trees.",
"Transition-based approaches are popular in dependency parsing (Nivre, 2008; Kbler et al., 2009).",
"We propose to extend the Arc-Hybrid transition system (Kuhlmann et al., 2011) with transitions specific to bubble structures.",
"5 3.1 Bubble-Hybrid Transition System A transition system consists of a data structure describing the intermediate parser states, called configurations ; specifications of the initial and terminal configurations ; and an inventory of transitions that advance the parser in configuration space towards reaching a terminal configuration.",
"Our transition system uses a similar configuration data structure to that of Arc-Hybrid, which consists of a stack, a buffer, and the partially-committed syntactic analysis.",
"Initially, the stack only contains a singleton bubble corresponding to { RT } , and the buffer contains singleton bubbles, each representing a token in the sentence.",
"Then, through taking transitions one at a time, the parser can incrementally move items from the buffer to the stack, or reduce items by attaching them to other bubbles or merging them into larger bubbles.",
"Eventually, the parser should arrive at a terminal configuration where the stack contains the singleton bubble of { RT } again, but the buffer is empty as all the tokens are now attached to or contained in other bubbles that are now descendants of the 5 Our strategy can be adapted to other transition systems as well; we focus on Arc-Hybrid here because of its comparatively small inventory of transitions, absence of spurious ambiguities (there is a one-to-one mapping between a gold tree and a valid transition sequence), and abundance of existing implementations (e.g., Kiperwasser and Goldberg, 2016).",
"{ RT } singleton, and we can retrieve a completed bubble-tree parse.",
"Table 1 lists the available transitions in our Bubble-Hybrid system.",
"The SHIFT , LEFTARC , and RIGHTARC transitions are as in the Arc-Hybrid system.",
"We introduce three new transitions to handle coordination-related bubbles: BUBBLEOPEN puts the first two items on the stack into an open bubble, with the first item in the bubble, i.e., previously the second topmost item on the stack, labeled as the first conjunct of the resulting bubble; BUBBLEATTACH absorbs the topmost item on the stack into the open bubble that is at the second topmost position; and finally, BUBBLECLOSE closes the open bubble at the top of the stack and moves it to the buffer, which then allows it to take modifiers from its left through LEFTARC transitions.",
"Figure 2 visualizes the stack and buffer throughout the process of parsing the example sentence in Figure",
"1. In particular, the last two steps in the left column of Figure 2 show the bubble corresponding to the phrase cof-fee or tea receiving its left modifier hot through a LEFTARC transition after it is put back on the buffer by a BUBBLECLOSE transition.",
"Formal Definition Our transition system is a quadruple ( C, T, c i , C ) , where C is the set of configurations to be defined shortly, T is the set of transitions with each element being a partial function t T : C (cid:55) (cid:42) C , c i maps a sentence to its intial configuration, and C C is a set of terminal configurations.",
"Each configuration c C is a septuple ( , , V, B , , A, O ) , where V , B , , and A define a partially-recognized bubble tree, and are each an (ordered) list of items in B , and O B is a set of open bubbles.",
"For a sentence W = w 1 , . . . , w n , we let c i ( W ) = ( 0 , 0 , V, B 0 , 0 , {} , {} ) , where V = { RT , w 1 , . . . , w n } , B 0 contains n + 1 items, 0 ( B 0 0 ) = { RT } , 0 ( B 0 i ) = { w i } for i from 1 to n , 0 = [ B 00 ] , and 0 = [ B 01 , . . . , B 0 n ] .",
"We write | s 1 and b 1 | to denote a stack and a buffer with their topmost items being s 1 and b 1 and the remainders being and respectively.",
"We also omit the con-stant V in describing c when the context is clear.",
"For the transitions T , we have: SHIFT [( , b 1 | , B , , A, O )] = ( | b 1 , , B , , A, O ) ; LEFTARC lbl [( | s 1 , b 1 | , B , , A, O )] = ( , b 1 | , B , , A { ( b 1 , lbl , s 1 ) } , O ) ; RIGHTARC lbl [( | s 2 | s 1 , , B , , A, O )] = ( | s 2 , , B , , A { ( s 2 , lbl , s 1 ) } , O ) ; BUBBLEOPEN lbl [( | s 2 | s 1 , , B , , A, O )] = ( | , , B { } , (cid:48) , A { ( , conj , s 2 ) , ( , lbl , s 1 ) } , O { } ) , where is a new bubble, and (cid:48) = (cid:100) { (cid:55) ( s 2 ) ( s 1 ) } (i.e., (cid:48) is almost the same as , but with added to the function's domain, mapped by the new function to cover the projections of both s 2 and s 1 ); BUBBLEATTACH lbl [( | s 2 | s 1 , , B , , A, O )] = ( | s 2 , , B , (cid:48) , A { s 2 , lbl , s 1 } , O ) , where (cid:48) = (cid:100) { s 2 (cid:55) ( s 2 ) ( s 1 ) } ; BUBBLECLOSE [( | s 1 , , B , , A, O )] = ( , s 1 | , B , , A, O\\{ s 1 } ) .",
"In this section, we show that our Bubble-Hybrid transition system is both sound and complete (de-fined below) with respect to the subclass of projective bubble trees.",
"6 Define a valid transition sequence = t 1 , . . . , t m for a given sentence W to be a sequence such that for the corresponding sequence of configurations c 0 , . . . , c m , we have c 0 = c i ( W ) , c i = t i ( c i 1 ) , and c m C , We can then state soundness and completeness properties, and present high-level proof sketches below, adapted from Nivre's (2008) proof frameworks.",
"sequence produces a projective bubble tree.",
"Proof Sketch.",
"We examine the requirements for a projective bubble tree one by one.",
"The set of edges satisfies the tree constraints since every bubble except for the singleton bubble of RT must have an in-degree of one to have been reduced from the stack, and the topological order of reductions implies acyclicness.",
"Lexical coverage is guaranteed by c i .",
"Roothood is safeguarded by the transition pre-conditions.",
"Non-duplication is ensured because newly-created bubbles are strictly larger.",
"All the other properties can be proved by induction over the lengths of transition sequence prefixes since each of our transitions preserves zero partial overlap, containment, and projectivity constraints.",
"Lemma",
"2. (Completeness) For every projective bubble tree over any given sentence W , there exists a corresponding valid transition sequence .",
"Proof Sketch.",
"The proof proceeds by strong induction on sentence length.",
"We omit relation labels without loss of generality.",
"The base case of | W | = 1 is trivial.",
"For the inductive step, we enumerate how to decompose the tree's top-level 6 More precisely, our transition system handles the subset where each non-singleton bubble has 2 internal children.",
"structure.",
"(1) When the root has multiple children: Due to projectivity, each child bubble tree i covers a consecutive span of words w x i , . . . , w y i that are shorter than | W | .",
"Based on the induction hypothesis, there exisits a valid transition sequence i to construct the child tree over RT , w x i , . . . , w y i .",
"Here we let i to denote the transition sequence excluding the always-present final RIGHTARC transition that attaches the subtree to RT ; this is for explicit illustration of what transitions to take once the subtrees are constructed.",
"The full tree can be constructed by = 1 , RIGHTARC , 2 , RIGHTARC , . . . (expanding each i sequence into its component transitions), where we simply attach each subtree to RT immediately after it is constructed.",
"(2) When the root has a single child bubble , we cannot directly use the induction hypothesis since covers the same number of words as W .",
"Thus we need to further enumerate the top-level structure of .",
"(2a)",
"If has children with their projections outside of ( ) , then we can find a sequence 0 for constructing the shorter-length bubble and placing it on the buffer (this corresponds to an empty transition sequence if is a singleton; otherwise, 0 ends with a BUBBLECLOSE transition) and i s for 's outside children; say it has l children left of its contents.",
"We construct the entire tree via = 1 ,.",
".",
". , l , 0 , LEFTARC , . . . , LEFTARC , SHIFT , l +1 , RIGHTARC , . . . , RIGHTARC , where we first construct all the left outside children and leave them on the stack, next build the bubble and use LEFTARC transitions to attach its left children while it is on the buffer, then shift to the stack before finally continuing on building its right children subtrees, each immediately followed by a RIGHTARC transition.",
"(2b)",
"If is a non-singleton bubble without any outside children, but each of its inside children can be parsed through i based on the inductive hypothesis, then we can define = 1 , 2 , BUBBLEOPEN , 3 , BUBBLEATTACH , . . . , BUBBLECLOSE , SHIFT , RIGHTARC , where we use a BUBBLEOPEN transition once the first two bubble-internal children are built, each subsequent child is attached via BUBBLEATTACH immediately after construction, and the final three transitions ensure proper closing of the bubble and its attachment to RT .",
"4 Models Our model architecture largely follows that of Kiperwasser and Goldberg's (2016) neural Arc-Hybrid parser, but we additionally introduce feature composition for non-singleton bubbles, and a rescoring module to reduce frequent coordination-boundary prediction errors.",
"Our model has five components: feature extraction, bubble-feature composition, transition scoring, label scoring, and boundary subtree rescoring.",
"where the inputs to the bi-LSTM are concatenations of word embeddings, POS-tag embeddings, and character-level LSTM embeddings.",
"We also report experiments replacing the bi-LSTM with pre-trained BERT features (Devlin et al., 2019).",
"Bubble-Feature Composition We initialize the features 7 for each singleton bubble B i in the initial configuration to be v B i = w i .",
"For a non-singleton bubble , we use recursively composed features v = g ( { v (cid:48) | ( , conj , (cid:48) ) A } ) , where g is a composition function combining features from the co-heads (conjuncts) immediately inside the bubble.",
"8 For our model, for any V (cid:48) = { v i 1 , . . . , v i N } , we set g ( V (cid:48) ) = tanh( W g mean ( V (cid:48) )) , where mean () computes element-wise averages and W g is a learnable square matrix.",
"We also experiment with a parameter-free version: g = mean.",
"Neither of the feature functions distinguishes between open and closed bubbles, so we append to each v vector an indicator-feature embedding based on whether the bubble is open, closed, or singleton.",
"Transition Scoring Given the current parser configuration c , the model predicts the best unlabeled transition to take among all valid transitions valid ( c ) whose pre-conditions are satisfied.",
"We 7 We adopt the convenient abuse of notation of allowing indexing by arbitrary objects.",
"8 Comparing with the subtree-feature composition functions in dependency parsing that are motivated by asymmetric headed constructions (Dyer et al., 2015; de Lhoneux et al., 2019; Basirat and Nivre, 2021), our definition focuses on composing features from an unordered set of vectors representing the conjuncts in a bubble.",
"The composition function is recursively applied when there are nested bubbles.",
"model the log-linear probability of taking an action with a multi-layer perceptron (MLP): P ( t | c ) exp( MLP trans t ([ v s 3 v s 2 v s 1 v b 1 ])) , where denotes vector concatenation, s 1 through s 3 are the first through third topmost items on the stack, and b 1 is the immediately accessible buffer item.",
"We experiment with varying the number of stack items to extract features from.",
"Label Scoring We separate edge-label prediction from (unlabeled) transition prediction, but the scoring function takes a similar form: P ( l | c, t ) exp( MLP lbl l ([ v h ( c,t ) v d ( c,t ) ])) , where ( h ( c, t ) , l, d ( c, t )) is the edge to be added into the partial bubble tree in t ( c ) .",
"Boundary Subtree Rescoring In our preliminary error analysis, we find that our models tend to make more mistakes at the boundaries of full coordination phrases than at the internal conjunct boundaries, due to incorrect attachments of children choosing between the phrasal bubble and the first/last conjunct.",
"For example, our initial model predicts if you owned it and liked it Friday instead of the annotated if you owned it and liked it Friday (the predicted and gold conjuncts are both italicized and underlined), incorrectly attaching Friday to liked.",
"We attribute this problem to the greedy nature of our first formulation of the parser, and propose to mitigate the issue through rescoring.",
"To rescore boundary attachments of a non-singleton bubble , for each of the left dependents d of and its first conjunct f , we (re)-decide the attachment via P ( d | f ) = logistic ( MLP re ([ v d v v f ])) , and similarly for the last conjunct l and a potential right dependent.",
"Training and Inference Our parser is a locally-trained greedy parser.",
"In training, we optimize the model parameters to maximize the log-likelihoods of predicting the target transitions and labels along the paths generating the gold bubble trees, and the log-likelihoods of the correct attachments in rescoring; 9 during inference, the parser greedily commits to the highest-scoring transition and label for each of its current parser configurations, and after reaching a terminal configuration, it rescores and readjusts all boundary subtree attachments.",
"Task and Evaluation We validate the utility of our transition-based parser using the task of coordination structure prediction.",
"Given an input sentence, the task is to identify all coordination structures and the spans for all their conjuncts within that sentence.",
"We mainly evaluate based on exact metrics which count a prediction of a coordination structure as correct if and only if all of its conjunct spans are correct.",
"To facilitate comparison with pre-existing systems that do not attempt to identify all conjunct boundaries, following Teranishi et al. (2017, 2019), we also consider inner (=only consider the correctness of the two conjuncts adjacent to the coordinator) and whole (=only consider the boundary of the whole coordinated phrase) metrics.",
"Data and Experimental Setup We experiment with two English datasets, the Penn Treebank (PTB; Marcus et al., 1993, newswire) with added coordination annotations (Ficler and Goldberg, 2016a) and the GENIA treebank (Kim et al., 2003, research abstracts).",
"We use the conversion tool distributed with the Stanford Parser (Schuster and Manning, 2016) to extract UD trees from the PTB-style phrase-structure annotations, which we then merge with coordination annotations to form bub-Bubble-Hybrid (Ours) Edge-Factored Prec.",
"ble trees.",
"We follow prior work in reporting PTB results on its standard splits and GENIA results using 5 -fold cross-validation.",
"10 During training (but not test), we discard all non-projective sentences.",
"See Appendix A for dataset pre-processing and statistics and Appendix B for implementation details.",
"Baseline Systems We compare our models with several baseline systems.",
"Hara et al. (2009, HSOM09) use edit graphs to explicitly align coordinated conjuncts based on the idea that they are usually similar; Ficler and Goldberg (2016b, FG16) score candidate coordinations extracted from a phrase-structure parser by modeling their symme-10 We affirm that, as is best practice, only two test-set/crossval-suite runs occurred (one with BERT and one with-out), happening after we fixed everything else; that is, no other models were tried after seeing the first test-set/cross-validation results with and without BERT .",
"try and replaceability properties; Teranishi et al. (2017, TSM17) directly predict boundaries of coordinated phrases and then split them into conjuncts; 11 Teranishi et al. (2019, TSM19) use separate neural models to score the inner and outer boundaries of conjuncts relative to the coordinators, and then use a chart parser to find the globally-optimal coordination structures.",
"Main Results Table 2 and Table 3 show the main evaluation results on the PTB and GENIA datasets.",
"Our models surpass all prior results on both datasets.",
"While the BERT improvements may not seem surprising, we note that Teranishi et al. (2019) report that their pre-trained language models specifically, static ELMo embeddings fail to improve their model performance.",
"General Parsing Results We also evaluate our models on standard parsing metrics by converting the predicted bubble trees to UD-style dependency trees.",
"On PTB, our parsers reach unlabeled and labeled attachment scores (UAS/LAS) of 95 .",
"81 / 94 .",
"46 with BERT and 94 .",
"49 / 92 .",
"88 with bi-LSTM, which are similar to the scores of prior transition-based parsers equipped with similar feature extractors (Kiperwasser and Goldberg, 2016; Mohammadshahi and Henderson, 2020).",
"12 Table 4 compares the general parsing results of our bubble parser and an edge-factored graph-based dependency parser based on Dozat and Manning's (2017) parser architecture and the same feature encoder as our parser and trained on the same data.",
"Our bubble parser shows a slight improvement on identifying the conj relations, despite having a lower overall accuracy due to the greedy nature of our transition-based decoder.",
"Additionally, our 11 We report results for the extended model of TSM17 as described by Teranishi et al. (2019).",
"12 Results are not strictly comparable with previous PTB evaluations that mostly focus on non-UD dependency conversions.",
"Table 4 makes a self-contained comparison using the same UD-based and coordination-merged data conversions.",
"bubble parser simultaneously predicts the boundaries of each coordinated phrase and conjuct, while a typical dependency parser cannot produce such structures.",
"Model Analysis Table 5 shows results of our models with alternative bubble-feature composition functions and varying feature-set sizes.",
"We find that the parameterized form of composition function g performs better, and the F1 scores mostly degrade as we use fewer features from the stack.",
"Interestingly, the importance of our rescoring module becomes more prominent when we use fewer features.",
"Our results resonate with Shi et",
"al.'s (2017) findings on Arc-Hybrid that we need at least one stack item but not necessarily two.",
"Table 6 shows that our model performs better than previous methods on complex sentences with multiple coordination structures and/or more than two conjuncts, especially when we use BERT as feature extractor.",
"Coordination Structure Prediction Very early work with heuristic, non-learning-based approaches (Agarwal and Boggess, 1992; Kurohashi and Nagao, 1994) typically report difficulties in distinguishing shared modifiers from private ones, although such heuristics have been recently incorporated in unsupervised work (Sawada et al., 2020).",
"Generally, researchers have focused on symmetry principles, seeking to align conjuncts (Kurohashi and Nagao, 1994; Shimbo and Hara, 2007; Hara et al., 2009; Hanamoto et al., 2012), since coordinated conjuncts tend to be semantically and syntactically similar (Hogan, 2007), as attested to by psycholinguistic evidence of structural parallelism (Frazier et al., 1984, 2000; Dubey et al., 2005).",
"Ficler and Goldberg (2016a) and Teranishi et al. (2017) additionally leverage the linguistic principle of replaceability one can typically replace a coordinated phrase with one of its conjuncts without the sentence becoming incoherent; this idea has resulted in improved open information extraction (Saha and Mausam, 2018).",
"Using these principles may further improve our parser.",
"Coordination in Constituency Grammar While our paper mainly focuses on enhancing dependency-based syntactic analysis with coordination structures, coordination is a well-studied topic in constituency-based syntax (Zhang, 2009), including proposals and treatments under lexical functional grammar (Kaplan and Maxwell III, 1988), tree-adjoining grammar (Sarkar and Joshi, 1996; Han and Sarkar, 2017), and combinatory categorial grammar (Steedman, 1996, 2000).",
"Tesnire Dependency Structure Sangati and Mazza (2009) propose a representation that is faithful to Tesnire's (1959) original framework.",
"Similar to bubble trees, their structures include special attention to coordination structures respecting conjunct symmetry, but they also include constructs to handle other syntactic notions currently beyond our parser's scope.",
"13 Such representations have been used for re-ranking (Sangati, 2010), but not for (direct) parsing.",
"Perhaps our work can inspire a future Tesnire Dependency Structure parser.",
"Non-constituent Coordination Seemingly incomplete (non-constituent) conjuncts are particularly challenging (Milward, 1994), and our bubble parser currently has no special mechanism for them.",
"Dependency-based analyses have adapted by extending to a graph structure (Gerdes and Kahane, 2015) or explicitly representing elided elements (Schuster et al., 2017).",
"It may be straightforward to integrate the latter into our parser, la Kahane's (1997) proposal of phonologically-empty bubbles.",
"We revisit Kahane's (1997) bubble tree representations for explicitly encoding coordination boundaries as a viable alternative to existing mechanisms in dependency-based analysis of coordination structures.",
"We introduce a transition system that is both sound and complete with respect to the subclass of projective bubble trees.",
"Empirically, our bubble parsers achieve state-of-the-art results on the task of coordination structure prediction on two English datasets.",
"Future work may extend the research scope to other languages, graph-based, and non-projective parsing methods.",
"Acknowledgements We thank the anonymous reviewers for their constructive comments, Yue Guo for discussion, and Hiroki Teranishi for help with experiment setup.",
"This work was supported in part by a Bloomberg Data Science Ph.D.",
"Fellowship to Tianze Shi and a gift from Bloomberg to Lillian Lee."
] | [
"objective",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"method",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"other",
"method",
"abstain",
"other",
"abstain",
"other",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"abstain",
"result",
"other",
"other",
"other",
"abstain",
"method",
"other",
"abstain",
"other",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain"
] |
[
"Natural-language prompts have recently been used to coax pretrained language models into performing other AI tasks, using a fill-in-the-blank paradigm (Petroni et al., 2019) or a few-shot extrapolation paradigm (Brown et al., 2020).",
"For example, language models retain factual knowledge from their training corpora that can be extracted by asking them to fill in the blank in a sentential prompt.",
"However, where does this prompt come from?",
"We explore the idea of learning prompts by gradient descenteither fine-tuning prompts taken from previous work, or starting from random initialization.",
"Our prompts consist of soft words, i.e., continuous vectors that are not necessarily word type embeddings from the language model.",
"Furthermore, for each task, we optimize a mixture of prompts, learning which prompts are most effective and how to ensemble them.",
"Across multiple English LMs and tasks, our approach hugely outperforms previous methods, showing that the implicit factual knowledge in language models was previously underestimated.",
"Moreover, this knowledge is cheap to elicit: random initialization is nearly as good as informed initialization.",
"Pretrained language models, such as ELMo (Pe-ters et al., 2018), BERT (Devlin et al., 2019), and BART (Lewis et al., 2020a), have proved to provide useful representations for other NLP tasks.",
"Recently, Petroni et al. (2019) and Jiang et al. (2020) demonstrated that language models (LMs) also contain factual and commonsense knowledge that can be elicited with a prompt.",
"For example, to query the date-of-birth of Mozart, we can use the prompt MozartMozartMozart MozartMozartMozart MozartMozartMozartMozartMozart MozartMozartMozart MozartMozart Mozart was born in , where we have filled the first blank with Mozart, and ask a cloze language model to fill in the second blank.",
"The prompts used by Petroni et al. (2019) are manually created, while Jiang et al. (2020) use mining and paraphrasing based methods to automatically augment the prompt sets.",
"Finding out what young children know is diffi-cult because they can be very sensitive to the form of the question (Donaldson, 1978).",
"Opinion polling is also sensitive to question design (Broughton, 1995).",
"We observe that when we are querying an LM rather than a human, we have the opportunity to tune prompts using gradient descentthe workhorse of modern NLPso that they better elicit the desired type of knowledge.",
"A neural LM sees the prompt as a sequence of continuous word vectors (Baroni et al., 2014).",
"We tune in this continuous space, relaxing the constraint that the vectors be the embeddings of actual English words.",
"Allowing soft prompts consisting of soft words is not only convenient for optimization, but is also more expressive.",
"Soft prompts can emphasize particular words (by lengthening their vectors) or particular dimensions of those words.",
"They can also adjust words that are misleading, ambiguous, or overly specific.",
"Consider the following prompt for the relation date-of-death : x performed until his death in y .",
"This prompt may work for the male singer Cab Calloway, but if we want it to also work for the female painter Mary Cassatt, it might help to soften performed and his so that they do not insist on the wrong occupation and gender, and perhaps to soften until into a weaker connective (as Cassatt was in fact too blind to paint in her final years).",
"Another way to bridge between these cases is to have one prompt using performed and another using painted.",
"In general, there may be many varied lexical patterns that signal a particular relation, and having more patterns will get better coverage (Hearst, 1992; Riloff and Jones, 1999).",
"We therefore propose to learn a mixture of soft prompts.",
"common sense relations from 3 datasets.",
"Comparing on held-out examples, our method dramatically outperforms previous work, even when initialized randomly.",
"So when regarded as approximate knowledge bases, language models know more than we realized.",
"We just had to find the right ways to ask.",
"Factual knowledge is traditionally extracted from large corpora using a pipeline of NLP tools (Surdeanu and Ji, 2014), including entity extraction (Lample et al., 2016), entity linking (Rao et al., 2013) and relation extraction (Sorokin and Gurevych, 2017).",
"However, recent work has shown that simply training a system to complete sentenceslanguage modelingcauses it to implicitly acquire nonlinguistic abilities from its training corpora (Rogers et al., 2020), including factual knowledge (Petroni et al., 2019; Jiang et al., 2020), common sense (Bisk et al., 2019), reasoning (Talmor et al., 2020; Brown et al., 2020), summarization (Radford et al., 2019), and even arithmetic (Bouraoui et al., 2020).",
"Most of the previous work manually creates prompts to extract answers from the trained language model.",
"We use LAMA (Petroni et al., 2019) as a baseline.",
"Building on LAMA, the LM Prompt And Query Archive (LPAQA) method (Jiang et al., 2020) searches for new prompts by either mining a corpus or paraphrasing existing prompts.",
"AutoPrompt (Shin et al., 2020) searches for improved prompts using a gradient signal, although its prompts are limited to sequences of actual (hard) English words, unlike our method.",
"We compare our novel soft prompts against all of these systems.",
"After we submitted the present paper in November 2020, two still unpublished manuscripts appeared on arXiv that also investigated soft prompts.",
"Li and Liang (2021) considered the setting of generating text from a pretrained language model (GPT-2 or BART) conditioned on a textual prompt.",
"To improve the results, they prepended a few task-specific soft tokens to the prompt and tuned the embeddings of only these tokens (at all embedding layers).",
"Liu et al. (2021) adopted a strategy similar to ours by tuning fill-in-the-blank prompts in a continuous space, testing on GPT-2 and BERT models, although they did not use the enhancements we proposed in 3.23.4 below.",
"Like our work, both these papers achieved strong gains.",
"prompts from a corpus, then fine-tune the whole language model so that it more accurately completes the prompts.",
"Schick and Schtze (2020a,b) are similar but fine-tune the language model differently for each prompt.",
"Our method complements these by tuning the prompts themselves.",
"Probing systems that ask what language models know about particular sentences (e.g., Eich-ler et al., 2019) usually use feedforward networks rather than further natural-language prompts.",
"Yet Shin et al. (2020) show how to use natural-language prompts to ask about particular sentences.",
"Our method could potentially be applied to those prompts, or to few-shot learning prompts that include input-output examples (Brown et al., 2020).",
"Our experiments will specifically aim at extracting relational knowledge from language models.",
"We are given a fixed pretrained LM, a specific binary relation r such as date-of-death , and a training dataset E r consisting of known ( x, y ) pairs in r , such as (Mary Cassatt, 1926).",
"We will then train a system to predict y from x , and evaluate it on held-out ( x, y ) pairs of the same relation.",
"A prompt t is a sentence or phrase that includes two blanks, as illustrated in 1.",
"To pose the query, we fill the x blank with x : Mary Cassatt Mary Cassatt Mary Cassatt Mary Cassatt Mary Cassatt Mary Cassatt Mary Cassatt Mary Cassatt Mary Cassatt Mary Cassatt Mary Cassatt Mary Cassatt Mary Cassatt Mary Cassatt Mary Cassatt Mary Cassatt Mary Cassatt performed until his death in y .",
"We can ask the LM for its probability distribution p LM ( y | t , x ) over single words that can now fill y .",
"The correct answer would be 1926.",
"Suppose the LM identifies the word types with vectors in R d .",
"We also allow t to be a soft prompt, in which the tokens can be arbitrary vectors in R d : x v 1 v 2 v 3 v 4 v 5 y v 6 We can initialize these vectors to match those of a given hard prompt.",
"(Each token of a hard prompt may be a word, subword, or punctuation mark, according to the tokenization procedure used by the LM.)",
"However, we can then tune the vectors continuously.",
"We do not change the number of vectors or their positions.",
"For the prompt shown above, we have a 6 d -dimensional search space.",
"For each token i of a prompt, the vector v i enters into the LM's computations that complete the prompt.",
"For example, a Transformer architecture computes successively deeper contextual embeddings of the token, v ( (cid:96) ) i : 0 (cid:96) L .",
"Here v (0) i = v i and the embedding v ( (cid:96) ) i at layer (cid:96) > 0 is computed from all tokens' embeddings v ( (cid:96) 1) j at the previous layer, using the LM's parameters.",
"We can tune the prompt by additively perturbing each v ( (cid:96) ) i by a small vector ( (cid:96) ) i before it is used in further computations.",
"The vectors for a given hard prompt are initialized to 0 and then tuned.",
"Perturbing only layer 0 is equivalent to tuning v i directly as in 3.1.",
"However, if we are more aggressive and perturb all layers, we now have 6 d ( L + 1) parameters to tune a 6-token prompt.",
"The perturbations ( vectors) can be kept small through early stopping or some other form of regularization.",
"Our intuition is that small perturbations will yield more familiar activation patterns that are similar to those that the LM was originally trained on.",
"(Li and Liang (2021) tried a rather different approach to preventing overfitting when tuning all",
"layers.) 3.3 Mixture Modeling Given a set T r of soft prompts for relation r , we can define the ensemble predictive distribution p ( y | x, r ) = (cid:88) t T r p ( t | r ) p LM ( y | t , x ) (1) where the learned mixture weights p ( t | r ) form a distribution over the soft prompts t T r .",
"En-sembling techniques other than mixture-of-experts could also be used, including product-of-experts (Jiang et al., 2020).",
"As an extension, we can replace the mixture weights p ( t | r ) with p ( t | r, x ) , to allow the model to select prompts that are appropriate for the given x .",
"For example, a plural noun x might prefer prompts t that use a plural verb.",
"While we could directly build a neural softmax model for p ( t | r, x ) , it seems useful to capture the intuition that t may work better if x is plausible in its x .",
"Thus, we instead use Bayes' Theorem to write p ( t | r, x ) as proportional to p ( t | r ) p ( x | t , r ) 1 /T , where we have included T to modulate the strength of the above intuition.",
"1 Here p ( t | r ) is still a learned distribution over prompts, and we use the fixed language model to estimate the second factor as (cid:80) y p LM ( x, y | t ) (dropping the dependence on r just as we did for the second factor of (1)).",
"log T is tuned along with all other parameters.",
"Given an initial set of prompts T r , we jointly optimize the soft prompts t T and their mixture weights p ( t | r ) (and log T in 3.4) to minimize the log-loss of the predictive distribution (1):",
"This is a continuous and differentiable objective whose gradient can be computed by back-propagation.",
"It can be locally minimized by gradient descent (using a softmax parameterization of the mixture weights).",
"Equivalently, it can be locally minimized by the EM algorithm: the E step finds a posterior distribution over latent prompts for each ( x, y ) example, and the M step performs gradient descent to optimize the prompts in that mixture.",
"The relations we learn to predict are T-REx original (Elsahar et al., 2018), T-REx extended (Shin et al., 2020), Google-RE (Orr, 2013), and ConceptNet (Speer et al., 2017)or rather, the subsets that were used by the LAMA and AutoPrompt papers.",
"See Appendix A for some statistics.",
"Following Petroni et al. (2019), we interrogate BERT (Devlin et al., 2019) and RoBERTa (Liu et al., 2019).",
"These are masked (cloze) language models.",
"For variety, we also interrogate BART (Lewis et al., 2020a), which conditions on the prompt with empty y and generates a copy where y has been filled in (by a single token).",
"We constrain BART's decoding to ensure that its answer does take this form.",
"Unlike BERT and RoBERTa, BART could be used to fill y with 1 Raising the temperature T increases the entropy of the mixture to get the benefits of ensembling; without T , the strong language model usually places almost all the weight on a single prompt.",
"For the two T-REx datasets, we inherit the training-validation-test split from Shin et al. (2020).",
"For the other datasets, we split randomly in the ratio 80-10-10.",
"3 Since all pairs ( x, y ) are distinct, there are no common triples among these three sets.",
"Common x values are also rare because each dataset has at least 174 distinct x values.",
"However, the number of distinct y values can be as small as 6.",
"Thus, in another set of experiments (Appendix E), we used a more challenging split that ensures that there are no common y values among these three sets.",
"This tests whether our model generalizes to unseen values.",
"For the T-REx and Google-RE datasets, we have four sources of initial prompts:",
"(sin.) LAMA provides a sin gle manually created hard prompt for each relation type r .",
"(par.) LPAQA (Jiang et al., 2020) provides a set of 1330 hard prompts for each r , which are par aphrases of the LAMA prompt.",
"4 (min.) LPAQA also provides a set of 629 hard prompts for each r , based on text min ing. (ran.) For each (min.) prompt, we replace each word with a ran dom vector, drawn from a Gaussian distribution fit to all of the LM's word embeddings.",
"The number of words and the position of the blanks are preserved.",
"For the ConceptNet dataset, LAMA uses the gold Open Mind Common Sense (OMCS) dataset (Singh et al., 2002).",
"In this dataset, each example ( x i , y i ) is equipped with its own prompt t i .",
"(Each example is really a sentence with two substrings marked as x and y , which are removed to obtain t i .)",
"These prompts are often overly specific: often y i can be predicted from ( t i , x i ) , or just from t i alone, 2 Among other filters, the LAMA and AutoPrompt papers keep only the triples ( r, x, y ) such that y is a single token according to the language models used by LAMA.",
"When working with BART, we further require y to be a single token according to BART's tokenization; thus, the BART results are not comparable with the other language models.",
"3 The LAMA paper (Petroni et al., 2019) provided no split but used everything as test data for their zero-shot method.",
"but y j cannot be predicted from ( t i , x j ) .",
"Thus, for each relation r , we use only the prompts that appear more than 10 times, resulting in 138 prompts.",
"Statistics about the prompts are in Appendix B. We used only a single copy of each prompt, but a generalization would be to allow multiple slightly perturbed copies of each prompt, which could diverge and specialize during training (Rose, 1998).",
"We optimize equation (2) with the method introduced in 3.5.",
"We use the Adam optimizer (Kingma and Ba, 2015) with its default configu-ration.",
"For gradient training, we set the batch size as 64, early-stop patience as 4, and test with the model that performs best on the dev set among 16 training epochs.",
"Training is fast.",
"Even for our largest model (BERT-large-cased) and largest dataset (T-REx ex-tended), tuning a single prompt completes within a few minutes.",
"With a mixture of prompts, training scales roughly linearly with the number of prompts.",
"It is still presumably much cheaper in time and memory than fine-tuning the entire BERT model, which must back-propagate a much larger set of gradients.",
"Our method outputs the most probable y given ( r, x ) .",
"Here and in the supplementary material, we report its average performance on all test examples, with precision-at-1 (P@1), precision-at-10 (P@10) and mean reciprocal rank (MRR) as metrics.",
"We measure the improvement from tuning LAMA, LPAQA, and random prompts.",
"We also compare with AutoPrompt.",
"Baseline numbers come from prior papers or our reimplementations.",
"Table 1 shows results on T-REx datasets obtained by querying three BERT-style models, with P@1 as the metric.",
"Additional metrics and language models are shown in Tables 2 and 3 as well as Tables 5 and 6 in the supplementary material.",
"We consistently get large improvements by tuning the initial prompts.",
"Remarkably, our method beats all prior methods even when throwing away the words of their informed prompts in favor of random initial vectors.",
"It simply finds a prompt that works well on the ( x, y ) training examples.",
"form) or only the word vectors in the prompts t .",
"As Table 4 shows, each helps, but the major ben-efit comes from tuning the word vectors to get soft prompts.",
"Appendix C visualizes a set of soft prompts, and Appendix D analyzes the mixture weights.",
"We also experiment on a challenging setting where the y labels are distinct for training and test (Appendix E in the supplementary materials), and find that soft prompts still yield some benefits.",
"The above results are for our basic method that tunes only the words of the prompt (i.e., layer 0).",
"When we tune all layersthe deeply perturbed prompts of 3.2we typically obtain small additional gains, across various models and initializations, although tuning all layers does substantially hurt RoBERTa.",
"These results are shown in Tables 5 and 6 in the supplementary material.",
"The tables show that the winning system for each combination of language model, T-REx dataset, and evaluation metric always uses a mixture of soft prompts initialized to mined prompts.",
"It always tunes all layers, except with RoBERTa.",
"Well-crafted natural language prompts are a powerful way to extract information from pretrained language models.",
"In the case of cloze prompts used to query BERT and BART models for single-word answers, we have demonstrated startlingly large and consistent improvements from rapidly learning prompts that workeven though the resulting soft prompts are no longer natural language.",
"Our code and data are available at https:// github.com/hiaoxui/soft-prompts .",
"How about few-shot prediction with pretrained generative LMs?",
"Here, Lewis et al. (2020b) show how to assemble a natural language prompt for input x from relevant input-output pairs ( x i , y i ) selected by a trained retrieval model.",
"Allowing fine-tuned soft string pairs is an intriguing future possibility for improving such methods without needing to fine-tune the entire language model.",
"We thank the anonymous reviewers for helpful comments.",
"This work was supported by DARPA KAIROS and by the National Science Foundation under Grant No. 1718846.",
"The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes.",
"The views and conclusions contained in this publication are those of the authors, and should not be interpreted as representing official policies nor endorsement by the funding agencies or by Microsoft (where Dr. Eisner is also a paid employee, in an arrangement that has been reviewed and approved by the Johns Hopkins University in accordance with its conflict of interest policies)."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"result",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"objective",
"abstain",
"result",
"abstain",
"result",
"other",
"other",
"other",
"method",
"other",
"method",
"objective",
"other",
"other",
"other",
"abstain",
"abstain",
"other",
"other",
"abstain",
"other",
"other",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"other",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other"
] |
[
"In this paper, we detail the relationship between convolutions and self-attention in natural language tasks.",
"We show that relative position embeddings in self-attention layers are equivalent to recently-proposed dynamic lightweight convolutions, and we consider multiple new ways of integrating convolutions into Transformer self-attention.",
"Specifically, we propose composite attention, which unites previous relative position embedding methods under a convolutional framework.",
"We conduct experiments by training BERT with composite attention, finding that convolutions consistently improve performance on multiple downstream tasks, replacing absolute position embeddings.",
"To inform future work, we present results comparing lightweight convolutions, dynamic convolutions, and depthwise-separable convolutions in language model pretraining, considering multiple injection points for convolutions in self-attention layers.",
"In recent years, Transformer-based language models have brought dramatic improvements on a wide range of natural language tasks (Brown et al., 2020; Devlin et al., 2019).",
"The central innovation of Transformer architectures is the self-attention mechanism (Vaswani et al., 2017), which has grown beyond NLP, extending into domains ranging from computer vision (Dosovitskiy et al., 2021) and speech recognition (Dong et al., 2018) to reinforcement learning (Parisotto et al., 2020; Touvron et al., 2020).",
"In computer vision, self-attention and convolutions have been combined to achieve competitive results for image classification (Bello et al., 2019).",
"Similarly, researchers in NLP have begun integrating convolutions into self-attention for natural language tasks.",
"Recent work has shown initial success adding convolutional modules to self-attention in pre-trained language models (Jiang et al., 2020), or even replacing self-attention entirely with dynamic convolutions (Wu et al., 2019).",
"These successes defy theoretical proofs showing that multi-headed self-attention with relative position embeddings is strictly more expressive than convolution (Cordon-nier et al., 2020).",
"To identify why convolutions have been successful in NLP, we seek to isolate the differences between self-attention and convolution in the context of natural language.",
"In this work, we formalize the relationship between self-attention and convolution in Transformer encoders by generalizing relative position embeddings, and we identify the benefits of each approach for language model pre-training.",
"We show that self-attention is a type of dynamic lightweight convolution, a data-dependent convolution that ties weights across input channels (Wu et al., 2019).",
"Notably, previous methods of encoding relative positions (Shaw et al., 2018; Raffel et al., 2020) are direct implementations of lightweight convolutions.",
"Under our framework, the benefits of convolution come from an ability to capture local position information in sentences.",
"Then, we propose composite attention, which applies a lightweight convolution that combines previous relative position embedding methods.",
"We find that composite attention sufficiently captures the information provided by many other convolutions.",
"To validate our framework, we train BERT models that integrate self-attention with multiple convolution types, evaluating our models on the GLUE benchmark (Wang et al., 2018).",
"All of our convolutional variants outperform the default model, demonstrating the effectiveness of convolutions in enhancing self-attention for natural language tasks.",
"Our empirical results provide evidence for future research integrating convolutions and self-attention for NLP.",
"First, we outline the relationship between self-attention and convolutions.",
"Specifically, we show that a self-attention operation can be viewed as a dynamic lightweight convolution, a depthwise convolution that ties weights along channels (Wu et al., 2019).",
"We then isolate the differences between self-attention and lightweight convolutions, highlighting the benefits of each approach in language models.",
"In a Transformer self-attention layer, inputs x 1 , ..., x n R d are projected to corresponding queries, keys, and values by linear transformations WQ , WK , WV R d d h for each attention head, projecting into the head dimension size d h .",
"Output vectors y 1 , ..., y n R d are linear combinations of values, concatenating all attention heads.",
"Value weights (before softmaxing) are determined by: ij = ( x i WQ )( x j WK ) T d h .",
"Intuitively, ij represents the attention that token i pays to token j , incorporating the value x j WV into the resulting vector y i .",
"From the attention scores between various tokens i and j , an attention map of ij is produced (see Figure 1).",
"In contrast, a standard one-dimensional convolution slides a kernel of weights along the input sequence; each feature in each output representation y i is a weighted sum of all features (called chan-nels) in the surrounding x i .",
"To save parameters, it is common to consider depthwise convolutions where each channel c in y i is a weighted sum only of the features in channel c for the surrounding x i .",
"Formally, each entry of y i can be written as: y i,c = (cid:88) k j i k j i,c x j,c (2) where k is the kernel size in each direction.",
"Each scalar j i,c represents the attention paid to relative position j i for channel c .",
"To further simplify depthwise convolutions for use in language models, Wu et al. (2019) propose lightweight convolutions, which tie weights j i,c along all channels c .",
"As a result, the lightweight convolution contains only 2 k + 1 weights, one scalar j i for each relative position considered.",
"Then, each y i is a linear combination of surrounding x i : y i = (cid:88) k j i k j i x j (3) Importantly, we can then consider each j i as an attention weight analogous to self-attention, representing the attention that token i pays to token j .",
"The lightweight convolution produces an attention map of j i as visualized in Figure",
"1. Finally, furthering the similarity between lightweight convolutions and self-attention, Wu et al. (2019) propose dynamic lightweight convolutions, which dynamically compute relative weights j i based on individual input tokens.",
"In other words, each row in Figure 1 has relative weights determined dynamically based on the input token x i for that row.",
"Because attentions for relative positions are no longer fixed across rows, the attention map in Figure 1 achieves similar flexibility to standard self-attention.",
"We have shown that both self-attention and lightweight convolution compute linear combinations of token representations, but we now isolate the differences between the two approaches.",
"Perhaps most importantly, the two methods assign attention scores ij and j i in fundamentally different ways.",
"Self-attention computes ij based on the dot product between query i and key j , ignoring the relative position between i and j .",
"In this way, self-attention layers model interactions exclusively between token representations.",
"If the tokens are arbitrarily shuffled in a standard self-attention layer, the output for each token is unchanged.",
"All position information is injected before the first self-attention layer in the form of absolute position embeddings.",
"In contrast, dynamic lightweight convolutions assign attention scores directly to relative positions.",
"This allows convolutions to directly integrate relative position information without relying on absolute positions.",
"Thus, convolutions could be better at capturing local information in sentences.",
"However, convolutions alone are limited in their ability to model interactions between tokens because they lack the query-key mechanism central to standard self-attention.",
"In future sections, we consider methods of integrating the two approaches.",
"Previous work has sought to integrate local information into global self-attention.",
"This can be achieved by restricting the range of self-attention to nearby tokens, or by incorporating relative position information into attention maps (Hofstatter et al., 2020; Raganato et al., 2020; Wei et al., 2021).",
"Notably, Shaw et al. (2018) introduced relative position embeddings, which inspired similar embeddings in models such as Transformer-XL and XLNet (Dai et al., 2019; Yang et al., 2019).",
"In this section, we show that several previous methods of encoding relative positions are direct implementations of lightweight convolutions.",
"First, the simplest way to combine self-attention with lightweight convolution is to generate a standard attention map, then add the attention map generated by a lightweight convolution.",
"Given a fixed lightweight convolution, this results in attention scores as follows: ij = ( x i WQ )( x j WK ) T d h + j i (4) This is exactly the relative position term used in T5 (Raffel et al., 2020) and TUPE (Ke et al., 2021).",
"We further consider a dynamic lightweight convolution, where the j i weights are computed by passing the query through a linear feedforward layer WC R d h (2 k +1) (Wu et al., 2019).",
"1 Because WC is linear, each weight j i is equal to the dot product between the query and the ( j i ) column of WC .",
"We then obtain attention scores: ij = ( x i WQ )( x j WK ) T d h + ( x i WQ )( W Cj i ) T If we scale the dynamic lightweight convolution term according to the head dimension size, we obtain precisely the relative embeddings proposed in Shaw et al. (2018): ij = ( x i WQ )( x j WK + W Cj i ) T d h (5) Under this interpretation, Shaw's relative embeddings are essentially identical to the dynamic lightweight convolutions used in Wu et al. (2019).",
"In both formulations, relative position weights are computed as dot products between the query and a learned relative position embedding.",
"Previous work has considered relative positions in language models independently from convolutions, but our derivations suggest that the underlying mechanisms may be the same.",
"1 Wu et al. (2019) generate dynamic lightweight convolutions based on the entire query layer (dimension size d ).",
"In our work, we generate convolutions based on queries for individual attention heads (dimension size d h ), to be consistent with the relative embeddings in Shaw et al. (2018).",
"To validate lightweight convolutions in combination with self-attention, we pre-trained and evaluated BERT-small models (Devlin et al., 2019; Clark et al., 2020) that incorporated lightweight convolutions.",
"Pre-training To maximize similarity with Devlin et al. (2019), we pre-trained models on the BookCorpus (Zhu et al., 2015) and WikiText-103 datasets (Merity et al., 2017) using masked language modeling.",
"Small models were pre-trained for 125,000 steps, with batch size 128 and learning rate 0.0003.",
"Full pre-training and fine-tuning details are outlined in Appendix A.1.",
"2 Evaluation Models were evaluated on the GLUE benchmark, a suite of sentence classification tasks including natural language inference (NLI), grammaticality judgments, sentiment classification, and textual similarity (Wang et al., 2018).",
"For each task, we ran ten fine-tuning runs and used the model with the best score on the development set.",
"We report scores on the GLUE test set.",
"Development scores and statistics for all experiments are reported in Appendix A.2.",
"Models We trained two baseline models, a default BERT-small with standard absolute position embeddings, and a BERT-small with no position information whatsoever.",
"Then, we trained models with fixed lightweight convolutions (Equation 4; 2 Code is available at https://github.com/ mlpc-ucsd/BERT_Convolutions , built upon the Huggingface Transformers library (Wolf et al., 2020).",
"Raffel et al. 2020), and dynamic lightweight convolutions that generated convolution weights based on each query (i.e. using relative embeddings, Equation 5; Shaw et al. 2018).",
"Finally, we propose composite attention, which simply adds dynamic lightweight convolutions to fixed lightweight convolutions, resulting in attention scores ij as follows: ( x i WQ )( x j WK ) T d h (cid:124) (cid:123)(cid:122) (cid:125) Self-attention + ( x i WQ )( W Cj i ) T d h (cid:124) (cid:123)(cid:122) (cid:125) Dynamic convolution (relative embeddings) + j i (cid:124)(cid:123)(cid:122)(cid:125) Fixed convolution (6) Intuitively, composite attention has the flexibility of dynamic lightweight convolutions, while still allowing models to incorporate relative positions directly through fixed lightweight convolutions.",
"Alternatively, composite attention can be interpreted as adding a fixed bias term to relative position embeddings.",
"All of our experiments used a convolution kernel size of 17, or eight positions in each direction, a mid-range value that has been found to work well for both relative positions and convolution in language models (Huang et al., 2020; Jiang et al., 2020; Shaw et al., 2018).",
"As in Shaw et al. (2018), relative embeddings W Cj i shared weights across heads.",
"Unless stated otherwise, models used no absolute position embeddings.",
"For completeness, we also considered dynamic lightweight convolutions based on the key (as opposed to the query).",
"In contrast to query-based 4326 lightweight convolutions, key-based convolutions allow each token to dictate which relative positions should pay attention to it, rather than dictating which relative positions it should pay attention to.",
"Referring to the visualization in Figure 1, key-based dynamic convolutions correspond to columns instead of rows.",
"These key-based dynamic lightweight convolutions are the same as the relative embeddings proposed in Huang et al. (2020), but they are now formulated as dynamic lightweight convolutions.",
"GLUE test set results are presented in Table",
"Lightweight convolutions consistently improved performance.",
"Notably, even the fixed lightweight convolution was sufficient to replace absolute position embeddings, outperforming the default BERT-small model.",
"This indicates that even nave sampling from nearby tokens can be beneficial to language model performance.",
"Dynamic convolutions provided further improvements.",
"When the lightweight convolutions were generated dynamically based on token queries, the models outperformed the default model by even larger margins.",
"This improvement over fixed lightweight convolutions suggests that different tokens find it useful to generate different lightweight convolutions, paying attention to different relative positions in a sentence.",
"Composite attention performed the best.",
"Combining fixed lightweight convolutions with dynamic lightweight convolutions proved an effective strategy for encoding relative positions.",
"Although composite attention is simply a combination of Shaw et al. (2018) and Raffel et al. (2020)'s relative position embeddings, it validates convolution as a viable method of encoding relative positions in self-attention.",
"additional benefit.",
"When we generated an additional lightweight convolution based on keys, the model performed worse than composite attention alone (GLUE 74.0 compared to 75.2).",
"This result clarifies the findings of Huang et al. (2020), who reported only small improvements from query and key-based relative position embeddings for a subset of the GLUE tasks.",
"sensitive to position information.",
"On the CoLA task (the corpus of linguistic acceptability; Warstadt et al. 2019), there was a dramatic performance drop when absolute position embeddings were removed.",
"However, when any type of lightweight convolution was added, performance improved even over the baseline established by absolute positions.",
"The pronounced effects of local position information on the CoLA task support the intuitive hypothesis that local dependencies are particularly important for grammaticality judgments.",
"This result also suggests that convolutions could be beneficial to more local tasks (e.g. token-level tasks) along with sentence classification tasks.",
"To better understand how lightweight convolutions improve language models, we visualized the learned lightweight convolution kernel weights in Figure",
"2. Qualitatively, the kernels exhibited spe-cific types of patterns: Paying particular attention to the previous or next token.",
"Paying graded attention either to past or future tokens, dictated by how far the target token is from the present token.",
"These observations support the assumption that nearby tokens are relevant to the interpretation of the current token.",
"They also align with the findings 4327 of Voita et al. (2019), who identified positional attention heads that focus primarily on the next or previous token.",
"From this perspective, lightweight convolutions allow language models to explicitly represent nearby tokens' positions.",
"Interestingly, we also found that some kernels paid fairly uniform attention to all tokens, even decreasing attention to nearby and adjacent tokens.",
"It is likely that these attention heads focused on more global information, relying on the query-key attention mechanism rather than the convolution.",
"To thoroughly assess the impact of composite attention on pre-trained language models, we trained full-sized BERT models for 1M steps each, replicating our BERT-small experiments.",
"Pre-training details are outlined in Appendix A.1.",
"Results are presented in Table",
"1. Differences between models decreased substantially for full sized models, and the relative performances of different approaches varied across tasks.",
"Our results suggest that relative position information is more useful for smaller or more data-limited models; extending the benefits of convolutions robustly from small models to larger models is an important direction for future research.",
"That said, even in the larger models, composite attention slightly outperformed the other position embedding methods in overall GLUE score.",
"Our results demonstrate that convolutions can perform at least on par with absolute position embeddings even in larger models.",
"The previous section found that lightweight convolutions consistently improved pre-trained language model performance.",
"Next, we investigated whether the additional flexibility of non-lightweight convolutions could provide additional benefits.",
"Specifically, we considered convolutions that were fixed but non-lightweight.",
"In other words, convolution weights were fixed regardless of the input query, but weights were not tied across channels, equivalent to a standard depthwise convolution.",
"We only considered fixed depthwise convolutions because under existing frameworks, dynamic depthwise convolutions would introduce large numbers of parameters.",
"To implement depthwise convolutions, we added a convolution term identical to the fixed lightweight convolution in Equation 4, except that j i was Figure 3: Learned convolution kernel weights j i,c (Equation 7) for the depthwise convolution in the deepest attention layer.",
"This is equivalent to adding a depthwise convolution of the token values to the standard self-attention output.",
"We ran experiments using the same setup as the lightweight convolution experiments in Section 3.2.",
"To compare the effects of dynamic lightweight convolutions (e.g. composite attention) and non-lightweight (depthwise) convolutions, we trained models using each possible combination of the two convolutions.",
"Results are presented in Table",
"2. Depthwise convolutions were less effective than lightweight convolutions.",
"As with lightweight convolutions, the depthwise convolutions effectively replaced absolute position embeddings, outperforming the default model.",
"However, fixed depthwise convolutions performed worse than fixed lightweight convolutions on the majority of tasks.",
"This indicates that flexibility across channels is not critical to the success of convolutions in language models.",
"3 For computational efficiency, we applied the softmax to the attention scores prior to adding the convolution term j i,c , to avoid computing softmax scores separately for each individual channel.",
"Softmax is not commonly applied in depthwise convolutions.",
"Composite attention already provided the necessary flexibility.",
"Composite attention outperformed the fixed depthwise convolutions; even when composite attention was combined with depthwise convolutions, there was no overall improvement over composite attention alone.",
"This suggests that in the context of language, dynamic lightweight convolutions efficiently encode any local position information provided by depthwise convolutions.",
"Depthwise convolutions differentiated previous and next tokens.",
"In previous sections, we found that lightweight convolution kernels often pay attention specifically to adjacent tokens.",
"As can be seen in Figure 3, this result was even more pronounced in depthwise convolutions, with individual channels focusing on the previous or next token.",
"Interestingly, other channels specifically directed attention away from adjacent tokens.",
"This indicates that the relevant information about next and previous tokens can be compressed into a subset of the feature channels, freeing other channels to consider more distant or position-independent information.",
"Improvements over the non-convolutional baselines indicate that convolutions are beneficial to language model pre-training, serving as replacements for absolute position embeddings.",
"Our previous experiments applied different types of convolutions to self-attention values.",
"To take this result one step further, we replaced the linear query, key, and value projections themselves with convolutional layers.",
"Intuitively, applying convolutions before self-attention induces even more mixing of token representations.",
"If convolutions are built into every query, key, and value, then it becomes impossible for a token i to pay attention to a single token j without also incorporating information about tokens surrounding token j .",
"As in Sections 3.2 and 4.1, we ran experiments on BERT-small.",
"We replaced the query, key and value projections with depthwise-separable convolutions in half of the self-attention heads.",
"4 This aligns with previous work in which only half of the output dimensions for each token were generated using convolutions (Jiang et al., 2020).",
"Indeed, our initial explorations found that it was more effective to replace the linear projections in only half, not all, the attention heads.",
"Then, we considered whether convolutions from previous experiments provided additional benefits over convolutional queries, keys, and values.",
"To test this, we trained BERT-small models with composite attention (Equation 6), adding convolutional queries, keys, and values.",
"4 Depthwise-separable convolutions are a common way to save convolution parameters.",
"A depthwise convolution is applied first, applying an independent convolution for each channel.",
"Then, a pointwise convolution (i.e. a feedforward layer) mixes the channels to produce the final output.",
"Results are presented in Table",
"3. Similar to our previous convolution experiments, all convolutional replacements successfully outperformed the default model.",
"These results strongly support the conclusion that convolutions are a viable method of encoding positional information for language tasks.",
"However, all convolutional replacements for queries, keys, and values slightly decreased the performance of models using composite attention.",
"Convolutional values in particular were effective in models without composite attention, but they slightly decreased performance in models that already incorporated such lightweight convolutions.",
"We conclude that although convolutions can benefit models by adding local position information, there is a limit to how much local mixing should be done.",
"It is sufficient to apply convolutions to token values on top of self-attention; additional convolutional layers applied before the self-attention map enforce unnecessary mixing of token representations.",
"Our results demonstrate that convolutions provide consistent benefits to pre-trained language models.",
"Our proposed composite attention mechanism combines previous relative position embedding methods, showing that convolutions can effectively compensate for the lack of local position information in Transformer models.",
"Our work unites and builds upon previous work using convolutions and relative positions in Transformers.",
"We adopted the relative embeddings from Shaw et al. (2018) and Huang et al. (2020), showing that these embeddings are equivalent to the dynamic lightweight convolutions in Wu et al. (2019).",
"Combining these dynamic lightweight convolutions with fixed lightweight convolutions (equivalent to the relative position terms in Raffel et al. 2020), we studied relative embeddings under the framework of convolution integrated with self-attention.",
"As far as we are aware, our work is the first to holistically compare relative positions, convolutions, and self-attention in language models.",
"Building upon dynamic lightweight convolutions, recent work has incorporated both depthwise-separable and dynamic lightweight convolutions in pre-trained language models.",
"Jiang et al. (2020) proposed ConvBERT, which adds a convolutional module alongside the standard self-attention mechanism in BERT.",
"ConvBERT's convolutional module consists of a depthwise-separable convolution combining with a query to generate a dynamic lightweight convolution.",
"Under our integrated framework, this is analogous to the model which uses depthwise-separable convolutions for queries and keys, using composite attention as a query-based dynamic lightweight convolution (see Table 3).",
"To make this comparison concrete, we trained a ConvBERT-small model using the same setup as our experiments.",
"Indeed, the analogous model under our framework outperformed ConvBERT-small (GLUE score 74.5 compared to 70.3).",
"Details for the ConvBERT comparison can be found in Appendix A.3.",
"Finally, recent work has proved theoretical relationships between self-attention and convolution.",
"Cordonnier et al. (2020) showed that given enough self-attention heads, self-attention weights can express any convolution; in fact, they showed that self-attention layers often learn such convolutional structures when trained on vision tasks.",
"However, this theoretical equivalence does not explain convolution-based improvements for Transformers in language tasks.",
"To clarify the relationship between self-attention and convolution in language, our work characterizes self-attention as a type of dynamic lightweight convolution.",
"By establishing a per-parameter equivalence between relative position embeddings and Wu's dynamic lightweight convolutions, we provide a concrete foundation where self-attention and convolution are used together in practice.",
"In this work, we formalized the relationship between self-attention and convolution.",
"We proposed composite attention, which combines self-attention with lightweight convolution, uniting previous approaches to relative positions.",
"Our formulation and empirical results demonstrate that convolutions can improve self-attention by providing local position information in sentences, capable of replacing absolute position embeddings entirely.",
"Our findings provide a solid foundation from which to study convolutions and self-attention in language tasks.",
"The spatially-oriented nature of convolutional neural networks translates directly into positional information in language.",
"As vision and language researchers strive towards common 4330 deep learning architectures, it is important to recognize how architectures for vision tasks can be adapted to linguistic domains.",
"Zihang Jiang, Weihao Yu, Daquan Zhou, Yunpeng Chen, Jiashi Feng, and Shuicheng Yan.",
"2020.",
"ConvBERT: Improving BERT with span-based dynamic convolution.",
"In Proceedings of the 34th Conference on Neural Information Processing Systems .",
"Guolin Ke, Di He, and Tie-Yan Liu.",
"2021.",
"Rethinking positional encoding in language pre-training.",
"In Proceedings of the International Conference on Learning Representations .",
"Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher.",
"2017.",
"Pointer sentinel mixture models.",
"In Proceedings of the Fifth International Conference on Learning Representations .",
"Jason Phang, Thibault Fevry, and Samuel Bowman.",
"2018.",
"Sentence encoders on STILTs: Supplementary training on intermediate labeled-data tasks.",
"This work is funded by NSF IIS-1717431.",
"Zhuowen Tu is also funded under the Qualcomm Faculty Award.",
"Tyler Chang is partially supported by the UCSD HDSI graduate fellowship."
] | [
"abstain",
"objective",
"objective",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"result",
"abstain",
"abstain",
"objective",
"result",
"method",
"objective",
"result",
"objective",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"method",
"objective",
"other",
"other",
"other",
"result",
"method",
"result",
"other",
"other",
"other",
"other",
"abstain",
"method",
"objective",
"objective",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other"
] |
[
"Effective question-asking is a crucial component of a successful conversational chatbot.",
"It could help the bots manifest empathy and render the interaction more engaging by demonstrating attention to the speaker's emotions.",
"However, current dialog generation approaches do not model this subtle emotion regulation technique due to the lack of a taxonomy of questions and their purpose in social chitchat.",
"To address this gap, we have developed an empathetic question taxonomy (EQT), with special attention paid to questions' ability to capture communicative acts and their emotion-regulation intents.",
"We further design a crowdsourcing task to annotate a large subset of the EmpatheticDialogues dataset with the established labels.",
"We use the crowd-annotated data to develop automatic labeling tools and produce labels for the whole dataset.",
"Finally, we employ information visualization techniques to summarize co-occurrences of question acts and intents and their role in regulating interlocutor's emotion.",
"These results reveal important question-asking strategies in social dialogs.",
"The EQT classification scheme can facilitate computational analysis of questions in datasets.",
"More importantly, it can inform future efforts in empathetic question generation using neural or hybrid methods.",
"1 1 Introduction Questions constitute a considerable part of casual conversations and play many important social functions (Huang et al., 2017; Enfield et al., 2010).",
"Asking follow-up questions about the speaker's statement indicates responsiveness, attention, and care for the partner (Bregman, 2020; Huang et al., 2017).",
"Listeners who manifest such an empathetic and curious attitude are more likely to establish the common ground for meaningful communication 1 Our code and the annotated dataset are publicly accessible at https://github.com/Sea94/EQT .",
"The vital role of questions in social interaction makes question-asking a desirable property for open-domain chatbots.",
"These chatbots aim to engage in a natural conversation with the users while practicing active listening to deliver understanding and recognition of users' feelings (Rashkin et al., 2019).",
"In fact, generating meaningful questions is so important that this has become one of the central objectives of such agents (Xiao et al., 2020).",
"However, asking questions effectively is challenging as not all questions can achieve a particular social goal, such as demonstrating attentiveness or empathy (Huang et al., 2017; Robinson and Heritage, 2006; Paukert et al., 2004).",
"Given the task complexity, automatic conversational question generation is still gaining momentum, with only few results reported so far.",
"See et al. (2019) suggested a way to control the number of questions produced by the model with conditional training.",
"Wang et al. (2019) proposed a question-generation method to increase their semantic coherence with the answer, employing reinforcement learning followed by the adversarial training procedure.",
"Wang et al. (2018) devised a model generating appropriate questions for a variety of topics by modeling the types of words used in a question (interrogatives, topic words, and ordinary words).",
"These works presented approaches to produce contextually appropriate and diverse questions, but none of them considered the effect of questions on the interlocutor's emotional state.",
"We attribute the deficiency in this research to the lack of resources allowing to analyze and model various question-asking strategies in affect-rich social exchanges.",
"To address this gap, we present a categorization and analysis of questions in social dialogs, with four main contributions.",
"First, we develop an Empathetic Question Taxonomy, EQT, by manually annotating a subset of the EmpatheticDialogues (ED) 2952 dataset (Rashkin et al., 2019) (4).",
"EQT delineates the acts and intents of questions.",
"Question acts capture semantic-driven communicative actions of questions, while question intents describe the emotional effect the question should have on the dialog partner.",
"For example, a listener may request information (question act) about the age of speaker's daughter by asking How old is she? after learning about her success with the aim to amplify speaker's pride of his child (question intent).",
"Second, we design and launch a crowd-sourcing annotation task to grow the original labeled seed subset tenfold (5).",
"Third, we devise an automatic classification model, QBERT, to generate labels for the rest of the ED dataset to demonstrate one important application of the taxonomy (6).",
"QBERT can facilitate the development of chatbots that offer engaging and empathetic conversations by raising meaningful questions.",
"Finally, we inspect co-occurrences of acts and intents and their effect on the interlocutor's emotion using visualization techniques (7).",
"The analysis illustrates the most prominent question-asking strategies in human emotional dialogs.",
"To conclude, we discuss the implications of these results for future question generation approaches.",
"Previously proposed taxonomies of dialog acts frequently differ in types of assisted natural language tasks.",
"The Dialog Act Markup in Several Layers (DAMSL) tag set was designed to enable computational modeling of conversational speech using statistical methods (Jurafsky et al., 1997; Core and Allen, 1997).",
"It consists of 42 communicative acts derived from a Switchboard corpus.",
"Eight of these labels describe different question types according to their semantic role, e.g., Wh-question or Rhetorical-Question .",
"Several works proposed hierarchical taxonomies of dialog acts, targeted at modeling users' intents in human-machine conversations.",
"Montenegro et al. (2019) introduced their annotation scheme for a symbolic dialog system intended to improve the lives of the elderly, while Yu and Yu (2021) designed a scheme for facilitating general human-machine chit-chat.",
"In both works the logs of human-machine interactions were used for producing the taxonomies.",
"Each of them features labels devoted to questions, characterizing them either by a question word, e.g., How or What , or the form of expected answer, e.g., Open-ended or Yes/No question .",
"Finally, Welivita and Pu (2020) suggested a taxonomy of empathetic response intents in dialogs from the ED dataset with the purpose of improving controllability in neural dialog generation approaches.",
"It further stated that Questioning is one of the most frequent intents of the empathetic listeners.",
"However, none of these works focused on the fine-grained analysis of questions and their role in empathetic dialogs.",
"Meanwhile, several linguistic studies closely examined the pragmatics of questions and offered a number of classification schemes.",
"Graesser et al. (1994) developed a scheme of 18 tags based on the information sought by the question.",
"Their taxonomy applies well for transactional exchanges, but does not capture the social dimension.",
"Freed (1994) studied the correspondence between the social function of questions and their syntactic form.",
"She established 16 social question functions occurring in dyadic spoken conversations between friends.",
"In another research effort, a group of linguists explored the range of social actions performed by questions across 10 languages (Enfield et al., 2010).",
"The authors developed a coding scheme comprising 3 semantic question types and 7 social actions and applied it to questions in spontaneous spoken conversations (Stivers and Enfield, 2010).",
"Finally, Huang et al. (2017) developed a taxonomy of 6 question types to describe questions occurring in their dataset of chat-based conversations between strangers instructed to get to know each other.",
"The described works provide an insightful basis for studying questions in social conversations.",
"However, they do not consider the effect of questions on their addressee's emotional states, neither do they describe specific mechanisms to handle computational modeling.",
"Moreover, most of them apply to spoken dialogs, impeding the extension of their results to chat-based exchanges due to the inherent differences in these modalities.",
"Lastly, they relied mainly on manual annotation, yielding comparatively smaller datasets.",
"In our study, we extended the derived taxonomy to a large corpus using crowd-sourcing and automatic methods and analyzed the emerging patterns on a large scale.",
"We summarize the comparison of our question taxonomy with the existing schemes in Table 1. 3 Dataset For taxonomy derivation, we sought a dataset that contains social dialogs with diverse emotional expressions and could be applicable to train a chat-2953 Taxonomy # labels socialfunction emotionalfunction dataset (Graesser et al., 1994) 18 (Freed, 1994) 16 (Enfield et al., 2010) 7 (Huang et al., 2017) 6 EQT 21 Table 1: Comparison of question taxonomies.",
"bot with advanced question-generating abilities.",
"We avoided datasets featuring multi-modal dialogs (IEMOCAP (Busso et al., 2008), MELD (Poria et al., 2019)) as well as transcribed spoken conversations (Emotionlines (Hsu et al., 2018), Switchboard (Jurafsky et al., 1997)).",
"Such dialogs contain back-channel communication and other sensory signals that are not present in chat-based conversations and, therefore, are not well-suited for the modeling task.",
"Similarly, we rejected datasets that assist other tasks than social conversation modeling, such as SQuAD (Rajpurkar et al., 2016) (reading comprehension) or QoQA (Reddy et al., 2019) (in-formation gathering).",
"Finally, we did not consider datasets from social media as they can contain toxic and aggressive responses (Zhang et al., 2018).",
"We opted for the EmpatheticDialogues (ED) dataset (Rashkin et al., 2019), a benchmark dataset for empathetic dialog generation containing 24,850 conversations grounded in emotional contexts.",
"Each dialog is initiated by a speaker describing a feeling or experience and continued by a listener who was instructed to respond empathetically.",
"The dialogs are evenly distributed over the 32 emotional contexts, covering various speaker sentiments (e.g., sad , joyful , proud ).",
"We found the ED dataset to be a rich source of question-asking as over 60% of all dialogs contain a question in one of the listeners' turns, resulting in a total of 20K listener questions.",
"Basic statistics of the dataset are given in Table 2. Descriptor Value # dialogs in total 24,850 # turns per dialog on avg.",
"Given the community's interest in question-asking functionality for chatbots and its significance for empathetic response generation, we aimed at developing a taxonomy of listeners' questions asked in response to speakers' emotional inputs.",
"For this purpose, being guided by prior literature review, we employed a qualitative coding method, which is an established approach for such tasks (Stivers and Enfield, 2010; Huang et al., 2017; Zeinert et al., 2021).",
"Qualitative coding is a process of grouping and labeling similar types of data and iteratively validating the labels.",
"To cover a diverse range of speakers' emotions, we sampled several hundred dialogs uniformly from the 32 emotional contexts in the ED corpus.",
"The sample size was chosen to balance the need for the diversity of questions with researchers' ability to consider each question carefully and was consistent with prior practice.",
"The coding process was informed by previous question classification schemes (Table 1) and knowledge about general principles of emotional regulation (Gross, 2013).",
"Iterative adjustments were applied resulting from discussions of the concrete data.",
"Specifically, the first author made several iterations of coding trials to develop an initial set of labels.",
"Throughout the process, a number of review sessions were held with the last author to merge the labels into more focused classes.",
"As a result, we developed the Empathetic Question Taxonomy (EQT) with two distinguished branches: question acts describe semantic-driven features of questions (e.g., ask for confirmation , positive rhetoric ), whereas question intents characterize their emotion-regulation functions targeted at the interlocutor's emotional state (e.g., sympathize , amplify excitement ).",
"As it will be revealed further (7), an empathetic listener can use different question acts to deliver the same intent, justifying the proposed branching.",
"Overall, more than 310 questions were annotated.",
"EQT consists of 9 labels for question acts and 12 labels for question intents.",
"The granularity of the taxonomy was driven by earlier linguistic findings and empirical observations about the interplay of the labels in two branches.",
"For example, question acts request information (Enfield et al., 2010), ask about consequence (Graesser et al., 1994), and ask about antecedent (Graesser et al., 1994) are related and could possibly be grouped.",
"However, we de-2954 cided to keep them separately as listeners use them with unequal frequencies in positive and negative emotional contexts and combine them with different question intents (7).",
"Similarly, the initial set of labels for question intents was created based on the variety of emotions present in the dataset.",
"We further reduced it to a manageable size to make it more applicable for an annotation task, while still preserving sufficient expressiveness of labels to represent subtleties of the data (Zeinert et al., 2021).",
"We present the labels with their definitions below and provide several examples in Figure 1. Examples for each act and intent label are given correspondingly in Tables 4 and 5 from Appendix A. Question acts Request information (38.7%): Ask for new factual information.",
"Ask about consequence (21.0%): Ask about the result of the described action or situation.",
"Ask about antecedent (17.1%): Ask about the reason or cause of the described state or event.",
"Suggest a solution (8.7%): Provide a specific solution to a problem in a form of a question.",
"Irony (1.3%): Ask a question that suggests the opposite of what the speaker may expect, usually to be humorous or pass judgement.",
"Negative rhetoric (1.3%): Ask a question to express a critical opinion or validate a speaker's negative point without expecting an answer.",
"Ask for confirmation (5.8%): Ask a question to confirm or verify the listener's understanding of something that has been described by the speaker.",
"Suggest a reason (5.2%): Suggest a specific reason or cause of the event or state described by the speaker in a form of a question.",
"Positive rhetoric (1.0%): Ask a question to make an encouraging statement or demonstrate agreement with the speaker about a positive point without expecting an answer.",
"Express interest (57.1%): Express the willingness to learn or hear more about the subject brought up by the speaker; demonstrate curiosity.",
"Express concern (20.3%): Express anxiety or worry about the subject brought up by the speaker.",
"Offer relief (4.8%): Reassure the speaker who is anxious or distressed.",
"encouragement to the speaker, demonstrate an interest in and concern for the speaker's success.",
"Amplify pride (2.6%): Reinforce the speaker's feeling of pride.",
"Amplify excitement (1.9%): Reinforce the speaker's feeling of excitement.",
"Amplify joy (1.6%): Reinforce the speaker's glad feeling such as pleasure, enjoyment, or happiness.",
"De-escalate (1.6%): Calm down the speaker who is agitated, angry, or temporarily out of control.",
"Pass judgement (1.6%): Express a (critical) opinion about the subject brought up by the speaker.",
"Motivate (1.0%): Encourage the speaker to move onward.",
"Moralize speaker (1.0%): Judge the speaker.",
"To validate the interpretability of the labels and efficacy of the instructions for the crowd-sourcing task, we invited two other members from our research group and asked them to annotate questions in 20 randomly selected dialogs, containing 25 questions.",
"The annotators were instructed to consider the preceding dialog turns while assigning the labels as the same question might fall into different categories based on the context.",
"For example, the question What happened!? can be classified as Express interest or Express concern , depending on the valence of the speaker's emotion.",
"We computed both the Fleiss kappa (Fleiss, 1971) and the observed agreement among the first author and two annotators.",
"The observed agreement was calculated as a percentage of questions with at least two agreed labels (Endriss and Fernndez, 2013).",
"We considered it as a reliable measure of inter-rater My cat vomited on my shoes today (Negative) Is your cat ill?",
"no he just ate too much (Neutral) I got approved to adopt a dog!",
"(Positive) Yay!",
"I love dogs!",
"Do you have any you want to get specifically or are you just going to look until you find one that clicks?",
"(Ask about consequence, Amplify excitement) Oh I already picked one!",
"I'll be picking her up this weekend.",
"(Positive)",
"agreement as the number of coding categories was large (9 for acts and 12 for intents), yielding relatively low chance agreement (11.1% and 8.3% respectively).",
"The agreement resulted in 92% for acts ( = 0 . 52 ) and 80% for intents ( = 0 . 31 ), supporting the satisfactory interpretability of EQT.",
"For further analysis, we annotated a larger subsample of the ED dataset with the EQT labels by designing and launching a crowd-sourcing task on Amazon Mechanical Turk (Mturk).",
"The design was refined based on three pilot studies: one internal and two Mturk-based.",
"For the annotation, we sampled about 40% of dialogs from each of the original 32 emotional contexts.",
"We only sampled the dialogs with at least one question in one of the listener's turns.",
"The dialogs were then pre-processed so that each dialog ended with a question requiring a label.",
"Further, we distributed the dialogs into individual human intelligent tasks (HITs) and launched them on Mturk in a sequence of batches.",
"For each HIT we collected the annotations from three workers.",
"The incentive for one HIT varied from $0.4 to $0.9 depending on the worker's performance and task configuration.",
"We describe the details about the task design and the annotation procedure below; exhaustive explanations about dialog pre-processing and the task user interface are provided in Appendix B. 5.1 Task design The interface consisted of four main components: instructions, terminology, terminology quiz, and the annotation task.",
"The instructions informed the workers about the purposes of the task.",
"Next, the terminology page outlined the description of the EQT, listing the definition of each label with examples.",
"The terminology quiz contained six dialogs from the terminology page and invited the worker to select correct labels for questions in each dialog.",
"Finally, the annotation task included 25 dialogs, each ending with a listener turn with one or multiple questions.",
"Under each question, labels from two EQT branches were presented, and the worker had to select one most suitable label within each of the sets.",
"2 Twenty out of the 25 dialogs were treated as points for annotation, and the other 5 were bonus 2 In our task design, we chose to ask for a single most suitable label to facilitate further data analysis, however allowing the selection of multiple applicable labels is also possible.",
"dialogs.",
"For the bonus questions, we identified the gold labels during the manual annotation phase and used them to control workers' quality: a worker had to select the correct labels to score the points counting towards additional incentive ($0.2).",
"We required all workers who accepted one of our tasks for the first time to take the terminology quiz.",
"Workers who assigned the correct labels to at least three questions could proceed to the annotation task and were granted bonus payment for passing the quiz ($0.1).",
"The quiz was not required for the workers who had successfully passed it once.",
"In addition to the terminology quiz, we used several mechanisms to control the annotation quality.",
"First, following Mturk recommendations, we only allowed the workers with a 98% approval rate to access our tasks.",
"Second, we rejected assignments whose completion time significantly deviated from the expected average.",
"Further, we ran additional checks for the workers who accepted several of our assignments simultaneously.",
"Lastly, we computed the inter-rater agreement for each batch and discarded the submissions that harmed the agreement.",
"Overall, we launched 556 HITs and 465 of them were completed.",
"The rejection rate after the quality control was 4.7%.",
"Upon obtaining the results, we first computed the Fleiss kappa scores for acts ( = 0 . 34 ) and for intents ( = 0 . 27 ) to validate that the agreement between the workers is acceptable.",
"Then, we identified the final labels using the majority vote: if at least two workers agreed on a label, we chose it as a final label.",
"This resulted in an 83.6% observed agreement score for acts and 75.8% observed agreement for intents.",
"The majority vote approach was shown to be able to filter noisy judgments of amateurs, producing the labeled set of comparable quality to the annotations of experts (Nowak and Rger, 2010).",
"As a final check, we computed the kappa agreement between the crowd-sourced labels and the first author annotations for the subset of 450 randomly sampled questions.",
"The scores equaled 0.57 for acts (71.6% observed agreement) and 0.50 for intents (68.0% observed agreement), indicating moderate agreement, which we treat as satisfactory for this type of task.",
"As a result, an act label was assigned to 6,433 questions and an intent label to 5,826 questions, with an intersection of 4,962 questions.",
"To show how EQT can be operationalized, we demonstrate the use of the taxonomy for annotating the reminder of the ED dataset.",
"We first formulate the question act and intent prediction problems and then build two classification models to address them.",
"Before training, we augmented the labeled set using k -Nearest-Neighbors ( k -NN) method.",
"We also tried training the classifiers without data augmentation, but their performance was weaker (see Appendix D for details).",
"6.1 Data Augmentation We employed the Sentence-BERT (SBERT) framework (Reimers and Gurevych, 2019) to obtain embeddings for all questions with their contexts.",
"Then we used the cosine similarity measure to find k labeled NNs for each question in the unlabeled set and assign the same labels to them.",
"For the first step, we computed the embeddings of each dialog turn using the roberta-base-nli-stsb-mean-tokens SBERT model and then combined them into a single embedding per question with the weighted average.",
"We opted for weighed average instead of concatenation to keep manageable size of the embedding vector.",
"We used a half-decaying weighting scheme, providing the highest weight to the final question to indicate its importance.",
"The usage of this weighting scheme is guided by our previous experiments of similar nature, where we observed that the models with decaying weights performed better than the ones without them (Welivita et al., 2021).",
"Next, we tested several approaches for identifying semantically similar dialogs to propagate the labels.",
"One strategy was to take the same label as the top-1 NN, given that the similarity was higher than a predefined threshold.",
"The other strategy was to use the label identified with the majority vote from the top-3 NNs.",
"We did not experiment with higher values of k due to resource considerations.",
"We ran several cross-validation experiments on the labeled set with grid search over various cosine-similarity thresholds.",
"Top-3 majority vote strategy was shown to produce higher accuracy with a 0.825 cosine similarity threshold value resulting in the acceptable trade-off between the accuracy ( 76% for both label sets) and the number of labeled questions.",
"Therefore, we applied this strategy for the whole dataset, which produced additional 1,911 labels for question acts and 1,886 labels for question intents.",
"More details are provided in Appendix C. 6.2 Classifier Models Using the human-annotated and augmented labels, we trained two classifiers, which we collectively call QBERT.",
"QBERT models have identical architecture and vary only in the number of output categories in the final layer.",
"Each model consists of a BERT-based representation network, an attention layer, one hidden layer, and a softmax layer.",
"For the representation network, we used the architecture with 12 layers, 768 dimensions, 12 heads, and 110M parameters.",
"We initialized it with the weights of RoBERTa language model pre-trained by Liu et al. (2019) and for training used the same hyper-parameters as the authors.",
"As input, we fed a listener question and preceding dialog turns in the reverse order.",
"To prioritize the question, the half-decaying weighting scheme as described above was applied to the token embeddings of each turn.",
"Before training, we took out a stratified random sample of 20% of the questions (1,500) as a test set.",
"The test set contained respectively 1156 human-and 344 SBERT-annotated questions.",
"We separately trained each model on 80% of the remaining datapoints (5,475 acts, 4,969 intents), keeping the rest as a validation set (1,369 acts, 1,243 intents).",
"We trained each model for 15 epochs and for prediction retained the ones with the lowest validation loss (see Appendix D for details).",
"The classifiers achieved 74.7% accuracy for intents and 79.1% accuracy for acts on the test set.",
"Further breakdown accuracies for humanand SBERT-annotated test samples are given in Table 3.",
"According to previous work, human-human agreement can be used as a proxy for human accuracy (Kumar, 2014; Somasundaran and Chodorow, 2014).",
"Given the agreement in our Mturk experiment ( 75-85%), QBERT exhibited reasonable predictive accuracy and validated applicability and usefulness of EQT for language modeling tasks.",
"In this section we present the analysis of questioning strategies adopted by the empathetic listeners.",
"We base our examination on human-annotated questions instead of the whole ED dataset to avoid any potential noise which might have been introduced by automatic classification.",
"Visualizations for the whole dataset are included in Appendix E. Here, by a questioning strategy , we imply a combination of act and intent labels assigned to each question.",
"We first analyzed which labels from the two EQT branches form such strategies by plotting the co-occurrences of each pair (Figure 2).",
"Larger circles represent more frequent strategies, while an empty cell indicates that people do not use the given act to deliver the corresponding intent.",
"For example, to amplify partner's joy, one may request information for more details or ask about consequences of the event, but will unlikely raise a negative rhetorical question.",
"Several strategies are much more frequent than others.",
"Act Request Information and intent Express interest dominate in our dataset, occurring together for 39% of questions.",
"They define the most general type of questions, which are probably easy to ask, providing a reason why listeners use them often.",
"At the same time, dialogs in the ED dataset are relatively short, and it can be difficult for listeners to fully understand the ideas and R e q u e s t i n f o r m a t i o n , 51 .",
"feelings of speakers in a couple of turns.",
"In this case, requesting information and expressing interest demonstrates listener's attentive curiosity about the situation.",
"Once listeners feel more confident about the speakers' sentiments and contexts, they employ more specific question-asking strategies.",
"We further analyzed this phenomenon temporally across dialog turns (Figure 3).",
"Primarily, we studied how listeners' questioning strategies affect speakers' emotions by visualizing the mappings between them.",
"For this visualization, we used 41 emotion and intent labels describing each turn in the ED dataset produced by Welivita and Pu (2020).",
"To avoid clutter, we mapped the original 41 labels to 3 coarser categories: positive, negative, and neutral using our best judgement (see Appendix E for details).",
"Then, for the dialogs containing a question in the second turn, we plotted how speakers' emotions and listeners' questioning strategies shift over the first three turns.",
"We computed the frequencies of all questioning strategies and, for the ones occurring in more than 0.5% of cases, we plotted the flow patterns.",
"We restricted our analysis to the first three turns because over 70% of dialogs in the ED dataset have only four of them, excluding the possibility to study the influence of questioning strategies on further speakers' turns.",
"In order to still get an intuition how listeners' question-asking behavior changes in the consecutive turns, we plotted the dynamics of the ratios of question act and intent labels across the dialog depth.",
"Figure 3a shows the flow rates between speakers' emotions and listeners' questioning strategies.",
"As observed before, listeners most likely use follow-up questions to elicit more details about the situation by expressing interest and requesting information.",
"In most of such cases, the speaker's emotion remains preserved in their consecutive utterance as the speaker elaborates on the first turn, maintaining the sentiment.",
"When speakers explain themselves with sufficient clarity already in the first turn, listeners raise more precise questions, adapting the strategy to the affective context.",
"If speakers share a positive experience, listeners try to amplify their emotions by requesting more information or asking about the consequences of the situation.",
"On the contrary, when speakers disclose a negative sentiment, listeners try to validate and alleviate their feelings.",
"They typically intend to express concern, sympathize, offer relief, or de-escalate the issue, and achieve it by asking about what preceded or fol-2958 Positive Neutral Negative Amplify excitement, Request information Amplify excitement, Ask about consequence Amplify pride, Request information Amplify joy, Request information Express interest, Request information Express interest, Ask for confirmation Express interest, Ask about antecedent Express interest, Ask about consequence Express interest, Suggest a reason Express concern, Request information Express concern, Ask for confirmation Express concern, Ask about antecedent Express concern, Ask about consequence Express concern, Suggest a reason Sympathize, Request information Sympathize, Ask about antecedent Sympathize, Ask about consequence Offer relief, Suggest a solution De escalate, Suggest a solution Positive Neutral Negative 49.6% 58.3% 20.5% 15.7% 13.0% 4.4% 5.9% 4.6% 4.7% 9.3% 4.1% 3.1% 2.1% 4.5% Request information Ask about consequence Ask about antecedent Ask for confirmation Suggest a solution Suggest a reason Rhetorical, irony n=3940 n=1274 2nd t u r n 4 t h t u r n 31.0% 24.0% 20.1% 14.6% 10.9% 19.1% 7.7% 6.4% 7.5% 5.5% 7.3% 8.8% 5.6% 9.4% 4.6% 5.5% 3.6% 3.3% 1.6% 3.3% Sympathize Amplify excitement Offer relief Amplify pride Amplify joy Support De-escalate Pass judgement Moralize speaker Motivate 2nd t u r n 4 t h t u r n n=915 n=362 ( (cid:68)(cid:12) ( (cid:69)(cid:12) ( (cid:70)(cid:12) Figure 3:",
"These specific strategies demonstrate their effectiveness as almost a half of negative speakers' emotions gets mitigated after the question intervention, while two thirds of positive emotions keep up in the following speaker's turn.",
"The examples of dialogs showing how listeners use questions to treat both positive and negative speakers' sentiments are given in Figure 1. Additional examples are also available in Figure 9 of Appendix D. Figures 3b and 3c demonstrate how ratios of different acts and intents evolve over two successive listeners' responses.",
"Even though the horizon of four dialog turns might be too short to trace all the patterns, a few observations can be made.",
"With increasing depth of the dialog, the overall number of questions decreases, while two types get more prominent: general questions ( Request Information , Express interest ) and questions aiming at suppressing speakers' negative emotions (e.g., Suggest a solution , Offer relief ).",
"It may indicate that listeners employ specific strategies to react to positive speakers' emotions immediately after their disclosure, but in case of negative contexts they tend to ask for extra clarifications in the first place and deliver targeted emotional treatment only in the next turn.",
"As dialogs converge to more neutral exchanges, reducing the need to manage speakers' feelings, the ratio of questions demonstrating listeners' general curiously about the subject increases.",
"Finally, we reflected on the scarcely represented labels.",
"Among acts, Positive and Negative rhetoric and Irony appear least frequently.",
"These labels can be broadly classified as rhetorical questions.",
"They typically serve for self-expression than conversational engagement and, therefore, are less common than other forms of questions (Huang et al., 2017).",
"Moreover, negative rhetorical prompts may harm the conversation quality (Zhang et al., 2018), which could also explain why listeners avoided them in empathetic dialogs.",
"The same reasoning applies to the two infrequent intents, Pass judgement and Moralize speaker .",
"Another surprisingly rare intent is Motivate .",
"We believe that motivation might be difficult to express in the form of a question.",
"Moreover, for people who did not undergo special training, expressing motivation might be more challenging than other intents as it suggests a more thorough approach to solving one's problems.",
"Due to the nature of the ED dataset, some EQT labels are less represented than others.",
"We kept them under consideration as we observed their distinctive role in managing speaker's emotions.",
"Their further analysis is crucial for further identifying and designing effective questioning strategies for empathetic conversations, such as promoting motivational questions and avoiding judgmental ones.",
"Eliciting additional samples for these categories could be possible by applying QBERT classifiers to other datasets capturing social dialogs.",
"Our taxonomy does not cover the phatic role of questions typically occurring during greetings, e.g., What's up? or How's it going?",
"Such questions were very rare in the ED dataset.",
"We chose not to analyze them, since these routine questions are the most superficial (Huang et al., 2017) and unlikely to serve any emotion-regulation function.",
"In the design of our annotation task, we opted for asking the crowd workers to choose a single most specific label from each of the two EQT branches.",
"This was done with the aim of facilitating further analysis of questioning strategies withing the scope of this study.",
"Nevertheless, according to Graesser et al. (1994), most adequate classification schemes in the social sciences allow assigning an observation to multiple rather than only one category.",
"This also applies to our case.",
"For example, for the question Did you go through a breakup recently? both Suggest a reason and Request information can be relevant.",
"Future work can explore the possibilities of using multiple applicable labels in addition to the most specific one.",
"Additional labels can be obtained either by tagging the samples manually or by taking top-N most confident predictions from the classifiers.",
"The results of this paper can facilitate the development of question-asking mechanisms of conversational chatbots.",
"One can employ conditional training (See et al., 2019) to train an end-to-end neural model on a subset of most effective questioning strategies as defined by the co-occurrences of the EQT labels and their mappings with speakers' emotions (cf. Figure 3).",
"To achieve even greater interpretability and controllability, researchers can devise architectures that dynamically model the selection of appropriate questioning strategy before generating a question.",
"The strategy can be selected based on the conversational history and speaker's emotion and further passed into the question generation module.",
"The main purpose of such modeling approaches is to lead an engaging empathetic conversation by raising meaningful questions, which deliver desirable effect on user's emotional state.",
"Moreover, EQT along with QBERT models can be used to label questions originating from other corpora or chat logs and evaluate their effectiveness for regulating speaker's emotions, as described above.",
"In this paper we introduced EQT, an Empathetic Question Taxonomy depicting acts and intents of questions in social dialogs.",
"We used crowdsourcing and automatic methods to tag all listeners' questions from the ED dataset with the EQT labels, which validated their interpretability and produced useful annotations for future research.",
"Further analysis of the dataset with the visualization techniques shed light on various question-asking strategies employed by listeners in response to speakers' emotionally-ridden inputs.",
"We identified several useful question-asking behaviors for favorable emotional regulation.",
"We expect that our findings will enable the development of more controllable and effective question-generation models.",
"In this work, we used Mturk platform to collect annotations for the dataset.",
"Crowd workers on Mturk are known to be underpaid according to western standards, earning a median hourly wage of only $2/h (Kaufmann et al., 2011).",
"At the same time, monetary remuneration is not the only factor defining people's motivation to work on such crowdsourcing platforms (Hara et al., 2018).",
"For example, workers might also engage with HITs to learn new or train existing skills, pass free time, or meet new people.",
"Taking these factors into account, we designed our annotation experiments so that workers received $6/h on average to achieve reasonable trade-off between the number of HITs we could launch with the available budget and the offered payment.",
"While being slightly lower than the US minimum wage ($7.25), it was deemed a fair compensation given that it is three times higher than the reported median wage and workers could have other reasons to complete the tasks than purely monetary reward.",
"Nevertheless, we encourage future works of similar nature to offer higher compensation to the workers if possible."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"method",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"method",
"abstain",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"abstain",
"other",
"method",
"method",
"abstain",
"other",
"other",
"abstain",
"other",
"objective",
"abstain",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"method",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method"
] |
[
"Fine-grained Entity typing (FGET) is the task of assigning a fine-grained type from a hierarchy to entity mentions in the text.",
"As the taxonomy of types evolves continuously, it is desirable for an entity typing system to be able to recognize novel types without additional training.",
"This work proposes a zero-shot entity typing approach that utilizes the type description available from Wikipedia to build a distributed semantic representation of the types.",
"During training, our system learns to align the entity mentions and their corresponding type representations on the known types.",
"At test time, any new type can be incorporated into the system given its Wikipedia descriptions.",
"We evaluate our approach on FIGER, a public benchmark entity tying dataset.",
"Because the existing test set of FIGER covers only a small portion of the fine-grained types, we create a new test set by manually annotating a portion of the noisy training data.",
"Our experiments demonstrate the effectiveness of the proposed method in recognizing novel types that are not present in the training data.",
"Entity Typing assigns a semantic type (e.g., person , location , organization ) to an entity mention in text based on the local context.",
"It is useful for enhancing a variety of Natural Language Process-ing(NLP) tasks such as question answering (Han et al., 2017; Das et al., 2017), relation extraction (Liu et al., 2014; Yaghoobzadeh et al., 2016), and entity linking (Stern et al., 2012).",
"Traditional Named Entity Typing systems consider a small set of coarse types (e.g., person , location , organization ) (Tjong Kim Sang and De Meulder, 2003; Krishnan and Manning, 2006; Chieu and Ng, 2002).",
"Recent studies address larger sets of fine-grained types organized in type hierarchies (e.g., per-son/artist , person/author ) (Ling and Weld, 2012; Corro et al., 2015; Xu and Barbosa, 2018; Murty et al., 2018).",
"Fine-Grained Entity Typing (FGET) is usually approached as a multi-label classification task where an entity mention can be assigned multiple types that usually constitute a path in the hierarchy (Ren et al., 2016).",
"In real-world scenarios, there is a need to deal with ever-growing type taxonomies.",
"New types emerge, and existing types are refined into finer sub-categories.",
"Traditional methods for entity typing assume that the training data contains all possible types, thus require new annotation effort for each new type that emerges.",
"Zero-shot learning (ZSL) , a special kind of transfer learning, allows for new types to be incorporated at the prediction stage without the need for additional annotation and retraining.",
"The main idea behind ZSL is to learn a shared semantic space for representing both the seen and unseen types, which allows the knowledge about how examples link to the seen types to be transferred to unseen types.",
"For fine-grained entity types, we observe that their associated Wikipedia pages often provide a rich description of the types.",
"To capture this, we propose a Description-based Zero-shot Entity Typing (DZET) approach that utilizes the Wikipedia description of each type (e.g., see https://en.wikipedia.org/wiki/ Artist for description of the type person/artist ) to generate a representation of that type.",
"We learn to project the entity-mention representations and the type representations into a shared semantic space, such that the mention is closer to the correct type(s) than the incorrect types.",
"The mid-level type representation derived from the Wikipedia page along with the learned projection function allows the system to recognize new types requiring zero training examples.",
"We investigate different approaches for constructing the type representation based on Wikipedia descriptions.",
"Note that the descriptions can be quite long, often containing many different parts that are useful for recognizing different entity mentions.",
"This motivates us to generate a bag of representations for each type and apply average pooling to aggregate the results.",
"We evaluate the performance of our methods on FIGER, a benchmark dataset for the FNET task, in which types are organized in 2-levels hierarchy.",
"In this work, We focus on testing our method's capability in recognizing unseen fine-grained types ( Level-2 types in this dataset).",
"As the current test set of FIGER contains examples from only a few level-2 types, we created a new test data that covers most of the level-2 types by manually annotating a portion of the noisy training data.",
"Below we summarize our main contributions.",
"We proposed a description-based zero-shot fine-grained entity typing framework that uses Wikipedia descriptions to represent and detect novel types unseen in training.",
"We created a new test set for fine-grained entity typing that provides much better coverage of the level-2 (fine-grained) types compared to the original FIGER test data.",
"We provided experimental evidence of the effectiveness of our approach in comparison with established baselines.",
"Existing work on FGET focuses on performing context-sensitive typing (Gillick et al., 2014; Corro et al., 2015), learning from noisy training data (Abhishek et al., 2017; Ren et al., 2016; Xu and Barbosa, 2018), and exploiting the type hierarchies to improve the learning and inference (Yo-gatama et al., 2015; Murty et al., 2018).",
"More recent studies support even finer granularity (Choi et al., 2018; Murty et al., 2018).",
"However, all the methods above have the limitation that they assume all types are present during training.",
"Zero-Shot Learning has been extensively studied in Computer Vision (CV) (Wang et al., 2019) for tasks such as image classification (Lampert et al., 2014; Zhang and Saligrama, 2015; Socher et al., 2013), object localization (Li et al., 2014, 2017) and image retrieval (Xu et al., 2017; Zhang et al., 2018).",
"A common approach for zero-shot learning in CV is to represent each class (e.g., Zebra) by a set of semantic attributes such as its shape and color.",
"The semantic attributes serve as the intermediate level that connects the visual features with the classes.",
"The model is trained to learn an alignment between the semantic attributes and the visual features where a new class can be recognized using its semantic attributes without the need for any training examples.",
"In contrast, this type of approach tends not to work well for NLP applications as the semantic concepts/classes in NLP are often more complex and cannot be easily described by a set of pre-defined attributes.",
"This explains why the few studies of ZSL for NLP use very different methods to create the transferable intermediate representations.",
"Zero-Shot Learning has been studied for a number of NLP tasks including event extraction (Huang et al., 2018; Lee and Jha, 2018; Srivastava et al., 2018), relation extraction(Liu et al., 2014), Conversational Language Understanding (Lee and Jha, 2018).",
"Specifically, Zero shot entity typing has also been explored, where most of the prior methods adopt the idea of learning a shared semantic space for representing the entities as well as the types, but differ in how they construct the type embeddings.",
"In OTyper (Yuan and Downey, 2018), each type is represented by averaging the embedding of the words constitutes the type label.",
"On the other hand, ProtoLE (Ma et al., 2016) represents each type by a prototype that consists of manually selected entity mentions, where the type embedding is obtained by averaging the prototype mentions' word embeddings.In contrast, our work differs from OTyper and ProtoLE by constructing the type representations based on the Wikipedia descriptions of the types, which not only carry more information about the type but also can be easily adapted to other tasks such as event typing and text classification.",
"Following prior work on fine-grained entity typing, we formulate it as a multi-class multi-label classification problem.",
"Given an entity mention m along with its left textual context c l and right context c r , We learn a classifier that predicts a binary label vector y { 0 , 1 } | L | , where L denotes the set of all types, which forms a hierarchy .",
"Here y ( t ) = 1 if the mention m is of type t , and 0 otherwise.",
"In the case of zero-shot FGET, new types can be introduced and added to L during testing.",
"We will begin by introducing our typing function that is used to compute a score between a given mention and type pair, given their corresponding vector representations.",
"We will discuss how to construct the representations in later sections.",
"Formally, the input to this typing function consists of the representation of the mention, denoted by x R d ; and the representation of a candidate type t , denoted by y t R d .",
"It computes a bi-linear score for the ( x, y t ) pair as follows: f ( x, y t , W ) = x TW y t where W R d d is a compatibility matrix.",
"Following (Yogatama et al., 2015; Ma et al., 2016), we factorize W as a product of two low-rank matrices to reduce the number of parameters.",
"That is W = ATB , where A R h d and B R h d (We use h = 20 ).",
"The scoring function f can be rewritten as: f ( x, y t , A, B ) = ( x, A ) ( y t , B ) = ( Ax ) T By t where ( x, A ) : x Ax and ( y t , B ) : y t By t serve as the projection functions that map x and y t into a shared semantic space.",
"To obtain the representation for entity mentions, we adopt the same neural approach proposed by Shimaoka et al. (2017).",
"Given an entity mention with its context, we compute a vector v m to present the mention m itself, and another vector v c to represent its left and right contexts c l and c r .",
"v m is computed by simply averaging the embedding of the individual words in m .",
"To compute the context embedding v c , we first encode c l and c r using a bidirectional-LSTM.",
"Let c l 1 , ..., c l s and c r 1 , ..., c r s be the word embedding of the left and the right context respectively, where s is the window size (we use s = 10 ), the output layer of the bi-LSTM is denoted as: h l 1 , h l 1 ..., h ls , h ls and h r 1 , h r 1 ..., h rs , h rs .",
"We then compute a scalar attention for each context word using a 2-level feedforward neural network: e ji = tanh ( W e h ji h ji ); a ji = exp ( W a e ji ) Where W e R d h 2 d a , W a R 1 d a , d h is the dimension of LSTM, d a is the attention dimension, j { l, r } .",
"Next, we normalize a ji s such that they sum up to 1.",
"i.e., a ji = a ji (cid:80) si =1 ( a li + a ri ) .",
"Finally the context representation is computed as v c = (cid:80) si =1 ( a li (cid:34) h li h li (cid:35) + a ri (cid:34) h ri h ri (cid:35) ) .",
"The final representation of the entity mention x R d is a concatenation of v m and v c .",
"Let P t be the Wikipedia page that is used to build a representation for type t .",
"Some types do not have a Wikipedia page with a title the same as the type label.",
"In such cases, we manually look for a Wikipedia page of a similar concept.",
"For example, we represent the type living-thing by the Wikipedia page organism .",
"To get a type representation, We started by the simplest possible method which is averaging the embedding of words in the Wikipedia page ( we call this Avg encoder ).",
"Since some words in the Wikipedia page carry more of the type semantic than the other words we also consider a (tf-idf)-weighted version of the Avg encoder.",
"Learning multiple representations.",
"Wikipedia descriptions are often long and contain multiple parts, where different parts may capture different aspects of the type and relate to different mentions.",
"Moreover, sequence models such as LSTM cannot be applied to such long sequences.",
"This motivates us to consider the approach of constructing a bag of multiple representations for each type based on its Wikipedia description.",
"To obtain a bag of representations for type t , we first use a fixed-length window to incrementally break P t into multiple parts, one paragraph at a time.",
"If a paragraph fits in the current Window, it is added.",
"Otherwise, a new window is initiated.",
"Each window of text r ti is then used to generate one representation.",
"To construct an embedding for r ti , we adopt the same Bidirectional LSTM and attention mechanism that are used to embed the mention context.",
"To compute the score for type t given its multiple representations, we compute the score with each individual representation and average them to produce the final score.",
"This is equivalent to applying average pooling to the multiple representations to obtain a single representation due to the bi-linear typing function.",
"Given the training data, we jointly train the representation and the scoring function by minimizing a ranking score.",
"Let Y ( i ) and Y ( i ) denote the set of correct and incorrect types assigned to the example x ( i ) respectively, we learn to score types in Y ( i ) higher than types in Y ( i ) with a multi-label max-margin ranking objective as follows: (cid:88) y Y (cid:88) y Y max (0 , 1 f ( x, y, A, B )+ f ( x, y, A, B )) At testing, both seen and unseen types are mapped to their learned representations, which are then scored for a given input.",
"Given the scores, we conduct a top-down search following the type hierarchy .",
"Starting from the root we recursively find the type with the highest score among the children.",
"Since we focus on the fine-grained types, we stop the search when a leaf type is reached and predict that the mention is positive for all types along the path leading the to leaf type.",
"Datasets.",
"Our experiments use FIGER, a publicly available fine-grained entity typing benchmark dataset in which types are organized into a 2-level hierarchy.",
"The training data consists of sentences sampled from Wikipedia articles and automatically annotated via distant supervision (Ling and Weld, 2012).",
"The test data consisting of manually annotated sentences sampled from news reports.",
"Setting.",
"To evaluate our capability to recognize fine-grained types in zero-shot setting, we assume all second-level types are unseen during training, i.e., we remove all level-2 types from the train and dev data but keep them in the test data.",
"We observe that the FIGER test set covers only a small number of second-level types.",
"This renders it insufficient for testing under the evaluation setting we adopt.",
"Moreover, the training data is noisy since it is automatically annotated by distant supervision.",
"As a result, we cannot just use part of it for testing.",
"To overcome this limitation, We manually annotated a new test set from the noisy training data.",
"We first divide the train set into clean and noisy as suggested in (Ren et al., 2016).",
"Clean examples are those whose types fall on a single path (not necessarily ending with a leaf) in .",
"For instance, the mention with labels person , person/author , and person/doctor is considered as noisy example because the labels form two paths.",
"We then manually verify the correctness of up to 20 examples from the clean training data for every level-2 type.",
"These examples are removed from training and added to the test set.",
"We ignore the types with no clean examples.",
"The statistics of the new and original datasets are reported in Table 1.",
"Baselines.",
"We consider two baselines that employ the same neural architecture but use different type representations.",
"The Label embd baseline use the average of the embedding of the words in the type label as the type representation.",
"ProtoLE baseline uses the prototypes-based label embedding learned by Ma et al. (2016), where each type is represented by the set of the most representative entity mentions.",
"The type embedding is the average of all mentions in the corresponding prototype.",
"Evaluation metrics.",
"Following prior works in FGET, we report Accuracy (StrictF 1 ) , loose Macro-averaged F 1 ( F 1 ma ) and loose Micro-averaged F 1 ( F 1 mi ) (Ling and Weld, 2012).",
"The training and hyperparameter tuning details are described in the Appendices.",
"Results and discussions.",
"Table 2 presents the results on FIGER, evaluated on all types (Over-all), the seen types (Level-1) and the unseen types (Level-2) respectively.",
"From the results, we can see that our description based methods have a particularly strong advantage over baselines on level-2 types.",
"This is consistent with our expectation because Wikipedia descriptions tend to be highly informative for fine-grained types, but less so for coarser types.",
"Among the average encoders, we found that weighting the word embedding by the word tf-idf produces better results than treating the words equivalently.",
"As expected, using LSTM based multi-representation adds a noticeable benefit to our system as it produces the best performance among all tested methods, achieving the best performance for level-2 types and outperforming oth-Approach Overall Level-1 Level-2 Acc F 1 ma F 1 mi F 1 ma F 1 mir F 1 ma F 1 mir Label embd 0.2846 0.5510 0.5603 0.8165 0.8163 0.2854 0.2954 ProtoLE 0.2541 0.4982 0.5093 0.7424 0.7422 0.2541 0.2657 DZET + Avg encoder 0.3141 0.5522 0.5614 0.7903 0.7902 0.3141 0.3247 DZET + Weighted Avg encoder 0.3261 0.5500 0.5607 0.7740 0.7738 0.3261 0.3390 DZET + Multi-rep 0.3806 0.5953 0.6045 0.8100 0.8098 0.3806 0.3926 Table 2: Level-1 , Level-2 and overall performance of the models on FIGER dataset.",
"The effect of description quality.",
"Figure 1 analyzes the relationship between the length of the Wikipedia description as one criterion of the description quality and the performance of Multi-rep method.",
"In particular, we group the types based on the length of their Wikipedia descriptions and provide the five-number summary box plot of the F-scores for each group.",
"It can be readily observed that the performance is low when the description of the type's Wikipedia page is too short ( < 1000 words) or too long ( > 4000 words).",
"Short descriptions are less informative and carry less shared semantics with the type's mentions.",
"On the other hand, overly long descriptions could also be confusing as it might share a significant number of common words with the descriptions of other types.",
"A closer look into the results unveils some exceptions.",
"For example, the F-score on the type /education/educational-degree' is 0.7742 even it has a long description (6845 words).",
"The description of this type is indeed very informative and includes a comprehensive list of the educational degrees awarded all around the world.",
"factor that affects the performance of DZET methods.",
"One factor is the performance on the Level-1 types.",
"Since the inference is performed by following the type hierarchy, if an incorrect type is inferred at level-1, there is no hope to get the correct level-2 type.",
"Another factor is the amount of overlapping between the descriptions of the related types.",
"For instance, Multi-rep produces zero F-score on the types/event/protest' and /lo-cation/province' because they share a lot of common words with the types /event/attack' and /lo-cation/county' respectively, which negatively affects the ability of Multi-rep to distinguish between the related types.",
"Both /event/protest' and /location/province' have a description length between 2000 and 3000 words.",
"To mitigate the effect of the contents overlapping between the highly related type, We plan to apply mention-sensitive attention mechanisms for future work to aggregate the scores in Multi-rep instead of max-pooling.",
"In this paper, we propose a novel zero-shot entity typing approach that uses Wikipedia descriptions to construct type embeddings.",
"Our architecture relies on the type embeddings to make predictions for unseen types.",
"Experimental results demonstrate the effectiveness of the proposed methods.",
"We thank Jordan University of Science and Technology for Ph.D. fellowship (to R. O.)."
] | [
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"method",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"other",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"method",
"method",
"objective",
"objective",
"objective",
"objective",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"other",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"other"
] |
[
"Joint extraction of entities and relations from unstructured texts is a crucial task in information extraction.",
"Recent methods achieve considerable performance but still suffer from some inherent limitations, such as redundancy of relation prediction, poor generalization of span-based extraction and inefficiency.",
"In this paper, we decompose this task into three subtasks, Relation Judgement , Entity Extraction and Subject-object Alignment from a novel perspective and then propose a joint relational triple extraction framework based on P otential R elation and G lobal C orrespondence (PRGC).",
"Specifically, we design a component to predict potential relations, which constrains the following entity extraction to the predicted relation subset rather than all relations; then a relation-specific sequence tagging component is applied to handle the overlapping problem between subjects and objects; finally, a global correspondence component is designed to align the subject and object into a triple with low-complexity.",
"Extensive experiments show that PRGC achieves state-of-the-art performance on public benchmarks with higher efficiency and delivers consistent performance gain on complex scenarios of overlapping triples.",
"1 1 Introduction Identifying entity mentions and their relations which are in the form of a triple (subject, relation, object) from unstructured texts is an important task in information extraction.",
"Some previous works proposed to address the task with pipelined approaches which include two steps: named entity recognition (Tjong Kim Sang and De Meulder, 2003; Ratinov and Roth, 2009) and relation prediction (Zelenko et al., 2002; Bunescu and Mooney, * Corresponding author.",
"2005; Pawar et al., 2017; Wang et al., 2020b).",
"Recent end-to-end methods, which are based on either multi-task learning (Wei et al., 2020) or single-stage framework (Wang et al., 2020a), achieved promising performance and proved their effectiveness, but lacked in-depth study of the task.",
"To better comprehend the task and advance the state of the art, we propose a novel perspective to decompose the task into three subtasks:",
"i) Relation Judgement which aims to identify relations in a sentence,",
"ii) Entity Extraction which aims to extract all subjects and objects in the sentence and",
"iii) Subject-object Alignment which aims to align the subject-object pair into a triple.",
"On the basis, we review two end-to-end methods in Table",
"1. For the multi-task method named CasRel (Wei et al., 2020), the relational triple extraction is performed in two stages which applies object extraction to all relations.",
"Obviously, the way to identify relations is redundant which contains numerous invalid operations, and the span-based extraction scheme which just pays attention to start/end position of an entity leads to poor generalization.",
"Meanwhile, it is restricted to process one subject at a time due to its subject-object alignment mechanism, which is inefficient and difficult to deploy.",
"For the single-stage framework named TPLinker (Wang et al., 2020a), in order to avoid the exposure bias in subject-object alignment, it exploits a rather complicated decoder which leads to sparse label and low convergence rate while the problems of relation redundancy and poor generalization of span-based extraction are still unsolved.",
"To address aforementioned issues, we propose an end-to-end framework which consists of three components: Potential Relation Prediction , Relation-Specific Sequence Tagging and Global Correspondence , which fulfill the three subtasks accordingly as shown in Table",
"1. For Relation Judgement , we predict potential relations by the Potential Relation Prediction component rather than preserve all redundant relations, which reduces computational complexity and achieves better performance, especially when there are many relations in the dataset.",
"2 For Entity Extraction , we use a more robust Relation-Specific Sequence Tagging component (Rel-Spec Sequence Tagging for short) to extract subjects and objects separately, to naturally handle overlapping between subjects and objects.",
"For Subject-object Alignment , unlike TPLinker which uses a relation-based token-pair matrix, we design a relation-independent Global Correspondence matrix to determine whether a specific subject-object pair is valid in a triple.",
"Given a sentence, PRGC first predicts a subset of potential relations and a global matrix which contains the correspondence score between all subjects and objects; then performs sequence tagging to extract subjects and objects for each potential relation in parallel; finally enumerates all predicted entity pairs, which are then pruned by the global correspondence matrix.",
"It is worth to note that the experiment (described in Section 5.2.1) shows that the Potential Relation Prediction component of PRGC is overall beneficial, even though it introduces the exposure bias that is usually mentioned in prior single-stage methods to prove their advantages.",
"Experimental results show that PRGC outperforms the state-of-the-art methods on public benchmarks with higher efficiency and fewer parameters.",
"Detailed experiments on complex scenarios such as various overlapping patterns, which contain the Single Entity Overlap (SEO) , Entity Pair Overlap 2 For example, the WebNLG dataset (Gardent et al., 2017) has hundreds of relations but only seven valid relations for one sentence mostly.",
"(EPO) and Subject Object Overlap (SOO) types 3 show that our method owns consistent advantages.",
"The main contributions of this paper are as follows:",
"1. We tackle the relational triple extraction task from a novel perspective which decomposes the task into three subtasks: Relation Judgement , Entity Extraction and Subject-object Alignment , and previous works are compared on the basis of the proposed paradigm as shown in Table",
"1.",
"2. Following our perspective, we propose a novel end-to-end framework and design three components with respect to the subtasks which greatly alleviate the problems of redundant relation judgement, poor generalization of span-based extraction and inefficient subject-object alignment, respectively.",
"3. We conduct extensive experiments on several public benchmarks, which indicate that our method achieves state-of-the-art performance, especially for complex scenarios of overlapping triples.",
"Further ablation studies and analyses confirm the effectiveness of each component in our model.",
"4. In addition to higher accuracy, experiments show that our method owns significant advantages in complexity, number of parameters, floating point operations (FLOPs) and inference time compared with previous works.",
"Traditionally, relational triple extraction has been studied as two separated tasks: entity extraction and relation prediction.",
"Early works (Zelenko et al., 2002; Chan and Roth, 2011) apply the pipelined methods to perform relation classification between entity pairs after extracting all the entities.",
"To establish the correlation between these two tasks, joint models have attracted much attention.",
"Prior feature-based joint models (Yu and Lam, 2010; Li and Ji, 2014; Miwa and Sasaki, 2014; Ren et al., 2017) require a complicated process of feature engineering and rely on various NLP tools with cumbersome manual operations.",
"Recently, the neural network model which reduces manual involvement occupies the main part of the research.",
"Zheng et al. (2017) proposed a 3 More details about overlapping patterns are shown in Appendix A. Figure 1: The overall structure of PRGC.",
"novel tagging scheme that unified the role of the entity and the relation between entities in the annotations, thus the joint extraction task was converted to a sequence labeling task but it failed to solve the overlapping problems.",
"Bekoulis et al. (2018) proposed to first extract all candidate entities, then predict the relation of every entity pair as a multihead selection problem, which shared parameters but did not decode jointly.",
"Nayak and Ng (2020) employed an encoder-decoder architecture and a pointer network based decoding approach where an entire triple was generated at each time step.",
"To handle the problems mentioned above, Wei et al. (2020) presented a cascade framework, which first identified all possible subjects in a sentence, then for each subject, applied span-based taggers to identify the corresponding objects based on each relation.",
"This method leads to redundancy on relation judgement, and is not robust due to the span-based scheme on entity extraction.",
"Meanwhile, the alignment scheme of subjects and objects limits its parallelization.",
"In order to represent the relation of triple explicitly, Yuan et al. (2020) presented a relation-specific attention to assign different weights to the words in context under each relation, but it applied a naive heuristic nearest neighbor principle to combine the entity pairs which means the nearest subject and object entities will be combined into a triple.",
"This is obviously not in accordance with intuition and fact.",
"Meanwhile, it is also redundant on relation judgement.",
"The state-of-the-art method named TPLinker (Wang et al., 2020a) employs a token pair linking scheme which performs two O ( n 2 ) matrix operations for extracting entities and aligning subjects with objects under each relation of a sentence, causing extreme redundancy on relation judgement and complexity on subject-object alignment, respectively.",
"And it also suffers from the disadvantage of span-based extraction scheme.",
"In this section, we first introduce our perspective of relational triple extraction task with a principled problem definition, then elaborate each component of the PRGC model.",
"An overview illustration of PRGC is shown in Figure",
"1. 3.1 Problem Definition The input is a sentence S = { x 1 , x 2 , ..., x n } with n tokens.",
"The desired outputs are relational triples as T ( S ) = { ( s, r, o ) | s, o E, r R } , where E and R are the entity and relation sets, respectively.",
"In this paper, the problem is decomposed into three subtasks: Relation Judgement For the given sentence S , this subtask predicts potential relations it contains.",
"The output of this task is Y r ( S ) = { r 1 , r 2 , ..., r m | r i R } , where m is the size of potential relation subset.",
"Entity Extraction For the given sentence S and a predicted potential relation r i , this subtask iden-tifies the tag of each token with BIO (i.e., Begin, Inside and Outside) tag scheme (Tjong Kim Sang and Veenstra, 1999; Ratinov and Roth, 2009).",
"Let t j denote the tag.",
"The output of this task is Y e ( S, r i | r i R ) = { t 1 , t 2 , ..., t n } .",
"Subject-object Alignment For the given sentence S , this subtask predicts the correspondence score between the start tokens of subjects and objects.",
"That means only the pair of start tokens of a true triple has a high score, while the other token pairs have a low score.",
"Let M denote the global correspondence matrix.",
"The output of this task is Y s ( S ) = M R n n .",
"The output of PRGC Encoder is Y enc ( S ) = { h 1 , h 2 , ..., h n | h i R d 1 } , where d is the embedding dimension, and n is the number of tokens.",
"We use a pre-trained BERT model 4 (Devlin et al., 2019) to encode the input sentence for a fair comparison, but theoretically it can be extended to other encoders, such as Glove (Pennington et al., 2014) and RoBERTa (Liu et al., 2019).",
"This component is shown as the orange box in Figure 1 where R pot is the potential relations.",
"Different from previous works (Wei et al., 2020; Yuan et al., 2020; Wang et al., 2020a) which redundantly perform entity extraction to every relation, given a sentence, we first predict a subset of potential relations that possibly exist in the sentence, and then the entity extraction only needs to be applied to these potential relations.",
"Given the embedding h R n d of a sentence with n tokens, each element of this component is obtained as: h avg = Avgpool ( h ) R d 1 P rel = ( W r h avg + b r ) (1) where Avgpool is the average pooling operation (Lin et al., 2014), W r R d 1 is a trainable weight and denotes the sigmoid function.",
"We model it as a multi-label binary classification task, and the corresponding relation will be assigned with tag 1 if the probability exceeds a certain threshold 1 or with tag 0 otherwise (as shown in Figure 1), so next we just need to apply the relation-specific sequence tagging to the predicted relations rather than all relations.",
"As shown in Figure 1, we obtain several relation-specific sentence representations of potential relations described in Section 3.3.1.",
"Then, we perform two sequence tagging operations to extract subjects and objects, respectively.",
"The reason why we extract subjects and objects separately is to handle the special overlapping pattern named Subject Object Overlap (SOO) .",
"We can also simplify it to one sequence tagging operation with two types of entities if there are no SOO patterns in the dataset.",
"5 For the sake of simplicity and fairness, we abandon the traditional LSTM-CRF (Panchendrarajan and Amaresan, 2018) network but adopt the simple fully connected neural network.",
"Detailed operations of this component on each token are as follows: P subi,j = Softmax ( W sub ( h i + u j ) + b sub ) P obji,j = Softmax ( W obj ( h i + u j ) + b obj ) (2) where u j R d 1 is the j -th relation representation in a trainable embedding matrix U R d n r where n r is the size of full relation set, h i R d 1 is the encoded representation of the i -th token, and W sub , W obj R d 3 are trainable weights where the size of tag set { B, I, O } is",
"3. 3.3.3 Global Correspondence After sequence tagging, we acquire all possible subjects and objects with respect to a relation of the sentence, then we use a global correspondence matrix to determine the correct pairs of the subjects and objects.",
"It should be noted that the global correspondence matrix can be learned simultaneously with potential relation prediction since it is independent of relations.",
"The detailed process is as follows: first we enumerate all the possible subject-object pairs; then we check the corresponding score in the global matrix for each pair, retain it if the value exceeds a certain threshold 2 or filter it out otherwise.",
"5 For example, the SOO pattern is rare in the NYT (Riedel et al., 2010) dataset.",
"As shown in the green matrix M in Figure 1, given a sentence with n tokens, the shape of global correspondence matrix will be R n n .",
"Each element of this matrix is about the start position of a paired subject and object, which represents the confidence level of a subject-object pair, the higher the value, the higher the confidence level that the pair belongs to a triple.",
"For example, the value about Tom and Jerry at row 1, column 3 will be high if they are in a correct triple such as (Tom, like, Jerry) .",
"The value of each element in the matrix is obtained as follows: P i sub ,j obj = ( W g [ h subi ; h objj ] + b g ) (3) where h subi , h objj R d 1 are the encoded representation of the i -th token and j -th token in the input sentence forming a potential pair of subject and object, W g R 2 d 1 is a trainable weight, and is the sigmoid function.",
"We train the model jointly, optimize the combined objective function during training time and share the parameters of the PRGC encoder.",
"The total loss can be divided into three parts as follows: L rel = 1 n r n r (cid:88) i =1 ( y i log P rel + (1 y i ) log (1 P rel )) (4) L seq = 1 2 n n potr (cid:88) t { sub,obj } n potr (cid:88) j =1 n (cid:88) i =1 y ti,j log P ti,j (5) L global = 1 n 2 n (cid:88) i =1 n (cid:88) j =1 ( y i,j log P i sub ,j obj + (1 y i,j ) log (1 P i sub ,j obj )) (6) where n r is the size of full relation set and n potr is the size of potential relation subset of the sentence.",
"Performance might be better by carefully tuning the weight of each sub-loss, but we just assign equal weights for simplicity (i.e., = = = 1 ).",
"For fair and comprehensive comparison, we follow Yu et al. (2019) and Wang et al. (2020a) to evaluate our model on two public datasets NYT (Riedel et al., 2010) and WebNLG (Gardent et al., 2017), both of which have two versions, respectively.",
"We denote the different versions as NYT*, NYT and WebNLG*, WebNLG.",
"Note that NYT* and WebNLG* annotate the last word of entities, while NYT and WebNLG annotate the whole entity span.",
"The statistics of the datasets are described in Table",
"2. Following Wei et al. (2020), we further characterize the test set w.r.t. the overlapping patterns and the number of triples per sentence.",
"Following prior works mentioned above, an extracted relational triple is regarded as correct only if it is an exact match with ground truth, which means the last word of entities or the whole entity span (depending on the annotation protocol) of both subject and object and the relation are all correct.",
"Meanwhile, we report the standard micro Precision (Prec.), Recall (Rec.) and F1-score for all the baselines.",
"The implementation details are shown in Appendix B. We compare PRGC with eight strong baseline models and the state-of-the-art models CasRel (Wei et al., 2020) and TPLinker (Wang et al., 2020a).",
"All the experimental results of the baseline models are directly taken from Wang et al. (2020a) unless specified.",
"In this section, we present the overall results and the results of complex scenarios, while the results on different subtasks corresponding to different",
"Table 3 shows the results of our model against other baseline methods on four datasets.",
"Our PRGC method outperforms them in respect of almost all evaluation metrics even if compared with the recent strongest baseline (Wang et al., 2020a) which is quite complicated.",
"At the same time, we implement PRGC Random to validate the utility of our PRGC decoder, where all parameters of the encoder BERT are randomly initialized.",
"The performance of PRGC Random demonstrates that our decoder framework (which obtains 7% improvements than CasRel Random ) is still more competitive and robust than others even Model N = 1 N = 2 N = 3 N = 4 N 5 NYT * OrderCopyRE 71.7 72.6 72.5 77.9 45.9 ETL-Span 88.5 82.1 74.7 75.6 76.9 CasRel 88.2 90.3 91.9 94.2 83.7 TPLinker 90.0 92.8 93.1 96.1 90.0 PRGC 91.1 93.0 93.5 95.5 93.0 W e b NLG * OrderCopyRE 63.4 62.2 64.4 57.2 55.7 ETL-Span 82.1 86.5 91.4 89.5 91.1 CasRel 89.3 90.8 94.2 92.4 90.9 TPLinker 88.0 90.1 94.6 93.3 91.6 PRGC 89.9 91.6 95.0 94.8 92.8 Table 5: F1-score ( % ) of sentences with different numbers of triples where N is the number of triples in a sentence.",
"without taking advantage of the pre-trained BERT language model.",
"It is important to note that even though TPLinker BERT has more parameters than CasRel BERT , it only obtains 0.1% improvements on the WebNLG* dataset, and the authors attributed this to problems with the dataset itself.",
"However, our model achieves a 10 improvements than TPLinker on the WebNLG* dataset and a significant promotion on the WebNLG dataset.",
"The reason behind this is that the relation judgement component of our model greatly reduces redundant relations particularly in the versions of WebNLG which contain hundreds of relations.",
"In other words, the reduction in negative relations provides an additional boost compared to the models that perform entity extraction under every relation.",
"Following previous works (Wei et al., 2020; Yuan et al., 2020; Wang et al., 2020a), to verify the capability of our model in handling different overlapping patterns and sentences with different numbers of triples, we conduct further experiments on NYT* and WebNLG* datasets.",
"As shown in Table 4, our model exceeds all the baselines in all overlapping patterns in both datasets except the SOO pattern in the NYT* dataset.",
"Actually, the observation on the latter scenario is not reliable due to the very low percentage of SOO in NYT* (i.e., 45 out of 8,110 as shown in Table 2).",
"As shown in Table 5, the performance of our model is better than others almost in every subset regardless of the number of triples.",
"In general, these two further experiments adequately show the advantages of our model in complex scenarios.",
"efficiency with respect to Complexity , floating point operations ( FLOPs ) (Molchanov et al., 2017), parameters of the decoder ( Params decoder ) and Inference Time 6 of CasRel, TPLinker and PRGC in two datasets which have quite different characteristics in the size of relation set, the average number of relations per sentence and the average number of subjects per sentence.",
"All experiments are conducted with the same hardware configuration.",
"Because the number of subjects in a sentence varies, it is difficult for CasRel to predict objects in a heterogeneous batch, and it is restricted to set batch size to 1 in the official implementation (Wang et al., 2020a).",
"For the sake of fair comparison, we set batch size to 1 and 24 to verify the single-thread decoding speed and parallel processing capability, respectively.",
"The results indicate that the single-thread decoding speed of PRGC is 2 as CasRel and 3 as TPLinker, and our model is significantly better than TPLinker in terms of parallel processing.",
"Note that the model efficiency of CasRel and TPLinker decreases as the size of relation set increases but our model is not affected by the size of relation set, thus PRGC overwhelmingly outperforms both models in terms of all the indicators of efficiency in the WebNLG* dataset.",
"Compared with the state-of-the-art model TPLinker, PRGC is an order of magnitude lower in Complexity and the FLOPs is even 200 times lower, thus PRGC has fewer parameters and obtains 3 speedup in the inference phase while the F1-score is improved by 1.1%.",
"Even though CasRel has lower Complexity and FLOPs in the NYT* dataset, PRGC still has significant advantages and obtains a 5 speedup in the inference time and 3% improvements in F1-score.",
"Meanwhile, Figure 2 proves our advantage in convergence rate.",
"These all confirm the efficiency of 6 The FLOPs and Params decoder are calculated via: https://github.com/sovrasov/flops-counter.pytorch.",
"In this section, we conduct ablation experiments to demonstrate the effectiveness of each component in PRGC with results reported in Table",
"7. Model Prec.",
"We use each relation in the relation set to perform sequence tagging when we remove the Potential Relation Prediction component to avoid the exposure bias.",
"As shown in Table 7, the precision significantly decreases without this component, because the number of predicted triples increases due to relations not presented in the sentences, especially in the WebNLG* dataset where the size of relation set is much bigger and brings tremendous relation redundancy.",
"Meanwhile, with the increase of relation number in sentences, the training and inference time increases three to four times.",
"Through this experiment, the validity of this component that aims to predict a potential relation subset is proved, which is not only beneficial to model accuracy, but also to efficiency.",
"As a comparison for sequence tagging scheme, following Wei et al. (2020) and Wang et al. (2020a), we perform binary classification to detect start",
"and end positions of an entity with the span-based scheme.",
"As shown in Table 7, span-based scheme brings significant decline of performance.",
"Through the case study shown in Figure 3, we observe that the span-based scheme tends to extract long entities and identify the correct subject-object pairs but ignore their relation.",
"That is because the model is inclined to remember the position of an entity rather than understand the underlying semantics.",
"However, the sequence tagging scheme used by PRGC performs well in both cases, and experimental results prove that our tagging scheme is more robust and generalizable.",
"For comparison, we exploit the heuristic nearest neighbor principle to combine the subject-object pairs which was used by Zheng et al. (2017) and Yuan et al. (2020).",
"As shown in Table 7, the precision also significantly decreases without Global Correspondence , because the number of predicted triples increases with many mismatched pairs when the model loses the constraint imposed by this component.",
"This experiment proves that the Global Correspondence component is effective and greatly outperforms the heuristic nearest neighbor principle in the subject-object alignment task.",
"In this paper, we presented a brand-new perspective and introduced a novel joint relational extraction framework based on P otential R elation and G lobal C orrespondence, which greatly alleviates the problems of redundant relation judgement, poor generalization of span-based extraction and inefficient subject-object alignment.",
"Experimental results showed that our model achieved the state-of-the-art performance in the public datasets and successfully handled many complex scenarios with higher efficiency."
] | [
"abstain",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"objective",
"objective",
"objective",
"result",
"method",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"method",
"method",
"method",
"method",
"method",
"other",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"result",
"abstain",
"other",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"objective",
"result"
] |
[
"Existing computational models to understand hate speech typically frame the problem as a simple classification task, bypassing the understanding of hate symbols (e.g., 14 words , kigy ) and their secret connotations.",
"In this paper, we propose a novel task of deciphering hate symbols.",
"To do this, we leverage the Urban Dictionary and collected a new, symbol-rich Twitter corpus of hate speech.",
"We investigate neural network latent context models for deciphering hate symbols.",
"More specifically, we study Sequence-to-Sequence models and show how they are able to crack the ciphers based on context.",
"Furthermore, we propose a novel Variational Decipher and show how it can generalize better to unseen hate symbols in a more challenging testing setting.",
"The statistics are sobering.",
"The Federal Bureau of Investigation of United States 1 reported over 6,000 criminal incidents motivated by bias against race, ethnicity, ancestry, religion, sexual orientation, disability, gender, and gender identity during 2016.",
"The most recent 2016 report shows an alarming 4.6% increase, compared with 2015 data 2 .",
"In addition to these reported cases, thousands of Internet users, including celebrities, are forced out of social media due to abuse, hate speech, cyberbullying, and online threats.",
"While such social media data is abundantly available, the broad question we are asking isWhat can machine learning and natural language processing do to help and prevent online hate speech?",
"The vast quantity of hate speech on social media can be analyzed to study online abuse.",
"In 1 https://www.fbi.gov/news/stories/2016-hate-crime-statistics 2 https://www.fbi.gov/news/stories/2015-hate-crime-statistics-released Figure 1: An example tweet with hate symbols.",
"recent years, there has been a growing trend of developing computational models of hate speech.",
"However, the majority of the prior studies focus solely on modeling hate speech as a binary or multiclass classification task (Djuric et al., 2015; Waseem and Hovy, 2016; Burnap and Williams, 2016; Wulczyn et al., 2017; Pavlopoulos et al., 2017).",
"While developing new features for hate speech detection certainly has merits, we believe that understanding hate speech requires us to design computational models that can decipher hate symbols that are commonly used by hate groups.",
"Figure 1 shows an example usage of hate symbols in an otherwise seemingly harmless tweet that promotes hate.",
"For example, Aryan Warrior is a longstanding racist prison gang based in the Nevada prison system.",
"WPWW is the acronym for White Pride World Wide .",
"The hate symbols 1488 and 2316 are more implicit.",
"14 symbolizes the 14 words: WE MUST SECURE THE EXISTENCE OF OUR PEOPLE AND A FUTURE FOR WHITE CHILDREN , spoken by members of the Order neo-Nazi movement.",
"H is the 8th letter of the alphabet, so 88 = HH = Heil Hitler .",
"Similarly, W is the 23rd and P is the 16th letter of the alphabet, so 2316 = WP = White Power .",
"In this work, we propose the first models for deciphering hate symbols.",
"We investigate two families of neural network approaches: the Sequence-to-Sequence models (Sutskever et al., 2014; Cho et al., 2014) and a novel Variational Decipher based on the Conditional Variational Autoen-coders (Kingma and Welling, 2014; Sohn et al., 2015; Larsen et al., 2016).",
"We show how these neural network models are able to guess the meaning of hate symbols based on context embeddings and even generalize to unseen hate symbols during testing.",
"Our contributions are three-fold: We propose a novel task of learning to decipher hate symbols, which moves beyond the standard formulation of hate speech classification settings.",
"We introduce a new, symbol-rich tweet dataset for developing computational models of hate speech analysis, leveraging the Urban Dictionary.",
"We investigate a sequence-to-sequence neural network model and show how it is able to encode context and crack the hate symbols.",
"We also introduce a novel Variational Decipher, which generalizes better in a more challenging setting.",
"In the next section, we outline related work in text normalization, machine translation, conditional variational autoencoders, and hate speech analysis.",
"In Section 3, we introduce our new dataset for deciphering hate speech.",
"Next, in Section 4, we describe the design of two neural network models for the decipherment problem.",
"Quantitative and qualitative experimental results are presented in Section",
"5. Finally, we conclude in Section",
"6. 2 Related Work 2.1 Text Normalization in Social Media The proposed task is related to text normalization focusing on the problems presented by user-generated content in online sources, such as misspelling, rapidly changing out-of-vocabulary slang, short-forms and acronyms, punctuation errors or omissions, etc.",
"These problems usually appear as out-of-vocabulary words.",
"Extensive research has focused on this task (Beaufort et al., 2010; Liu et al., 2011; Gouws et al., 2011; Han and Baldwin, 2011; Han et al., 2012; Liu et al., 2012; Chrupaa, 2014).",
"However, our task is different from the general text normalization in social media in that instead of the out-of-vocabulary words, we focus on the symbols conveying hateful meaning.",
"These hate symbols can go beyond lexical variants of the vocabulary and thus are more challenging to understand.",
"An extensive body of work has been dedicated to machine translation.",
"Knight et al. (2006) study a number of natural language decipherment problems using unsupervised learning.",
"Ravi and Knight (2011) further frame the task of machine translation as decipherment and tackle it without parallel training data.",
"Machine translation using deep learning (Neural Machine Translation) has been proposed in recent years.",
"Sutskever et al. (2014) and Cho et al. (2014) use Sequence to Sequence (Seq2Seq) learning with Recurrent Neural Networks (RNN).",
"Bahdanau et al. (2015) further improve translation performance using the attention mechanism.",
"Google's Neural Machine Translation System (GNMT) employs a deep attentional LSTM network with residual connections (Wu et al., 2016).",
"Recently, machine translation techniques have been also applied to explain non-standard English expressions (Ni and Wang, 2017).",
"However, our deciphering task is not the same as machine translation in that hate symbols are short and cannot be modeled as language.",
"Our task is more closely related to (Hill et al., 2016) and (Noraset et al., 2017).",
"Hill et al. (2016) propose using neural language embedding models to map the dictionary definitions to the word representations, which is the inverse of our task.",
"Noraset et al. (2017) propose the definition modeling task.",
"However, in their task, for each word to be defined, its pre-trained word embedding is required as an input, which is actually the prior knowledge of the words.",
"However, such kind of prior knowledge is not available in our decipherment task.",
"Therefore, our task is more challenging and is not simply a definition modeling task.",
"Unlike the original Seq2Seq model that directly encodes the input into a latent space, the Variational Autoencoder (VAE) (Kingma and Welling, 2014) approximates the underlying probability distribution of data.",
"VAE has shown promise in multiple generation tasks, such as handwritten digits (Kingma and Welling, 2014; Salimans et al., 2015), faces (Kingma and Welling, 2014; Rezende et al., 2014), and machine translation (Zhang et al., 2016).",
"Conditional Variational Autoencoder (Larsen et al., 2016; Sohn et al., 2015) extends the original VAE framework by incorporating conditions during generation.",
"In addition to image generation, CVAE has been successfully applied to some NLP tasks.",
"For example, Zhao et al. (2017) apply CVAE to dialog generation, while Guu et al. (2018) use CVAE for sentence generation.",
"Closely related to our work are Pavlopoulos et al. (2017); Gao et al. (2017).",
"Pavlopoulos et al. (2017) build an RNN supplemented by an attention mechanism that outperforms the previous state of the art system in user comment moderation (Wulczyn et al., 2017).",
"Gao et al. (2017) propose a weakly-supervised approach that jointly trains a slur learner and a hate speech classifier.",
"While their work contributes to the automation of harmful content detection and the highlighting of suspicious words, our work builds upon these contributions by providing a learning mechanism that deciphers suspicious hate symbols used by communities of hate to bypass automated content moderation systems.",
"We first collect hate symbols and the corresponding definitions from the Urban Dictionary.",
"Each term with one of the following hashtags: #hate, #racism, #racist, #sexism, #sexist, #nazi is selected as a candidate and added to the set S 0 .",
"We collected a total of 1,590 terms.",
"Next, we expand this set by different surface forms using the Urban Dictionary API.",
"For each term s i in set S 0 , we obtain a set of terms R i that have the same meaning as s i but with different surface forms.",
"For example, for the term brown shirt , there are four terms with different surface forms: brown shirt, brown shirts, Brownshirts, brownshirt .",
"Each term in R i has its own definition in Urban Dictionary, but since these terms have exactly the same meaning, we select a definition d i with maximum up-vote/downvote ratio for all the terms in R i .",
"For example, for each term in the set R i = { brown shirt, brown shirts, Brownshirts, brownshirt } , the corresponding definition is Soldiers in Hitler's storm trooper army, SA during the Nazi regime...",
"After expanding, we obtain 2,105 distinct hate symbol terms and their corresponding definitions.",
"On average, each symbol consists of 9.9 characters, 1.5 words.",
"Each definition consists of 96.8 characters, 17.0 words.",
"For each of the hate symbols, we collect all tweets from 2011-01-01 to 2017-12-31 that contain exactly the same surface form of hate symbol in the text.",
"Since we only focus on hate speech, we train an SVM (Cortes and Vapnik, 1995) classi-fier to filter the collected tweets.",
"The SVM model is trained on the dataset published by Waseem and Hovy (2016).",
"Their original dataset contains three labels: Sexism, Racism, and None.",
"Since the SVM model is used to filter the non-hate speech, we merge the instances labeled as sexism and racism, then train the SVM model to do binary classification.",
"After filtering out all the tweets classified as non-hate, our final dataset consists of 18,667 (tweet, hate symbol, definition) tuples.",
"We formulate hate symbol deciphering as the following equation:",
"X is the dataset, ( u, s, d ) is the (tweet, symbol, definition) tuple in the dataset.",
"The inputs are the tweet and the hate symbol in this tweet.",
"The output is the definition of the symbol.",
"Our objective is to maximize the probability of the definition given the (tweet, symbol) pair.",
"This objective function is very similar to that of machine translation.",
"So we first try to tackle it based on the Sequence-to-Sequence model, which is commonly used in machine translation.",
"We implement an RNN Encoder-Decoder model with attention mechanism based on Bahdanau et al. (2015).",
"We use GRU (Cho et al., 2014) for decoding.",
"However, instead of also using GRU for encoding, we found that LSTM (Hochreiter Figure 2: Our Seq2Seq model. u , s w are the word embeddings of the tweet text and hate symbol. s c is the character embedding of the symbol. c u is the encoded tweet and h is the concatenated hidden states. d is the generated text. Detailed explanation is in section 4.1. and Schmidhuber, 1997) performs better on our task.",
"Therefore, our Seq2Seq model uses LSTM encoders and GRU decoders.",
"An overview of our Seq2Seq model is shown in Figure",
"2. The computation process is shown as the following equations: c u , h u = f u ( u ) (2) c sw , h sw = f sw ( s w ) (3) c sc , h sc = f sc ( s c ) (4) u is the word embedding of the tweet text, s w is the word embedding of the hate symbol, s c is the character embedding of the symbol.",
"f u , f sw , and f sc are LSTM functions.",
"c u , c sw , c sc are the outputs of the LSTMs at the last time step and h u , h sw , h sc are the hidden states of the LSTMs at all time steps.",
"We use two RNN encoders to encode the symbol, one encodes at the word level and the other one encodes at the character level.",
"The character-level encoded hate symbol is used to provide the feature of the surface form of the hate symbol while the word-level encoded hate symbol is used to provide the semantic information of the hate symbol.",
"The hidden states of the two RNN encoders for hate symbols are concatenated: h = h sw h sc (5) c u is the vector of encoded tweet text.",
"The tweet text is the context of the hate symbol, which provides additional information during decoding.",
"Therefore, the encoded tweet text it is also fed into the RNN decoder.",
"The detailed attention mechanism and decoding process at time step t are as follows: w t = ( l w ( d t 1 e t 1 )) (6) a t = T (cid:88) i =1 w ti h i (7) b t = ( l c ( d t 1 a t )) (8) o t , e t = k ( c u b t , e t 1 ) (9) p ( d t | u, s ) = ( l o ( o t )) (10) w t is the attention weights at time step t and w ti is the i th weight of w t .",
"d t 1 is the generated word at last time step and e t 1 is the hidden state of the decoder at last time step.",
"h i is the i th time step segment of h .",
"l w , l c , and l o are linear functions.",
"is a nonlinear activation function.",
"k is the GRU function.",
"o t is the output and e t is the hidden state of the GRU.",
"p ( d t | u, s ) is the probability distribution of the vocabulary at time step t .",
"The attention weights w t are computed based on the decoder's hidden state and the generated word at time step t 1 .",
"Then the computed weights are applied to the concatenated hidden states h of encoders.",
"The result a t is the context vector for the decoder at time step t .",
"The context vector and the last generated word are combined by a linear function l c followed by a nonlinear activation function.",
"The result b t is concatenated with the encoded tweet context c u , and then fed into GRU together with the decoder's last hidden state e t 1 .",
"Finally, the probability of each vocabulary word is computed from o t .",
"The Variational Decipher is based on the CVAE model, which is another model that can be used to parametrize the conditional probability p ( d | ( u, s )) in the objective function (Equation 1).",
"Unlike the Seq2Seq model, which directly parametrizes p ( d | ( u, s )) , our variational decipher formulates the task as follows: Obj = (cid:88) ( u,s,d ) X log p ( d | ( u, s )) = (cid:88) ( u,s,d ) X log (cid:90) z p ( d | z ) p ( z | ( u, s )) dz (11) where z is the latent variable.",
"p ( d | ( u, s ) is written as the marginalization of the product of two terms over the latent space.",
"Since the integration Figure 3: The Variational Decipher.",
"over z is intractable, we instead try to maximize the evidence lower bound (ELBO).",
"Our variational lower bound objective is in the following form: Obj = E [log p ( d | z, u, s )] DKL [ p ( z | d , u, s ) || p ( z | u, s )] (12) where p ( d | z, u, s ) is the likelihood, p ( z | d , u, s ) is the posterior, p ( z | u, s ) is the prior, and DKL is the Kullback-Leibler (KL) divergence.",
"We use three neural networks to model these three probability distributions.",
"An overview of our variational decipher is shown in Figure",
"3. We first use four recurrent neural networks to encode the (tweet, symbol, definition) pair in the dataset.",
"Similar to what we do in the Seq2Seq model, there are two encoders for the hate symbol.",
"One is at the word level and the other is at the character level.",
"The encoding of symbols and tweets are exactly the same as in our Seq2Seq model (see Equations 2-4).",
"The difference is that we also need to encode definitions for the Variational Decipher.",
"Here, f d is the LSTM function.",
"x is the output of the LSTM at the last time step and h d is the hidden state of the LSTM at all time steps.",
"The condition vector c is the concatenation of the encoded symbol words, symbol characters, and the tweet text: c = c u c sw c sc (14) We use multi-layer perceptron (MLP) to model the posterior and the prior in the objective function.",
"The posterior network and the prior network have the same structure and both output a probability distribution of latent variable z .",
"The only difference is that the input of the posterior network is the concatenation of the encoded definition x and the condition vector c while the input of the prior network is only the condition vector c .",
"Therefore, the output of the posterior network p = p ( z | d , u, s ) and the output of the prior network p (cid:48) = p ( z | u, s ) .",
"By assuming the latent variable z has a multivariate Gaussian distribution, the actual outputs of the posterior and prior networks are the mean and variance: ( , ) for the posterior network and ( (cid:48) , (cid:48) ) for the prior network.",
"g is the MLP function of the posterior network and g (cid:48) is that of the prior network.",
"During training, the latent variable z is randomly sampled from the Gaussian distribution N ( , ) and fed into the likelihood network.",
"During testing, the posterior network is replaced by the prior network, so z is sampled from N ( (cid:48) , (cid:48) ) .",
"The likelihood network is modeled by an RNN decoder with attention mechanism, very similar to the decoder of our Seq2Seq model.",
"The only difference lies in the input for the GRU.",
"The decoder in our Variational Decipher model is to model the likelihood p ( d | z, u, s ) , which is conditioned on the latent variable, tweet context, and the symbol.",
"Therefore, for the Variational Decipher, the condition vector c and the sampled latent variable z are fed into the decoder.",
"e t 1 is the hidden state of the RNN decoder at the last time step.",
"k is the GRU function.",
"o t is its output and e t is its hidden state.",
"Detailed decoding process and explanations are in section 4.1.",
"According to the objective function in Equation 12, the loss function of the Variational Decipher is as follows: L = LREC + LKL = E z p ( z | d ,u,s ) [ log p ( d | z, u, s )]+ DKL [ p ( z | d , u, s ) || p ( z | u, s )] (18) It consists of two parts.",
"The first part LREC is called reconstruction loss.",
"Optimizing LREC can push the sentences generated by the posterior network and the likelihood network closer to the given definitions.",
"The second part LKL is the KL divergence loss.",
"Optimizing this loss can push the output Gaussian Distributions of the prior network closer to that of the posterior network.",
"This means we teach the prior network to learn the same knowledge learned by the posterior network, such that during testing time, when the referential definition d is no longer available for generating the latent variable z , the prior network can still output a reasonable probability distribution over the latent variable z .",
"The complete training and testing process for the Variational Decipher is shown in Algorithm 1.",
"M is the predefined maximum length of the generated text.",
"BCE refers to the Binary Cross Entropy loss.",
"We use the dataset collected as described in section 3 for training and testing.",
"We randomly selected 2,440 tuples for testing and use the remaining 16,227 tuples for training.",
"Note that there are no overlapping hate symbols between the training dataset U and the testing dataset D .",
"We split the 2,440 tuples of the testing dataset D into two separate parts, D s and D d .",
"D s consists of 1,681 examples and D d consists of 759 examples.",
"In the first testing dataset D s , although each hate symbol does not appear in the training dataset, the corresponding definition appears in the training dataset.",
"In the second testing dataset D d , neither the hate symbols nor the corresponding definitions appear in the training dataset.",
"We do this split because deciphering hate symbols in these two cases has different levels of difficulty.",
"This split criterion means that for each hate symbol in D s , there exists some symbol in the training dataset that has the same meaning but in different surface forms.",
"For example, the hate Algorithm 1 Train & Test Variational Decipher 1: function TRAIN ( U ) 2: randomly initialize network parameters , , ; 3: for epoch = 1 , E do 4: for ( tweet, symbol, definition ) in U do 5: get embeddings u , s w , s c , d ; 6: compute x , c and h with RNN encoders; 7: compute , with the posterior network; 8: compute (cid:48) , (cid:48) with the prior network; 9: compute KL-divergence loss LKL ; 10: sample z = reparameterize ( , ) ; 11: initialize the decoder state e 0 = c ; 12: LREC = 0 ; 13: for t = 1 , M do 14: compute attention weights w t ; 15: compute o t , e t and p ( d t | z, u, s ) ; 16: d t = indmax ( p ( d t | z, u, s )) ; 17: LREC + = BCE ( d t , d t ) ; 18: if d t ==EOS then 19: break; 20: end if 21: end for 22: update , , on L = LREC + LKL ; 23: end for 24: end for 25: end function 26:27: function TEST ( V ) 28: for ( tweet, symbol, definition ) in V do 29: get embeddings u , s w , s c ; 30: compute c and h with RNN encoders; 31: compute (cid:48) , (cid:48) with the prior network; 32: sample z = reparameterize ( (cid:48) , (cid:48) ) ; 33: initialize the decoder state e 0 = c ; 34: for t = 1 , M do 35: compute attention weights w ; 36: compute o t , e t and p ( d t | z, u, s ) ; 37: d t = indmax ( p ( d t | z, u, s )) ; 38: if d t ==EOS then 39: break; 40: end if 41: end for 42: end for 43: end function symbol wigwog and Wig Wog have the same definition but one is in the training dataset, the other is in the first testing dataset.",
"We assume that such types of hate symbols share similar surface forms or similar tweet contexts.",
"Therefore, the first testing dataset D s is to evaluate how well the model captures the semantic similarities among the tweet contexts in different examples or the similarities among different surface forms of a hate symbol.",
"Deciphering the hate symbols in the second testing dataset D d is more challenging.",
"Both the unseen hate symbols and definitions require the model to have the ability to accurately capture the semantic information in the tweet context and then make a reasonable prediction.",
"The second testing dataset D d is used to evaluate how well the model generalizes to completely new hate symbols.",
"For the Seq2Seq model, we use negative log-likelihood loss for training.",
"Both models are optimized using Adam optimizer (Kingma and Ba, 2015).",
"The hyper-parameters of two models are exactly the same.",
"We set the maximum generation length M = 50 .",
"The hidden size of the encoders is 64.",
"The size of the word embedding is 200 and that of character embedding is 100.",
"The word embeddings and character embeddings are randomly initialized.",
"Each model is trained for 50 epochs.",
"We report the deciphering results of two models on three testing datasets D , D s and D d .",
"Quantitative Results: We use equally weighted BLEU score for up to 4-grams (Papineni et al., 2002), ROUGE-L (Lin, 2004) and METEOR (Banerjee and Lavie, 2005) to evaluate the decipherment results.",
"The results are shown in Table 1.",
"Figure 4 shows the BLEU score achieved by the two models on three testing datasets D , D s and D d during the training process.",
"Both our Seq2Seq model and Variational Decipher achieve reasonable BLEU scores on the testing datasets.",
"The Seq2Seq model outperforms the Variational Decipher on D s while Variational Decipher outperforms Seq2Seq on D d .",
"Note that D s is more than twice the size of D d .",
"Therefore, Seq2Seq outperforms Variational Decipher on the entire testing dataset D .",
"The different performance of the two models on D s and D d is more obvious in Figure",
"4. The gap between the performance of the Seq2Seq model on D s and D d is much larger than that between the performance of the Variational Decipher on these two datasets.",
"Human Evaluation: We employed crowd-sourced workers to evaluate the deciphering results of two models.",
"We randomly sampled 100 items of deciphering results from D s and another Figure 4: BLEU scores of two models on the testing dataset D , D s and D d .",
"100 items from D d .",
"Each item composes a choice question and each choice question is assigned to five workers on Amazon Mechanical Turk.",
"In each choice question, the workers are given the hate symbol, the referential definition, the original tweet and two machine-generated plain texts from the Seq2Seq model and Variational Decipher.",
"Workers are asked to select the more reasonable of the two results.",
"In each choice question, the order of the results from the two models is permuted.",
"Ties are permitted for answers.",
"We batch five items in one assignment and insert an artificial item with two identical outputs as a sanity check.",
"The workers who fail to choose tie for that item are rejected from our test.",
"The human evaluation results are shown in Table 2, which coincide with the results in Table 1 and Figure",
"4. Discussion: When deciphering the hate symbols that have the same definitions as in the training dataset, the model can rely more on the surface forms of hate symbols than the tweet context to make a prediction because usually the hate symbols that share the same definitions also have similar surface forms.",
"However, when it comes to the hate symbols with unseen definitions, simply relying on the surface forms cannot lead to a reasonable deciphering result.",
"Instead, the model should learn the relationships between the conFigure 5: Some example errors in the generated results of our Seq2Seq model and Variational Decipher.",
"text information and the definition of the symbol.",
"Therefore, the different performances of two models on the two testing datasets D s and D d indicate that the Seq2Seq model is better at capturing the similarities among different surface forms of a hate symbol, while the Variational Decipher is better at capturing the semantic relationship between the tweet context and the hate symbol.",
"The Sequence-to-Sequence model tries to capture such kinds of relationships by compressing all the context information into a fixed length vector, so its deciphering strategy is actually behavior cloning.",
"On the other hand, the Variational Decipher captures such relationships by explicitly modeling the posterior and likelihood distributions.",
"The modeled distributions provide higher-level semantic information compared to the compressed context, which allows the Variational Decipher to generalize better to the symbols with unseen definitions.",
"This explains why the gap between the performance of the Seq2Seq model on two datasets is larger.",
"Figure 5 shows some example errors of the deciphering results of our Seq2Seq model and Variational Decipher.",
"One problem with the deciphering results is that the generated sentences have poor grammatical structure, as shown in Figure",
"5. This is mainly because the size of our dataset is small, and the models need a much larger corpus to learn the grammar.",
"We anticipate that the generation performance will be improved with a larger dataset.",
"For the hate symbols in D s , the deciphering results are of high quality when the length of referential definitions are relatively short.",
"An example is macaca , a French slur shows in Figure",
"5. The deciphering result of the Seq2Seq model is close to the referential definition.",
"As to the Variational Decipher, although the result is not literally the same as the definition, the meaning is close.",
"closet homosexuals in Figure 5 is another example.",
"However, when the length of the referential definition increases, the performance of both models tends to be unsatisfactory, as the third example confederate flag shows in Figure",
"5. Although there exists the symbol Confederate Flag with the same definition in the training set, both models fail on this example.",
"One possible reason is that the complexity of generating the referential definition grows substantially with the increasing length, so when the tweet context and the symbol itself cannot provide enough information, the generation model cannot learn the relationship between the symbol and its definition.",
"Deciphering hate symbols in D d is much more challenging.",
"Even for humans, deciphering completely new hate symbols is not a simple task.",
"The two examples in Figure 5 show that the models have some ability to capture the semantic similarities.",
"For the symbol niggering , the Variational Decipher generates the word nigger and Seq2Seq model generates black .",
"For Heil Hitler , the Variational Decipher generates leader person and Nazi , while Seq2Seq also generates Nazi .",
"Although these generated words are not in the definition, they still make some sense.",
"We propose a new task of learning to decipher hate symbols and create a symbol-rich tweet dataset.",
"We split the testing dataset into two parts to analyze the characteristics of the Seq2Seq model and the Variational Decipher.",
"The different performance of these two models indicates that the models can be applied to different scenarios of hate symbol deciphering.",
"The Seq2Seq model outperforms the Variational Decipher for deciphering the hate symbols with similar definitions to that in the training dataset.",
"This means the Seq2Seq model can better explain the hate symbols when Twitter users intentionally misspell or abbreviate common slur terms.",
"On the other hand, the Variational Decipher tends to be better at deciphering hate symbols with unseen definitions, so it can be applied to explain newly created hate symbols on Twitter.",
"Although both models show promising deciphering results, there still exists much room for improvement."
] | [
"abstain",
"objective",
"objective",
"objective",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"result",
"objective",
"objective",
"objective",
"objective",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain"
] |
[
"Understanding how social power structures affect the way we interact with one another is of great interest to social scientists who want to answer fundamental questions about human behavior, as well as to computer scientists who want to build automatic methods to infer the social contexts of interactions.",
"In this paper, we employ advancements in extra-propositional semantics extraction within NLP to study how author commitment reflects the social context of an interactions.",
"Specifically, we investigate whether the level of commitment expressed by individuals in an organizational interaction reflects the hierarchical power structures they are part of.",
"We find that subordinates use significantly more instances of non-commitment than superiors.",
"More importantly, we also find that subordinates attribute propositions to other agents more often than superiors do an aspect that has not been studied before.",
"Finally, we show that enriching lexical features with commitment labels captures important distinctions in social meanings.",
"Social power is a difficult concept to define, but is often manifested in how we interact with one another.",
"Understanding these manifestations is important not only to answer fundamental questions in social sciences about power and social interactions, but also to build computational models that can automatically infer social power structures from interactions.",
"The availability and access to large digital repositories of naturally occurring social interactions and the advancements in natural language processing techniques in recent years have enabled researchers to perform large scale studies on linguistic correlates of power, such as words and phrases (Bramsen et al., 2011; Gilbert, 2012), linguistic coordination (Danescu-Niculescu-Mizil et al., 2012), agenda control (Tay-lor et al., 2012), and dialog structure (Prabhakaran and Rambow, 2014).",
"Another area of research that has recently garnered interest within the NLP community is the modeling of author commitment in text.",
"Initial studies in this area were done in processing hedges, uncertainty and lack of commitment, specifically focused on scientific text (Mer-cer et al., 2004; Di Marco et al., 2006; Farkas et al., 2010).",
"More recently, researchers have also looked into capturing author commitment in non-scientific text, e.g., levels of factuality in newswire (Saur and Pustejovsky, 2009), types of commitment of beliefs in a variety of genres including conversational text (Diab et al., 2009; Prabhakaran et al., 2015).",
"These approaches are motivated from an information extraction perspective, for instance in aiding tasks such as knowledge base population.",
"1 However, it has not been studied whether such sophisticated author commitment analysis can go beyond what is expressed in language and reveal the underlying social contexts in which language is exchanged.",
"In this paper, we bring together these two lines of research; we study how power relations correlate with the levels of commitment authors express in interactions.",
"We use the power analysis framework built by Prabhakaran and Rambow (2014) to perform this study, and measure author commitment using the committed belief tagging framework introduced by (Diab et al., 2009) that distinguishes different types of beliefs expressed in text.",
"Our contributions are two-fold statistical analysis of author commitment in relation with power, and enrichment of lexical features with commitment labels to aid in computational prediction of power relations.",
"In the first part, we find that au-1 The BeSt track of the 2017 TAC-KBP evaluation aimed at detecting the belief and sentiment of an entity toward another entity, relation, or event ( http://www.cs. columbia.edu/rambow/best-eval-2017/ ).",
"thor commitment is significantly correlated with the social power relations between their participants subordinates use more instances of noncommitment, a finding that is in line with sociolinguistics studies in this area.",
"We also find that subordinates use significantly more reported beliefs (i.e., attributing beliefs to other agents) than superiors.",
"This is a new finding; to our knowledge, there has not been any sociolinguistics studies investigating this aspect of interaction in relation with power.",
"In the second part, we present novel ways of incorporating the author commitment information into lexical features that can capture important distinctions in word meanings conveyed through the belief contexts in which they occur; distinctions that are lost in a model that conflates all occurrences of a word into one unit.",
"We first describe the related work in computational power analysis and computational modeling of cognitive states in Section 2. In Section 3, we describe the power analysis framework we use.",
"Section 4 formally defines the research questions we are investigating, and describes how we obtain the belief information.",
"In Section 5, we present the statistical analysis of author commitment and power.",
"Section 6 presents the utility of enriching lexical features with belief labels in the context of automatic power prediction.",
"Section 7 concludes the paper and summarizes the results.",
"The notion of belief that we use in this paper (Diab et al., 2009; Prabhakaran et al., 2015) is closely related to the notion of factuality that is captured in FactBank (Saur and Pustejovsky, 2009).",
"They capture three levels of factuality, certain (CT), probable (PB), and possible (PS), as well as the underspecified factuality (Uu).",
"They also record the corresponding polarity values, and the source of the factuality assertions to distinguish between factuality assertions by the author and those by the agents/sources introduced by the author.",
"While FactBank offers a finer granularity, they are annotated on newswire text.",
"Hence, we use the corpus of belief annotations (Prabhakaran et al., 2015) that is obtained on online discussion forums, which is closer to our genre.",
"Automatic hedge/uncertainty detection is a very closely related task to belief detection.",
"The belief tagging framework we use aims to capture the cognitive states of authors, whereas hedges are linguistic expressions that convey one of those cognitive states non-committed beliefs.",
"Automatic hedge/uncertainty detection has generated active research in recent years within the NLP community.",
"Early work in this area focused on detecting speculative language in scientific text (Mer-cer et al., 2004; Di Marco et al., 2006; Kilicoglu and Bergler, 2008).",
"The open evaluation as part of the CoNLL shared task in 2010 to detect uncertainty and hedging in biomedical and Wikipedia text (Farkas et al., 2010) triggered further research on this problem in the general domain (Agarwal and Yu, 2010; Morante et al., 2010; Velldal et al., 2012; Choi et al., 2012).",
"Most of this work was aimed at formal scientific text in English.",
"More recent work has tried to extend this work to other genres (Wei et al., 2013; Sanchez and Vogel, 2015) and languages (Velupillai, 2012; Vincze, 2014), as well as building general purpose hedge lexicons (Prokofieva and Hirschberg, 2014).",
"In our work, we use the lexicons from (Prokofieva and Hirschberg, 2014) to capture hedges in text.",
"Sociolinguists have long studied the association between level of commitment and social contexts (Lakoff, 1973; O'Barr and Atkins, 1980; Hyland, 1998).",
"A majority of this work studies gender differences in the use of hedges, triggered by the influential work by Robin Lakoff (Lakoff, 1973).",
"She argued that women use linguistic strategies such as hedging and hesitations in order to adopt an unassertive communication style, which she terms women's language.",
"While many studies have found evidence to support Lakoff's theory (e.g., (Crosby and Nyquist, 1977; Preisler, 1986; Carli, 1990)), there have also been contradictory findings (e.g., (O'Barr and Atkins, 1980)) that link the difference in the use of hedges to other social factors (e.g., power).",
"O'Barr and Atkins (1980) argue that the use of hedges is linked more to the social positions rather than gender, suggesting to rename women's language to powerless language.",
"In later work, O'Barr (1982) formalized the notion of powerless language, which formed the basis of many sociolinguistics studies on social power and communication.",
"O'Barr (1982) analyzed courtroom interactions and identified hedges and hesitations as some of the linguistic markers of powerless speech.",
"However, there has not been any computational work which has looked into how power relations relate to the level of commitment expressed in text.",
"In this paper, we use com-1058 putational power analysis to perform a large scale data-oriented study on how author commitment in text reveals the underlying power relations.",
"There is a large body of literature in the social sciences that studies power as a social construct (e.g., (French and Raven, 1959; Dahl, 1957; Emerson, 1962; Pfeffer, 1981; Wartenberg, 1990)) and how it relates to the ways people use language in social situations (e.g., (Bales et al., 1951; Bales, 1970; O'Barr, 1982; Van Dijk, 1989; Bourdieu and Thompson, 1991; Ng and Bradac, 1993; Fairclough, 2001; Locher, 2004)).",
"Recent years have seen growing interest in computationally analyzing and detecting power and influence from interactions.",
"Early work in computational power analysis used social network analysis based approaches (Diesner and Carley, 2005; Shetty and Adibi, 2005; Creamer et al., 2009) or email traffic patterns (Namata et al., 2007).",
"Using NLP to deduce social relations from online communication is a relatively new area of active research.",
"Bramsen et al. (2011) and Gilbert (2012) first applied NLP based techniques to predict power relations in Enron emails, approaching this task as a text classification problem using bag of words or ngram features.",
"More recently, our work has used dialog structure features derived from deeper dialog act analysis for the task of power prediction in Enron emails (Prabhakaran and Rambow, 2014; Prabhakaran et al., 2012; Prabhakaran and Rambow, 2013).",
"In this paper, We use the framework of (Prabhakaran and Rambow, 2014), but we analyze a novel aspect of interaction that has not been studied before what level of commitment do the authors express in language.",
"There has also been work on analyzing power in other genres of interactions.",
"Strzalkowski et al. (2010) and Taylor et al. (2012) concentrate on lower-level constructs called Language Uses such as agenda control to predict power in Wikipedia talk pages.",
"Danescu-Niculescu-Mizil et al. (2012) study how social power and linguistic coordination are correlated in Wikipedia interactions as well as Supreme Court hearings.",
"Bracewell et al. (2012) and Swayamdipta and Rambow (2012) try to identify pursuit of power in discussion forums.",
"Biran et al. (2012) and Rosenthal (2014) study the problem of predicting influence in Wikipedia talk pages, blogs, and other online forums.",
"Prabhakaran et al. (2013) study manifestations of power of confidence in presidential debates.",
"The focus of our study is to investigate whether the level of commitment participants express in their contributions in an interaction is related to the power relations they have with other participants, and how it can help in the problem of predicting social power.",
"In this section, we introduce the power analysis framework as well as the data we use in this study.",
"In order to model manifestations of power relations in interactions, we use our interaction analysis framework from (Prabhakaran and Rambow, 2014), where we introduced the problem of predicting organizational power relations between pairs of participants based on single email threads.",
"The problem is formally defined as follows: given an email thread t , and a related interacting participant pair ( p 1 , p 2 ) in the thread, predict whether p 1 is the superior or subordinate of p 2 .",
"In this formulation, a related interacting participant pair (RIPP) is a pair of participants of the thread such that there is at least one message exchanged within the thread between them (in either direction) and that they are hierarchically related with a supe-rior/subordinate relation.",
"We use the same dataset we used in (Prabhakaran and Rambow, 2014), which is a version of the Enron email corpus in which the thread structure of email messages is reconstructed (Yeh and Harnly, 2006), and enriched by Agarwal et al. (2012) with gold organizational power relations, manually determined using information from Enron organizational charts.",
"The corpus captures dominance relations between 13,724 pairs of Enron employees.",
"As in (Prabhakaran and Rambow, 2014), we use these dominance relation tuples to obtain gold labels for the superior or subordinate relationships between pairs of participants.",
"We use the same train-test-dev split as in (Prabhakaran and Rambow, 2014).",
"We summarize the number of threads and related interacting participant pairs in each subset of the data in Table 1. 4 Research Hypotheses Our first objective in this paper is to perform a large scale computational analysis of author com-1059 Description Train Dev Test Email threads 18079 8973 9144 # of RIPPs 7510 3578 3920 Table 1: Data Statistics.",
"Specifically, we want to investigate whether the commitment authors express towards their contributions in organizational interactions is correlated with the power relations they have with other participants.",
"Sociolinguistics studies have found some evidence to suggest that lack of commitment expressed through hedges and hesitations is associated with lower power status (O'Barr, 1982).",
"However, in our study, we go beyond hedge word lists, and analyze different cognitive belief states expressed by authors using a belief tagging framework that takes into account the syntactic contexts within which propositions are expressed.",
"We use the committed belief analysis framework introduced by (Diab et al., 2009; Prabhakaran et al., 2015) to model different levels of beliefs expressed in text.",
"Specifically, in this paper, we use the 4-way belief distinction COMMITTEDBELIEF , NONCOMMITTEDBELIEF , REPORTEDBELIEF , and NONAPPLICABLE introduced in (Prabhakaran et al., 2015).",
"2 (Prabhakaran et al., 2015) presented a corpus of online discussion forums with over 850K words, annotating each propositional head in text with one of the four belief labels.",
"The paper also presented an automatic belief tagger trained on this data, which we use to obtain belief labels in our data.",
"We describe each belief label and our associated hypotheses below.",
"2 We also performed analysis and experiments using an earlier 3-way belief distinction proposed by (Diab et al., 2009), which also yielded similar findings.",
"We do not report the details of those analyses in this paper.",
"As discussed earlier, lack of commitment in one's writing/speech is identified as markers of powerless language.",
"We thus hypothesize: H. 1. Superiors use more instances of committed belief in their messages than subordinates.",
"Non-committed belief (NCB): the writer explicitly identifies the proposition as something which he or she could believe, but he or she happens not to have a strong belief in, for example by using an epistemic modal auxiliary.",
"E.g.: (2)",
"This class captures a more semantic notion of non-commitment than hedges, since the belief annotation attempts to model the underlying meaning rather than language uses, and hence captures other linguistic means of expressing non-committedness.",
"Following ( O'Barr, 1982), we formulate the below hypothesis: H. 2. Subordinates use more instances of non committed belief in their messages than superiors.",
"Note that this label is only applied when the writer's own belief in the proposition is unclear.",
"For instance, if the first example above was Sara knows John will submit the report on-time , the writer is expressing commitment toward the proposition that John will submit the report and it will be labeled as committed belief rather than reported belief.",
"Reported belief captures instances where the writer is in effect limiting his/her commitment towards what is stated by attributing the belief to someone else.",
"So, in line with our hypotheses for non-committed beliefs, we formulate the following hypothesis: H. 3. Subordinates use more instances of reported beliefs in their messages than superiors.",
"Non-belief propositions (NA): the writer expresses some other cognitive attitude toward the proposition, such as desire or intention (4a), or expressly states that he/she has no belief about the proposition (e.g., asking a question (4b)).",
"E.g.: 1060 (4)",
"As per the above definition, requests for information (i.e., questions) and requests for actions are cases where the author is not expressing a belief about the proposition, but rather expressing the desire that some action be done.",
"In the study correlating power with dialog act tags (Prabhakaran and Rambow, 2014), we found that superiors issue significantly more requests than subordinates.",
"Hence, we expect the superiors to have significantly more non belief expressions in their messages, and formulate the following hypothesis: H. 4. Superiors use more instances of non beliefs in their messages than subordinates.",
"NLP tools are imperfect and may produce errors, which poses a problem when using any NLP tool for sociolinguistic analysis.",
"More than the magnitude of error, we believe that whether the error is correlated with the social variable of interest (i.e., power) is more important; e.g., is the belief-tagger more likely to find ROB false-positives in subordinates text?",
"To test whether this is the case, we performed manual belief annotation on around 500 propositional heads in our corpus.",
"Logistic regression test revealed that the belief-tagger is equally likely to make errors (both false-positives and false-negatives, for all four belief-labels) in sentences written by subordinates as superiors (the null hypothesis accepted at p > 0 . 05 for all eight tests).",
"Now that we have set up the analysis framework and research hypotheses, we present the statistical analysis of how superiors and subordinates differ in their relative use of expressions of commitment.",
"For each participant of each pair of related interacting participants in our corpus, we aggregate each of the four belief tags:",
"CBCount : number of propositional heads tagged as Committed Belief (CB) NCBCount : number of propositional heads tagged as Non Committed Belief (NCB) ROBCount : number of propositional heads tagged as Reported Belief (ROB)",
"Our general hypothesis is that power relations do correlate with the level of commitment people express in their messages; i.e., at least one of H.1 -H.4 is true.",
"In this analysis, each participant of the pair ( p 1 , p 2 ) is a data instance.",
"We exclude the instances for which a feature value is undefined.",
"3 In order to test whether superiors and subordinates use different types of beliefs, we used a linear regression based analysis.",
"For each feature, we built a linear regression model predicting the feature value using power (i.e., superior vs. subordinate) as the independent variable.",
"Since verbosity of a participant can be highly correlated with each of these feature values (we found it to be highly correlated with subordinates (Prabhakaran and Rambow, 2014)), we added token count as a control variable to the linear regression.",
"Our linear regression test revealed significant differences in NCB (b=-.095, t(-8.09), p < .001), ROB (b=-.083, t(-7.162), p < .001) and NA (b=.125, t(4.351), p < .001), and no significant difference in CB (b=.007, t(0.227), p=0.821).",
"Figure 1 pictorially demonstrates these results by plotting the difference between the mean values of each commitment feature (here normalized by token count) of superiors vs. subordinates, as a percentage of mean feature value of the corresponding commitment feature for superiors.",
"Dark bars denote statistically significant differences.",
"The results from our statistical analysis validate our original hypothesis that power relations do correlate with the level of commitment people express in their messages.",
"This finding remains statistically significant ( p < 0 . 001 ) even after applying the Bonferroni correction for multiple testing.",
"The results on NCB confirm our hypothesis that subordinates use more non-committedness in their language.",
"Subordinates' messages contain 48% more instances of non-committed belief than superiors' messages, even after normalizing for the length of messages.",
"This is in line with prior sociolinguistics literature suggesting that people with 3 These are instances corresponding to participants who did not send any messages in the thread (some of the pairs in the set of related interacting participant pairs only had one-way communication) or whose messages were empty (e.g., forwarding messages).",
"less power tend to use less commitment, previously measured in terms of hedges.",
"However, in our work, we go beyond hedge dictionaries and use expressions of non-committedness that takes into account the syntactic configurations in which the words appear.",
"Another important finding is in terms of reported belief (ROB).",
"Our results strongly verify the hypothesis H.3 that subordinates use significantly more reported beliefs than superiors.",
"In fact, it obtained the largest magnitude of relative difference (65.3% more) of all features we analyzed.",
"To our knowledge, ours is the first study that analyzed the manifestation of power in authors attributing beliefs to others.",
"Our results are in line with the finding in (Agarwal et al., 2014) that if many more people get mentioned to a person then that person is the boss, because as subordinates report other people's beliefs to superiors, they are also likely to mention them.",
"The finding that superiors use more NAs con-firms our hypothesis H.4.",
"As discussed earlier, this is expected since superiors issue more requests (as found by (Prabhakaran and Rambow, 2014)), the propositional heads of which would be tagged as NA by the belief tagger.",
"However, our hypothesis H.1 is proven false.",
"Being a superior or subordinate does not affect how often their messages contain CB, which suggests that power differences are manifested only in terms of lack of commitment.",
"Our next step is to explore whether we can utilize the hedge and belief labels to improve the performance of an automatic power prediction system.",
"For this purpose, we use our POWERPREDICTOR system (Prabhakaran and Rambow, 2014) that predicts the direction of power between a pair of related interacting participants in an email thread.",
"It uses a variety of linguistic and dialog structural features consisting of verbosity features (message count, message ratio, token count, token ratio, and tokens per message), positional features (initiator, first message position, last message position), thread structure features (number of all recipients and those in the To and CC fields of the email, reply rate, binary features denoting the adding and removing of other participants), dialog act features (request for action, request for information, providing information, and conven-tional), and overt displays of power, and lexical features (lemma ngrams, part-of-speech ngrams, and mixed ngrams, a version of lemma ngrams with open class words replaced with their part-of-speech tags).",
"The feature sets are summarized in Table 2 ((Prabhakaran and Rambow, 2014) has a detailed description of these features).",
"None of the features used in POWERPREDICTOR use information from the parse trees of sentences in the text However, in order to accurately obtain the belief labels, deep dependency parse based features are critical (Prabhakaran et al., 2010).",
"We use the ClearTk wrapper for the Stanford CoreNLP pipeline to obtain the dependency parses of sentences in the email text.",
"To ensure an unified analysis framework, we also use the Stanford CoreNLP for tokenization, part-of-speech tagging, and lemmatization steps, instead of OpenNLP.",
"This change affects our analysis in two ways.",
"First, the source of part-of-speech tags and word lemmas is different from what was presented in the original system, which might affect the performance of the dialog act tagger and overt display of power tagger (DIA and ODP features).",
"Second, we had to exclude 117 threads (0.3%) from the corpus for which the Stanford CoreNLP failed to parse some sentences, resulting in the removal of 11 data points (0.2%), only one of which 1062 was in the test set.",
"On randomly checking, we found that they contained non-parsable text such as dumps of large tables, system logs, or unedited dumps of large legal documents.",
"In order to better interpret how the commitment features help in power prediction, we use a linear kernel SVM in our experiments.",
"Linear kernel SVMs are significantly faster than higher order SVMs, and our preliminary experiments revealed the performance gain by using a higher order SVM to be only marginal.",
"We use the best performing feature set from (Prabhakaran and Rambow, 2014) as a strong baseline for our experiments.",
"This baseline feature set is the combination of thread structure features (THR) and lexical features (LEX).",
"This baseline system obtained an accuracy of 68.8% in the development set.",
"Adding the belief label counts into the SVM directly as features will not yield much performance improvements, as signal in the aggregate counts would be minimal given the effect sizes of differences we find in Section 5. In this section, we investigate a more sophisticated way of incorporating the belief tags into the power prediction framework.",
"Lexical features are very useful for the task of power prediction.",
"However, it is often hard to capture deeper syntactic/semantic contexts of words and phrases using ngram features.",
"We hypothesize that incorporating belief tags into the ngrams will enrich the representation and will help disambiguate different usages of same words/phrases.",
"For example, let us consider two sentences: I need the report by tomorrow vs. If I need the report, I will let you know .",
"The former is likely coming from a person who has power, whereas the latter does not give any such indication.",
"Applying the belief tagger to these two sentences will result in I need(CB) the report ... and If I need(NA) the report ... .",
"Capturing the difference between need(CB) vs. need(NA) will help the machine learning system to make the distinction between these two usages and in turn improve the power prediction performance.",
"In building the ngram features, whenever we encounter a token that is assigned a belief tag, we append the belief tag to the corresponding lemma or part-of-speech tag in the ngram.",
"We call it the Append version of corresponding ngram feature.",
"We summarize the different versions of each type of Feature Configuration in LEXICAL Accuracy LN + PN + MN ( BaseLine ) 68.8 LN CBApnd + PN + MN 69.3 LN + PN CBApnd + MN 68.6 LN + PN + MN CBApnd 69.0 LN CBApnd + PN + MN CBApnd 69.2 Table 3: Power prediction results using different configurations of LEX features.",
"LN CBApnd : word lemma ngram with appended belief tags; e.g., i need(CB) the .",
"PN : the original part-of-speech ngram; e.g., PRP VB DT .",
"PN CBApnd : part-of-speech ngram with appended belief tags; e.g., PRP VB(CB) DT .",
"MN : the original mixed ngram; e.g., i VB the .",
"MN CBApnd : mixed ngram with appended belief tags; e.g., i VB(CB) the .",
"In Table 3, we show the results obtained by incorporating the belief tags in this manner to the LEXICAL features of the original baseline feature set.",
"The first row indicates the baseline results and the following rows show the impact of incorporating belief tags using the Append method.",
"While the Append version of both lemma ngrams and mixed ngrams improved the results, the Append version of part of speech ngrams reduced the results.",
"The combination of best performing version of each type of ngram obtained slightly lower result than using the Append version of word ngram alone, which posted the overall best performance of 69.3%, a significant improvement (p < 0.05) over not using any belief information.",
"We use the approximate randomization test (Yeh, 2000) for testing statistical significance of the improvement.",
"Finally, we verified that our best performing feature sets obtain similar improvements in the unseen test set.",
"The baseline system obtained 70.2% accuracy in the test set.",
"The best performing configuration from Table 3 significantly improved this accuracy to 70.8%.",
"The second best performing configuration of using the Append version of both word and mixed ngrams obtained only a small improvement upon the baseline in the test set.",
"We inspect the feature weights assigned to the LN CBApnd version of lemma ngrams in our best performing model.",
"Each lemma ngram that contains a propositional head (e.g., need ) has four possible LN CBApnd ngram versions: need(CB) , need(NCB) , need(ROB) , and need(NA) .",
"For each lemma ngram, we calculate the standard deviation of weights assigned to different LN CBApnd versions in the learned model as a measure of variation captured by incorporating belief tags into that ngram.",
"4 Figure 2 shows the feature weights of different LN CBApnd versions of twenty five propositional heads whose lemma unigrams had the highest standard deviation.",
"The y-axis lists propositional heads arranged in the decreasing order of standard deviation from bottom to top, while the x-axis denotes the feature weights.",
"The markers distinguish the different LN CBApnd versions of each propositional head square denotes COMMITTEDBE 4 Not all lemma ngrams have all four versions; we calculated standard deviation using the versions present.",
"LIEF , circle denotes NONCOMMITTEDBELIEF , triangle denotes REPORTEDBELIEF , and diamond denotes NONAPPLICABLE .",
"The feature versions with negative weights are associated more with subordinates' messages, whereas those with positive weights are associated more with superiors' messages.",
"Since NCB and ROB versions are rare, they rarely get high weights in the model.",
"We find that by incorporating belief labels into lexical features, we capture important distinctions in social meanings expressed through words that are lost in the regular lemma ngram formulation.",
"For example, propositional heads such as know , need , hold , mean and want are indicators of power when they occur in CB contexts (e.g., i need ... ), whereas their usages in NA contexts (e.g., do you need? , if i need... , etc.) are indicators of lack of power.",
"In contrast, the CB version of attend , let , plan , could , check , discuss , and feel (e.g., i will attend/check/plan ... ) are strongly associated with lack of power, while their NA versions (e.g., can you attend/check/plan? ) are indicators of power.",
"In this paper, we made two major contributions.",
"First, we presented a large-scale data oriented analysis of how social power relations between participants of an interaction correlate with different types of author commitment in terms of their relative usage of hedges and different levels of beliefs committed belief, non-committed belief, reported belief, and non-belief.",
"We found evidence that subordinates use significantly more propositional hedges than superiors, and that superiors and subordinates use significantly different proportions of different types of beliefs in their messages.",
"In particular, subordinates use significantly more non-committed beliefs than superiors.",
"They also report others' beliefs more often than superiors.",
"Second, we investigated different ways of incorporating the belief tag information into the machine learning system that automatically detects the direction of power between pairs of participants in an interaction.",
"We devised a sophisticated way of incorporating this information into the machine learning framework by appending the heads of propositions in lexical features with corresponding belief tags, demonstrating its utility in distinguishing social meanings expressed through the different belief contexts.",
"This study is based on emails from a single corporation, at the beginning of the 21st century.",
"Our findings on the correlation between author commitment and power may be reflective of the work culture that prevailed in that organization at the time when the emails were exchanged.",
"It is important to replicate this study on emails from multiple organizations in order to assess whether these results generalize across board.",
"It is likely that behavior patterns are affected by factors such as ethnic culture (Cox et al., 1991) of the organization, and the kinds of conversations interactants engage in (for instance, co-operative vs. competitive behavior (Hill et al., 1992)).",
"We intend to explore this line of inquiry in future work.",
"This paper is partially based upon work supported by the DARPA DEFT program under a grant to Columbia University; all three co-authors were at Columbia University when portions of this work were performed.",
"The views expressed here are those of the author(s) and do not reflect the offi-cial policy or position of the Department of Defense or the U.S. Government.",
"We thank Dan Ju-rafsky and the anonymous reviewers for their helpful feedback."
] | [
"method",
"method",
"objective",
"result",
"result",
"result",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"objective",
"other",
"abstain",
"result",
"objective",
"objective",
"objective",
"objective",
"method",
"abstain",
"abstain",
"method",
"other",
"other",
"other",
"abstain",
"other",
"objective",
"other",
"other",
"other",
"objective",
"abstain",
"method",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"method",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"result",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"result",
"abstain",
"objective",
"other",
"other",
"other"
] |
[
"We introduce Self-CRItic Pretraining Transformers (SCRIPT) for representation learning of text.",
"The popular masked language modeling (MLM) pretraining methods like BERT replace some tokens with [MASK] and an encoder is trained to recover them, while ELECTRA trains a discriminator to detect replaced tokens proposed by a generator.",
"In contrast, we train a language model as in MLM and further derive a discriminator or critic on top of the encoder without using any additional parameters.",
"That is, the model itself is a critic.",
"SCRIPT combines MLM training and discriminative training for learning rich representations and computeand sample-efficiency.",
"We demonstrate improved sample-efficiency in pretraining and enhanced representations evidenced by improved downstream task performance on GLUE and SQuAD over strong baselines.",
"Also, the self-critic scores can be directly used as pseudo-log-likelihood for efficient scoring.",
"In natural language processing, the landscape of unsupervised learning methods is dominated by masked language modeling (MLM) for bidirectional encoders, such as BERT (Devlin et al., 2018; Yang et al., 2019; Liu et al., 2019; Joshi et al., 2020; Lan et al., 2019; Lewis et al., 2020; Jiao et al., 2019), and causal masking for uni-directional autoregressive decoders (Radford et al., 2018, 2019; Brown et al., 2020; Raffel et al., 2020; Lewis et al., 2019) such as GPT.",
"In MLM an encoder is pre-trained on a generic corpus of text with the hope of learning universal contextual embeddings, which, then, are fine-tuned on a specific down-stream task.",
"Whereas recent developments in causal masking aim to learn a large-scale model once and define the down-stream task as an auto-regressive manner in the form of few-shot evaluation (Brown Research conducted at Salesforce Einstein et al., 2020).",
"In practice, while an universal autoregressive neural backbone model without the need for fine-tuning such as GPT-3 is desirable, the computational complexity at inference time remains an open problem.",
"While the two-stage approach of MLM of smaller models is computationally convenient, the pretraining still incurs a substantial computational cost.",
"Hence, in this work, we focus on learning contextual bi-directional representations with the goal of improving upon sample efficiency.",
"In MLM, the input sequence of tokens is perturbed by randomly masking out a small subset of the identities of tokens (Devlin et al., 2018) or attention scores to those tokens (Yang et al., 2019).",
"Then, the generative model is learned as a denoising auto-encoder (Vincent et al., 2008) which recovers the masked out tokens.",
"While the learned contextual representations achieve remarkable performance on down-stream tasks, the pretraining requires substantial compute.",
"This is mainly due to learning from gradients from the restricted subset of tokens (Clark et al., 2020).",
"In ELECTRA (Clark et al., 2020), the input sequence is perturbed by replacing a subset of tokens by sampled tokens drawn from an auxiliary generator model in the form of a bi-directional encoder, which itself is learned by MLM.",
"Then, the discriminative model is learned by a binary classification task which detects whether a token is unperturbed or has been replaced.",
"This approach enjoys remarkable sample efficiency, which, we believe, stems primarily from reducing the complexity of the classification task from masked token prediction over a large set of classes (i.e., a typical vocabulary size of 30 , 522 classes) to replaced token detection (i.e., 2 classes).",
"Despite it being less efficient, MLM training guides the model to learn rich representations.",
"ELECTRA uses MLM only in learning the auxiliary generator which is discarded after pretraining.",
"We propose to combine MLM and discriminative Figure 1: An overview of SCRIPT.",
"training.",
"The resulting model thus has the rich representations from both MLM and discriminative learning and enjoys compute and sample efficiency from its discriminative learning.",
"Furthermore, instead of learning an auxiliary model in addition to the main encoder, our approach learns a single model which is leveraged to recover masked tokens, propose token replacements, and detect replaced tokens.",
"Hence the encoder itself is also a critic, giving the name of our model, Self-CRItic Pretraining Transformers (SCRIPT).",
"Our experiments show that SCRIPT has improved compute and sample efficiency in pretraining and enhanced representations, hence outperforming strong baselines in fine-tuning on downstream tasks.",
"Contributions.",
"(1) We propose a novel pretraining approach in which the model acts as a self-critic.",
"(2) We demonstrated improved downstream task performance over state-of-the-art under computational constraints.",
"(3) We show the self-critic scores may serve as computationally efficient pseudo-log-likelihood for scoring tasks.",
"We propose a pretraining approach which combines masked token recovery and replaced token detection and does not introduce any additional parameters compared to a regular BERT.",
"In the following sections, we first introduce MLM training which is the same as that in BERT, and then present self-critic training.",
"a portion of tokens (e.g., 15% ) are replaced with a special token [MASK].",
"Let xxx be the sequence after the mask replacement and e ( xxx ) = { e t R d } Tt =1 be the contextual representations computed by the transformer.",
"Let W RV d be the weight matrix of a softmax layer where V is the vocabulary size.",
"The logit or score for token t is s t = W e t RV .",
"Then the log-likelihood of the sequence xxx is, log p ( xxx | xxx ) = T (cid:88) t =1 m t log p ( x t | xxx ) (1) = T (cid:88) t =1 m t log exp( s tv ) (cid:80) Vv (cid:48) =1 exp( s tv (cid:48) ) (2) where m t { 0 , 1 } indicates whether x t is a masked token, [MASK].",
"The loss function for MLM is the negative log-likelihood LMLM ( ) = E p data ( xxx ) log p ( xxx | xxx ) where p data is the empirical data distribution.",
"Besides defining the log-likelihood for MLM training, p ( x t | xxx ) naturally provides a conditional distribution of x t with which we can construct a sampled sequence, xxx = [ x 1 , ..., x T ] , by replacing x t with x t , a token sampled from p ( x t | xxx ) .",
"x t is replaced only if it is masked in xxx (i.e., m t = 1 ).",
"In particular, the replacement token is sampled from a Gumbel-Softmax distribution (Jang et al., 2016).",
"Let = { v } Vv =1 denote p ( x t | xxx ) for notational clarity.",
"Then the probability of sampling the v th token in the vocabulary for x t is, p ( x t | xxx ) = exp[(log v + g v ) / ] (cid:80) Vv (cid:48) =1 exp[(log v (cid:48) + g v (cid:48) ) / ] (3) where { g v (cid:48) } Vv (cid:48) =1 are i.i.d. samples drawn from Gumbel (0 , 1) 1 and is the temperature for sampling.",
"The Gumbel-Softmax distribution approaches one-hot when is small (e.g., = 0 . 1 ) and uniform when is large (e.g., = 10 . 0 ).",
"To apply discriminative training to the model, we derive a discriminator from the existing model and parameters.",
"x t is considered as a positive token if x t = x t , while deemed a negative token if x t (cid:54) = x t .",
"In the MLM training, the last layer defines a V -class classifier with the parameters W .",
"We can augment W with an extra row for computing the score or logit for the negative token class, making it classify V + 1 classes.",
"Denote the augmented weight matrix as W + .",
"Then the classification logits are s + t = W + e t RV +1 .",
"However, it is unnecessary to bring in new parameters and over-parameterization since subtracting an arbitrary function f ( e t ) R from all the logits, s + tv f ( e t ) v = 1 , ..., V + 1 , does not change the softmax output.",
"Thus we fix the last row of W + to all zeros 000 R 1 d .",
"Then we have the logit for the t th token, s + t = W + e t = (cid:40) W e t = s t , for x t { 1 , ..., V } 0 , otherwise.",
"Then the probability of the t th token in xxx being a negative token is, p ( t | xxx ) = 1 (cid:80) Vv (cid:48) =1 exp( s tv (cid:48) ) + 1 (4) while the probability being a positive token is, p ( t + | xxx ) = (cid:80) Vv (cid:48) =1 exp( s tv (cid:48) ) (cid:80) Vv (cid:48) =1 exp( s tv (cid:48) ) + 1 (5) where t and t + indicate x t is a positive token and a negative token, respectively.",
"The generator per se is thus also a critic or discriminator for replaced token detection, giving the name of our model, self-critic.",
"The loss of discriminative training is simply the cross-entropy loss, L Disc ( ) = E p data [ T (cid:88) t =1 1 ( t + ) log p ( t + | xxx )+ 1 ( t ) log p ( t | xxx )] .",
"LMLM ( ) + L Disc ( ) , where is an coefficient determining the strength of discriminative training.",
"The learning of SCRIPT involves two forward passes through a single model, one for MLM with xxx as input, one for discriminative training with xxx as input, and a single backward pass.",
"Figure 1 gives an overview of our model.",
"In the subsequent empirical evaluations, we shall address the following questions: (1) Does the learning as self-critic lead to competitive down-stream task performance?",
"(2) Can we treat the self-critic scores as pseudo-log-likelihoods?",
"(3) Is the sample efficiency improved over state-of-the-art baselines?",
"Hence, we train and evaluate two SCRIPT models small and base with an encoder of the 14M and 110M parameters, respectively.",
"For a direct comparison, the models are trained on the OpenWebText corpus (Gokaslan and Cohen, 2019) with identical pre-processing and optimization procedures as in (Devlin et al., 2018) and (Clark et al., 2020).",
"We refer to the Appendix for details.",
"We evaluate the efficacy of our method on the GLUE natural language understanding benchmark (Wang et al., 2018) and the SQuAD 1.1 and 2.0 question answering dataset (Rajpurkar et al., 2016a).",
"We report mean scores of GLUE tasks over 8 fine-tuning runs with varying random seed.",
"For the evaluation on SQuAD, we re-trained the small models with a sequence length of 512 tokens.",
"Table 1 depicts improved scores across the benchmarks.",
"The task specific GLUE scores are shown in Table 2.",
"In contrast to MLM and ELECTRA pretraining, SCRIPT allows for efficient computation of",
"pseudo-log-likelihood (PLL) for a given sequence xxx ,",
"The PLL allows for the re-ranking of a set of sequences produced by a NMT or ASR system.",
"While language models seem a natural fit for a ranking problem, Salazar et al. (2019) show improved performance when ranking is based on the PLL.",
"However, for a sequence with T tokens, this would require T forward passes as each token has to be masked out.",
"Instead, we propose to recruit (7) as a measure of PLL.",
"Table 3 compares the word error rates (WER) on the LibriSpeech dataset after rescoring.",
"SCRIPT performs competitively while (7) is computed as a single forward pass.",
"Wall-clock time.",
"We compare the number of training steps per second.",
"For direct comparison, we modify the ELECTRA reference code 2 .",
"For TPU v3 with 8 TPU cores, ELECTRA and SCRIPT achieve 31 .",
"3 and 22 .",
"7 training iterations per sec-2 https://github.com/google-research/ electra ond with a mean MXU utilization of 14 .",
"93% and 17 .",
"91% for small models, respectively.",
"GLUE.",
"Figure 2 depicts the improvement in the mean GLUE scores for ELECTRA-small and SCRIPT-small over the number of training steps.",
"While the wall-clock time per computational training step of SCRIPT is increased over ELECTRA, the sample-efficiency of SCRIPT in terms of the mean GLUE score over training steps is higher.",
"Hence, the efficiency of both methods may be comparable, however, SCRIPT achieves improved overall performance on GLUE.",
"This work presents SCRIPT for representation learning.",
"It is a transformer encoder like BERT.",
"In pretraining, it recovers masked tokens, proposes negative samples, and acts as a self-critic, discriminating between sampled and original tokens.",
"The joint MLM and discriminative learning improves sample efficiency in pretraining and enhances representation learning, leading to improved performance over strong baselines on various downstream tasks.",
"It also provides an efficient way for computing pseudo-log-likelihood for scoring tasks and achieves competitive performance."
] | [
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"abstain",
"result",
"abstain",
"objective",
"objective",
"result",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"other",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain"
] |
[
"Abstract Different Open Information Extraction (OIE) tasks require different types of information, so the OIE field requires strong adaptability of OIE algorithms to meet different task requirements.",
"This paper discusses the adaptability problem in existing OIE systems and designs a new adaptable and efficient OIE system OIE@OIA as a solution.",
"OIE@OIA follows the methodology of Open Information eXpression (OIX): parsing a sentence to an Open Information Annotation (OIA) Graph and then adapting the OIA graph to different OIE tasks with simple rules.",
"As the core of our OIE@OIA system, we implement an end-to-end OIA generator by annotating a dataset (we make it open available) and designing an efficient learning algorithm for the complex OIA graph.",
"We easily adapt the OIE@OIA system to accomplish three popular OIE tasks.",
"The experimental show that our OIE@OIA achieves new SOTA performances on these tasks, showing the great adaptability of our OIE@OIA system.",
"Furthermore, compared to other end-to-end OIE baselines that need millions of samples for training, our OIE@OIA needs much fewer training samples (12K), showing a significant advantage in terms of efficiency.",
"Open Information Extraction (OIE) techniques are gradually attracting more and more attention (Christensen et al., 2011; Mausam et al., 2012; Corro and Gemulla, 2013; Angeli et al., 2015; Bhutani et al., 2016; Cui et al., 2018; Roy et al., 2019; Zhan and Zhao, 2020) as they build a bridge between language to knowledge.",
"OIE tasks are generally designed for its information extraction requirements, which vary from verbal relations between entities (Banko et al., 2007; Etzioni et al., 2004), nominal attributes (Yahya et al., 2014; Pal and Mausam, 2016; Saha et al., 2017), and adverbial components (e.g., time) (Stanovsky et al., 2018).",
"Even for the same type of information, the required facts may still differ.",
"Table 1 shows the required form of facts of three popular OIE tasks: OIE2016 (Stanovsky and Dagan, 2016), Re-OIE2016 (Zhan and Zhao, 2020) and CaRB (Bhardwaj et al., 2019).",
"The diversity of requirements is an essential feature in the field of OIE, which leads to the urgent need for OIE algorithms with strong adaptability.",
"The adaptability problem in the OIE field has not been well addressed.",
"There are two primary methodologies to obtain an OIE system: the rule-based approach and the end-to-end learning-based approach.",
"The rule-based approaches (Christensen et al., 2011; Corro and Gemulla, 2013; Angeli et al., 2015; Bhutani et al., 2016; Gashteovski et al., 2017) use human-written or bootstrap-learned rules to convert linguistic structures of sentences into target facts.",
"The end-to-end learning approaches (Stanovsky et al., 2018; Sun et al., 2018b,a; Roy et al., 2019; Ro et al., 2020; Kolluru et al., 2020; Liu et al., 2020) first build a dataset containing <sentence, facts> pairs and then use end-to-end learning to train a neural network as the OIE system.",
"However, these methodologies develop a specific machine for every single task.",
"When the requirements change, one must rewrite the complex rule system or re-annotate the data and then retrain the model.",
"These methodologies fail to meet the need for strong adaptability in the OIE field.",
"Recently, a concept called Open Information eXpression (OIX) was proposed by Sun et al. (2020) to address the adaptability issue of OIE algorithms.",
"The idea of OIX is to introduce an intermediate layer between the language and OIE, which can express the sentence without information loss and be easily adapted to various OIE tasks.",
"Sun et al. (2020) proposed a standard, called Open Information Annotation (OIA), to implement OIX.",
"The OIA standard defines an annotation criterion of 6213 Fact OIE2016 Re-OIE2016 CaRB <[is], Ms. Lee, headmaster> (cid:55) (cid:51) (cid:51) <is responsible, Ms. Lee, for this> (cid:51) (cid:51) (cid:51) <told, Ms. Lee, Lily, she is responsible for this> (cid:51) (cid:55) (cid:51) <told, Ms. Lee, Jimmy, she is responsible for this> (cid:51) (cid:55) (cid:51) <told, Ms. Lee, Lily and Jimmy, she is responsible for this> (cid:55) (cid:51) (cid:55) Table 1: Facts defined in different OIE tasks, for the expression Ms. Lee, the headmaster, told Lily and Jimmy she is responsible for this.",
"natural language sentences, which aims to express all information of a sentence into a Predicate-Function-Argument structure, represented by a single-rooted DAG graph.",
"In addition, they implemented a rule-based OIA system that generates OIA graphs from Universal Dependency graphs.",
"Following the methodology of OIX/OIA, we implement an adaptable and efficient OIE system -OIE@OIA.",
"The framework of OIE@OIA shown in Figure 1 has two components.",
"The first one is the OIA generator, which converts a sentence into an OIA graph.",
"We annotate a large OIA dataset (containing 12,543 training samples, 2,002 development samples, and 2,077 testing samples), develop an efficient learning algorithm to learn and inference the OIA graph, and finally build an end-to-end OIA graph learner.",
"The second component is a group of adaptors, one for each OIE task.",
"We show three popular OIE tasks focused on in this paper in the figure.",
"Furthermore, one can write new adaptors for new OIE tasks, which are very simple, as shown in the following sections.",
"The OIE@OIA system achieves the SOTA (or comparable) performance on three OIE tasks: OIE2016, Re-OIE2016, and CaRB, verifying the adaptability of our OIE@OIA system.",
"Furthermore, our OIE@OIA only needs 12K samples to train, whereas existing end-to-end OIE methods typically need millions of training samples.",
"This verifies the efficiency of our OIE@OIA system.",
"An adaptable and efficient OIE system OIE@OIA, achieving the SOTA performance on different OIE tasks.",
"The first end-to-end OIA learning pipeline built on a large human-labeled OIA dataset (we make it open available) and an efficient algorithm for the OIA graph; Figure 1: The framework of OIE@OIA.",
"In this section, we introduce the OIE@OIA framework, compare it with existing methodologies, and finally discuss its capability and limitation.",
"The core of the OIE@OIA framework is an end-to-end learned OIA generator.",
"To build the generator, we annotate a large dataset using sentences from English-EWT (version 2.4, containing 16K sentences ) 1 and design a neural-based algorithm for learning the OIA graph from sentences.",
"The data annotation procedure and the learning procedure are detailed in Section 3. The standard OIA graph described in Sun et al. (2020) only defined three node types: Constant, Predicate, and Function.",
"However, users may need more fine-grained type information about nodes, especially the type of predicates, to filter wanted facts.",
"For instance, in building OIE systems based on OIA, we need to recognize verbal nodes, which act as the relationship descriptions of the OIE facts.",
"In addition, in event logic graph construction (Ding et al., 2019), logical predicates are essential.",
"With this consideration, we update the type field to a fine-grained version for nodes in the OIA graph according to its semantic function.",
"The fine-grained node types are listed in Table 2. 1 https://lindat.mff.cuni.cz/ repository/xmlui/handle/11234/1-2988 6214 0 Ms. Lee ((0, 1),) Noun 1 the headmaster ((3, 4),) Noun appos 2 told (6,) Verbal pred.arg.1 3",
"Lee, the headmaster\", \"Lily and Jimmy\", \"she is responsible for this\"> and <\"is responsible\", \"she\", \"for this\"> . VerbalPiP : In the OIA graph, for each verbal node with a prepositional child, we merge the child into the verbal node and apply the Verbal rule on the resultant OIA graph. This produces <\"is responsible for\", \"she\", \"this\"> for the sample in Figure 2 instead of <\"is responsible\", \"she\", \"for this\"> . Appos(be) : All edges like <A, appos, B> in OIA graphs are extracted to form the facts <be, A, B> . CoordSep : The fact tuples with coordination arguments are separated into multiple fact tuples, e.g., <told, ~, Lily and Jimmy, ~> is separated into <told, ~, Lily, ~> and <told, ~, Jimmy, ~> . Then, we implement the adaptors for the three tasks using the combinations of the above rules: Adaptor@OIE 2016 = Verbal + CoordSep ; Adaptor@Re-OIE 2016 = Verbal + Appos ([is]); Adaptor@CaRB = VerbalPiP + Appos (is) + CoordSep . 2.3 Comparisons In this section, we compare OIE@OIA with existing OIE methodologies and show the difference in Table 3. The traditional rule-based OIE methods are generally based on a sentence annotation structure, such as dependency graphs or constituency graphs, and apply rules to convert the annotation structure into the OIE facts. This 6215 Methodology Rule-Based OIE End-to-End OIE OIE@OIA Sentence Annotation Dependency / Constituency OIA OIE Sensitiveness of Annotation No Yes Rule Complexity High Low Training Data 1 Million 12K Training Efficiency Low High Adaptation to New Task Rewrite Rules Relabel and Retrain New Adaptor Adaptation Cost May Be High May Be High Low Table 3: Comparisons among Rule-based OIE, End-to-End OIE , and OIE@OIA. pipeline is similar to our OIE@OIA, where OIE@OIA uses the OIA graph as the sentence annotation structure. However, the differences between the traditional annotation and OIA make the substantial differences between the rule-based OIE and our OIE@OIA. Since the traditional annotation dependency and constituency is not designed for the OIE task, one needs to write a complex rule system (or construct by bootstrapping) to convert those annotation structures into the OIE facts. However, for OIE@OIA, since the OIA is designed for OIE tasks, one can accomplish the conversion with straightforward rules, just like those described in the above section. As for the end-to-end OIE algorithms, existing methods are generally built on OpenIE4 dataset (Zhan and Zhao, 2020), which contains about 1 million training samples. However, differences may exist in the forms between the training dataset and the target task, so the performance may drop when adapting to new tasks. To limit the differences, one may need many new training samples and retrain the model. Our OIE@OIA can adapt to an extensive range of new tasks by introducing new adaptors, so it has much better adaptability. In addition, our experimental studies show that OIE@OIA needs only 12K samples for training to achieve new SOTA OIE performance, so it is much more efficient to implement an OIE system. 2.4 Capability and Limitations Besides the type of facts defined in the three popular tasks, one can extract more types of facts from OIA graphs. For example, since OIA graphs are naturally hierarchical, one can easily extract the nested facts, which can implement the target task of NestIE (Bhutani et al., 2016). One can also extract logical relationships between facts since OIA identifies the logical predicate nodes. 
We believe OIA can act as a general platform for various OIE tasks and provide better facts for downstream tasks based on OIE (Ding et al., 2019; Zhang et al., 2020). However, the current version of the OIE@OIA pipeline does not separate the compound noun phrase, making it unable to extract nominal relationships between different nominals within a compound noun phrase (Yahya et al., 2014). This is because current OIA graphs are phrase-level graphs and take noun phrases as single nodes. As an example, the president of America \" will form a single node in our OIA graph, and it is not able to identify the relationship between the president \" and America \" based on the graph.",
"We left this problem as one of our future work.",
"Converting a sentence into the OIA graph is the central operation of the OIE@OIA framework.",
"We build an OIA dataset using active-learning-powered human labeling to implement such an operation.",
"Then, we learn equivalent variants of the OIA graphs Word-OIA graphs and convert them back to OIA-graphs, which overcomes the difficulty of learning the structures of the OIA graphs.",
"We annotated sentences of English-EWT (version 2.4) for OIA.",
"The annotation mainly follows the OIA graph definition given by Sun et al. (2020) but with some confusing or special cases being clarified.",
"In addition, we introduce more detailed type information for the node.",
"The obtained dataset contains 12,543 training samples, 2,002 development samples, and 2,077 testing samples.",
"Each sample is a sentence-graph pair.",
"On average, a graph has 7.74 nodes and 6.95 edges, and a node comprises 1.98 words.",
"We make the updated 6216 annotation standard and dataset open available 3 .",
"Auxiliary Annotation System.",
"To improve data annotation efficiency, we generate an initial OIA graph for each input sentence using the existing rule-based OIA system (Sun et al., 2020).",
"For node types initialization, we align the phrases with the POS tags in English-EWT v2.4 and assign heuristic types based on the POS tags of the head words.",
"Then we develop an annotation tool for the annotator to modify the adapted graphs with ease.",
"Active Learning.",
"The samples in the dev set and test set are all human-labeled.",
"For the training set, we first randomly labeled 2,000 samples, then trained a model using the proposed learning method (described in 3.4) and started the active-learning procedure.",
"The data labeling order of unlabeled samples was determined by the difference between the rule-generated results and the predicted results.",
"We labeled 200 samples in each active learning iteration and stopped the iteration when the performance of the trained did not improve on the dev set.",
"As a result, we manually annotated about 74% of the training data and treated the rule-generated results as the true labels of the rest 26% training data.",
"Quality Control.",
"The data annotation was done by three postgraduate/doctoral students of linguistics.",
"Two annotators first label each sample.",
"If there is a disagreement, the third annotator will be involved for discussion and voting.",
"The initial agreement ratio between the two annotators is about 80%, and the final agreement ratio after the discussion (no vote needed) is higher than 93%.",
"The annotation of the rest 7% data is obtained by voting.",
"The node of OIA graphs consists of a sequence of symbols, placeholders, and words.",
"We call such a graph Generalized Phrase Graph ( GGPG = ( P , S ) , where P is the set of generalized nodes and S is the edge set).",
"Directly learning the graph is difficult due to the very large decision space caused by the complexity of OIA nodes.",
"The decision space is large even in the simplest situation that each node consists of consecutive words (a span in the sentence without any symbol or placeholder outside the sentence).",
"Since the target number of nodes is unknown, the number of candidate node 3 https://github.com/sunbelbd/ Open-Information-eXpression sets to be considered is exponential to the number of words in the sentence.",
"Due to this large decision space, it is very difficult to learn a good candidate set of nodes for OIA graphs as the first step task in a stage-wise approach.",
"If one prefers the end-to-end approach to reduce the error propagation between tasks in each stage by jointly leaning the nodes and edges, the decision space will be even much larger.",
"Fortunately, we have the following proposition connecting the Generalized Phrase Graph with Word Graph , where Word Graph is a graph with each graph node corresponding to one and only one word of the sentence:",
"Proposition.",
"For any Generalized Phrase Graph GGPG = ( P , S ) , there is a one-to-one corresponding Word Graph GW = ( W , S (cid:48) ) in the sense that GGPG and GW can convert to each other without loss of information, where the labels of S (cid:48) are independent to word nodes W .",
"Proof.",
"We split each generalized phrase node into a chain of nodes of symbols, placeholders, or words, connected in order with the edge next , and connect all the edges from parents/children to the first node of the chain.",
"In this new graph, each word is a single node.",
"We replace each type of path between two words (or the virtual root) containing only symbols/placeholders into a single edge with a correspondingly designed edge label and remove the original path.",
"The resulting graph is a valid word graph.",
"A constructive procedure to convert a General Phrase Graph into the Word Graph is shown in Appendix A. The OIA graphs are special cases of GPG, and the properties of the OIA graphs can make the procedure much simpler.",
"We discuss these details in Appendix B. With the above procedure, we can convert a complex OIA graph into an equivalent simple Word-OIA graph (as shown in Figure 3), which is a single-rooted DAG where each node is a word in the original sentence.",
"Each node in the Word-OIA graph has one category attribute type (by sharing the type of OIA node it belongs to) and two boolean attributes arg_whether and missing_be (described in Appendix B).",
"(a) The OIA graph for the sentence A. not missing_be=True sure next_word dependents upper_parataxis do parataxis It pred.arg.1 n't next_word I pred.arg.1 know next_word",
"of the Word OIA graph.",
"The node number of the Word-OIA graph is fixed N , so we only need to learn the possible N ( N 1) edges, and thus the learning complexity is significantly reduced.",
"The structure of the Word-OIA graph is similar to that of the dependency graph so that the semantic dependency graph learning procedure can be applied to the learning of the Word-OIA graph.",
"We build our learning procedure based on pretrained BERT models (Devlin et al., 2019).",
"Given a sentence S = [ w 1 , , w N ] , we generate the representation r i of each word by: R = BERT ( S ) , where R = [ r 1 , , r N ] .",
"Then we learn the properties of nodes and the graph structures using these representations.",
"Node Attribute Learning.",
"For node attribute prediction, we build a one-layer MLP classifier above r i to learn each attribute a k for word w i : p ( node ) ki = p ( a k | w i ) = Softmax ( MLP ( node ) k ( r i )) .",
"The loss of node attribute prediction is defined as: L node = 1 N (cid:88) k (cid:88) i (cid:96) CE (cid:16) p ( node ) ki , y ( node ) ki (cid:17) , where y ( node ) ki denotes the target attribute value of the corresponding node of w i and (cid:96) CE denotes the cross-entropy loss function.",
"Edge Learning.",
"Following the protocol of Dozat and Manning (2018), we divide the structure learning into two steps: given two nodes, we firstly determine if there is an edge between them; if so, we then determine the type of the edge.",
"It avoids introducing an empty edge type in the second step, which will overwhelm the other edge types.",
"For the two-step learning, we use the biaffine-based graph learning approach (Dozat and Manning, 2017, 2018).",
"In the first step, for two words w i and w j , we learn the representations of each word as the start and end node of an edge: h ( es ) i = MLP ( es ) ( r i ) , h ( ee ) i = MLP ( ee ) ( r i ) .",
"being an edge e ij between w i and w j is: s ( edge ) ij = Bia ( edge ) ( h ( es ) i , h ( ee ) j ) , p ( edge ) ij = (cid:16) MLP ( edge ) ( s ( edge ) ij ) (cid:17) .",
"Here, the i -th dimension of Bia ( x 1 , x 2 ) is: x (cid:62) 1 U i x 2 + w (cid:62) i ( x 1 x 2 ) + b i , where U i , w i , b i denotes a trainable matrix, vector, and scalar, respectively.",
"The corresponding graph topology loss on S is defined as follows: L topo = 1 N 2 (cid:88) ij (cid:96) CE (cid:16) p ( edge ) ij , y ( edge ) ij (cid:17) .",
"Then we learn the label of each edge.",
"We learn the representations of start-node and end-node of an edge by: h ( ls ) i = MLP ( ls ) ( r i ) , h ( le ) j = MLP ( le ) ( r j ) .",
"The probability of the label l ij is: s ( label ) ij = Bia ( label ) ( h ( ls ) i , h ( le ) j ) , p ( label ) ij = Softmax (cid:16) MLP ( label ) ( s ( label ) ij ) (cid:17) , and the edge prediction loss on S is defined as: L label = (cid:80) ij I ( y ( edge ) ij = 1) (cid:96) CE (cid:16) p ( label ) ij , y ( label ) ij (cid:17) (cid:80) ij I ( y ( edge ) ij = 1) , where I ( ) is the indicator function.",
"is the concatenation operator.",
"Multi-Task Learning.",
"We learn the Word-OIA graph in a multi-task style, that is, optimize a linear combination of the losses: L = L topo + L label + (1 ) L node .",
"In the inference phase, we accomplish the following steps to generate the Word-OIA graph: Node Attribute Prediction.",
"For each node w i , its type t i is predicted by: y ki = arg max p ( node ) ki .",
"However, because the label prediction may be incorrect, constructing the graph using the above predictions may result in an invalid graph (dis-connected graph, edge conflicted, etc.).",
"So in practice, we develop a greedy search strategy to construct the graph step-by-step while maintaining the validness of the graph all the time.",
"First, we select the edge with the highest value of p ( label ) ij for all edges with p ( edge ) ij > 0 .",
"5 .",
"Then, we identify conflicted edges with the selected edges and set their corresponding values in p ( label ) ij to zero.",
"The above process iterates several times until all edge types are set.",
"In addition, the resulting graph may consist of several disconnected sub-graphs.",
"In this case, we iteratively select the edge with the highest p ( label ) ij to connect it to the sub-graph to which the predicted root belongs.",
"It guarantees that the generated Word-OIA graph is valid.",
"For a predicted Word-OIA graph, we reverse the OIA to the Word-OIA procedure to obtain the OIA graph.",
"Specifically, we first collect nodes in Word-OIA graph chained by next_word and related arcs ( prev_arg , pos_arg ) to form the nodes in OIA graph.",
"Then we identify the special structure such as edge upper_parataxis and add special node like Parataxis and Missing to OIA graph.",
"We add (be) to the node span if missing_be is true.",
"The Whether node is added as the parent of current node if arg_whether is true.",
"Last, we connect the nodes using the learned arc labels in Word-OIA.",
"The type of the phrase node in the OIA graph is set as the majority type of its constituted words in the corresponding Word-OIA graph.",
"The experiment is conducted on the PaddlePaddle deep learning platform 4 , and the pre-trained BERT model is provided by the PaddleNLP project 5 .",
"Following Che et al. (2020), the hidden size of MLP ( edge ) and MLP ( label ) is set to 500 and 100, respectively.",
"The hidden sizes of MLPs used in node attribute predictions are set to 500.",
"The model is trained with the classifier's dropout rate being set to 0.1 and Adam optimizer with a learning rate of 10 5 .",
"and in loss function are searched using 4 https://www.paddlepaddle.org.cn 5 https://github.com/PaddlePaddle/ PaddleNLP 6219 Level Metric Performance Node type Acc 0.951 arg_whether Acc 0.998 missing_be Acc 0.998 Edge P/R 0.847 / 0.851 Graph Acc 0.528 Table 4: Performance of Word-OIA prediction, where P/R means Precision/Recall, Acc means Accuracy.",
"Evaluation Metric.",
"In this experiment, the evaluation metrics for all measurements are based on exact match, that is, score 1 if exactly the same, otherwise 0.",
"For nodes, we compare their node expressions.",
"For a Word-OIA node, the node expression is the word index in the sentence; for an OIA node, the node expression is the phrase based on the word indexes it contains.",
"For edges, we compare the triplets of <start_node_expression, edge_label, end_node_expression> .",
"For graphs, we test whether the two graphs' node sets and edge sets are exactly the same.",
"The node precision and recall of Word-OIA are always 1.0 since the nodes correspond to the words in the sentence.",
"The performance of node attribute prediction is illustrated by the upper part of Table 4. The precision/recall of edge prediction is shown in the middle part of Table 4. We also report the accuracy of graph structure in the last part of Table 4. 4.2 Performance on OIA We report the performances of the nodes, edges, and the whole graph structure of the recovered OIA graphs in the lower half of the Table 5. Note that the graph-structure accuracy of the OIA graph is slightly lower than that of the Word-OIA graph.",
"It is because the graph structure of the OIA graph is related to the node attributes arg_whether and missing_be .",
"Since there are tiny proportions of bad cases in predicting these two attributes, the graph-structure accuracy of the OIA graph is lower.",
"As a comparison, we report the performances of the rule baseline (Sun et al., 2020) in upper part of Table 5. We can see that the proposed method achieves significant improvement over the rule baseline, e.g., improving the graph structure Generator Level Metric Performance Rule Node P/R 0.796 / 0.855 Rule Edge P/R 0.530 / 0.585 Rule Graph Acc 0.373 Neural Node P/R 0.893 / 0.877 Neural Edge P/R 0.709 / 0.688 Neural Graph Acc 0.525 Table 5: Performance of OIA graph prediction.",
"accuracy by 15.2%.",
"We believe the learning-based approach solves several problems in the rule-based approach: 1) limitation of expressiveness of Universal Dependency, 2) mistakes in Universal Dependency and Enhanced++ (Schuster and Manning, 2016) parsers, and 3) failure of rules to cover the complex combination of situations.",
"We also evaluate the accuracy of the node type of the recovered OIA graphs.",
"Among the 89.3% correctly identified nodes, 96.4% of them are labeled with the correct node types by the voting of nodes in the Word-OIA graphs.",
"We reviewed the error cases to find the limitations in the graph generation process.",
"We find several common issues that lead to incorrect graphs.",
"Long Tail Words and Edges.",
"About 33% errors are caused by the long tail words and edge labels.",
"The out-of-vocabulary words lead to problematical word representations.",
"Rarely used edge labels (e.g., discourse) tend to be predicted as other frequent edge labels.",
"Granularity Issue.",
"The granularity or boundary of the node may be controversial in prediction results.",
"For example, the phrase turn out to be' can be a predicate, but it also makes sense that turn out' and to be' form a nested relation.",
"Such granularity issues cause about 25% errors in both predicate node and constant node.",
"Mining idioms can further clarify the boundary of expression with refined strategy.",
"This belongs to our future work.",
"Ambiguous Modification.",
"A prepositional phrase can be used to modify either a noun or a verb in its context.",
"This ambiguity leads to about 17% of graph-level errors.",
"For example, in sentence I love all the roles in this play , prepositional phrase in this play is the modifier of all the roles .",
"Thus, they should be in the same noun node of the ground-truth OIA graph.",
"However, in the predicted graph, 6220 Systems OIE2016 Re-OIE2016 CaRB AUC F1 AUC F1 AUC F1 R u l e B a s e d Stanford (Angeli et al., 2015) 7.9 13.6 11.5 16.7 13.4 23.0 OLLIE (Mausam et al., 2012) 20.2 38.6 31.3 49.5 22.4 41.1 NestIE (Bhutani et al., 2016) 37.7 43.8 32.1 42.2 19.4 31.1 PropS (Stanovsky and Dagan, 2016) 32.0 54.4 43.3 64.2 12.6 31.9 MinIE (Gashteovski et al., 2017) 35.0 41.0 45.5 47.8 28.1 41.3 ClausIE (Corro and Gemulla, 2013) 36.4 58.0 46.4 64.2 22.4 44.9 OIE@RuleOIA 37.3 54.6 63.3 75.0 32.4 45.6 L e a r n i n g B a s e d OpenIE4 (Christensen et al., 2011) 40.8 58.8 50.9 68.3 27.2 48.8 BIO (Zhan and Zhao, 2020) 46.2 68.6 71.9 80.3 27.7 46.6 SpanOIE (Zhan and Zhao, 2020) 48.9 68.7 65.8 77.0 30.0 49.4 BiLSTM + BERT (Ro et al., 2020) -72.1 81.3 30.6 50.6 Multi2OIE (BERT) (Ro et al., 2020) -74.6 83.9 32.6 52.3 OIE@OIA (BERT) 54.3 71.6 76.9 85.3 33.9 51.1 Table 6: OIE performance on OIE2016, Re-OIE2016 and CaRB.",
"in this play may become the sub-tree of verb love .",
"It is a common error in parsing tasks that a model may incorrectly choose the headword for a modifier.",
"We believe better language modeling will ameliorate this problem.",
"We further evaluate the applicability of OIA as an intermediate layer between language and OIE, i.e., OIE@OIA, on three tasks: OIE2016, Re-OIE2016, CaRB.",
"We compared OIE@OIA with six rule-based systems and five learning-based systems.",
"Evaluation Metric.",
"The performances of baseline systems on OIE2016 are from Zhan and Zhao (2020) while that on Re-OIE2016 and CaRB are from Ro et al. (2020), except that NestIE (Bhutani et al., 2016) is implemented using the code from the author.",
"We evaluate the extraction results of the proposed methods, OIE based on rule OIA and NestIE with metric AUC and optimal F1 following the setting of the released codes 6 .",
"The performance of our OIE@OIA system is shown in Table 6. We observe that OIE@OIA achieves better performance than most existing baselines, including learning-based methods trained on millions of samples.",
"This result justifies the effectiveness of OIE@OIA for OIE.",
"We think the reason for this phenomenon is three-fold: Firstly , compared with the annotation of OIA, the annotation of a single OIE task is 6 www.github.com/zhanjunlang/Span_OIE , and www.github.com/youngbin-ro/Multi2OIE sparse.",
"Given a sentence, it will only annotate phrases with interesting relationships.",
"For the other phrases and relationships, it will not annotate.",
"In contrast, in OIA, all the phrases and relationships will be annotated.",
"So based on a single sample, the annotation in OIA is much more informative than that in any single OIE task.",
"Secondly , if treating the recognition of a type of facts as a task, learning of OIA can be seen as a multi-task learning scenario, and different tasks can augment each other.",
"Thirdly , OIA is designed as an intermediate layer between language and OIE.",
"Thus, during the standard design of OIA, it has considered the compatibility between OIA and OIE, which makes it easier to adapt OIA to different OIE tasks.",
"We introduce an adaptable and efficient Open Information Extract system called OIE@OIA.",
"To implement OIE@OIA, we annotate and release an OIA dataset containing about 16K sentences, design an efficient learning algorithm, and build an easy-to-implement rule system to adapt OIA graphs to different OIE tasks.",
"Empirical studies on three popular OIE tasks show that our OIE@OIA system can achieve new SOTA performances on these tasks, using only 12K training sentences.",
"It verifies the great advantage of our system in both effectiveness and efficiency over the previous learning-based baselines, which usually require millions of training samples to achieve comparable performance."
] | [
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"objective",
"abstain",
"result",
"abstain",
"result",
"method",
"abstain",
"objective",
"objective",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"other",
"other",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"result"
] |
[
"Coarse-grained linguistic information, such as named entities or phrases, facilitates adequately representation learning in pre-training.",
"Previous works mainly focus on extending the objective of BERT's Masked Language Modeling (MLM) from masking individual tokens to contiguous sequences of n tokens.",
"We argue that such contiguously masking method neglects to model the intra-dependencies and inter-relation of coarse-grained linguistic information.",
"As an alternative, we propose ERNIE-Gram, an explicitly n -gram masking method to enhance the integration of coarse-grained information into pre-training.",
"In ERNIE-Gram, n -grams are masked and predicted directly using explicit n -gram identities rather than contiguous sequences of n tokens.",
"Furthermore, ERNIE-Gram employs a generator model to sample plausible n -gram identities as optional n-gram masks and predict them in both coarse-grained and fine-grained manners to enable comprehensive n -gram prediction and relation modeling.",
"We pre-train ERNIE-Gram on English and Chinese text corpora and fine-tune on 19 downstream tasks.",
"Experimental results show that ERNIE-Gram outperforms previous pre-training models like XLNet and RoBERTa by a large margin, and achieves comparable results with state-of-the-art methods.",
"The source codes and pre-trained models have been released at https://github.",
"com/PaddlePaddle/ERNIE .",
"Pre-trained on large-scaled text corpora and fine-tuned on downstream tasks, self-supervised representation models (Radford et al., 2018; Devlin et al., 2019; Liu et al., 2019; Yang et al., 2019; Lan et al., 2020; Clark et al., 2020) have achieved remarkable improvements in natural language understanding (NLU).",
"As one of the most prominent pre-trained models, BERT (Devlin et al., 2019) employs masked language modeling (MLM) to learn representations by masking individual tokens and predicting them based on their bidirectional context.",
"However, BERT's MLM focuses on the representations of fine-grained text units (e.g. words or subwords in English and characters in Chinese), rarely considering the coarse-grained linguistic information (e.g. named entities or phrases in English and words in Chinese) thus incurring inadequate representation learning.",
"Many efforts have been devoted to integrate coarse-grained semantic information by independently masking and predicting contiguous sequences of n tokens, namely n -grams, such as named entities, phrases (Sun et al., 2019b), whole words (Cui et al., 2019) and text spans (Joshi et al., 2020).",
"We argue that such contiguously masking strategies are less effective and reliable since the prediction of tokens in masked n-grams are independent of each other, which neglects the intra-dependencies of n-grams.",
"Specifically, given a masked n -gram w = { x 1 , ..., x n } , x V F , we maximize p ( w ) = (cid:81) ni =1 p ( x i | c ) for n -gram learning, where models learn to recover w in a huge and sparse prediction space F R |V F | n .",
"Note that VF is the fine-grained vocabulary 1 and c is the context.",
"We propose ERNIE-Gram, an explicitly n gram masked language modeling method in which n -grams are masked with single [MASK] symbols, and predicted directly using explicit n gram identities rather than sequences of tokens, as depicted in Figure",
"1(b).",
"The models learn to predict n -gram w in a small and dense prediction space N R |V N | , where VN indicates a prior n gram lexicon 2 and normally |V N | (cid:28) |V F | n .",
"To 1 VF contains 30 K BPE codes in BERT (Devlin et al., 2019) and 50 K subword units in RoBERTa (Liu et al., 2019).",
"2 VN contains 300 K n -grams, where n [2 , 4) in this paper, n -grams are extracted in word-level before tokenization.",
"i i and explicit n -grams respectively.",
"Note that the weights of fine-grained classifier ( WF R h |V F | ) and N-gram classifier ( WN R h |(cid:104)V F , VN (cid:105)| ) are not used in fine-tuning stage, where h is the hidden size and L is the layers.",
"learn the semantic of n -grams more adequately, we adopt a comprehensive n -gram prediction mechanism, simultaneously predicting masked n grams in coarse-grained (explicit n -gram identities) and fine-grained (contained token identities) manners with well-designed attention mask metrics, as shown in Figure",
"1(c).",
"In addition, to model the semantic relationships between n -grams directly, we introduce an enhanced n -gram relation modeling mechanism, masking n -grams with plausible n -grams identities sampled from a generator model, and then recovering them to the original n -grams with the pair relation between plausible and original n -grams.",
"Inspired by ELECTRA (Clark et al., 2020), we incorporate the replaced token detection objective to distinguish original n -grams from plausible ones, which enhances the interactions between explicit n -grams and fine-grained contextual tokens.",
"In this paper, we pre-train ERNIE-Gram on both base-scale and large-scale text corpora (16GB and 160GB respectively) under comparable pre-training setting.",
"Then we fine-tune ERNIE-Gram on 13 English NLU tasks and 6 Chinese NLU tasks.",
"Experimental results show that ERNIE-Gram consistently outperforms previous well-performed pre-training models on various benchmarks by a large margin.",
"Self-supervised pre-training has been used to learn contextualized sentence representations though various training objectives.",
"GPT (Radford et al., 2018) employs unidirectional language modeling (LM) to exploit large-scale corpora.",
"BERT (Devlin et al., 2019) proposes masked language modeling (MLM) to learn bidirectional representations efficiently, which is a representative objective for pre-training and has numerous extensions such as RoBERTa (Liu et al., 2019), UNILM (Dong et al., 2019) and ALBERT (Lan et al., 2020).",
"XLNet (Yang et al., 2019) adopts permutation language modeling (PLM) to model the dependencies among predicted tokens.",
"ELECTRA introduces replaced token detection (RTD) objective to learn all tokens for more compute-efficient pre-training.",
"Coarse-grained linguistic information is indispensable for adequate representation learning.",
"There are lots of studies that implicitly integrate coarse-grained information by extending BERT's MLM to contiguously masking and predicting contiguous sequences of tokens.",
"For example, ERNIE (Sun et al., 2019b) masks named entities and phrases to enhance contextual representations, BERT-wwm (Cui et al., 2019) masks whole Chinese words to achieve better Chinese representations, SpanBERT (Joshi et al., 2020) masks contiguous spans to improve the performance on span selection tasks.",
"A few studies attempt to inject the coarse-grained n -gram representations into fine-grained contextualized representations explicitly, such as ZEN (Diao et al., 2020) and AMBERT (Zhang and Li, 2020), in which additional transformer encoders and computations for explicit n -gram representations are incorporated into both pre-training and fine-tuning.",
"Li et al., 2019 demonstrate that explicit n -gram representations are not sufficiently reliable for NLP tasks because of n -gram data sparsity and the ubiquity of out-of-vocabulary n -grams.",
"Differently, we only incorporate n -gram information by leveraging auxiliary n -gram classifier and embedding weights in pre-training, which will be completely removed during fine-tuning, so our method maintains the same parameters and computations as BERT.",
"In this section, we present the detailed implementation of ERNIE-Gram, including n -gram lexicon",
"VN extraction in Section 3.5, explicitly n -gram MLM pre-training objective in Section 3.2, comprehensive n -gram prediction and relation modeling mechanisms in Section 3.3 and 3.4.",
"To inject n -gram information into pre-training, many works (Sun et al., 2019b; Cui et al., 2019; Joshi et al., 2020) extend BERT's masked language modeling (MLM) from masking individual tokens to contiguous sequences of n tokens.",
"Contiguously MLM.",
"Given input sequence x = { x 1 , ..., x | x | } , x VF and n -gram starting boundaries b = { b 1 , ..., b | b | } , let z = { z 1 , ..., z | b | 1 } to be the sequence of n -grams, where z i = x [ b i : b i +1 ) , MLM samples 15% of starting boundaries from b to mask n -grams, donating M as the indexes of sampled starting boundaries, z M as the contiguously masked tokens, z \\M as the sequence after masking.",
"As shown in Figure",
"1(a), b = { 1 , 2 , 4 , 5 , 6 , 7 } , z = { x 1 , x [2:4) , x 4 , x 5 , x 6 } , M = { 2 , 4 } , z M = { x [2:4) , x 5 } , and z \\M = { x 1 , [M] , [M] , x 4 , [M] , x 6 } .",
"Contiguously MLM is performed by minimizing the negative likelihood: log p ( z M | z \\M ) = (cid:88) z z M (cid:88) x z log p ( x | z \\M ) .",
"Different from contiguously MLM, we employ explicit n -gram identities as pre-training targets to reduce the prediction space for n -grams.",
"To be specific, let y = { y 1 , ..., y | b | 1 } , y (cid:104)V F , VN (cid:105) to be the sequence of explicit n -gram identities, y M to be the target n -gram identities, and z \\M to be the sequence after explicitly masking n -grams.",
"As shown in Figure",
"1(b), y M = { y 2 , y 4 } , and z \\M = { x 1 , [M] , x 4 , [M] , x 6 } .",
"For masked n gram x [2:4) , the prediction space is significantly reduced from R |V F | 2 to R |(cid:104)V F , VN (cid:105)| .",
"Explicitly n gram MLM is performed by minimizing the negative likelihood: log p ( y M | z \\M ) = (cid:88) y y M log p ( y | z \\M ) .",
"We propose to simultaneously predict n -grams in fine-grained and coarse-grained manners corresponding to single mask symbol [M] , which helps to extract comprehensive n -gram semantics,",
"as shown in Figure",
"1(c).",
"Comprehensive n -gram MLM is performed by minimizing the joint negative likelihood: log p ( y M , z M | z \\M ) = (cid:88) y y M log p ( y | z \\M ) (cid:88) z z M (cid:88) x z log p ( x | z \\M ) .",
"(3) where the predictions of explicit n -gram y M and fine-grained tokens x M are conditioned on the same context sequence z \\M .",
"In detail, to predict all tokens contained in a n gram from single [M] other than a consecutive sequence of [M] , we adopt distinctive mask symbols [Mi] , i =1 , ..., n to aggregate contextualized representations for predicting the i -th token in n gram.",
"As shown in Figure",
"2(a), along with the same position as y 2 , symbols [M1] and [M2] are used as queries ( Q ) to aggregate representations from z \\M ( K ) for the predictions of x 2 and x 3 , where Q and K donate the query and key in self-attention operation (Vaswani et al., 2017).",
"As shown in Figure",
"2(b), the self-attention mask metric M controls what context a token can attend to by modifying the attention weight WA = softmax ( QKT d k + M ) , M is assigned as: M ij = (cid:40) 0 , allow to attend , prevent from attending (4) We argue that the length information of n -grams is detrimental to the representations learning, because it will arbitrarily prune a number of semantiL' PredictionDistribution Sampler Sampler the proposed a overhaul of the prime minister public official president 0.05 0.01 0.16 accountant ... not second to nothing short of nothing less than 0.09 0.08 0.26 ... the public official proposed completely a overhaul of the tax system .",
"cally related n -grams with different lengths during predicting.",
"From this viewpoint, for the predictions of n -gram { x 2 , x 3 } , 1) we prevent context z \\M from attending to { [M1] , [M2] } and 2) prevent { [M1] , [M2] } from attending to each other, so that the length information of n -grams will not be leaked in pre-training, as displayed in Figure",
"2(b).",
"To explicitly learn the semantic relationships between n -grams, we jointly pre-train a small generator model (cid:48) with explicitly n -gram MLM objective to sample plausible n -gram identities.",
"Then we employ the generated identities to preform masking and train the standard model to predict the original n -grams from fake ones in coarse-grained and fine-grained manners, as shown in Figure",
"3(a), which is efficient to model the pair relationships between similar n -grams.",
"The generator model (cid:48) will not be used during fine-tuning, where the hidden size H (cid:48) of (cid:48) has H (cid:48) = H / 3 empirically.",
"As shown in Figure",
"3(b), n -grams of different length can be sampled to mask original n -grams according to the prediction distributions of (cid:48) , which is more flexible and sufficient for constructing n gram pairs than previous synonym masking methods (Cui et al., 2020) that require synonyms and original words to be of the same length.",
"Note that our method needs a large embedding layer E R |(cid:104)V F , VN (cid:105)| h to obtain n -gram vectors in pretraining.",
"To keep the number of parameters consistent with that of vanilla BERT, we remove the auxiliary embedding weights of n -grams during fine-tuning ( E E (cid:48) R |V F | h ).",
"Specifically, let y (cid:48)M to be the generated n -gram identities, z (cid:48)M to be the sequence masked by y (cid:48)M , where y (cid:48)M = { y (cid:48) 2 , y (cid:48) 4 } , and z (cid:48)\\M = { x 1 , y (cid:48) 2 , x 4 , y (cid:48) 4 , x 6 } in Figure",
"3(a).",
"The pre-training objective is to jointly minimize the negative likelihood of (cid:48) and : log p (cid:48) ( y M | z \\M ) log p ( y M , z M | z (cid:48)\\M ) .",
"Moreover, we incorporate the replaced token detection objective (RTD) to further distinguish fake n -grams from the mix-grained context z (cid:48)\\M for interactions among explicit n -grams and fine-grained contextual tokens, as shown in the right part of Figure",
"3(a).",
"Formally, we donate z \\M to be the sequence after replacing masked n -grams with target n -gram identities y M , the RTD objective is performed by minimizing the negative likelihood: log p (cid:0) 1 ( z (cid:48)\\M = z \\M ) | z (cid:48)\\M (cid:1) = | z \\M | (cid:88) t =1 log p (cid:0) 1 ( z (cid:48)\\M ,t = z \\M ,t ) | z (cid:48)\\M , t (cid:1) .",
"(6) As the example depicted in Figure",
"N-gram Lexicon Extraction.",
"We employ T-test to extract semantically-complete n -grams statistically from unlabeled text corpora X (Xiao et al., 2020), as described in Algorithm 1.",
"We first calcu-Algorithm 1 N-gram Extraction with T-test Input: Large-scale text corpora X for pre-training Output: Semantic n -gram lexicon VN (cid:46) given initial hypothesis H 0 : a randomly constructed n -gram w = { x 1 , ..., x n } with probability p (cid:48) ( w ) = (cid:81) n i =1 p ( x i ) cannot be a statistically semantic n -gram for l in range (2, n ) do VN l (cid:104)(cid:105) (cid:46) initialize the lexicon for l -grams for l -gram w in X do s ( p ( w ) p (cid:48) ( w )) 2 /N l : t -statistic score (cid:46) where statistical probability p ( w ) = Count ( w ) N l , deviation 2 = p ( w )(1 p ( w )) , N l donates the count of l grams in XVN l .",
"append ( { w , s } ) VN l topk ( VN l , k l ) (cid:46) k l is the number of l -gram VN (cid:104)V N 2 , ..., VN n (cid:105) (cid:46) merge all lexicons return VN late the t -statistic scores of all n -grams appearing in X since the higher the t -statistic score, the more likely it is a semantically-complete n -gram.",
"Then, we select the l -grams with the top k l t -statistic scores to construct the final n -gram lexicon VN .",
"N-gram Boundary Extraction.",
"To incorporate n -gram information into MLM objective, n -gram boundaries are referred to mask whole n -grams for pre-training.",
"Given an input sequence x = { x 1 , ..., x | x | } , we employ maximum matching algorithm to traverse valid n -gram paths B = { b 1 , ..., b |B| } according to VN , then select the shortest paths as the final n -gram boundaries b , where | b | | b i | , i = 1 , ..., |B| .",
"In this section, we first present the pre-training con-figuration of ERNIE-Gram on Chinese and English text corpora.",
"Then we compare ERNIE-Gram with previous works on various downstream tasks.",
"We also conduct several ablation experiments to access the major components of ERNIE-Gram.",
"Base-scale corpora: 16GB uncompressed text from WIKIPEDIA and BOOKSCORPUS (Zhu et al., 2015), which is the original data for BERT.",
"Large-scale corpora: 160GB uncompressed text from WIKIPEDIA , BOOKSCORPUS , OPENWEBTEXT 3 , CC-NEWS (Liu et al., 2019) and STORIES (Trinh and Le, 2018), which is the original data used in RoBERTa.",
"Chinese Pre-training Data.",
"We adopt the same Chinese text corpora used in ERNIE2.0 (Sun et al., 2020) to pre-train ERNIE-Gram.",
"Before pre-training, we first extract 200 K bi-grams and 100 K tri-grams with Algorithm 1 to construct the semantic n -gram lexicon VN for English and Chinese corpora.",
"and we adopt the sub-word dictionary ( 30 K BPE codes) used in BERT and the character dictionary used in ERNIE2.0 as our fine-grained vocabulary VF in English and Chinese.",
"length of the sequence in each batch up to 512 tokens.",
"We add the relative position bias (Raffel et al., 2020) to attention weights and use Adam (Kingma and Ba, 2015) for optimizing.",
"For pre-training on base-scale English corpora, the batch size is set to 256 sequences, the peak learning rate is 1 e 4 for 1 M training steps, which are the same settings as BERTBASE .",
"As for large-scale English corpora, the batch size is 5112 sequences, the peak learning rate is 4 e 4 for 500 K training steps.",
"For pre-training on Chinese corpora, the batch size is 256 sequences, the peak learning rate is 1 e 4 for 3M training steps.",
"All the pre-training hyper-parameters are supplemented in the Appendix A. In fine-tuning, we remove the auxiliary embedding weights of explicit n -grams identities for fair comparison with previous pre-trained models.",
"The General Language Understanding Evaluation (GLUE; Wang et al., 2018) is a multi-task benchmark consisting of various NLU tasks, which contains 1) pairwise classification tasks like language inference (MNLI; Williams et al., 2018, RTE; Dagan et al., 2006), question answering (QNLI; Ra-jpurkar et al., 2016) and paraphrase detection (QQP, MRPC; Dolan and Brockett, 2005), 2) single-sentence classification tasks like linguistic acceptability (CoLA; Warstadt et al., 2019), sentiment analysis (SST-2; Socher et al., 2013) and 3) text similarity task (STS-B; Cer et al., 2017).",
"The fine-tuning results on GLUE of ERNIE-Gram and various strong baselines are presented in Table 1.",
"For fair comparison, the listed models are all in base size and fine-tuned without any data augmentation.",
"Pre-trained with base-scale text corpora, ERNIE-Gram outperforms recent models such as TUPE and F-TFM by 1 .",
"7 and 1 .",
"3 points on average.",
"As for large-scale text corpora, ERNIE-Gram achieves average score increase of 1 .",
"7 and 0 .",
"6 over RoBERTa and ELECTRA, demonstrating the effectiveness of ERNIE-Gram.",
"The Stanford Question Answering (SQuAD) tasks are designed to extract the answer span within the given passage conditioned on the question.",
"We conduct experiments on SQuAD1.1 (Rajpurkar et al., 2016) and SQuAD2.0 (Rajpurkar et al., 2018) by adding a classification layer on the sequence outputs of ERNIE-Gram and predicting whether each token is the start or end position of the answer span.",
"Table 2 presents the results on SQuAD for base-size pre-trained models, ERNIE-Gram achieves better performance than current strong baselines on both base-scale and large-scale pre-training text corpora.",
"The ReAding Comprehension from Examinations (RACE; Lai et al., 2017) dataset collects 88 K long passages from English exams at middle and high schools, the task is to select the correct choice from four given options according to the questions and",
"passages.",
"We also evaluate ERNIE-Gram on two large scaled text classification tasks that involve long text and reasoning, including sentiment analysis datasets IMDb (Maas et al., 2011) and topic classification dataset AG's News (Zhang et al., 2015).",
"The results are reported in Table 3. It can be seen that ERNIE-Gram consistently outperforms previous models, showing the advantage of ERNIE-Gram on tasks involving long text and reasoning.",
"We execute extensive experiments on six Chinese language understanding tasks, including natural language inference (XNLI; Conneau et al., 2018),",
"Results on six Chinese tasks are presented in Table 4. It is observed that ERNIE-Gram significantly outperforms previous models across tasks by a large margin and achieves new state-of-the-art results on these Chinese NLU tasks in base-size model group.",
"Besides, ERNIE-Gram BASE are also better than various large-size models on XNLI, LCQMC and CMRC2018 datasets.",
"Effect of Explicitly N-gram MLM.",
"We compare two models pre-trained with contiguously MLM and explicitly n -gram MLM objectives in the same settings (the size of n -gram lexicon is 300 K).",
"The evaluation results for pre-training and fine-tuning are shown in Figure 4. Compared with contiguously MLM, explicitly n -gram MLM objective facilitates the learning of n -gram semantic information with lower n -gram level perplexity in pre-training and better performance on downstream tasks.",
"This verifies the effectiveness of explicitly n -gram MLM objective for injecting n -gram semantic information into pre-training.",
"Size of N-gram Lexicon.",
"To study the impact of n -gram lexicon size on model performance, we extract n -gram lexicons with size from 100 K to 400 K for pre-training, as shown in Figure 5. As the Figure 4:",
"lexicon size enlarges, performance of contiguously MLM becomes worse, presumably because more n -grams are matched and connected as longer consecutive spans for prediction, which is more difficult for representation learning.",
"Explicitly n -gram MLM with lexicon size being 300 K achieves the best results, while the performance significantly declines when the size of lexicon increasing to 400 K because more low-frequent n -grams are learning unnecessarily.",
"See Appendix C for detailed results of different lexicon choices on GLUE and SQuAD.",
"and Enhanced N-gram Relation Modeling.",
"As shown in Table 5, we compare several ERNIE-Gram variants with previous strong baselines under the BERTBASE setting.",
"After removing comprehensive n -gram prediction ( #2 ), ERNIE-Gram degenerates to a variant with explicitly n -gram MLM and n -gram relation modeling and its performance drops slightly by 0 .",
"3 0 .",
"6 .",
"When removing enhanced n -gram relation modeling ( #3 ), ERNIE-Gram degenerates to a variant with comprehen-# Models MNLI SST-2 SQuAD1.1 SQuAD2.0 m mm Acc EM F1 EM F1 XLNet a 85.6 85.1 93.4 -78.2 81.0 RoBERTa b 84.7 -92.7 -90.6 -79.7 MPNet c 85.6 -93.6 84.0 90.3 79.5 82.2 UNI LMv2 d 85.6 85.5 93.0 85.0 91.5 78.9 81.8 #1 ERNIE-Gram 86.5 86.4 93.2 85.2 91.7 80.8 84.0 #2 CNP 86.2 86.2 92.7 85.0 91.5 80.4 83.4 #3 ENRM 85.7 85.8 93.5 84.7 91.3 79.7 82.7 #4 CNP ENRM 85.6 85.7 92.9 84.5 91.2 79.5 82.4 Table 5: Comparisons between comprehensive n -gram prediction ( CNP ) and enhanced n -gram relation modeling ( ENRM ) methods.",
"sive n -gram MLM and the performance drops by 0 .",
"4 1 .",
"3 .",
"If removing both comprehensive n -gram prediction and relation modeling ( #4 ), ERNIE-Gram degenerates to a variant with explicitly n gram MLM and the performance drops by 0 .",
"7 1 .",
"6 .",
"These results demonstrate the advantage of comprehensive n -gram prediction and n -gram relation modeling methods for efficiently n -gram semantic injecting into pre-training.",
"The detailed results of ablation study are supplemented in Appendix C. 4.8 Case Studies To further understand the effectiveness of our approach for learning n -grams information, we fine-tune ERNIE-Gram, contiguously MLM and lower-cased BERT on CoNLL-2003 named entity recognition task (Sang and De Meulder, 2003) for comparison.",
"We divide the evaluation set into five subsets based on the average length of the named entities in each sentence.",
"As shown in Figure",
"6(a), it is more difficult to recognize whole named entities as the length of them increases, while the performance of ERNIE-Gram declines slower than contiguously MLM and BERT, which implies that ERNIE-Gram models tighter intra-dependencies of n -grams.",
"As shown in Figure 6(b-d), we visualize the attention patterns in the last self-attention layer of fine-tuned models.",
"For contiguously MLM, there are clear diagonal lines in named entities that tokens prefer to attend to themself in named entities.",
"While for ERNIE-Gram, there are bright blocks over named entities that tokens attend to most of tokens in the same entity adequately to construct tight representation, verifying the effectiveness of ERNIE-Gram for n -gram semantic modeling.",
"In this paper, we present ERNIE-Gram, an explicitly n -gram masking and predicting method to eliminate the limitations of previous contiguously masking strategies and incorporate coarse-grained linguistic information into pre-training sufficiently.",
"ERNIE-Gram conducts comprehensive n -gram prediction and relation modeling to further enhance the learning of semantic n -grams for pre-training.",
"Experimental results on various NLU tasks demonstrate that ERNIE-Gram outperforms XLNet and RoBERTa by a large margin, and achieves state-of-the-art results on various benchmarks.",
"Future work includes constructing more comprehensive n -gram lexicon ( n> 3 ) and pre-training ERNIE-Gram with large-size model for more downstream tasks.",
"We would like to thank Zhen Li for his constructive suggestions, and hope everything goes well with his work.",
"We are also indebted to the NAACL-HLT reviewers for their detailed and insightful comments on our work."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"result",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"method",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"other",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"other"
] |
[
"Pre-trained language models have been applied to various NLP tasks with considerable performance gains.",
"However, the large model sizes, together with the long inference time, limit the deployment of such models in real-time applications.",
"One line of model compression approaches considers knowledge distillation to distill large teacher models into small student models.",
"Most of these studies focus on single-domain only, which ignores the transferable knowledge from other domains.",
"We notice that training a teacher with transferable knowledge digested across domains can achieve better generalization capability to help knowledge distillation.",
"Hence we propose a Meta-Knowledge Distillation (Meta-KD) framework to build a meta-teacher model that captures transferable knowledge across domains and passes such knowledge to students.",
"Specifically, we explicitly force the meta-teacher to capture transferable knowledge at both instance-level and feature-level from multiple domains, and then propose a meta-distillation algorithm to learn single-domain student models with guidance from the meta-teacher.",
"Experiments on public multi-domain NLP tasks show the effectiveness and superiority of the proposed Meta-KD framework.",
"Further, we also demonstrate the capability of Meta-KD in the settings where the training data is scarce.",
"Pre-trained Language Models (PLM) such as BERT (Devlin et al., 2019) and XLNet (Yang et al., 2019) have achieved significant success with the two-stage pre-training and fine-tuning process.",
"Despite the performance gain achieved in various NLP tasks, the large number of model parameters H. Pan and C. Wang contributed equally to this work.",
"and the long inference time have become the bottleneck for PLMs to be deployed in real-time applications, especially on mobile devices (Jiao et al., 2019; Sun et al., 2020; Iandola et al., 2020).",
"Thus, there are increasing needs for PLMs to reduce the model size and the computational overhead while keeping the prediction accuracy.",
"Knowledge Distillation (KD) (Hinton et al., 2015) is one of the promising ways to distill the knowledge from a large teacher model to a small student model.",
"Recent studies show that KD can be applied to compress PLMs with acceptable performance loss (Sanh et al., 2019; Sun et al., 2019b; Jiao et al., 2019; Turc et al., 2019; Chen et al., 2020a).",
"However, those methods mainly focus on single-domain KD.",
"Hence, student models can only learn from their in-domain teachers, paying little attention to acquiring knowledge from other domains.",
"It has been shown that it is beneficial to consider cross-domain information for KD, by either training a teacher using cross-domain data or multiple teachers from multiple domains (You et al., 2017; Liu et al., 2019; Yang et al., 2020; Peng et al., 2020).",
"Consider an academic scenario in Figure 1.",
"A typical way for a physics student to learn physics equations is to directly learn from his/her physics teacher.",
"If we have a math teacher to teach him/her basic knowledge of equations, the student can obtain a better understanding of physics equations.",
"This knowledge transfer technique in KD has been proved efficient only when two domains are close to each other (Hu et al., 2019).",
"In reality, however, it is highly risky as teachers of other domains may pass non-transferable knowledge to the student model, which is irrelevant to the current domain and hence harms the overall performance (Tan et al., 2017; Wang et al., 2020).",
"Besides, current studies find multi-task fine-tuning of BERT does not necessarily yield better performance across all the tasks (Sun et al., 2019a).",
"To address these issues, we leverage the idea of meta-learning to capture transferable knowledge across domains, as recent studies have shown that meta-learning can improve the model generalization ability across domains (Finn et al., 2017; Javed and White, 2019; Yin, 2020; Ye et al., 2020).",
"We further notice that meta-knowledge is also helpful for cross-domain KD.",
"Re-consider the example in Figure 1.",
"If we have an all-purpose teacher (i.e., the meta-teacher) who has the knowledge of both physics principles and mathematical equations (i.e., the general knowledge of the two courses), the student may learn physics equations better with the teacher, compared to the other two cases.",
"Hence, it is necessary to train an all-purpose teacher model for domain-specific student models to learn.",
"In this paper, we propose the Meta-Knowledge Distillation (Meta-KD) framework, which facilities cross-domain KD.",
"Generally speaking, Meta-KD consists of two parts, meta-teacher learning and meta-distillation .",
"Different from the K-way N-shot problems addressed in traditional meta-learning (Vanschoren, 2018), we propose to train a meta-learner as the meta-teacher, which learns the transferable knowledge across domains so that it can fit new domains easily.",
"The meta-teacher is jointly trained with multi-domain datasets to acquire the instance-level and feature-level meta-knowledge.",
"For each domain, the student model learns to solve the task over a domain-specific dataset with guidance from the meta-teacher.",
"To improve the student's distillation ability, the meta-distillation module minimizes the distillation loss from both intermediate layers, output layers, and transferable knowledge, combined with domain-expertise weighting techniques.",
"To verify the effectiveness of Meta-KD, we conduct extensive experiments on two NLP tasks across multiple domains, namely natural language inference (Williams et al., 2018) and sentiment analysis (Blitzer et al., 2007).",
"Experimental results show the effectiveness and superiority of the proposed Meta-KD framework.",
"Moreover, we find our method performs well especially when",
"i) the in-domain dataset is very small or",
"ii) there is no in-domain dataset during the training of the meta-teacher.",
"In summary, the contributions of this study can be concluded as follows: To the best of our knowledge, this work is the first to explore the idea of meta-teacher learning for PLM compression across domains.",
"We propose the Meta-KD framework to address the task.",
"In Meta-KD, the meta-teacher digests transferable knowledge across domains, and selectively passes the knowledge to student models with different domain expertise degrees.",
"We conduct extensive experiments to demonstrate the superiority of Meta-KD and also explore the capability of this framework in the settings where the training data is scarce.",
"The rest of this paper is summarized as follows.",
"Section 2 describes the related work.",
"The detailed techniques of the Meta-KD framework are presented in Section 3. The experiments are reported in Section 4. Finally, we conclude our work and discuss the future work in Section 5. 1 2 Related Work Our study is close to the following three lines of studies, introduced below.",
"KD was first proposed by (Hinton et al., 2015), aiming to transfer knowledge from an ensemble or a large model into a smaller, distilled model.",
"Most of the KD methods focus on utilizing either the dark knowledge, i.e., predicted outputs (Hinton et al., 2015; Chen et al., 2020b; Furlanello et al., 2018; You et al., 2017) or hints, i.e., the intermediate 1 The experimental code can be found in https: //github.com/alibaba/EasyTransfer/tree/master/scripts/metaKD .",
"representations (Romero et al., 2015; Yim et al., 2017; You et al., 2017) or the relations between layers (Yim et al., 2017; Tarvainen and Valpola, 2017) of teacher models.",
"You et al. (2017) also find that multiple teacher networks together can provide comprehensive guidance that is beneficial for training the student network.",
"Ruder et al. (2017) show that multiple expert teachers can improve the performances of sentiment analysis on unseen domains.",
"Tan et al. (2019) apply the multiple-teachers framework in KD to build a state-of-the-art multilingual machine translation system.",
"Feng et al. (2021) considers to build a model to automatically augment data for KD.",
"Our work is one of the first attempts to learn a meta-teacher model that digest transferable knowledge from multiple domains to benefit KD on the target domain.",
"2.2 PLM Compression Due to the massive number of parameters in PLMs, it is highly necessary to compress PLMs for application deployment.",
"Previous approaches on compressing PLMs such as BERT (Devlin et al., 2019) include KD (Hinton et al., 2015), parameter sharing (Ullrich et al., 2017), pruning (Han et al., 2015) and quantization (Gong et al., 2014).",
"In this work, we mainly focus on KD for PLMs.",
"In the literature, Tang et al. (2019) distill BERT into BiLSTM networks to achieve comparable results with ELMo (Peters et al., 2018).",
"Chen et al. (2021) studies cross-domain KD to facilitate cross-domain knowledge transferring.",
"Zhao et al. (2019) use dual distillation to reduce the vocabulary size and the embedding size.",
"DistillBERT (Sanh et al., 2019) applies KD loss in the pre-training stage, while BERT-PKD (Sun et al., 2019b) distill BERT into shallow Transformers in the fine-tuning stage.",
"TinyBERT (Jiao et al., 2019) further distills BERT with a two-stage KD process for hidden attention matrices and embedding matrices.",
"Ad-aBERT (Chen et al., 2020a) uses neural architecture search to adaptively find small architectures.",
"Our work improves the prediction accuracy of compressed PLMs by leveraging cross-domain knowledge, which is complementary to previous works.",
"TL has been proved to improve the performance on the target domain by leveraging knowledge from related source domains (Pan and Yang, 2010; Mou et al., 2016; Liu et al., 2017; Yang et al., 2017).",
"In most NLP tasks, the shared-private architecture is applied to learn domain-specific representations and domain-invariant features (Mou et al., 2016; Liu et al., 2017; Chen et al., 2018, 2019).",
"Compared to TL, the goal of meta-learning is to train meta-learners that can adapt to a variety of different tasks with little training data (Vanschoren, 2018).",
"A majority of meta-learning methods for include metric-based (Snell et al., 2017; Pan et al., 2019), model-based (Santoro et al., 2016; Bartunov et al., 2020) and model-agnostic approaches (Finn et al., 2017, 2018; Vuorio et al., 2019).",
"Meta-learning can also be applied to KD in some computer vision tasks (Lopes et al., 2017; Jang et al., 2019; Liu et al., 2020; Bai et al., 2020; Li et al., 2020).",
"For example, Lopes et al. (2017) record per-layer meta-data for the teacher model to reconstruct a training set, and then adopts a standard training procedure to obtain the student model.",
"In our work, we use instance-based and feature-based meta-knowledge across domains for the KD process.",
"In this section, we formally introduce the Meta-KD framework.",
"We begin with a brief overview of Meta-KD.",
"After that, the techniques are elaborated.",
"Take text classification as an example.",
"Assume there are K training sets, corresponding to K domains.",
"In the k -th dataset D k = { X ( i ) k , y ( i ) k } N k i =1 , X ( i ) k is the i -th sample 2 and y ( i ) k is the class label of X ( i ) k .",
"N k is the total number of samples in D k .",
"Let M k be the large PLM fine-tuned on D k .",
"Given the K datasets, the goal of Meta-KD is to obtain the K student models S 1 , , SK that are small in size but has similar performance compared to the K large PLMs, i.e., M 1 , , MK .",
"In general, the Meta-KD framework can be divided into the following two stages: Meta-teacher Learning : Learn a meta-teacher M over all domains K (cid:83) k =1 D k .",
"The model digests transferable knowledge from each domain and has better generalization while supervising domain-specific students.",
"2 X ( i ) k can be a sentence, a sentence pair or any other textual units, depending on the task inputs.",
"During the learning process of the meta-teacher, we consider both instance-level and feature-level transferable knowledge.",
"Inspired by prototype-based meta-learning (Snell et al., 2017; Pan et al., 2019), the meta-teacher model should memorize more information about prototypes.",
"Hence, we compute sample-wise prototype scores as the instance-level transferable knowledge.",
"The loss of the meta-teacher is defined as the sum of classification loss across all K domains with prototype-based , instance-specific weighting .",
"Besides, it also learns feature-level transferable knowledge by adding a domain-adversarial loss as an auxiliary loss.",
"By these steps, the meta-teacher is more generalized and digests transferable knowledge before supervising student models.",
"For meta-distillation, each sample is weighted by a domain-expertise score to address the meta-teacher's capability for this sample.",
"The transferable knowledge is also learned for the students from the meta-teacher.",
"The overall meta-distillation loss is a combination of the Mean Squared Error (MSE) loss from intermediate layers of both models (Sun et al., 2019b; Jiao et al., 2019), the soft cross-entropy loss from output layers (Hinton et al., 2015), and the transferable knowledge distillation loss , with instance-specific domain-expertise weighting applied.",
"We take BERT (Devlin et al., 2019) as our base learner for text classification due to its wide popularity.",
"For each sample X ( i ) k , the input is: [CLS] , tok ( i ) k, 1 , tok ( i ) k, 2 , , [SEP] , where tok ( i ) k,n is the n -th token in X ( i ) k .",
"The last hidden outputs of this sequence is denoted as h [ CLS ] , h ( tok ( i ) k, 1 ) , h ( tok ( i ) k, 2 ) ,",
".., h ( tok ( i ) k,N ) , where h ( tok ( i ) k,j ) represents the last layer embedding of the j -th token in X ( i ) k , and N is the maximum sequence length.",
"For simplicity, we define h ( X ( i ) k ) as the average pooling of the token embeddings, i.e., h ( X ( i ) k ) = (cid:80) Nn =1 h ( tok ( i ) k,n ) .",
"Learning Instance-level Transferable Knowledge.",
"To select transferable instances across domains , we compute a prototype score t ( i ) k for each sample X ( i ) k .",
"Here, we treat the prototype representation for the m -th class of the k -th domain: p ( m ) k = 1 |D ( m ) k | (cid:80) X ( i ) k D ( m ) k h ( X ( i ) k ) , where D ( m ) k is the k -th training set with the m -th class label.",
"The prototype score t ( i ) k is: t ( i ) k = cos( p ( m ) k , h ( X ( i ) k )) + K ( k (cid:48) (cid:54) = k ) (cid:88) k (cid:48) =1 cos( p ( m ) k (cid:48) , h ( X ( i ) k )) , where cos is the cosine similarity function, is a pre-defined hyper-parameter and = 1 K 1 .",
"We can see that the definition of the prototype score here is different from previous meta-learning, as we require that an instance X ( i ) k should be close to its class prototype representation in the embedding space (i.e., p ( m ) k ), as well as the prototype representations in out-of-domain datasets (i.e., p ( m ) k (cid:48) with k (cid:48) = 1 , , K, k (cid:48) (cid:54) = k ).",
"This is because the meta-teacher should learn more from instances that are prototypical across domains instead of in-domain only .",
"For the text classification task, the cross-entropy loss of the meta-teacher is defined using the cross-entropy loss with the prototype score as a weight assigned to each instance.",
"Learning Feature-level Transferable Knowledge.",
"Apart from the cross-entropy loss, we propose the domain-adversarial loss to increase the meta-teacher's ability for learning feature-level transferable knowledge.",
"For each sample X ( i ) k , we first learn an | h ( X ( i ) k ) | dimensional domain embedding of the true domain label d ( i ) k by mapping one-hot domain representations to the embeddings, denoted as ED ( X ( i ) k ) .",
"A sub-network is then constructed by: h d ( X ( i ) k )) = tanh(( h ( X ( i ) k ) + ED ( X ( i ) k )) W + b ) , where W and b are sub-network parameters.",
"The domain-adversarial loss for X ( i ) k is defined as: LDA ( X ( i ) k ) = K (cid:88) k =1 1 k = z ( i ) k log ( h d ( X ( i ) k )) , where is the K -way domain classifier, and 1 is the indicator function that returns 1 if k = z ( i ) k , and 0 otherwise.",
"Here, z ( i ) k (cid:54) = d ( i ) k is a false domain label of X ( i ) k 3 .",
"Hence, we deliberately maximize the probability that the meta-teacher makes the wrong 3 For ease of implementation, we shuffle the domain labels of all instances in a mini-batch.",
"predictions of domain labels.",
"We call h d ( X ( i ) k )) as the transferable knowledge for X ( i ) k , which is more insensitive to domain differences.",
"Let LCE ( X ( i ) k ) be the normal cross-entropy loss of the text classification task.",
"The total loss of the meta-teacher LMT is the combination of weighted LCE ( X ( i ) k ) and LDA ( X ( i ) k ) , shown as follows: LMT = (cid:88) X ( i ) k K (cid:83) k =1 D k t ( i ) k LCE ( X ( i ) k ) + 1 LDA ( X ( i ) k ) (cid:80) Kk =1 |D k | , where 1 is the factor to represent how the domain-adversarial loss contributes to the overall loss.",
"We take BERT as our meta-teacher and use smaller BERT models as student models.",
"The distillation framework is shown in Figure 2.",
"In our work, we distill the knowledge in the meta-teacher model considering the following five elements: input em-beddings, hidden states, attention matrices, output logits, and transferable knowledge.",
"The KD process of input embeddings, hidden states and attention matrices follows the common practice (Sun et al., 2019b; Jiao et al., 2019).",
"Recall that M and S k are the meta-teacher and the k -th student model.",
"Let L embd ( M , S k , X ( i ) k ) , L hidn ( M , S k , X ( i ) k ) and L attn ( M , S k , X ( i ) k ) be the sample-wise MSE loss values of input embeddings, hidden states and attention matrices of the two models, respectively.",
"Here, L embd ( M , S k , X ( i ) k ) , L hidn ( M , S k , X ( i ) k ) and L attn ( M , S k , X ( i ) k ) refer to the sum of MSE values among multiple hidden layers.",
"We refer interested readers to Jiao et al. (2019) for more details.",
"L pred ( M , S k , X ( i ) k ) is the cross-entropy loss of softened output logits, parameterized by the temperature (Hinton et al., 2015).",
"A naive approach to formulating the total KD loss L kd is the sum of all previous loss functions, i.e., L kd = (cid:88) X ( i ) k D k (cid:0) L embd ( M , S k , X ( i ) k )+ L hidn ( M , S k , X ( i ) k ) + L attn ( M , S k , X ( i ) k )+ L pred ( M , S k , X ( i ) k ) (cid:1) .",
"However, the above approach does not give special considerations to the transferable knowledge of the meta-teacher.",
"Let h M d ( X ( i ) k ) and h S d ( X ( i ) k ) be the transferable knowledge of the meta-teacher and the student model w.r.t. the input X ( i ) k .",
"We further define the transferable knowledge distillation loss LTKD ( M , S k , X ( i ) k ) as follows: L tkd ( M , S k , X ( i ) k ) = 1 |D k | (cid:88) X ( i ) k D k MSE (cid:0) h M d ( X ( i ) k ) WM d , h S d ( X ( i ) k ) (cid:1) where WM d is a learnable projection matrix to match the dimension between h M d ( X ( i ) k ) and h S d ( X ( i ) k ) , and MSE is the MSE loss function w.r.t. single element.",
"In this way, we encourage student models to learn more transferable knowledge from the meta-teacher.",
"We further notice that although the knowledge of the meta-teacher should be highly transferable, there still exists the domain gap between the meta-teacher and domain-specific student models.",
"In this work, for each sample X ( i ) k , we define the domain expertise weight ( i ) k as follows: ( i ) k = 1 + t ( i ) k exp ( y ( i ) k y ( i ) k ) 2 +1 , where y ( i ) k is the predicted result of X ( i ) k 's class label.",
"Here, the weight ( i ) k is large when the meta-teacher model",
"i) has a large prototype score t ( i ) k and",
"ii) makes correct predictions on the target input, i.e., y ( i ) k = y ( i ) k .",
"We can see that the weight reflects how well the meta-teacher can supervise the student on a specific input.",
"Finally, we derive the complete formulation of the KD loss L (cid:48) kd as follows: L (cid:48) kd = (cid:88) X ( i ) k D k ( i ) k (cid:0) L embd ( M , S k , X ( i ) k )+ L hidn ( M , S k , X ( i ) k ) + L attn ( M , S k , X ( i ) k )+ L pred ( M , S k , X ( i ) k ) (cid:1) + 2 L tkd ( M , S k , X ( i ) k ) (cid:1) , where 2 is the transferable KD factor to represent how the transferable knowledge distillation loss contributes to the overall loss.",
"We evaluate Meta-KD over natural language inference and sentiment analysis, using the following two datasets MNLI and Amazon Reviews.",
"The data statistics are in Table 1.",
"MNLI (Williams et al., 2018) is a large-scale, multi-domain natural language inference dataset for predicting the entailment relation between two sentences, containing five domains (genres).",
"After filtering samples with no labels available, we use the original development set as our test set and randomly sample 10% of the training data as a development set in our setting.",
"Amazon Reviews (Blitzer et al., 2007) is a multi-domain sentiment analysis dataset, widely used in multi-domain text classification tasks.",
"The reviews are annotated as positive or negative.",
"For each domain, there are 2,000 labeled reviews.",
"We randomly split the data into train, development, and test sets.",
"For the teacher side, to evaluate the cross-domain distillation power of the meta-teacher model, we consider the following models as baseline teachers:",
"BERT-single : Train the BERT teacher model on the target distillation domain only.",
"If we have K domains, then we will have K BERT-single teachers.",
"BERT-mix : Train the BERT teacher on a combination of K -domain datasets.",
"Hence, we have one BERT-mix model as the teacher model for all domains.",
"BERT-mtl : Similar to the one-teacher paradigm as in BERT-mix, but the teacher model is generated by the multi-task fine-tuning approach (Sun et al., 2019a).",
"Multi-teachers : It uses K domain-specific BERT-single models to supervise K student models, ignoring the domain difference.",
"For the student side, we follow TinyBERT (Jiao et al., 2019) to use smaller BERT models as our student models.",
"In single-teacher baselines (i.e., BERT-single, BERT-mix and BERT-mtl), we use TinyBERT-KD as our baseline KD approach.",
"In multi-teachers, because TinyBERT-KD does not naturally support distilling from multiple teacher models, we implement a variant of the TinyBERT-KD process based on MTN-KD (You et al., 2017), which uses averaged softened outputs as the incorporation of multiple teacher networks in the output layer.",
"In practice, we first learn the representations of the student models by TinyBERT, then apply MTN-KD for output-layer KD.",
"In the implementation, we use the original BERTB model (L=12, H=768, A=12, Total Parame-ters=110M) as the initialization of all of the teachers, and use the BERTS model (L=4, H=312, A=12, Total Parameters=14.5M) as the initialization of all the students 4 .",
"The hyper-parameter settings of the meta-teacher model are as follows.",
"We train 3-4 epochs with the learning rate to be 2e-5.",
"The batch size and 1 are chosen from { 16, 32, 48 } and { 0.1, 0.2, 0.5 } , respectively.",
"All the hyper-parameters are tuned on the development sets.",
"For meta-distillation, we choose the hidden layers in { 3, 6, 9, 12 } of the teacher models in the baselines and the meta-teacher model in our approach to learn the representations of the student models.",
"Due to domain difference, we train student models in 3-10 epochs, with a learning rate of 5e-5.",
"The batch size and 2 are tuned from { 32, 256 } and { 0.1, 0.2, 0.3, 0.4, 0.5 } for intermediate-layer distillation, respectively.",
"Following Jiao et al. (2019), for prediction-layer distillation, we run the method for 3 epochs, with the batch size and learning rate to be 32 and 3e-5.",
"The experiments are implemented on PyTorch and run on 8 Tsela V100 GPUs.",
"Table 2 and Table 3 show the general testing performance over MNLI and Amazon Reviews of baselines and Meta-KD, in terms of accuracy.",
"From the results, we have the following three major insights: Compared to all the baseline teacher models, using the meta-teacher for KD consistently achieves the highest accuracy in both datasets.",
"Our method can help to significantly reduce model size while preserving similar performance, especially in Amazon review, we reduce the model size to 7.5x smaller with only a minor performance drop (from 89.9 to 89.4).",
"The meta-teacher has similar performance as BERT-mix and BERT-mtl, but shows to be a better teacher for distillation, as Meta-teacher TinyBERT-KD BERTS and Meta-teacher Meta-distillation BERTS have better performance than other methods.",
"This shows the meta-teacher is capable of learning more transferable knowledge to help the student.",
"The fact that Meta-teacher Meta-distillation has better performance than other distillation methods confirms the effectiveness of the proposed Meta-KD method.",
"Meta-KD gains more improvement on the small datasets than large ones, e.g. it improves from 86.7 to 89.4 in Amazon Reviews while 79.3 to 80.4 in MNLI.",
"This motivates us to explore our model performance on domains with few or no training samples 4.5 Ablation Study We further investigate Meta-KD's capability with regards to different portion training data for both of two phases and explore how the transferable knowledge distillation loss contributes to final results.",
"In this set of experiments, we consider a special case where we assume all the fiction domain data in MNLI is unavailable.",
"Here, we train a meta-teacher without the fiction domain dataset and use the distillation method proposed in Jiao et al. (2019) to produce the student model for the fiction domain with in-domain data during distillation.",
"The results are shown in Table 4. We find that KD from the meta-teacher can have large Method Accuracy BERTB -s (fiction) 82.2% Meta-teacher (w/o fiction) 81.6% BERTB -s (fiction) TinyBERT-KD BERTS 78.8% BERTB -s (govern) TinyBERT-KD BERTS 75.3% BERTB -s (telephone) TinyBERT-KD BERTS 75.6% BERTB -s (slate) TinyBERT-KD BERTS 77.1% BERTB -s (travel) TinyBERT-KD BERTS 74.1% Meta-teacher TinyBERT-KD BERTS 78.2% Table 4: Results under the setting where no in-domain data used for meta-teacher learning on MNLI.",
"improvement, compared to KD from other out-domain teachers.",
"Additionally, learning from the out-domain meta-teacher has a similar performance to KD from the in-domain fiction teacher model itself.",
"It shows the Meta-KD framework can be applied in applications for emerging domains.",
"We randomly sample a part of the MNLI dataset as the training data in this setting.",
"The sample rates that we choose include 0.01, 0.02, 0.05, 0.1 and 0.2.",
"The sampled domain datasets are employed for training student models when learning from the in-domain teacher or the meta-teacher.",
"The experimental results are shown in Figure 3, with results reported by the improvement rate in averaged accuracy.",
"The experimental results show that when less data is available, the improvement rate is much larger.",
"For example, when we only have 1% of the original MNLI training data, the accuracy can be 0.2 0.4 0.6 0.8 1.0 Transferable KD factor 2 75 80 85 90 95 A cc u r ac y ( % ) Books DVD Electronics Kitchen Figure 4: Model performance w.r.t. the transferable KD factor 2 increased by approximately 10% when the student tries to learn from the meta-teacher.",
"It shows Meta-KD can be more beneficial when we have fewer in-domain data.",
"Here, we explore how the transferable KD factor 2 affects the distillation performance over the Amazon Reviews dataset.",
"We tune the value of 2 from 0.1 to 1.0, with results are shown in Figure 4. We find that the optimal value of 2 generally lies in the range of 0.2 0.5.",
"The trend of accuracy is different in the domain DVD is different from those of the remaining three domains.",
"This means the benefits from transferable knowledge of the meta-teacher vary across domains.",
"In this work, we propose the Meta-KD framework which consists of meta-teacher learning and meta distillation to distill PLMs across domains.",
"Experiments on two widely-adopted public multi-domain datasets show that Meta-KD can train a meta-teacher to digest knowledge across domains to help better teach in-domain students.",
"Quantitative evaluations confirm the effectiveness of Meta-KD and also show the capability of Meta-KD in the settings where the training data is scarce i.e. there is no or few in-domain data.",
"In the future, we will examine the generalization capability of Meta-KD in other application scenarios and apply other meta-learning techniques to KD for PLMs.",
"This work is supported by Open Research Projects of Zhejiang Lab (No. 2019KD0AD01/004).",
"Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect those of the sponsor."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"objective",
"abstain",
"objective",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"objective",
"objective",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"method",
"other",
"other"
] |
[
"Text might contain or invoke multiple emotions with varying intensities.",
"As such, emotion detection, to predict multiple emotions associated with a given text, can be cast into a multi-label classification problem.",
"We would like to go one step further so that a ranked list of relevant emotions are generated where top ranked emotions are more intensely associated with text compared to lower ranked emotions, whereas the rankings of irrelevant emotions are not important.",
"A novel framework of relevant emotion ranking is proposed to tackle the problem.",
"In the framework, the objective loss function is designed elaborately so that both emotion prediction and rankings of only relevant emotions can be achieved.",
"Moreover, we observe that some emotions co-occur more often while other emotions rarely coexist.",
"Such information is incorporated into the framework as constraints to improve the accuracy of emotion detection.",
"Experimental results on two real-world corpora show that the proposed framework can effectively deal with emotion detection and performs remarkably better than the state-of-the-art emotion detection approaches and multi-label learning methods.",
"With the growing prosperity of Web 2.0, people tend to share their feelings, attitudes and opinions through the social platforms such as online news sites, blogs.",
"Detecting emotions from text can enhance the understanding of users' emotional states, which is useful in many downstream applications, such as human-computer interaction and personalized recommendation.",
"Therefore, it is crucial to analyze and predict emotions from text accurately (Picard and Picard, 1997).",
"Research on emotion detection can be roughly categorized into two types: lexicon-based and learning-based approaches.",
"Lexicon-based approaches usually rely on emotion lexicons (Lei et al., 2014; Rao et al., 2012).",
"They cannot deal with text when words can't be found in emotion lexicons.",
"Learning-based approaches can be furthered classified into unsupervised and supervised learning methods.",
"Unsupervised approaches do not require annotated data for training.",
"For example, by adding an emotion layer into traditional topic models, emotion-topic models were constructed to detect users' emotions (Bao et al., 2012, 2009).",
"Supervised learning approaches consider each emotion category as a class label and emotion detection is cast as a classification problem.",
"If only choosing the strongest emotion as the emotion label for a given text, emotion detection is essentially a single-label classification problem (Lin et al., 2008; Quan et al., 2015).",
"To predict multiple emotions simultaneously, emotion detection can be solved in the multi-label classification framework (Bhowmick, 2009).",
"Moreover, to predict both multiple emotions and their intensities, some approaches have been proposed using emotion distribution learning (Zhou et al., 2016).",
"Some lexicon-based approaches such as (Wang and Pal, 2015) can also output multiple emotions with intensities using non-negative matrix factorization.",
"In this paper, we are interested in exploring emotion ranking from either readers' perspective or writers' perspective in two different real-world corpora.",
"In both cases, a given text is associated with multiple emotions.",
"For example, Figure 1 illustrates an online news article crawled from Sina News Society Channel together with readers' emotion votes.",
"It can be observed that when reading the news article, readers expressed different emotions with the majority showed Sadness and Anger .",
"We notice that some emotions such as Touching , Curiosity and Amusement on-561 2-year-old baby found abandoned in garbage heap by his runaway mother and drugtaking father Recently, a netizen seek help for a 2-year-old baby who is alone at home unattended and starving because of his runaway mother and drug-taking father.",
"ly received 1 to 3 votes.",
"In comparison to the total number of votes received, these votes could be considered as outliers or irrelevances.",
"Also, the extremely low emotion votes might be due to readers' clicking errors.",
"Taking into account such emotions during the learning process could introduce bias.",
"Therefore, we aim to differentiate relevant emotions from irrelevant ones and only learn the rankings of relevant emotions while neglecting the irrelevant ones.",
"Our work makes the following contributions: We propose a novel framework based on relevant emotion ranking to identify multiple emotions and produce the rankings of relevant emotions from text.",
"In the framework, the objective emotion loss function is designed elaborately so that both emotion prediction and rankings for only relevant emotions are achieved without being affected by irrelevant ones.",
"To the best of our knowledge, it is the first attempt to perform emotion detection and relevant emotion ranking at the same time.",
"As some emotions co-occur more often while others rarely co-exist, the prior knowledge of emotion relationships is incorporated into the framework as a constraint.",
"Such emotion relationship can provide important cues for emotion detection.",
"Experimental results on two real-world corpora show that the proposed framework can effectively deal with the emotion detection problem and performs better than the state-of-the-art emotion detection methods and multi-label learning methods.",
"Emotion detection is one of the subfields of sentiment analysis where emotions are more fine-grained and expressive.",
"In general, emotion detection approaches can be categorized into two types: lexicon-based and learning-based approaches.",
"Lexicon-based approaches usually rely on emotion lexicons consisting of words and their corresponding emotion labels.",
"For example, Aman and Szpakowicz (2007) classified emotional and non-emotional sentences with a predefined emotion lexicon.",
"Emotional dictionaries could also be constructed from training corpora of news articles and be used to predict the readers' emotion of a new articles (Lei et al., 2014; Rao et al., 2012).",
"Agrawal and An (2012) proposed a context-based approach to detect emotions from text at sentence level.",
"An emotion vector for each potential affect bearing word based on the semantic relation between emotion concepts and words was generated.",
"The emotion vector was then tuned based on the syntactic dependencies within a sentence structure.",
"Other lexicon-based approach such as (Wang and Pal, 2015) can also output multiple emotions with intensities using non-negative matrix factorization with constraints derived based on an emotion lexicon.",
"Learning-based approaches can be further cat-egoried as unsupervised and supervised learning methods.",
"Unsupervised learning approaches do not require labelled data for training.",
"For example, the emotion-topic models (Bao et al., 2012, 2009) were proposed by adding an extra emotion layer into traditional topic models such as Latent Dirichlet Allocation (Blei et al., 2003), thus capturing the generation of both emotion and text at the same time.",
"Supervised learning approaches typically cast emotion detection as a classification problem by considering each emotion category as a class label.",
"If only choosing the strongest emotion as the label for a given text, emotion detection is essentially a single-label classification problem.",
"Lin, Yang and Chen (2008) studied the classification of news articles into different categories based on readers' emotions with various combinations of feature sets.",
"Strapparava and Mihalcea (2008) proposed several knowledge-based and corpus-based methods for emotion classification.",
"Quan et al. (2015) proposed a logistic regression model with emotion dependency for emotion detection.",
"Latent vari-562 ables were introduced to model the latent structure of input text.",
"To predict multiple emotions simultaneously, emotion detection can be solved using multi-label classification.",
"Bhowmick (2009) presented a method for classifying news sentences into multiple emotion categories using an ensemble based multi-label classification technique.",
"Zhou et al. (2016) proposed a novel approach based on emotion distribution learning to predict multiple emotions with different intensities in a single sentence.",
"Assuming a set of T emotions E = { e 1 , e 2 , ...e T } and a set of n instances X = { x 1 , x 2 , x 3 , ..., x n } , each instance x i R d is associated with a ranked list of its relevant emotions R i E and also a list of irrelevant emotions R i = E R i .",
"Relevant emotion ranking aims to learn a score function g ( x i ) = [ g 1 ( x i ) , ..., g T ( x i )] assigning a score g t ( x i ) to each emotion e t , ( t { 1 , ..., T } ) .",
"As mentioned before, it is unnecessary to consider the rankings of irrelevant emotions since they might introduce errors into the model during the learning process.",
"In order to differentiate relevant emotions from irrelevant ones, we need to define a threshold g ( x ) which could be simply set to 0 or learned from data (Furnkranz et al., 2008).",
"Those emotions with scores lower than the threshold will be considered as irrelevant and hence discarded.",
"The identification of relevant emotions and their ranking can be obtained simultaneously according to their scores assigned by the ranking function g .",
"Here, the predicted relevant emotions of instance x i are denoted as R i = { e t E | g t ( x i ) >g ( x i ) } .",
"The goal of relevant emotion ranking is to learn the parameter of the ranking function g .",
"Without loss of generality, we assume that g are linear models, i.e., g t ( x i ) = w | t x i , t { 1 , 2 , 3 , ..., T } { } , where denotes the threshold.",
"Relevant emotion ranking can be regarded as a special case of multi-label learning.",
"Several evaluation criteria typically used in multi-label learning can also be used to measure the ranking function's ability of distinguishing relevant emotions from irrelevant ones, such as hamming loss, one error, coverage, ranking loss, and average precision as suggested in (Zhang and Zhou, 2014).",
"However, these multilabel criteria cannot meet our requirement exactly as none of them considers the ranking among emotions which are considered relevant.",
"Therefore, by incorporating PRO loss (Xu et al., 2013), the loss function for the instance x i is defined as follows: L ( x i , R i , , g ) = X e t R i { } X e s ( e t ) 1 norm t,s l t,s (1) where e t refers to the emotion belonging to relevant emotion set R i or the threshold of instance x i while e s refers to the emotion which is less relevant than e t denoted as .",
"Thus, ( e t , e s ) represents four types of emotion pairs: i.e., (relevant, relevant) , (relevant, irrelevant) , (relevant, threshold) , and (threshold, irrelevant) .",
"The normalization term norm t,s is used to balance those four types of emotion pairs to avoid dominated terms by their respective set sizes.",
"The set sizes of the four different types of emotion pairs mentioned above are | R i | ( | R i | 1) / 2 , | R i | | R i | , | R i | , and | R i | , respectively.",
"Here, l t,s refers to a modi-fied 0-1 error.",
"Specifically, l t,s = 1 , g t ( x i ) < g s ( x i ) 12 , g t ( x i ) = g s ( x i ) 0 , otherwise Note that l t,s is non-convex and difficult to optimize.",
"Thus, a large margin surrogate convex loss (Vapnik and Vapnik, 1998) implemented in hinge form is used instead as follows: L ( x i ,R i , , g ) = X e t R i { } X e s ( e t ) 1 norm t,s (1 + g s ( x i ) g t ( x i )) + (2) where ( u ) + = max { 0 , u } .",
"However, Eq.",
"2 ignores the relationships between different emotions.",
"As mentioned in Introduction section, some emotions often co-occur such as joy and love while some rarely coexist such as joy and anger .",
"Such relationship information among emotions can provide important clues for emotion ranking.",
"Therefore, we incorporate this information into the emotion loss function as constraints.",
"The objective function 563 L ( x i , R i , , g ) can be redefined as: L ( x i , R i , , g ) = X e t R i { } X e s ( e t ) 1 norm t,s (1 + g s ( x i ) g t ( x i ) + ts ( w t w s )) + (3) where the weight ts models the relationship between the t -th emotion and the s -th emotion in the emotion set and can be calculated in multiple ways.",
"Since the Pearson correlation coefficient (Nicewander, 1988) is the most familiar measure of relationship between two variables, we use it to measure the relationship of two emotions using their original emotion scores across each corpus.",
"From the above, it can be observed that the goal of relevant emotion ranking can be achieved through predicting an accurate relevant emotion set as well as the ranking of relevant emotions.",
"After defining an appropriate loss function, we need to define a way to minimize the empirical error measured by the appropriate loss function and at the same time to control the complexity of the resulting model.",
"It can be done by introducing a maximum margin strategy and regularization to deal with emotion ranking data, where a set of linear classifiers are optimized to minimize the emotion loss function mentioned before while having a large margin.",
"We could potentially use an approach based on a label ranking method (Elisseeff and Weston, 2001).",
"It is worth mentioning that the margin of the (relevant, relevant) label pair needs to be dealt with carefully, which is not considered in (Elisseeff and Weston, 2001).",
"The learning procedure of relevant emotion ranking (RER) is illustrated in Figure 2.",
"The big rectangular dash line boxes denoted by x 1 to x n represent n instances in the training set.",
"In each small box, e i , i { 1 , ...T } { } represents an emotion of the instance where the shaded small boxes represent the relevant emotions while the dashed small boxes represent irrelevant ones and the last one e is the threshold.",
"Each emotion's corresponding weight vector is w i .",
"We use m t,s to represents the margin between label e t and e s .",
"There are four types of emotion pairs' margins in total, i.e., (relevant, relevant) , (relevant, irrelevant) , (relevant, threshold) , and (threshold, irrelevant) .",
"Different types of emotion pairs' margins are denoted using different text/line colors.",
"For each training instance x i , margin ( x i ) represents the margin of instance x i which can be obtained by taking the minimum margin of all its possible label pairs m t,s .",
"Similarly, the margin of the learning system margin ( learningsystem ) can be obtained by taking the minimum margin of all the training instances.",
"By maximizing the margin of the learning system, the weight vector of each emotion can be derived from which the predicted emotion set and the ranking of relevant emotions can be obtained.",
"The learning system is composed of T + 1 linear classifiers [ w 1 ; ... ; w T ; w ] with one classifier for each emotion label and the threshold, where w t , t { 1 , ...T } { } is the weight vector for the t -th classifier of emotion e t .",
"For a training instance x i and its corresponding emotion label set E i , the learning system's margin on instance x i is defined as follows by considering its ranking ability on x i 's four types of emotion pairs, i.e., (rel-evant, relevant) , (relevant, irrelevant) , (relevant, threshold) , and (threshold, irrelevant) : min e t R i { } ,e s ( e t ) h w t w s , x i i || w t w s || (4) Here, h u, v i returns the inner product u > v .",
"For each emotion pair ( e t , e s ) , its discrimination boundary corresponds to the hyperplane h w t w s , x i i = 0 .",
"Therefore, Eq.",
"4 returns the minimum value as the margin on instance x i .",
"The margin on the whole training set G can be calculated as follows: min x i G min e t R i { } ,e s ( e t ) h w t w s , x i i || w t w s || (5) If the learning algorithm is capable of properly ranking the four types of label pairs for each training instance, Eq.",
"5 will return a positive margin.",
"In this ideal case, the final goal is to maximize the margin in Eq.",
"5: max w j min x i G min e t R i { } ,e s ( e t ) 1 || w t w s || s.t. h w t w s , x i i 1 , 1 i n, 1 j T + 1 (6) 564 x 1 e 1 e 2 e 4 e 3 w 1 w 2 w 4 w 3 g 1 (x 1 ) g 2 (x 1 ) g 3 (x 1 ) g 4 (x 1 ) x n e 1 e 2 e 4 e 3 w 1 w 2 w 4 w 3 g 1 (x n ) g 2 (x n ) g 3 (x n ) g 4 (x n ) margin(x 1 )=min(m 12 ,m 13 ,m 14 ,m 23 ,m 24 ,m 1 ,m 2 ,m 3 ,m 4 ) margin(x n )=min(m 12 ,m 13 ,m 14 ,m 1 ,m 2 ,m 3 ,m 4 ) margin(learning system)=min(margin(x 1 ),,margin(x n )) max(margin(learning system)) (relevant, relevant) (relevant, irrelevant) (relevant, threshold) (threshold, irrelevant) e w g (x 1 ) e w g (x n ) Figure 2: The overall framework of our proposed Relevant Emotion Ranking (RER) method.",
"Suppose we have sufficient training examples such that for each label pair ( e t , e s ) , there exists x i G satisfying e t R i { } , e s ( e t ) .",
"Thus, the objective in Eq.6 becomes equivalent to max w j min 1 s<t T +1 1 || w t w s || and can be rewritten as min w j max 1 s<t T +1 || w t w s || .",
"Moreover, to overcome the complexity brought in by the max operator, the objective of the optimization problem can be re-written by approximating the max operator with the sum operator.",
"Thus, the objective of Eq.",
"6 can be transformed as: min w j T +1 X t =1 || w t || 2 s.t. h w t w s , x i i 1 , 1 i n, (7) 1 j T + 1 , e t R i { } , e s ( e t ) To accommodate real-world scenarios where constraints in Eq.",
"7 can not be fully satisfied, s-lack variables can be incorporated into the objective function: min w j , its T +1 X t =1 || w t || 2 + n X i =1 X e t R i { } X e s ( e t ) 1 norm t,s its s.t. h w t w s , x i i 1 its , 1 j T + 1 , its 0 (8) Since its does not need to be optimized since it can be easily determined by w t , w s .",
"The final objective function can be reformulated as: min w t , L T +1 X t =1 || w t || 2 + n X i =1 L ( x i , R i , , g ) (9) As can be seen, Eq.9 consists of two parts balanced by the trade-off parameter .",
"Specifically, the first part corresponds to the maximum margin of the learning system and it can also represent the complexity of the learning system, while the second part corresponds to the emotion loss function of the learning system implemented in hinge form.",
"Let w = [ w 1 ; ... ; w T ; w ] , Eq.",
"9 is cast into a general form in SVM-type: min w , 1 2 || w || 2 + C > s.t. A w 1 p , 0 p (10) where p is the total number of label pairs, calculated by P ni =1 P e t R i { } P e s ( e t ) norm t,s and 1 p ( 0 p ) is the p 1 all one (zero) vector.",
"The entries in vector C correspond to the weights of hinge losses, i.e., the normalization term to balance the four kinds of label pairs.",
"The matrix A corresponds to the constraints for instances which reflects the emotion relationships and the margin of the label pairs.",
"does not need to be optimized since it can be easily determined by w .",
"Hence the objective function can be reformulated into the following form without : min w F ( w , G ) = 1 2 || w || 2 + C > (1 p A w ) + (11) 565 Through minimizing the objective function F ( w , G ) , we can finally obtain parameter w and the ranking function g .",
"Eq.",
"11 involves a large scale optimization.",
"To address Eq.",
"11, we consider an efficient Alternating Direction Method of Multipliers (ADMM) solution (Bertsekas and T-sitsiklis, 1989).",
"The basic idea of ADMM is to take the decomposition-coordinate procedure such that the solution of subproblems can be coordinated to find the solution to the original problem.",
"We decompose G into M disjoint subsets, i.e., { G 1 , G 2 , ..., GM } and then Eq.",
"11 is converted into the following form: min w 0 , w 1 , w m MX m =1 F ( w m , G m ) , s.t. w m = w 0 , m = 1 , ..., M (12) The surrogate augmented Lagrangian Function (LF) was introduced into Eq.",
"12 and it was cast into the following form: LF ( { w 0 , w 1 , ..., w m } , { m } Mm =1 , ) = MX m =1 F ( w m , G m ) + MX m =1 ( m ) > ( w m w 0 ) + 2 MX m =1 || w m w 0 || 2 (13) where , are the Lagrange multiplies.",
"Sina Social News (News) was collected from the Sina news Society channel where readers can choose one of the six emotions such as Amusement , Touching , Anger , Sadness , Curiosity , and Shock after reading a news article.",
"As Sina is one of the largest online news sites in China, it is sensible to carry out experiments to explore the readers' emotion (social emotion).",
"News articles with less than 20 votes were discarded since few votes can not be considered as proper representation of social emotion.",
"In total, 5,586 news articles published from January 2014 to July 2016 were kept, together with the readers' emotion votes.",
"Ren-CECps corpus (Blogs) (Quan and Ren, 2010) contains 34,719 sentences selected from blogs in Chinese.",
"Each sentence is annotated with eight basic emotions from writer's perspective, including anger , anxiety , expect , hate , joy , love , sorrow and surprise , together with their emotion scores indicating the level of emotion intensity which range from 0 to 1.",
"Higher scores represents higher emotion intensity.",
"The statistics of the two corpora are shown in Table 1.",
"The two corpora were preprocessed by using word segmentation and filtering.",
"The python jie-ba segmenter is used for the segmentation and a removal of stop words is performed based on a stop word thesaurus.",
"Words appeared only once or appeared in less than two documents were re-566 moved to alleviate data sparsity.",
"We used the single layer long short-term memory (LSTM) networks (Hochreiter and Schmidhuber, 1997) to extract the features of each text.",
"LSTM is one kind of recurrent neural networks, which can capture sequence information from text and can represent meanings of inputs in the reduced dimensional space.",
"It treats text as a sequence of word embed-dings and outputs a state vector over each word, which contains the information of the previous words.",
"The final state vector can be used as the representation of the text.",
"In our experiments, we set the dimension of each text representation to 100.",
"During LSTM model training, we optimized the hyper parameters using a development dataset which is built using external data.",
"We train LSTM using a learning rate of 0.001, a dropout rate of 0.3 and categorical cross-entropy as the loss function.",
"The mini batch (Cotter et al., 2011) size is set to 32.",
"After that, the learned text representations are fed into the proposed system for relevant emotion ranking as has been previously presented in the Methodology section.",
"There are only few baselines which address multiple emotions learning from text.",
"We first compare the proposed framework with two baselines which have previously achieved the state-of-the-art performances on multi-emotion detection.",
"Emotion Distribution Learning (EDL) (Zhou et al., 2016): It learns a mapping function from texts to their emotion distributions describing multiple emotions and their respective intensities based on label distribution learning.",
"Moreover, the relationships of emotions are captured based on the Plutchik's wheel of emotions which are subsequently incorporated into the learning algorithm in order to improve the accuracy of emotion detection.",
"EmoDetect (Wang and Pal, 2015): It outputs the emotion distribution based on a dimensionality reduction method using nonnegative matrix factorization which combines several constraints such as emotions bindings, topic correlations and emotion lexicons in a constraint optimization framework.",
"For each method, 10-fold cross validation is conducted using the same feature construction Name Definition PROLoss 1 n P ni =1 P e t R i { } P e s ( e t ) 1 norm t,s l t,s l t,s isamodified0-1error; norm t,s isthesetsizeoflabelpair ( t,s ) HammingLoss 1 nT P ni =1 | R i 4 R i | RankingLoss 1 n P ni =1 ( P ( e t ,e s ) R i R i [ g t ( x i ) < g s ( x i )]) / ( | R i || R i | ) where istheindicatorfunction.",
"method to get the final performance.",
"Supposing n test instances and T emotion categories, several evaluation criteria as presented in Table 2 typically used in multi-label learning can be used to measure the efficiency of the proposed framework and the baseline approaches.",
"PRO Loss concerning the prediction on all labels as well as the rankings of only relevant labels.",
"Hamming loss evaluates how many times an emotion label is mis-classified.",
"Ranking loss evaluates the fraction of reversely ordered emotion pairs.",
"One-error evaluates the fraction of sentences whose top-ranked emotion is not in the relevant emotion set.",
"Average precision evaluates the average fraction of the relevant emotions ranked higher than a particular emotion.",
"Coverage evaluates how many steps are needed to move down the ranked emotion list so as to cover all the relevant emotions of the example.",
"Subset Accuracy evaluates the fraction of correctly classified examples, i.e. the predicted label set is identical to the ground-truth label set.",
"F 1 exam evaluates the averaged F1 (Manning et al., 2008) over instances.",
"MicroF1 pools each document decisions across categories, and then computes an effectiveness measure on the pooled contingency table.",
"MacroF1 take the average of F1 for all categories.",
"Note that the threshold is removed before evaluation.",
"It should be pointed out that metrics from PRO Loss to F 1 exam work by evaluating the learning systems performance on each test ex-567 ample separately, and then returning the mean value across the test set.",
"MicroF1 and MacroF1 work by evaluating the learning systems performance on each emotion category separately, and then returning the macro/micro-averaged value across all emotion categories.",
"The evaluation results using 10 different evaluation criteria are shown in Table 3.",
"It can be observed that our proposed method Relevant Emotion Ranking(RER) outperforms baseline approaches on all 10 evaluation metrics on both datasets.",
"We have also extended RER by incorporating emotion relationships as constraints into the learning framework, denoted as RERc in Table 3.",
"The correlation of every pair of emotions is calculated based on their respective votes from news articles or scores from blogs.",
"It can be observed from Table 3 that in almost all cases, incorporating the constraints gives better performance.",
"It should be pointed out that the results of the baseline approach EDL are slightly different from those reported in (Zhou et al., 2016) since we employ LSTM for feature construction instead of recursive autoencoders.",
"Since relevant emotion ranking can be seen as an extension of multi-label learning, the proposed framework is also compared with 8 widely used multi-label learning methods using the threshold which is initialized as 0.15 after normalization, such as ML-KNN (Zhang and Zhou, 2007), LIFT (Zhang, 2011), CLR (Furnkranz et al., 2008), Rank-SVM (Zhang and Zhou, 2014) , MLLOC (Huang and Zhou, 2012), BP-MLL (Zhang and Zhou, 2006), ECC (Read et al., 2009) and ML-RBF (Zhang, 2009).",
"ML-KNN is based on the traditional k -nearest neighbor (KNN) algorithm.",
"LIFT constructs features specific to each emotion by conducting clustering analysis on its positive or negative instances.",
"CLR transforms MLL into a label ranking problem by pairwise comparison which considers each label pairs and rank all the labels without recognizing that only the rankings of relevant ones are crucial.",
"Rank-SVM focuses on distinguishing relevant from irrelevant while neglecting the rankings of relevant ones.",
"MLLOC tries to exploit emotion correlations in the expression data locally.",
"The global discrimination fitting and local correlation sensitivity are incorporated into a unified framework.",
"BP-MLL is derived from the back propagation algorithm through employing a novel error function capturing the characteristics of multi-label learning.",
"ECC applies classifier chains in an ensemble framework.",
"ML-RBF gets the multi-label neural networks adopted from the traditional Radial Basis Function (RBF) neural networks.",
"For the MLL methods, the value of k is set to 8 in ML-KNN, ratio is 0.02 and is 2 in ML-RBF.",
"Linear kernel is used in LIFT.",
"Rank-SVM uses the RBF kernel with the width equals to 1.",
"The CR4.5 is used as the base classifier for CLR and ECC.",
"The evaluation results of the proposed 568 Datasets Evaluation Criterion Methods RERc ML-KNN LIFT CLR Rank-SVM MLLOC BP-MLL ECC ML-RBF News PRO loss( ) 0.1913 0.2551 0.2426 0.2487 0.2670 0.3429 0.2603 0.2823 0.2658 Hamming Loss( ) 0.2277 0.2876 0.3118 0.3023 0.3127 0.3241 0.3040 0.3079 0.3599 Ranking Loss( ) 0.1405 0.1898 0.1987 0.2142 0.2271 0.3234 0.1897 0.2563 0.1949 One-error( ) 0.1562 0.2366 0.1881 0.2242 0.2258 0.2025 0.2043 0.2151 0.2240 Average Precision( ) 0.8789 0.8095 0.7945 0.7916 0.8001 0.7545 0.8044 0.6245 0.8106 Coverage( ) 2.1316 2.3602 2.4641 2.3453 2.6093 3.1272 2.4032 2.4122 2.4390 Subset Accuracy( ) 0.1822 0.1916 0.1857 0.2386 0.1839 0.2107 0.2765 0.2222 0.2609 F 1 exam ( ) 0.7143 0.6215 0.6262 0.6032 0.6244 0.5193 0.5879 0.5108 0.6147 MicroF1( ) 0.7171 0.6280 0.6131 0.6177 0.6268 0.5389 0.6231 0.5699 0.6160 MacroF1( ) 0.6291 0.5587 0.5593 0.5658 0.5613 0.4913 0.5563 0.4573 0.5543 Blogs PRO loss( ) 0.2321 0.3036 0.2912 0.3041 0.2869 0.3523 0.3429 0.2867 0.2922 Hamming Loss( ) 0.2014 0.2409 0.2242 0.2162 0.2585 0.2156 0.2241 0.2301 0.2204 Ranking Loss( ) 0.2102 0.2928 0.2881 0.2947 0.3024 0.4532 0.3234 0.3345 0.2364 One-error( ) 0.4550 0.5543 0.5152 0.5229 0.5606 0.6143 0.4625 0.6635 0.4679 Average Precision( ) 0.6803 0.5897 0.5963 0.6370 0.5832 0.4532 0.5545 0.5256 0.6412 Coverage( ) 2.1268 2.4448 2.4356 2.2671 2.5962 3.5634 3.1272 2.7756 2.5067 Subset Accuracy( ) 0.1663 0.1978 0.2116 0.1938 0.2321 0.2251 0.2107 0.2236 0.1803 F 1 exam ( ) 0.5114 0.4616 0.4620 0.4509 0.4832 0.4931 0.5093 0.4986 0.4997 MicroF1( ) 0.5116 0.4720 0.4552 0.4859 0.4962 0.4902 0.4889 0.5003 0.5051 MacroF1( ) 0.4161 0.3632 0.3656 0.4056 0.3965 0.3853 0.3813 0.3957 0.4086 Table 4: Comparison with Multi-Label Learning (MLL)",
"approach in comparison to all MLL baselines are presented in Table 4.",
"RERc performs the best on all evaluation measures.",
"It verifies the advantage of RERc due to its consideration of varying intensities of the emotion labels and the ignorance of irrelevant ones during the learning of the relevant emotion ranking model.",
"We also observe that, in most cases, the performance on the News dataset is better than that in the Blogs dataset.",
"This may due to different types of text observed in these two platforms.",
"News articles are more formal while bogs typically contain informal language and are more colloquial.",
"To fully understand the emotion detection results, we generate the top 10 most frequent words in the test set for each emotion label from Blogs dataset presented in Figure 3.",
"Intuitively, we can find that most top words are quite expressive of their associated emotions.",
"For example, the word happy delivers the emotion of Joy and the word tears tells Sorrow , etc.",
"Moreover, we also observe that there are some common words across multiple emotion categories.",
"For instance, friend appears in both Joy and Love .",
"The results demonstrate that the proposed framework can learn emotions from text precisely.",
"In this paper, we have proposed a novel framework to detect multiple emotions from text based on relevant emotion ranking.",
"Moreover, the relationships between emotions are incorporated into the learning framework as constraints.",
"Experimental results on both News and Blogs datasets show that the proposed framework is able to detect multiple emotions as well as generating rankings of relevant emotions.",
"It performs remarkably better than the state-of-the-art baselines on multi-emotion detection and also outperforms several different methods used for multi-label learning.",
"In the future, we will explore the possibility of extending the current framework by detecting emotions at more fine-grained level, for example, emotions associated with specific events reported in text.",
"The work was supported by the National Key R&D Program of China (No. 2017YFB1002801), the National Natural Science Foundation of China (61772132), the Natural Science Foundation of Jiangsu Province of China (BK20161430) and Innovate UK (103652)."
] | [
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"objective",
"other"
] |
[
"Disfluencies in spontaneous speech are known to be associated with prosodic disruptions.",
"However, most algorithms for disfluency detection use only word transcripts.",
"Integrating prosodic cues has proved difficult because of the many sources of variability affecting the acoustic correlates.",
"This paper introduces a new approach to extracting acoustic-prosodic cues using text-based distributional prediction of acoustic cues to derive vector z-score features (innovations).",
"We explore both early and late fusion techniques for integrating text and prosody, showing gains over a high-accuracy text-only model.",
"Speech disfluencies are frequent events in spontaneous speech.",
"The rate of disfluencies varies with the speaker and context; one study observed disfluencies once in every 20 words, affecting up to one third of utterances (Shriberg, 1994).",
"Disfluencies are important to account for, both because of the challenge that the disrupted grammatical flow poses for natural language processing of spoken transcripts and because of the information that they provide about the speaker.",
"Most work on disfluency detection builds on the framework that annotates a disfluency in terms of a reparandum followed by an interruption point (+), an optional interregnum ( { } ), and then the repair, if any.",
"A few simple examples are given below: [ it's + {uh} it's] almost... [ was it, + {I mean} , did you ] put... [I just + I] enjoy working... [By + ] it was attached to...",
"Based on the similarity/differences between the reparandum and the repair, disfluencies are often categorized into three types: repetition (the first example), rephrase (the next example), and restart (the last example).",
"The interruption point is associated with a disruption in the realization of a prosodic phrase, which could involve cutting words off or elongation associated with hesitation, followed by a prosodic reset at the start of the repair.",
"There may also be emphasis in the repair to highlight the correction.",
"Researchers have been working on automatic disfluency detection for many years (Lickley, 1994; Shriberg et al., 1997; Charniak and Johnson, 2001; Johnson and Charniak, 2004; Lease et al., 2006; Qian and Liu, 2013; Zayats et al., 2016), motivated in part by early work on parsing speech that assumed reliable detection of the interruption point (Nakatani and Hirschberg, 1994; Shriberg and Stolcke, 1997; Liu et al., 2006).",
"The first efforts to integrate prosody with word cues for disfluency detection (Baron et al., 2002; Snover et al., 2004) found gains from using prosody, but word cues played the primary role.",
"In subsequent work (Qian and Liu, 2013; Honnibal and Johnson, 2014; Wang et al., 2017), more effective models of word transcripts have been the main source of performance gains.",
"The success of recent neural network systems raises the question of what the role is for prosody in future work.",
"In the next section, we hypothesize where prosody might help and look at the relative frequency of these cases and the performance of a high accuracy disfluency detection algorithm in these contexts.",
"With the premise that there is a potential for prosody to benefit disfluency detection, we then propose a new approach to extracting prosodic features.",
"A major challenge for all efforts to incorporate prosodic cues in spoken language understanding is the substantial variability in the acoustic correlates of prosody.",
"For example, duration cues are expected to be useful disfluencies are often associated with duration lengthening related to hesitation.",
"However, duration varies with phonetic context, word function, prosodic phrase structure, speaking rate, etc.",
"To account for some of this variability, various feature normalization techniques are used, but typically these account for only limited contexts, e.g. phonetic context for duration or speaker pitch range for fundamental frequency.",
"In our work, we introduce a mechanism for normalization using the full sentence context.",
"We train a sequential neural prediction model to estimate distributions of acoustic features for each word, given the word sequence of a sentence.",
"Then, the actual observed acoustic feature is used to find the prediction error, normalized by the estimated variance.",
"We refer to the resulting features as innovations, which can be thought of as a non-linear version of the innovations in a Kalman filter.",
"The innovations will be large when the acoustic cues do not reflect the expected prosodic structure, such as during hesitations, disfluencies, and contrastive or emphatic stress.",
"The idea is to provide prosodic cues that are less redundant with the textual cues.",
"We assess the new prosodic features in experiments on disfluency detection using the Switchboard corpus, exploring both early and late fusion techniques to integrate innovations with text features.",
"Our analysis shows that prosody does help with detecting some of the more difficult types of disfluencies.",
"This paper has three main contributions.",
"First, our analysis of a high performance disfluency detection algorithm confirms hypotheses about contexts where text-only models have high error rates.",
"Second, we introduce a novel representation of prosodic cues, i.e. the innovation vector resulting from predicting prosodic cues given the whole sentence context.",
"Analyses of the innovation distributions show expected patterns of prosodic cues at interruption points.",
"Finally, we demonstrate improved disfluency detection performance on Switchboard by integrating prosody and text-based features in a neural network architecture, while comparing early and late fusion approaches.",
"Disfluency detection algorithms based on text alone rely on the fact that disfluencies often involve parallel syntactic structure in the reparandum and the repair, as illustrated in the previous examples.",
"In these cases, pattern match provides a strong cue to the disfluency.",
"In addition, ungrammatical function word sequences are frequently Reparandum Length % in Type 1-2 3-5 6-8 8 + type repetition 1894 419 12 1 46% rephrase 794 585 66 28% restart 196 14 4% nested* 149 262 158 118 13% Table 1: Total word counts associated with reparanda of different lengths and types of disfluencies.",
"associated with disfluencies, and these are relatively easy for a text-based model to learn.",
"In some cases, an interregnum word (or words) provides a word cue to the interruption point.",
"In the Switchboard corpus, only 15% of interruption points are followed by an interregnum, but it can provide a good cue when present.",
"Prosody mainly serves to help identify the interruption point.",
"Thus, for these types of disfluencies, it makes sense that prosodic cues would not really be needed.",
"Because disfluencies with a parallel syntactic structure do represent a substantial fraction of disfluencies in spontaneous speech, text-based algorithms have been relatively effective.",
"The best models achieve F-scores of 86-91% 1 (Lou and Johnson, 2017; Zayats and Ostendorf, 2018; Wang et al., 2017, 2018).",
"We hypothesize that many er-1 It is difficult to directly compare published results, because there are different approaches to tokenization that have a non-trivial impact on performance but are not well documented in the literature.",
"Those differences include handling of fragment words, turn boundaries, and tokenization.",
"For example, some studies use fragment features explicitly, while others omit them because speech recognition systems often miss them.",
"Turn boundaries that do not end with a slash unit pose an ambiguity during speaker overlap: cross-turn 'sen-tences' can either be combined into a longer sentence or separated based on the turn boundary, which impacts what can be detected.",
"Lastly, there are differences in whether contractions and possessives are split into two tokens, and whether conversational terms such as you know are combined into a single token.",
"rors are associated with contexts where we expect that prosodic cues are useful, specifically the five cases below, with examples from the development set.",
"Restarts: Some disfluencies have no repair; the speaker simply restarts the sentence with no obvious parallel phrase.",
"[ it would be + ] I think it's clear... well [the +] uh i think what changed...",
"Long disfluencies: These include distant pattern match or substantial rephrasing.",
"[there is + for people who don't want to do the military service it would be neat if there were] [what they're basically trying to do + i don't know up here in massachusetts anyhow what they're basically trying to do] Complex (nested) disfluencies: Disfluencies can occur within other disfluencies.",
"[really + [[i + i] + we were really]... [[to + to try to] + for two people who don't really have a budget to] ]...",
"Non-trivial rephrasing: Rephrasing does not always involve a simple rough copy of a repair.",
"[can + still has the option of]... to keep them [in + uh quiet ]...",
"Fluent repetitions: Contexts with fluent repetitions often include expressing a strong stance.",
"a long long time ago... she has very very black and white...",
"In order to confirm that there is potential for prosody to help in these contexts, we first categorize the disfluencies.",
"To avoid hand-labeling of categories, we distinguished disfluencies based on surface forms (repetition, rephrase, restart) and length of the disfluency reparandum.",
"Word counts for the different categories are given in Table 1.",
"high accuracy text-based disfluency detection system that is the baseline for this study (Zayats and Ostendorf, 2018).",
"For this model, trained on Switchboard, the performance is 87.4 F-score (P=93.3, R=82.2) on the development set and 87.5 (P=93.1, R=82.5) on the test set.",
"For each class, we measured the disfluency detection recall (rel-ative frequency of reparandum tokens that were predicted correctly), as well as the percentage of tokens associated with each class.",
"The results in Table 2 confirm that error rates are higher for restarts, longer rephrasings, and complex disfluencies.",
"Rephrase disfluencies include both short lexical access errors, as well as non-trivial reword-ings, which tend to be longer and involve content words.",
"Table 3 breaks down performance for different lengths and word class to explore this difference.",
"We found that rephrase disfluencies that contain content words are harder for the model to detect, compared to rephrases with function words only, and error increases for longer disfluencies.",
"Finally, the relative frequency of false positives in fluent repetitions is 0.35.",
"Since fluent repetitions account for only 4% of all repetitions, the impact on overall performance is small.",
"The ultimate goal of a disfluency detection system is to perform well in domains other than Switchboard.",
"Other datasets are likely to have different distributions of disfluencies, often with a higher frequency of those that are hard to detect, such as restarts and repairs (Zayats et al., 2014).",
"In addition, due to the differences in vocabulary, disfluencies with content words are more likely to get misdetected if there is a domain mismatch.",
"Thus, we hypothesize that prosody features can have a greater impact in a domain transfer scenario.",
"Integrating prosodic cues has proved difficult because of the many sources of variability affecting the acoustic correlates, while systems that only use text achieve high performance.",
"In this work, we propose a new approach that operates on differences in information found in text and prosody.",
"In order to calculate such differences, we introduce innovation features, similar to the concept of innovations in Kalman filters.",
"The key idea is to predict prosodic features based on text information, and then use the difference between the predicted and observed prosodic signal (innovations)",
"as a new feature that is additionally used to predict disfluencies.",
"Let a prosody cue, p i at time i be an observation associated with a sentence transcript containing n word tokens, x 0 . . . x n .",
"This observation can be modeled as a function of the sentence context H ( x 0 . . . x n ) perturbed with Gaussian noise v i N (0 , 2 i ) : p i = H ( x 0 . . . x n ) + v i (1) v i can be viewed as a difference in information found between text and prosody.",
"This difference can be measured using a z-score, which is a measure of how many standard deviations below or above the population mean an observation is.",
"This framework can be viewed as a non-linear extension of a Kalman filter, where both H and 2 i are parametrized using a neural network.",
"Since disfluencies are irregularities in spoken language, they can be considered anomalies to fluent speech flow.",
"A prosody flow that is unusual for a given word sequence, such as one that happens at interruption points, will likely have higher deviation from the predicted distribution.",
"This anomaly in speech flow provides a strong signal when extracted using innovations, which is complementary to the text cues.",
"In the next sections we give more details about the neural network architecture for text encoding, prosodic cues and innovation features, as well as an overview of the whole system.",
"We use both context around a word as well as subword information in text encoding for prosody prediction.",
"Our text encoding consists of two bidirectional LSTMs: one on the token level and another on the phone level.",
"First, we use pre-trained word embeddings (Levy and Goldberg, 2014), part-of-speech tags embeddings, and identity features (whether the word is a filled pause, discourse marker, or incomplete) as inputs to a word-level bidirectional LSTM.",
"Then, for each phone in a word we concatenate the phone embedding, its stress embedding, and the hidden state of the word-level LSTM for the corresponding token.",
"The resulting phone feature vector is used as input to the second bidirectional LSTM.",
"The last hidden state h i of this second LSTM for token i summarizes the phone, stress and context information of that token, which we use to predict word-level prosodic cues.",
"We use 3 categories of stress features in our experiments: primary, secondary and a non-stress phone.",
"cues are scaled as follows:",
"Pause information is extracted on a word-level using Mississippi State (MsState) time alignments (more details on data preprocessing in Section 4.1.) We use scaled real-valued pause information.",
"Scaling pause lengths this way, including the threshold for pauses longer than 1 sec (which are rare), makes the pause distribution less skewed.",
"Word Duration.",
"Similar to pause information, we extract word duration information using MsState time alignments.",
"We do not need to do the standard word-based duration normalization, since the idea behind the innovation model is to normalize prosodic features using a richer context representation.",
"Fundamental frequency (F0) and Energy (E).",
"Similar to Tran et al. (2018), we use three F0 features and three energy features.",
"The three F0 features include normalized cross correlation function (NCCF), log-pitch weighted by probability of voicing (POV), and the estimated delta of log pitch.",
"The three energy features include the log of total energy, the log of total energy from lower 20 mel-frequency bands and the log of total energy from higher 20 mel-frequency bands.",
"The contour features are extracted from 25-ms frames with 10-ms hops using Kaldi (Povey et al., 2011).",
"Our model is trained to predict the mean of these features across the frames in a word.",
"MFCCs.",
"In addition to features used in Tran et al. (2018), we also use 13 mel-frequency cepstral co-efficients, averaged at the word level, similar to F0 and energy features as described above.",
"Given a word-level text encoding h i , for each token in a sentence we predict each of the k prosodic cues (cid:101) p ik listed above.",
"We assume that the predicted prosody cues conditioned on text have a Gaussian distribution: (cid:101) p ik | h i N ( i,k , 2 i,k ) i,k = f ( W k 1 h i + b k 1 ) 2 i,k = softplus ( W k 2 h i + b k 2 ) (3) W k 1 , b k 1 , W k 2 , b k 2 are learnable parameters; the activation function softplus ( x ) = log(1 + exp( x )) ensures that the variance is always positive; f is an activation function, which is softplus for pauses and durations, and tanh for the rest of the prosodic cues.",
"The objective function is a sum of the negative log-likelihood of prosodic cues (cid:101) p ik given text encoding.",
"Then, given the predicted i,k , 2 i,k and true values of prosodic cues (cid:101) p ik , we calculate z-scores for each of the cues, which should have high absolute value for tokens with unusual prosodic behaviour: z ki = (cid:101) p ik i,k i,k (4) The prosody prediction module is illustrated in Figure 1a.",
"These z-scores, or innovations , are used as additional features in our disfluency detection model.",
"We train the prosody prediction model only on sentences that do not contain any disfluencies.",
"Any unusual behaviours in disfluency regions, therefore, should have large innovation values predicted by our model.",
"Following (Zayats and Ostendorf, 2018), we use a bidirectional LSTM-CRF model as our disfluency detection framework.",
"This framework uses a BIO tagging approach, where we predict whether each token is a part of a reparandum, repair or both.",
"Following previous studies, the overall performance is measured in F-score of correctly predicted disfluencies in the reparandum.",
"Previous work used textual features only.",
"Here, we evaluate the importance of innovation cues with two types of multimodal fusion early and late fusion.",
"In early fusion, we concatenate innovations and/or prosody features with the rest of the textual features used in the framework at the input to LSTM layer.",
"In late fusion, we create two separate models one with only textual features and another with innovations and/or prosody features.",
"Then we do a linear interpolation of the states of two models just before feeding the result to the CRF layer: u sharedi = u prosodyi + (1 ) u texti (5) We tune the interpolation weight and report the best in our experiments section.",
"We train our model jointly, optimizing both prosodic cues prediction and disfluency detection.",
"The schematic view of the late fusion system is presented in Figure 1b.",
"In our experiments we evaluate the usefulness of innovation features, and compare it to baselines with text-only or raw prosodic cues.",
"For each model configuration, we run 10 experiments with different random seeds.",
"This alleviates the potential of making wrong conclusions due to lucky/unlucky random seeds.",
"We report both the mean and best scores among the 10 runs.",
"Switchboard (Godfrey et al., 1992) is a collection of telephone conversations between strangers,",
"containing 1126 files hand-annotated with disfluencies.",
"Because human transcribers are imperfect, the original transcripts contained errors.",
"MsState researchers ran a clean-up project which hand-corrected the transcripts and word alignments (Deshmukh et al., 1998).",
"In this work, we use the MsState version of the word alignments, which allows us to extract more reliable prosodic features.",
"Since the corrected version of Switchboard does not contain updated disfluency annotations, we corrected the annotations using a semiautomated approach: we used a text-based disfluency detection algorithm to re-annotate tokens that were corrected by MsState, while keeping the rest of the original disfluency annotations.",
"The result is referred to as a silver annotation.",
"Most of the corrected tokens are repetitions and restarts.",
"To assess the quality of the automatic mapping of disfluencies, we hand-annotated a subset (6.6k tokens, 453 sentences) of the test data and evaluated the performance of the silver annotation against the gold annotation, which has an F1 score of 90.1 (Prec 90.1, Rec 90.1).",
"Comparing the performance estimates from gold and silver annotations on this subset, we find that the silver annotations give somewhat lower F1 scores (2-3% absolute), both due to lower precision and recall scores.",
"Our experiments evaluate the use of innovations with two popular multimodal fusion approaches: early fusion and late fusion.",
"Our baselines include models with text-only, prosody cues only (raw), and innovation features only as inputs.",
"Since innovations require both text and raw prosodic cues, this baseline is multimodal.",
"In addition, for the late fusion experiments, we show the optimal value of , the interpolation weight from Equation",
"5. All experiment results are presented in Table",
"4. We found that innovations are helpful in both early and late fusion frameworks, while late fusion performs better on average.",
"The interpolation weight for the late fusion experiments is high when innovations are used, which further indicates that innovation features are useful in overall prediction.",
"Interestingly, innovation features alone perform surprisingly well.",
"We also take a closer look at the importance of joint training of the disfluency detection system with prosody prediction.",
"To do this, we pretrain the prosody pre-i like to run [about + oh about ] [two + two and a half ] miles the old-timers even the people who are technologists do n't know how to operate i do n't know whether that 's because they you know sort of give up hope it must be really challenging to um try to juggle a job Table 6: Examples of the sentences where prosody innovations hurt.",
"diction part of the model first.",
"Then, we train the full model with innovation inputs while freezing the part of the network responsible for predicting prosodic cues.",
"The mean F-score of this disjointly trained model is 49.27% on the dev set, compared to 80.86% for the jointly trained model.",
"This result suggests that training the system end-to-end in a multitask setup is very important.",
"In order to better understand the impact of the prosody innovations, we perform an error analysis where we compare the predictions of two models: a late fusion model that uses both text and innovation features, and a baseline model that uses text only.",
"All of the analysis is done on the dev set with the model that has the median performance out of 10 that were trained.",
"First, we extract all the sentences where the number of disfluency detection errors using the innovation model is lower than when using the text-only model (168 sentences).",
"Examples of such sentences are presented in Table",
"5. By looking at the sentences where the model with innovations performs better, we see fluent repetitions and other ambiguous cases where audio is useful for correctly identifying disfluencies.",
"On the other hand, in Table 6, we have examples of sentences that have a higher number of errors when prosody is used (143 sentences).",
"In the first example, the labeling of two as fluent by the model with prosody is arguably correct, with the repetition indicating a range rather than a correction.",
"The next involves a parenthetical phrase, the start of which may be confused with an interruption point.",
"In the last two cases, there is a prosodic disruption and an interegnum, but no correction.",
"In order to understand whether incorporating prosody through our model supports the hypotheses in Section 2, we compare the performance of two models for different categories of disfluen-Figure 2: Histogram of innovations for word duration and energy features for words preceding an interruption point vs. fluent words.",
"cies.",
"We found that using prosody innovations improves detection of: non-repetition disfluencies (from 68.2% to 73.7%), particularly for disfluencies with content words (65.2% to 71.0%); long repairs (64.0% to 72.7% and 40.0% to 64.6% for disfluencies with length of repair greater than 3 and 5 correspondingly); and restarts (from 36.0% to 37.4%).",
"Prosodic innovations also help decrease the rate of false positives for fluent repetitions: the false positives rate decreased from 46.5% to 38.4%.",
"However, the prosody model increases the false positives in other contexts, such as in the examples in Table",
"6. 5.2 Innovation Predictors In order to understand what the model actually learns with respect to innovations, we look at innovation distributions for words preceding interruption points compared to fluent words.",
"The histograms are presented in Figure 2.",
"As expected, we see that words preceding interruption points have atypically longer duration and lower energy.",
"The intonation features did not show substantial distribution differences, probably due to the overly simplistic word-level averaging strategy.",
"Most work on disfluency detection falls into three main categories: sequence tagging, noisy-channel and parsing-based approaches.",
"Sequence tagging approaches rely on BIO tagging with recurrent neural networks (Hough and Schlangen, 2015; Zayats et al., 2016; Wang et al., 2016; Zayats and Ostendorf, 2018; Lou et al., 2018).",
"Noisy channel models operate on a relationship between the reparandum and repair for identifying disfluencies (Charniak and Johnson, 2001; Zwarts et al., 2010).",
"Lou and Johnson (2017) used a neural language model to rerank sentences using the noisy channel model.",
"Another line of work combined parsing and disfluency removal tasks (Ra-sooli and Tetreault, 2013; Honnibal and Johnson, 2014; Tran et al., 2018).",
"Recently a transition-based neural model architecture was proposed for disfluency detection (Wang et al., 2017).",
"The current state of the art in disfluency detection (Wang et al., 2018) uses a neural machine translation framework with a transformer architecture and additional simulated data.",
"All of the models mentioned above rely heavily on pattern match features, hand-crafted or automatically extracted, that help to identify repetitions and disfluencies with parallel syntactic structure.",
"While prosodic features are useful for detecting interruption points (Nakatani and Hirschberg, 1994; Shriberg and Stolcke, 1997; Shriberg, 1999; Liu et al., 2006), recent methods on disfluency detection predominantly rely on lexical information exclusively.",
"An exception is (Ferguson et al., 2015), which showed some gains using a simple concatenation of pause and word duration features.",
"Similar to disfluency detection, parsing has seen little use of prosody in recent studies.",
"However, Tran et al. (2018) recently demonstrated that that a neural model using pause, word and rhyme duration, f0 and energy helps in spoken language parsing, specifically in the regions that contain disfluencies.",
"Early fusion and late fusion are the two most popular types of modality fusion techniques.",
"In recent years, more interesting modality fusion approaches were introduced, most of them where the fusion happens inside the model (Zadeh et al., 2017; Chen et al., 2017; Zadeh et al., 2018).",
"Those methods usually require the model to learn interactions between modalities implicitly, by backpropa-gating the errors based on the main objective function with respect to the task.",
"Other multimodal representation learning approaches learn a shared representation between multiple modalities (An-drew et al., 2013; Ryan Kiros, 2014; Xu et al., 2015; Suzuki et al., 2016), often targeting unsupervised translation from one modality to the other.",
"In our work we use innovations as a novel representation learning approach, where our emphasis is on looking into complementary cues rather than similarity between multiple modalities.",
"In this paper, we introduce a novel approach to extracting acoustic-prosodic cues with the goal of improving disfluency detection, but also with the intention of impacting spoken language processing more generally.",
"Our initial analysis of a text-only disfluency detection system shows that despite high performance of such models, there exists a big gap in the performance of text-based approaches for some types of disfluencies, such as restarts and non-trivial or long rephrases.",
"Thus, prosody cues, which can be indicative of interruption points, have a potential to contribute towards detection of more difficult types of disfluencies.",
"Since the acoustic-prosodic cues carry information related to multiple phenomena, it can be difficult to isolate the cues that are relevant to specific events, such as interruption points.",
"In this work, we introduce a novel approach where we extract relevant acoustic-prosodic information using text-based distributional prediction of acoustic cues to derive vector z-score features, or innovations.",
"The innovations point to irregularities in prosody flow that are not predicted by the text, helping to better isolate signals relevant to disfluency detection that are not simply redundant with textual cues.",
"We explore both early and late fusion approaches to combine innovations with text-based features.",
"Our experiments show that innovation features are better predictors of disfluencies compared to the original acoustic cues.",
"Our analysis of the errors and of the innovation features point to a limitation of the current work, which is in the modeling of F0 features.",
"The current model obtains word-based F0 (and energy) features by simply averaging the values over the duration of the word, which loses any distinctions between rising and falling F0.",
"By leveraging polynomial contour models, we expect to improve both intonation and energy features, which we hope will reduce some of the false detections associated with emphasis and unexpected fluent phrase boundaries.",
"An important next step is to test the system using ASR rather than hand transcripts.",
"It is possible that errors in the transcripts could hurt the residual prediction, but if prosody is used to refine the recognition hypothesis, this could actually lead to improved recognition.",
"Finally, we expect that the innovation model of prosody can benefit other NLP tasks, such as sarcasm and intent detection, as well as detecting paralinguist information."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"result",
"abstain",
"abstain",
"objective",
"result",
"objective",
"objective",
"objective",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"objective",
"result",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain"
] |
[
"Meetings are a key component of human collaboration.",
"As increasing numbers of meetings are recorded and transcribed, meeting summaries have become essential to remind those who may or may not have attended the meetings about the key decisions made and the tasks to be completed.",
"However, it is hard to create a single short summary that covers all the content of a long meeting involving multiple people and topics.",
"In order to satisfy the needs of different types of users, we define a new query-based multi-domain meeting summarization task, where models have to select and summarize relevant spans of meetings in response to a query, and we introduce QMSum, a new benchmark for this task.",
"QMSum consists of 1,808 query-summary pairs over 232 meetings in multiple domains.",
"Besides, we investigate a locate-then-summarize method and evaluate a set of strong summarization baselines on the task.",
"Experimental results and manual analysis reveal that QMSum presents significant challenges in long meeting summarization for future research.",
"Dataset is available at https://github.com/Yale-LILY/ QMSum .",
"Meetings remain the go-to tool for collaboration, with 11 million meetings taking place each day in the USA and employees spending six hours a week, on average, in meetings (Mroz et al., 2018).",
"The emerging landscape of remote work is making meetings even more important and simultaneously taking a toll on our productivity and wellbeing (Spataro, 2020).",
"The proliferation of meetings makes it hard to stay on top of this sheer volume of information and increases the need for automated methods for accessing key information exchanged during them.",
"Meeting summarization (Wang and Cardie, 2013; Shang et al., 2018; These two authors contributed equally. The order of authorship decided by the flip of a coin. Meeting Transcript Summarize the whole meeting. The meeting was mainly related to ...... Turn 0: Project Manager: We have been provided with some technical tools to communicate. ...... ...... ...... ...... Turn 316: Project Manager: Thanks. Have a nice day! Summarize the discussion about the trends of current remote controls. The group discussed different trends based on different ages of people. ...... Finally they decided to add LCD screen. What did User Interface Designer think of surface design when discussing user interface? User Interface Designer said the remote should perform standard features right out-of-the-box ...... Turn 16: Marketing: This is just a presentation on the trends that we're gonna use to make the product stand out from ...... ...... Turn 78: Marketing: Young people like that things with cool appearance. Turn 85: Marketing: What do you think of adding an LCD? ...... Turn 89: Project Manager: Okay, we'll include it to make the appearance attractive to young people. Turn 121: User Interface Designer: The idea of having a remote is you have different keys and different structures. ...... Turn 162: Project Manager: Sure. Let's push forward the interface design. Figure 1: Examples of query-based meeting summarization task. Users are interested in different facets of the meeting. In this task, a model is required to summarize the contents that users are interested in and query. Li et al., 2019; Zhu et al., 2020) is a task where summarization models are leveraged to generate summaries of entire meetings based on meeting transcripts.",
"The resulting summaries distill the core contents of a meeting that helps people efficiently catch up to meetings.",
"Most existing work and datasets on meeting summarization (Janin et al., 2003; Carletta et al., 2005) pose the problem as a single document summarization task where a single summary is generated for the whole meeting.",
"Unlike news articles where people may be satisfied with a high-level summary, they are more likely to seek more detailed information when it comes to meeting summaries such as topics (Li et al., 2019), opinions, actions, and decisions (Wang and Cardie, 2013).",
"This poses the question of whether a single paragraph is enough to summarize the content of an entire meeting?",
"Figure 1 shows an example of a meeting about remote control design.",
"The discussions in the meeting are multi-faceted and hence different users might be interested in different facets.",
"For example, someone may be interested in learning about the new trends that may lead to the new product standing out, while others may be more interested in what other attendees thought about different elements of the design.",
"It is challenging to compress or compose a short summary that contains all the salient information.",
"Alternatively, summarization systems should adopt a more flexible and interactive approach that allows people to express their interests and caters to their diverse intents when generating summaries (Dang, 2005, 2006; Litvak and Vanetik, 2017; Baumel et al., 2018).",
"With comprehensive consideration of the multi-granularity meeting contents, we propose a new task, query-based meeting summarization.",
"To enable research in this area, we also create a high-quality multi-domain summarization dataset.",
"In this task, as shown in Figure 1, given a query and a meeting transcript, a model is required to generate the corresponding summary.",
"The query-based approach is a flexible setup that enables the system to satisfy different intents and different levels of granularity.",
"Besides the annotated queries and corresponding gold summaries at different levels of granularity, our new dataset contains a rich set of annotations that include the main topics of each meeting and the ranges of relevant text spans for the annotated topics and each query.",
"We adopt a hierarchical annotation structure that could not only assist people to find information faster, but also strengthen the models' summarization capacity.",
"In this paper, we employ a two-stage meeting summarization approach: locate-then-summarize .",
"Specifically, given a query, a model called Locator is used to locate the relevant utterances in the meeting transcripts, and then these extracted spans are used as an input to another model called Summarizer to generate a query-based summary.",
"We present and evaluate several strong baselines based on state-of-the-art summarization models on QMSum.",
"Our results and analysis from different perspectives reveal that the existing models struggle in solving this task, highlighting the challenges the models face when generating query-based meeting summaries.",
"We are releasing our dataset and baselines to support additional research in query-focused meeting summarization.",
"Overall, our contributions are listed as follows: 1) We propose a new task, query-based multi-domain meeting summarization, and build a new benchmark QMSum with a hierarchical annotation structure.",
"2) We design a locate-then-summarize model and conduct comprehensive experiments on its strong variants and different training settings.",
"3) By human evaluation, we further pose the challenges of the new task, including the impact of different query types and factuality errors.",
"Most prior work in text summarization (Rush et al., 2015; Chopra et al., 2016; Nallapati et al., 2016; See et al., 2017; Celikyilmaz et al., 2018; Chen and Bansal, 2018; Zhong et al., 2019a; Xu and Durrett, 2019; Liu and Lapata, 2019; Lebanoff et al., 2019; Cho et al., 2019; Zhong et al., 2020; Wang et al., 2020; Xu et al., 2019; Jia et al., 2020) investigate how to generate better summaries on news article data, such as CNN/DailyMail (Hermann et al., 2015), Newsroom (Grusky et al., 2018), etc.",
"Scientific paper summarization is another important branch (Cohan et al., 2018; Yasunaga et al., 2019; An et al., 2021).",
"Our paper mainly focuses on meeting summarization, a more challenging task compared to news summarization.",
"With the burst of demand for meeting summarization, this task attracts more and more interests from academia (Wang and Cardie, 2013; Oya et al., 2014; Shang et al., 2018; Zhu et al., 2020) and becomes an emerging branch of text summarization area.",
"Query-based summarization aims to generate a brief summary according to a source document and a given query.",
"There are works studying this task (Daum III and Marcu, 2006; Otterbacher et al., 2009; Wang et al., 2016; Litvak and Vanetik, 2017; Nema et al., 2017; Baumel et al., 2018; Ishi-gaki et al., 2020; Kulkarni et al., 2020; Laskar et al., 2020).",
"However, the models focus on news (Dang, 2005, 2006), debate (Nema et al., 2017), and Wikipedia (Zhu et al., 2019).",
"Meeting is also a genre of discourses where query-based summarization could be applied, but to our best knowledge, there are no works studying this direction.",
"Meeting summarization has attracted a lot of interest recently (Chen and Metze, 2012; Wang and Cardie, 2013; Mehdad et al., 2013; Oya et al., 2014; Shang et al., 2018; Li et al., 2019; Zhu et al., 2020; Koay et al., 2020).",
"Specifically, Mehdad et al. (2013) leverage entailment graphs and ranking strategy to generate meeting summaries.",
"Wang and Cardie (2013) attempt to make use of decisions, action items and progress to generate the whole meeting summaries.",
"Oya et al. (2014) leverages the relationship between summaries and the meeting transcripts to extract templates and generate summaries with the guidance of the templates.",
"Shang et al. (2018) utilize multi-sentence compression techniques to generate summaries under an unsupervised setting.",
"Li et al. (2019) attempt to incorporate multi-modal information to facilitate the meeting summarization.",
"Zhu et al. (2020) propose a model which builds a hierarchical structure on word-level and turn-level information and uses news summary data to alleviate the inadequacy of meeting data.",
"Unlike previous works, instead of merely generating summaries for the complete meeting, we propose a novel task where we focus on summarizing multi-granularity contents which cater to different people's need for the entire meetings, and help people comprehensively understand meetings.",
"In this section, we show how we collected meeting data from three different domains: academic meetings, product meetings, and committee meetings.",
"In addition, we show how we annotated the three types of meeting data while ensuring annotation quality for query-based meeting summarization.",
"Product Meetings AMI 1 (Carletta et al., 2005) is a dataset of meetings about product design in an industrial setting.",
"It consists of 137 meetings about how to design a new remote control, from kick-off to completion over the course of a day.",
"It contains meeting transcripts and their corresponding meeting summaries.",
"Academic Meetings ICSI 2 (Janin et al., 2003) dataset is an academic meeting dataset composed of 59 weekly group meetings at International Computer Science Institute (ICSI) in Berkeley, and their summaries.",
"Different from AMI, the contents of ICSI meetings are specific to the discussions about research among students.",
"Committee Meetings Parliamentary committee meeting is another important domain of meetings.",
"These meetings focus on the formal discussions on a wide range of issues (e.g., the reform of the education system, public health, etc.) Also, committee meetings are publicly available, which enables us to access large quantities of meetings.",
"We include 25 committee meetings of the Welsh Parliament 3 and 11 from the Parliament of Canada 4 in our dataset.",
"After collecting meeting transcripts, we recruited annotators and required them to annotate by following annotation instruction.",
"As illustrated in Figure 2, the annotation process is composed by three stages: topic segmentation, query generation, and query-based summarization.",
"Topic Segmentation Meeting transcripts are usually long and contain discussions about multiple topics.",
"To assist further annotations, we asked annotators to write down the main topics discussed in the meetings, and their relevant text spans, which makes the meeting structure clear.",
"As shown in Figure 2, scope of the project and team building is one of the annotated main topics, and its relevant text spans of the topic are ( Turn 25 50, Turn 73 -89).",
"More details are listed in Appendix A.2.1.",
"Query Generation Towards the query-based task, we further asked annotators to design queries by themselves.",
"To cater to the need for multi-granularity contents, we categorized two types of queries: queries related to general information (e.g., the contents of whole meetings, etc.) are called general queries ; queries focusing on relatively detailed information (e.g., the discussion about certain topics, etc.) are called specific queries .",
"To alleviate the influence of extremely hard queries and focus on the evaluation of query-based summarization capacity, rather than designing queries in an unconstrained way, we asked annotators to generate queries according to the schema.",
"Details of the query schema list are shown in Appendix A.1.",
"The list consists of important facets people might be interested in, including overall contents of discussions, speakers' opinions, the reasons why a speaker proposed an idea, etc., which cover the most common queries over meetings involving multiple people discussing several topics.",
"To query multi-granularity meeting contents, we further divided the query schema list into general and specific ones, and asked annotators to design queries towards general and specific meeting contents, respectively.",
"In terms of general query generation , the annotators were asked to design 1 2 general queries according to the general schema list.",
"For specific query generation , annotators were asked to first select 2 4 main topics and their relevant text spans, and then design around 3 specific queries based on the specific schema list for each main topic.",
"To ensure the task to be summarization instead of question answering , we asked annotators to design queries of which the relevant text spans are more than 10 turns or 200 words.",
"Therefore, our proposed task would differ from question answering tasks where models merely need to extract phrases or generate answers based on short text spans, and focus on how to summarize based on large stretches of texts.",
"Additional details are in Appendix A.2.2.",
"Query-based Summarization According to the designed queries and meeting transcripts, annotators were asked to do faithful summarization.",
"Being accorded with the meeting transcripts and queries is the most important criterion.",
"We also required annotators to write informative summarization.",
"For example, they could add more details about the reasons why the group/committee made such decisions, and which important ideas the group/committee members proposed, etc.",
"Besides, the annotated summaries should be abstractive, fluent and concise.",
"We set word limits for the answers of general queries (50 150 words) and specific queries (20 100 words) to keep conciseness.",
"More details are shown in Appendix A.2.3.",
"In the end, we organize all the meeting data after accomplishing the three annotation stages.",
"Detailed annotations of one product meeting and one committee meeting are shown in Appendix A.4.",
"Each meeting transcript is accompanied with annotated main topics, queries, their corresponding summaries, and relevant text span information.",
"Annotator Recruitment To guarantee annotation quality given the complexity of the task, instead of employing tasks on Amazon Mechanical Turker, we anonymously recruited undergraduate students who are fluent in English.",
"The annotation team consists of 2 native speakers and 10 nonnative speakers majoring in English literature.",
"trained in a pre-annotation process.",
"Annotations were reviewed across all stages in our data collection process by expert of this annotation task.",
"More details of review standards could be found in Appendix A.3.",
"Statistics of the final QMSum dataset is shown Table",
"1. There are several advantages of QMSum dataset, compared with the previous datasets.",
"Number of Meetings and Summaries QMSum includes 232 meetings, which is the largest meeting summarization dataset to our best knowledge.",
"For each query, there is a manual annotation of corresponding text span in the original meeting, so there are a total of 1,808 question-summary pairs in QMSum.",
"Following the previous work, we randomly select about 15% of the meetings as the validation set, and another 15% as the test set.",
"Briefty The average length of summaries in QMSum 69.6 is much shorter than that of previous AMI and ICSI datasets.",
"It is because our dataset also focuses on specific contents of the meetings, and the length of their corresponding summaries would not be long.",
"It leaves a challenge about how to precisely capture the related information and compress it into a brief summary.",
"Multi-domain Setting Previous datasets are specified to one domain.",
"However, the model trained on the summarization data of a single domain usually has poor generalization ability (Wang et al., 2019; Zhong et al., 2019b; Chen et al., 2020).",
"Therefore, QMSum contains meetings across multiple domains: Product, Academic and Committee meetings.",
"We expect that our dataset could provide a venue to evaluate the model's generalization ability on meetings of different domains and help create more robust models.",
"In this section, we first define the task of query-based meeting summarization, then describe our two-stage locate-then-summarize solution in detail.",
"Existing meeting summarization methods define the task as a sequence-to-sequence problem.",
"Specifically, each meeting transcript X = ( x 1 , x 2 , , x n ) consists of n turns, and each turn x i represents the utterance u i and its speaker s i , that is, x i = ( u i , s i ) .",
"Additionally, each utterance contains l i words u i = ( w 1 , , w l i ) .",
"The object is to generate a target summary Y = ( y 1 , y 2 , , y m ) by modeling the conditional distribution p ( y 1 , y 2 , , y m | ( u 1 , s 1 ) , , ( u n , s n )) .",
"However, meetings are usually long conversations involving multiple topics and including important decisions on many different matters, so it is necessary and practical to use queries to summarize a certain part of the meeting.",
"Formally, we introduce a query Q = ( w 1 , , w | Q | ) for meeting summarization task, the objective is to generate a summary Y by modeling p ( y 1 , y 2 , , y m | Q, ( u 1 , s 1 ) , , ( u n , s n )) .",
"In our two-stage pipeline, the first step requires a model to locate the relevant text spans in the meeting according to the queries, and we call this model a Locator.",
"The reason why we need a Locator here is, most existing abstractive models cannot process long texts such as meeting transcripts.",
"So we need to extract shorter, query-related paragraphs as input to the following Summarizer.",
"We mainly utilize two methods to instantiate our Locator: Pointer Network (Vinyals et al., 2015) and a hierarchical ranking-based model.",
"Pointer Fixed Pre-trained BERT CNN Fixed Pre-trained BERT CNN Transformer Layers Word Embedding Turn Embedding Role Embedding Query Embedding Positional Encoding 1st utterance n-th utterance Figure 3: Hierarchical ranking-based locator structure.",
"Network has achieved widespread success in extractive QA tasks (Wang and Jiang, 2017).",
"For each question, it will point to the <start, end> pair in the source document, and the span is the predicted answer.",
"Specific to our task, Pointer Network will point to the start turn and the end turn for each query.",
"It is worth noting that one query can correspond to multiple spans in our dataset, so we always extract three spans as the corresponding text for each query when we use Pointer Network as Locator in the experiments.",
"In addition, we design a hierarchical ranking-based model structure as the Locator.",
"As shown in Figure 3, we first input the tokens in each turn to a feature-based BERT to obtain the word embedding, where feature-based means we fix the parameters of BERT, so it is actually an embedding layer.",
"Next, CNN (Kim, 2014) is applied as a turn-level encoder to capture the local features such as bigram, trigram and so on in each turn.",
"Here we do not use Transformer because previous work (Kedzie et al., 2018) shows that this component does not matter too much for the final performance.",
"We combine different features to represent the utterance u i in each turn, and concatenate the speaker embedding s i as the turn-level representation: x i = [ u i ; s i ] , where [; ] denotes concatenation and s i is a vector randomly initialized to represent the speaking style of meeting participants.",
"Then these turn representations will be contextu-alized by a document-level Transformer (Vaswani et al., 2017) encoder.",
"Next, we introduce query embedding q which is obtained by a CNN (shared parameters with CNN in turn-level encoder) and use MLP to score each turn.",
"We use binary cross-entropy loss to train our Locator.",
"Finally, turns with the highest scores are selected as the relevant text spans of each query and will be inputted to the subsequent Summarizer.",
"Given the relevant paragraphs, our goal in the second stage is to summarize the selected text spans based on the query.",
"We instantiate our Summarizer with the current powerful abstractive models to explore whether the query-based meeting summarization task on our dataset is challenging.",
"To be more specific, we choose the following three models: Pointer-Generator Network (See et al., 2017) is a popular sequence-to-sequence model with copy mechanism and coverage loss, and it acts as a baseline system in many generation tasks.",
"The input to Pointer-Generator Network (PGNet) is: <s> Query </s> Relevant Text Spans </s>.",
"BART (Lewis et al., 2020) is a denoising pretrained model for language generation, translation and comprehension.",
"It has achieved new state-of-the-art results on many generation tasks, including summarization and abstractive question answering.",
"The input to BART is the same as PGNet.",
"HMNet (Zhu et al., 2020) is the state-of-the-art meeting summarization model.",
"It contains a hierarchical structure to process long meeting transcripts and a role vector to depict the difference among speakers.",
"Besides, a cross-domain pretraining process is also included in this strong model.",
"We add a turn representing the query at the beginning of the meeting as the input of HMNet.",
"In this section, we introduce the implementation details, effectiveness of Locator, experimental results and multi-domain experiments on QMSum.",
"For our ranking-based Locator, the dimension of speaking embedding is 128 and the dimension of turn and query embedding is 512.",
"Notably, we find that removing Transformers in Locator has little impact on performance, so the Locator without Transformer is used in all the experiments.",
"To reduce the burden of the abstractive models, we utilize Locator to extract 1/6 of the original text and input them to Summarizer.",
"The hyperparameters used by PGNet and HMNet are consistent with the original paper.",
"Due to the limitation of computing resources, we use the base version of pre-trained models (including feature-based BERT and BART) Models Extracted Length 1 / 6 1 / 5 1 / 4 1 / 3 Random 58.86 63.20 67.56 73.81 Similarity 55.97 59.24 63.45 70.12 Pointer 61.27 65.84 70.13 75.96 Our Locator 72.51 75.23 79.08 84.04 Table 2: ROUGE-L Recall score between the predicted spans and the gold spans.",
"in this paper.",
"We use fairseq library 5 to implement BART model.",
"For PGNet and BART, we truncate the input text to 2,048 tokens, and remove the turns whose lengths are less than",
"5. All results reported in this paper are averages of three runs.",
"First, we need to verify the effectiveness of the Locator to ensure that it can extract spans related to the query.",
"Instead of the accuracy of capturing relevant text spans, we focus on the extent of overlap between the selected text spans and the gold relevant text spans.",
"It is because whether the summarization process is built on similar contexts with references or not is essential for Summarizer.",
"Therefore, we use ROUGE-L recall to evaluate the performance of different models under the setting of extracting the same number of turns.",
"We introduce two additional baselines: Random and Similarity.",
"The former refers to randomly extracting a fixed number of turns from the meeting content, while the latter denotes that we obtain turn embedding and query embedding through a feature-based BERT, and then extract the most similar turns by cosine similarity.",
"As shown in Table 2, because there are usually a large number of repeated conversations in the meetings, Random can get a good ROUGE-L recall score, which can be used as a baseline to measure the performance of the model.",
"Similarity performs badly, even worse than Random, which may be due to the great difference in style between the BERT pre-trained corpus and meeting transcripts.",
"Pointer Network is only slightly better than Random.",
"We think this is because in the text of with an average of more than 500 turns, only three <start, end> pairs are given as supervision signals, which is not very informative and therefore is not conducive to model learning.",
"On the contrary, our hierarchical ranking-based Locator always greatly exceeds the random score, which demonstrates that it can indeed extract more relevant spans in the meeting.",
"Even if 1/6 of the original text is extracted, it can reach a 72.51 ROUGE-L recall score, which significantly reduces the burden of subsequent Summarizer processing long text while ensuring the amount of information.",
"For comparison, we introduce two basic baselines: Random and Extractive Oracle.",
"We randomly sample 10 turns of the original meeting for each query as an answer and this is the Random baseline in Table",
"3. Besides, we implement the Extractive Oracle, which is a greedy algorithm for extracting the highest-scoring sentences, usually regarded as the the upper bound of the extractive method (Nallapati et al., 2017).",
"An unsupervised method, TextRank is also included in our experiment.",
"We treat each turn as a node and add a query node to fully connect all nodes.",
"Finally, the 10 turns with the highest scores are selected as the summary.",
"Table 3 shows that the performance of three typical neural network models is significantly better than Random and TextRank.",
"When equipped with our Locator, both PGNet and BART have brought evident performance improvements (PGNet: 28.74 -> 31.37 R-1, BART: 29.20 -> 31.74 R-1).",
"Compared to PGNet , the advantage of BART lies in the ROUGE-L score (1.13 improvement), which indicates that it can generate more fluent sentences.",
"The current state-of-the-art meeting summarization model HMNet achieves the best performance, which may be attributed to its cross-domain pretraining process making HMNet more familiar with the style of meeting transcripts.",
"In addition, we also use the gold text spans as the input of different models to measure the performance loss caused by Locator.",
"Surprisingly, for models (PGNet and BART) that need to truncate the input text, although Locator is an approximate solution, the models equipped with it can achieve comparable results with the models based on gold span inputs.",
"Therefore, in this case, our two-stage pipeline is a simple but effective method in the meeting domain.",
"However, for some models (HM-Net) that use a hierarchical structure to process long text, inputting gold text spans can still bring huge performance improvements.",
"In addition, we also conduct multi-domain and cross-domain experiments.",
"First, we perform in-domain and out-domain tests in the three domains of QMSum dataset.",
"In Table 4, we can conclude that there are obvious differences between these three domains.",
"For instance, the models trained on the Academic and Committee domains perform poorly when tested directly on the Product domain, with only the ROUGE-L scores of 24.09 and 22.17 respectively.",
"However, the model trained on the single domain of Product can achieve a ROUGE-L score of 31.37, which illustrates although these domains are all in the form of meeting transcript, they still have visible domain bias.",
"On the other hand, when we train all the domains together, we can obtain a robust summarization model.",
"Compared with models trained on a single domain, models trained on QMSum can always achieve comparable results.",
"In the Academic domain, the model with multi-domain train-Opin.",
"ing can even get higher ROUGE-2 (5.05 vs 4.32) and ROUGE-L (23.01 vs 22.58) scores.",
"These results show that the multi-domain setting in meeting summarization task is apparently necessary and meaningful.",
"Meeting transcripts cover various fields, making the transfer of models particularly difficult.",
"Therefore, we need to introduce multi-domain training to make the model more robust, so it can be applied to more practical scenarios.",
"In this section, we conduct comprehensive analysis of query types and errors in the model output.",
"We manually divide the query in QMSum into five aspects: personal opinion, multi-person interaction, conclusion or decision, reason, and overall content.",
"For example, Summarize the whole meeting. requires a summary of the overall content and Why did A disagree with B? requires a summary of some reasons.",
"The questions we are concerned about are: what is the distribution of different types of queries in QMSum?",
"Are there differences in the difficulty of different types of queries?",
"To figure out the above issues, we randomly sample 100 queries from the test set, count the number of each type, and score the difficulty of each query.",
"Table 5 illustrates that answering 40% of queries requires summarizing the interaction of multiple people, and the queries that focus on personal opinions and different aspects of conclusions or decisions account for almost 20% each.",
"Besides, queries about a specific reason are less frequent in the meetings.",
"We also perform a human evaluation of the difficulty of various query types.",
"For each query, the relevant text spans and query-summary pair are shown to annotators.",
"Annotators are asked to score the difficulty of this query in two dimensions: 1) the difficulty of locating relevant information in the original text; 2) the difficulty of organizing content to form a summary.",
"For each dimension, they can choose an integer between 1 and 3 as the score, where 1 means easy and 3 means difficult.",
"As we can see from Table 5, query about reasons is the most difficult to locate key information in related paragraphs, and this type of query is also challenging to organize and summarize reasonably.",
"Queries about multi-person interaction and overall content are relatively easy under human evaluation scores.",
"The relevant paragraphs of the former contain multi-person conversations, which are usually redundant, so the effective information is easier to find; the latter only needs to organize the statements in the chronological order of the meeting to write a summary, so it has the lowest Diff.",
"2 score.",
"The model performance also confirms this point, BART can get more than 30 R-L score on these two types of queries, but performs poorly on the rest.",
"Therefore, the remaining three types of queries in QMSum are still very challenging even for powerful pre-trained models, and further research is urgently needed to change this situation.",
"Although ROUGE score can measure the degree of overlap between the generated summary and the gold summary, it cannot reflect the factual consistency between them or the relevance between the predicted summary and the query.",
"Therefore, in order to better understand the model performance and the difficulty of the proposed task, we sample 100 generated summaries for error analysis.",
"Specifically, we ask 10 graduate students to do error analysis on the sampled summaries.",
"Each summary is viewed by two people.",
"They discuss and agree on whether the sample is consistent with the original facts and whether it is related to the query.",
"contain factual errors.",
"This problem is even more serious on QMSum: we find inconsistent facts in 74% of the samples, which may be because the existing models are not good at generating multi-granularity summaries.",
"Although BART can achieve state-of-the-art performance in the single-document summarization task, it does not seem to be able to truly understand the different aspects of the meeting, thus create factual errors.",
"What's worse, 31% summaries are completely unrelated to the given query.",
"This not only encourages us to design more powerful models or introduce more prior knowledge to overcome this challenge, but also shows better metrics are needed to evaluate model performance in generating multi-granularity summaries.",
"We propose a new benchmark, QMSum, for query-based meeting summarization task.",
"We build a locate-then-summarize pipeline as a baseline and further investigate variants of our model with different Locators and Summarizers, adopt different training settings including cross-domain and multi-domain experiments to evaluate generalizability, and analyze the task difficulty with respect to query types.",
"The new task and benchmark leave several open research directions to explore: 1) how to process the long meeting discourses; 2) how to make a meeting summarization model generalize well; 3) how to generate summaries consistent with both meeting transcripts and queries.",
"4) how to reduce the annotation cost for meeting summarization.",
"The Language, Information, and Learning lab at Yale LILY) would like to acknowledge the research grant from Microsoft Research.",
"We would also like to thank the annotators for their hard work and anonymous reviewers for their valuable comments.",
"We propose a novel query-based meeting summarization task, accompanying with a high-quality dataset QMSum.",
"Since the paper involves a new dataset and NLP application, this section is further divided into the following two parts.",
"property and privacy rights of the original authors: both of the collected meeting transcripts and recruited annotators.",
"We ensure that the dataset construction process is consistent with the intellectual property and privacy rights of the original authors of the meetings.",
"All the meeting transcripts we collected are public and open to use according to the regulation 6 7 8 9 .",
"The annotation process is consistent with the intellectual property and privacy rights of the recruited annotators as well.",
"Compensation for Annotators We estimated the time for annotating one meeting is around 1 2 hours.",
"Therefore, we paid annotators around $14 for each product and academic meeting and $28 for each committee meeting.",
"To further encourage annotators to work on annotations, we proposed bonus mechanism: the bonus of each of the 5th to 8th meetings would be $4; the bonus of each of the 9th to 12th meetings would be $5, and so on.",
"Some of the authors also did annotations and they were paid as well.",
"Steps Taken to Avoid Potential Problems The most possible problems which may exist in the dataset is bias problem and the inconsistency among queries, annotated summaries and original meeting contents.",
"With regard to bias problem, we find that product meeting dataset rarely contains any explicit gender information, but annotators still tended to use he' as pronoun.",
"To avoid the gender bias caused by the usage of pronouns, we required annotators to replace pronouns with speaker information like Project Manager', Marketing' to avoid the problem.",
"Also, when designing queries based on query schema list, we found that annotators usually used the same query schema, which might lead to bias towards a certain type of query.",
"Therefore, we asked the annotators to use different schemas as much as possible.",
"For the inconsistency problem, each annotation step was strictly under supervision by experts' which are good at annotation and could be responsible for reviewing.",
"6 http://groups.inf.ed.ac.uk/ami/ corpus/license.shtml 7 http://groups.inf.ed.ac.uk/ami/icsi/ license.shtml 8 https://senedd.wales/en/help/ our-information/Pages/Open-data.aspx 9 https://www.ourcommons.ca/en/ important-notices 7.2 NLP Applications Intended Use The query-based meeting summarization application is aiming at summarizing meetings according to queries from users.",
"We could foresee that the trained model could be applied in companies to further improve the efficiency of workers, and help the staff comprehensively understand the meeting contents.",
"The annotated QMSum dataset could be used as a benchmark for researchers to study how to improve the performance of summarization on such long texts and how to make models more generalizable on the meetings of different domains.",
"Failure Mode The current baseline models still tend to generate ungrammatical and factually inconsistent summaries.",
"If a trained baseline model was directly applied in companies, the misinformation would negatively affect comprehension and further decision making.",
"Further efforts are needed to generate high-quality summaries which are fluent and faithful to the meeting transcripts and queries.",
"Bias Training and test data are often biased in ways that limit system accuracy on domains with small data size or new domains, potentially causing distribution mismatch issues.",
"In the data collection process, we control for the gender bias caused by pronouns such as he' and she' as much as possible.",
"Also, we attempt to control the bias towards a certain type of query schema by requiring annotators to use diverse schemas as much as possible.",
"However, we admit that there might be other types of bias, such as political bias in committee meetings.",
"Thus, the summarization models trained on the dataset might be biased as well.",
"and We will include warnings in our dataset.",
"Misuse Potential We emphasize that the application should be used with careful consideration, since the generated summaries are not reliable enough.",
"It is necessary for researchers to develop better models to improve the quality of summaries.",
"Besides, if the model is trained on internal meeting data, with the consideration of intellectual property and privacy rights, the trained model should be used under strict supervision.",
"Collecting Data from Users Future projects have to be aware of the fact that some meeting transcripts are intended for internal use only.",
"Thus, researchers should be aware of the privacy issues about meeting data before training the model."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"abstain",
"other",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"objective",
"result",
"objective",
"abstain",
"method",
"objective",
"method",
"objective",
"objective",
"objective",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"abstain",
"result",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"result",
"method",
"result",
"method",
"abstain",
"other",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain"
] |
[
"The task of Fine-grained Entity Type Classification (FETC) consists of assigning types from a hierarchy to entity mentions in text.",
"Existing methods rely on distant supervision and are thus susceptible to noisy labels that can be out-of-context or overly-specific for the training sentence.",
"Previous methods that attempt to address these issues do so with heuristics or with the help of hand-crafted features.",
"Instead, we propose an end-to-end solution with a neural network model that uses a variant of cross-entropy loss function to handle out-of-context labels, and hierarchical loss normalization to cope with overly-specific ones.",
"Also, previous work solve FETC a multi-label classification followed by ad-hoc post-processing.",
"In contrast, our solution is more elegant: we use pub-lic word embeddings to train a single-label that jointly learns representations for entity mentions and their context.",
"We show experimentally that our approach is robust against noise and consistently outperforms the state-of-the-art on established benchmarks for the task.",
"Fine-grained Entity Type Classification (FETC) aims at labeling entity mentions in context with one or more specific types organized in a hierarchy (e.g., actor as a subtype of artist , which in turn is a subtype of person ).",
"Fine-grained types help in many applications, including relation extraction (Mintz et al., 2009), question answering (Li and Roth, 2002), entity linking (Lin et al., 2012), knowledge base completion (Dong et al., 2014) and entity recommendation (Yu et al., 2014).",
"Because of the high cost in labeling large training corpora with fine-grained types, current FETC systems resort to distant supervision (Mintz et al., 2009) and annotate mentions in the training corpus with all types associated with the entity in a knowledge graph.",
"This is illustrated in Figure 1, with three training sentences about entity Steve Kerr .",
"Note that while the entity belongs to three fine-grained types ( person , athlete , and coach ), some sentences provide evidence of only some of the types: person and coach from S1 , person and athlete from S2 , and just person for S3 .",
"Clearly, direct distant supervision leads to noisy training data which can hurt the accuracy of the FETC model.",
"One kind of noise introduced by distant supervision is assigning labels that are out-of-context ( athlete in S1 and coach in S2 ) for the sentence.",
"Current FETC systems sidestep the issue by either ignoring out-of-context labels or using simple pruning heuristics like discarding training examples with entities assigned to multiple types in the knowledge graph.",
"However, both strategies are inelegant and hurt accuracy.",
"Another source of noise introduced by distant supervision is when the type is overly-specific for the context.",
"For instance, example S3 does not support the inference that Mr. Kerr is either an athlete or a coach .",
"Since existing knowledge graphs give more attention to notable entities with more specific types, overly-specific labels bias the model towards popular subtypes instead of generic ones, i.e. , preferring athlete over person .",
"Instead of correcting for this bias, most existing FETC systems ignore the problem and treat each type equally and independently, ignoring that many types are semantically related.",
"Besides failing to handle noisy training data there are two other limitations of previous FETC approaches we seek to address.",
"First, they rely on hand-crafted features derived from various NLP tools; therefore, the inevitable errors introduced by these tools propagate to the FETC systems via the training data.",
"Second, previous systems treat FETC as a multi-label classification problem: during type inference they predict a plausibility score for each type, and, then, either classify types 16 Figure 1: With distant supervision, all the three mentions of Steve Kerr shown are labeled with the same types in oval boxes in the target type hierarchy.",
"with scores above a threshold (Mintz et al., 2009; Gillick et al., 2014; Shimaoka et al., 2017) or perform a top-down search in the given type hierarchy (Ren et al., 2016a; Abhishek et al., 2017).",
"Contributions: We propose a neural network based model to overcome the drawbacks of existing FETC systems mentioned above.",
"With publicly available word embeddings as input, we learn two different entity representations and use bidirectional long-short term memory (LSTM) with attention to learn the context representation.",
"We propose a variant of cross entropy loss function to handle out-of-context labels automatically during the training phase.",
"Also, we introduce hierarchical loss normalization to adjust the penalties for correlated types, allowing our model to understand the type hierarchy and alleviate the negative effect of overly-specific labels.",
"Moreover, in order to simplify the problem and take advantage of previous research on hierarchical classification, we transform the multi-label classification problem to a single-label classification problem.",
"Based on the assumption that each mention can only have one type-path depending on the context, we leverage the fact that type hierarchies are forests, and represent each type-path uniquely by the terminal type (which might not be a leaf node).",
"For Example, type-path root-person-coach can be represented as just coach , while root-person can be unambiguously represented as the non-leaf person .",
"benchmarks that shows that our model can adapt to noise in training data and consistently outperform previous methods.",
"In summary, we describe a single, much simpler and more elegant neural network model that attempts FETC end-to-end without post-processing or ad-hoc features and improves on the state-of-the-art for the task.",
"Fine-Grained Entity Type Classification : The first work to use distant supervision (Mintz et al., 2009) to induce a large but noisy training set and manually label a significantly smaller dataset to evaluate their FETC system, was Ling and Weld (2012) who introduced both a training and evaluation dataset FIGER (GOLD).",
"They used a linear classifier perceptron for multi-label classification.",
"While initial work largely assumed that mention assignments could be done independently of the mention context, Gillick et al. (2014) introduced the concept of context-dependent FETC where the types of a mention are constrained to what can be deduced from its context and introduced a new OntoNotes-derived (Weischedel et al., 2011) manually annotated evaluation dataset.",
"In addition, they addressed the problem of label noise induced by distant supervision and proposed three label cleaning heuristics.",
"Yogatama et al. (2015) proposed an embedding-based model where user-defined features and labels were embedded into a low dimensional feature space to facilitate information sharing among labels.",
"Ma et al. (2016) presented a label embedding method that incor-17 Attentive AFET LNR AAA NFETC no hand-crafted features uses attentive neural network adopts single label setting handles out-of-context noise handles overly-specifc noise Table 1: Summary comparison to related FETC work.",
"porates prototypical and hierarchical information to learn pre-trained label embeddings and adpated a zero-shot framework that can predict both seen and previously unseen entity types.",
"Shimaoka et al. (2016) proposed an attentive neural network model that used LSTMs to encode the context of an entity mention and used an attention mechanism to allow the model to focus on relevant expressions in such context.",
"Shimaoka et al. (2017) summarizes many neural architectures for FETC task.",
"These models ignore the out-of-context noise , that is, they assume that all labels obtained via distant supervision are correct and appropriate for every context in the training corpus.",
"In our paper, a simple yet effective variant of cross entropy loss function is proposed to handle the problem of out-of-context noise .",
"Ren et al. (2016a) have proposed AFET, an FETC system, that separates the loss function for clean and noisy entity mentions and uses label-label correlation information obtained by given data in its parametric loss function.",
"Considering the noise reduction aspects for FETC systems, Ren et al. (2016b) introduced a method called LNR to reduce label noise without data loss, leading to significant performance gains on both the evaluation dataset of FIGER(GOLD) and OntoNotes.",
"Although these works consider both out-of-context noise and overly-specific noise , they rely on handcrafted features which become an impediment to further improvement of the model performance.",
"For LNR, because the noise reduction step is separated from the FETC model, the inevitable errors introduced by the noise reduction will be propagated into the FETC model which is undesirable.",
"In our FETC system, we handle the problem induced from irrelevant noise and overly-specific noise seamlessly inside the model and avoid the usage of hand-crafted features.",
"Most recently, following the idea from AFET, Abhishek et al. (2017) proposed a simple neural network model which incorporates noisy label information using a variant of non-parametric hinge loss function and gain great performance improvement on FIGER(GOLD).",
"However, their work overlooks the effect of overly-specific noise , treating each type label equally and independently when learning the classifiers and ignores possible correlations among types.",
"Hierarchical Loss Function : Due to the intrinsic type hierarchy existing in the task of FETC, it is natural to adopt the idea of hierarchical loss function to adjust the penalties for FETC mistakes depending on how far they are in the hierarchy.",
"The penalty for predicting person instead of athlete should less than the penalty for predicting organization .",
"To the best of our knowledge, the first use of a hierarchical loss function was originally introduced in the context of document categorization with support vector machines (Cai and Hofmann, 2004).",
"However, that work assumed that weights to control the hierarchical loss would be solicited from domain experts, which is inapplicable for FETC.",
"Instead, we propose a method called hierarchical loss normalization which can overcome the above limitations and be incorporated with cross entropy loss used in our neural architecture.",
"Table 1 provides a summary comparison of our work against the previous state-of-the-art in fine grained entity typing.",
"Our task is to automatically reveal the type information for entity mentions in context.",
"The input is a knowledge graph with schema Y , whose types are organized into a type hierarchy Y , and an automatically labeled training corpus D obtained by distant supervision with Y .",
"The output is a type-path in Y for each named entity mentioned in a test sentence from a corpus D t .",
"More precisely, a labeled corpus for entity type classification consists of a set of extracted entity mentions { m i } Ni =1 ( i.e. , token spans representing entities in text), the context ( e.g., sentence, paragraph) of each mention { c i } Ni =1 , and the candidate 18 type sets {Y i } Ni =1 automatically generated for each mention.",
"We represent the training corpus using a set of mention-based triples D = { ( m i , c i , Y i ) } Ni =1 .",
"If Y i is free of out-of-context noise , the type labels for each m i should form a single type-path in Y i .",
"However, Y i may contain type-paths that are irrelevant to m i in c i if there exists out-of-context noise .",
"We denote the type set including all terminal types for each type-path as the target type set Y ti .",
"In the example type hierarchy shown in Figure 1, if Y i contains types person , athlete , coach , Y ti should contain athlete , coach , but not person .",
"In order to understand the trade-off between the effect of out-of-context noise and the size of the training set, we report on experiments with two different training sets: D filtered only with triples whose Y i form a single type-path in D , and D raw with all triples.",
"Definition 1 Given an entity mention m i = ( w p , . . . , w t ) ( p, t [1 , T ] , p t ) and its context c i = ( w 1 , . . . , w T ) where T is the context length, our task is to predict its most specific type y i depending on the context.",
"In practice, c i is generated by truncating the original context with words beyond the context window size C both to the left and to the right of m i .",
"Specifically, we compute a probability distribution over all the K = |Y| types in the target type hierarchy Y .",
"The type with the highest probability is classified as the predicted type y i which is the terminal type of the predicted type-path .",
"This section details our Neural Fine-Grained Entity Type Classification (NFETC) model.",
"As stated in Section 3, the input is an entity mention m i with its context c i .",
"First, we transform each word in the context c i into a real-valued vector to provide lexical-semantic features.",
"Given a word embedding matrix W wrd of size d w | V | , where V is the input vocabulary and d w is the size of word embedding, we map every w i to a column vector w di R d w .",
"To additionally capture information about the relationship to the target entities, we incorporate word position embeddings (Zeng et al., 2014) to reflect relative distances between the i -th word to the entity mention.",
"Every relative distance is mapped to a randomly initialized position vector in R d p , where d p is the size of position embedding.",
"For a given word, we obtain the position vector w pi .",
"The overall embedding for the i -th word is w Ei = [( w di ) > , ( w pi ) > ] > .",
"For the context c i , we want to apply a non-linear transformation to the vector representation of c i to derive a context feature vector h i = f ( c i ; ) given a set of parameters .",
"In this paper, we adopt bidirectional LSTM with d s hidden units as f ( c i ; ) .",
"The network contains two sub-networks for the forward pass and the backward pass respectively.",
"Here, we use element-wise sum to combine the forward and backward pass outputs.",
"The output of the i -th word in shown in the following equation: h i = [ h i h i ] (1) Following Zhou et al. (2016), we employ word-level attention mechanism, which makes our model able to softly select the most informative words during training.",
"Let H be a matrix consisting of output vectors [ h 1 , h 2 , . . . , h T ] that the LSTM produced.",
"The context representation r is formed by a weighted sum of these output vectors: G = tanh( H ) (2) = softmax ( w > G ) (3) r c = H > (4) where H R d s T , w is a trained parameter vector.",
"The dimension of w, , r c are d s , T, d s respectively.",
"Averaging encoder: Given the entity mention m i = ( w p , . . . , w t ) and its length L = t p + 1 , the averaging encoder computes the average word embedding of the words in m i .",
"Formally, the averaging representation r a of the mention is computed as follows: r a = 1 L t X i = p w di (5) 19 Figure 2: The architecture of the NFETC model.",
"This relatively simple method for composing the mention representation is motivated by it being less prone to overfitting (Shimaoka et al., 2017).",
"LSTM encoder: In order to capture more semantic information from the mentions, we add one token before and another after the target entity to the mention.",
"The extended mention can be represented as m i = ( w p 1 , w p , . . . , w t , w t +1 ) .",
"The standard LSTM is applied to the mention sequence from left to right and produces the outputs h p 1 , . . . , h t +1 .",
"The last output h t +1 then serves as the LSTM representation r l of the mention.",
"We concatenate context representation and two mention representations together to form the overall feature representation of the input R = [ r c , r a , r l ] .",
"Then we use a softmax classifier to predict y i from a discrete set of classes for a entity mention m and its context c with R as input: p ( y | m, c ) = softmax ( W R + b ) (6) y = arg max y p ( y | m, c ) (7) where W can be treated as the learned type embeddings and b is the bias.",
"The traditional cross-entropy loss function is represented as follows: J ( ) = 1 NNX i =1 log( p ( y i | m i , c i )) + k k 2 (8) where y i is the only element in Y ti and ( m i , c i , Y i ) D filtered .",
"is an L2 regularization hyperparameter and denotes all parameters of the considered model.",
"In order to handle data with out-of-context noise (in other words, with multiple labeled types) and take full advantage of them, we introduce a simple yet effective variant of the cross-entropy loss: J ( ) = 1 NNX i =1 log( p ( y i | m i , c i )) + k k 2 (9) where y i = arg max y Y ti p ( y | m i , c i ) and ( m i , c i , Y i ) D raw .",
"With this loss function, we assume that the type with the highest probability among Y ti during training as the correct type.",
"If there is only one element in Y ti , this loss function is equivalent to the cross-entropy loss function.",
"Wherever there are multiple elements, it can filter the less probable types based on the local context automatically.",
"Since the fine-grained types tend to form a for-est of type hierarchies, it is unreasonable to treat every type equally.",
"Intuitively, it is better to predict an ancestor type of the true type than some other unrelated type.",
"For instance, if one example is labeled as athlete , it is reasonable to predict its type as person .",
"However, predicting other high level types like location or organization would be inappropriate.",
"In other words, we want the loss function to penalize less the cases where types are related.",
"Based on the above idea, we adjust the estimated probability as follows: p ( y | m, c ) = p ( y | m, c ) + X t p ( t | m, c ) (10) where is the set of ancestor types along the type-path of y , is a hyperparameter to tune the penalty.",
"Afterwards, we re-normalize it back to a probability distribution, a process which we denote as hierarchical loss normalization .",
"As discussed in Section 1, there exists overly-specific noise in the automatically labeled training sets which hurt the model performance severely.",
"With hierarchical loss normalization , the model will get less penalty when it predicts the actual type for one example with overly-specific noise .",
"Hence, it can alleviate the negative effect of overly-specific noise effectively.",
"Generally, hierarchical loss normalization can make the model somewhat understand the given type hierarchy and learn to detect those overly-specific cases.",
"During classification, it will make the models prefer generic types unless there is a strong indicator for a more specific type in the context.",
"Dropout, proposed by Hinton et al. (2012), prevents co-adaptation of hidden units by randomly omitting feature detectors from the network during forward propagation.",
"We employ both input and output dropout on LSTM layers.",
"In addition, we constrain L2-norms for the weight vectors as shown in Equations 8, 9 and use early stopping to decide when to stop training.",
"This section reports an experimental evaluation of our NFETC approach using the previous state-of-the-art as baselines.",
"We evaluate the proposed model on two standard and publicly available datasets, provided in a preprocessed tokenized format by Shimaoka et al. (2017).",
"Table 2 shows statistics about the benchmarks.",
"The details are as follows: FIGER(GOLD): The training data consists of Wikipedia sentences and was automatically generated with distant supervision, by mapping Wikipedia identifiers to Freebase ones.",
"The test data, mainly consisting of sentences from news reports, was manually annotated as described by Ling and Weld (2012).",
"OntoNotes: The OntoNotes dataset consists of sentences from newswire documents present in the OntoNotes text corpus (Weischedel et al., 2013).",
"DBpedia spotlight (Daiber et al., 2013) was used to automatically link entity mention in sentences to Freebase.",
"Manually annotated test data was shared by Gillick et al. (2014).",
"Because the type hierarchy can be somewhat understood by our proposed model, the quality of the type hierarchy can also be a key factor to the performance of our model.",
"We find that the type hierarchy for FIGER(GOLD) dataset following Freebase has some flaws.",
"For example, software is not a subtype of product and government is not a subtype of organization .",
"Following the proposed type hierarchy of Ling and Weld (2012), we refine the Freebase-based type hierarchy.",
"The process is a one-to-one mapping for types in the original dataset and we didn't add or drop any type or sentence in the original dataset.",
"As a result, we can directly compare the results of our proposed model with or without this refinement.",
"Aside from the advantages brought by adopting the single label classification setting, we can see one disadvantage of this setting based on Table 2.",
"That is, the performance upper bounds of 21 our proposed model are no longer 100% : for example, the best strict accuracy we can get in this setting is 88 .",
"28% for FIGER(GOLD) .",
"However, as the strict accuracy of state-of-the-art methods are still nowhere near 80% (Table 3), the evaluation we perform is still informative.",
"We compared the proposed model with state-of-the-art FETC systems 1 : (1) Attentive (Shimaoka et al., 2017); (2) AFET (Ren et al., 2016a); (3) LNR+FIGER (Ren et al., 2016b); (4) AAA (Ab-hishek et al., 2017).",
"We compare these baselines with variants of our proposed model: (1) NFETC(f) : basic neural model trained on D filtered (recall Section 4.4); (2) NFETC-hier(f) : neural model with hierarich-cal loss normalization trained on D filtered .",
"(3) NFETC(r) : neural model with proposed variant of cross-entropy loss trained on D raw ; (4) NFETC-hier(r) : neural model with proposed variant of cross-entropy loss and hierarchical loss normalization trained on D raw .",
"For evaluation metrics, we adopt the same criteria as Ling and Weld (2012), that is, we evaluate the model performance by strict accuracy, loose macro, and loose micro F-scores.",
"These measures are widely used in existing FETC systems (Shi-maoka et al., 2017; Ren et al., 2016b,a; Abhishek et al., 2017).",
"We use pre-trained word embeddings that were not updated during training to help the model generalize to words not appearing in the training set.",
"For this purpose, we used the freely available 300-dimensional cased word embedding trained on 840 billion tokens from the Common Crawl supplied by Pennington et al. (2014).",
"For both datasets, we randomly sampled 10% of the test set as a development set, on which we do the hyperparameters tuning.",
"The remaining 90% is used for final evaluation.",
"We run each model with the well-tuned hyperparameter setting five times and report their average strict accuracy, macro F1 and micro F1 on the test set.",
"The proposed model was implemented using the TensorFlow framework.",
"2 1 The results of the baselines are all as reported in their corresponding papers.",
"In this paper, we search different hyperparameter settings for FIGER(GOLD) and OntoNotes separately, considering the differences between the two datasets.",
"The hyperparameters include the learning rate lr for Adam Optimizer, size of word position embeddings (WPE) d p , state size for LSTM layers d s , input dropout keep probability p i and output dropout keep probability p o for LSTM layers 3 , L2 regularization parameter and parameter to tune hierarchical loss normalization .",
"The values of these hyperparameters, obtained by evaluating the model performance on the development set, for each dataset can be found in Table 4.",
"Table 3 compares our models with other state-of-the-art FETC systems on FIGER(GOLD) and OntoNotes.",
"The proposed model performs better than the existing FETC systems, consistently on both datasets.",
"This indicates benefits of the proposed representation scheme, loss function and hierarchical loss normalization.",
"Discussion about Out-of-context Noise : For dataset FIGER(GOLD), the performance of our model with the proposed variant of cross-entropy loss trained on D raw is significantly better than the basic neural model trained on D filtered , suggesting that the proposed variant of the cross-entropy loss function can make use of the data with out-of-context noise effectively.",
"On the other hand, the improvement introduced by our proposed variant of cross-entropy loss is not as significant for the OntoNotes benchmark.",
"This may be caused by the fact that OntoNotes is much smaller than FIGER(GOLD) and proportion of examples without out-of-context noise are also higher, as shown in Table 2.",
"3 Following TensorFlow terminology.",
"Investigations on Overly-Specific Noise : With hierarchical loss normalization, the performance of our models are consistently better no matter whether trained on D raw or D filtered on both datasets, demonstrating the effectiveness of this hierarchical loss normalization and showing that overly-specific noise has a potentially significant influence on the performance of FETC systems.",
"By visualizing the learned type embeddings (Fig-ure 3), we can observe that the parent types are mixed with their subtypes and forms clear distinct clusters without hierarchical loss normalization, making it hard for the model to distinguish subtypes like actor or athlete from their parent types person .",
"This also biases the model towards the most popular subtype.",
"While the parent types tend to cluster together and the general pattern is more complicated with hierarchical loss normalization.",
"Although it's not as easy to interpret, it hints that our model can learn rather subtle intricacies and correlations among types latent in the data with the help of hierarchical loss normalization, instead of sticking to a pre-defined hierarchy.",
"all the test examples of all variants of our model.",
"Table 5 shows 5 examples of test sentence.",
"Without hierarchical loss normalization, our model will make too aggressive predictions for S1 with Politician and for S2 with Software .",
"This kind of mistakes are very common and can be effectively reduced by introducing hierarchical loss normalization leading to significant improvements on the model performance.",
"Using the changed loss function to handle multi-label (noisy) training data can help the model distinguish ambiguous cases.",
"For example, our model trained on D filtered will misclassify S5 as Title , while the model trained on D raw can make the correct prediction.",
"However, there are still some errors that can't be fixed with our model.",
"For example, our model cannot make correct predictions for S3 and S4 due to the fact that our model doesn't know that UW is an abbreviation of University of Washington and Washington state is the name of a province.",
"In addition, the influence of overly-specific noise can only be alleviated but not eliminated.",
"Sometimes, our model will still make too aggressive or conser-vative predictions.",
"Also, mixing up very ambiguous entity names is inevitable in this task.",
"In this paper, we studied two kinds of noise, namely out-of-context noise and overly-specific noise , for noisy type labels and investigate their effects on FETC systems.",
"We proposed a neural network based model which jointly learns representations for entity mentions and their context.",
"A variant of cross-entropy loss function was used to handle out-of-context noise .",
"Hierarchical loss normalization was introduced into our model to alleviate the effect of overly-specific noise .",
"Experimental results on two publicly available datasets demonstrate that the proposed model is robust to these two kind of noise and outperforms previous state-of-the-art methods significantly.",
"More work can be done to further develop hierarchical loss normalization since currently it's very simple.",
"Considering type information is valuable in various NLP tasks, we can incorporate results produced by our FETC system to other tasks, such as relation extraction, to check our model's effectiveness and help improve other tasks' performance.",
"In addition, tasks like relation extraction are complementary to the task of FETC and therefore may have potentials to be digged to help improve the performance of our system in return.",
"This work was supported in part by the Natural Sciences and Engineering Research Council Canada (NSERC)."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"result",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"abstain",
"other",
"objective",
"method",
"abstain",
"other",
"other",
"other",
"method",
"other",
"other",
"abstain",
"other",
"method",
"method",
"other",
"method",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"other"
] |
[
"Multilingual models, such as M-BERT and XLM-R, have gained increasing popularity, due to their zero-shot cross-lingual transfer learning capabilities.",
"However, their generalization ability is still inconsistent for typologically diverse languages and across different benchmarks.",
"Recently, meta-learning has garnered attention as a promising technique for enhancing transfer learning under low-resource scenarios: particularly for cross-lingual transfer in Natural Language Understanding (NLU).",
"In this work, we propose X-METRA-ADA , a cross -lingual ME taTRA nsfer learning ADA ptation approach for NLU.",
"Our approach adapts MAML, an optimization-based meta-learning approach, to learn to adapt to new languages.",
"We extensively evaluate our framework on two challenging cross-lingual NLU tasks: multilingual task-oriented dialog and typologically diverse question answering.",
"We show that our approach outperforms naive fine-tuning, reaching competitive performance on both tasks for most languages.",
"Our analysis reveals that X-METRA-ADA can leverage limited data for faster adaptation.",
"Cross-lingual transfer learning is a technique used to adapt a model trained on a downstream task in a source language to directly generalize to the task in new languages.",
"It aims to come up with common cross-lingual representations and leverages them to bridge the divide between resources to make any NLP application scale to multiple languages.",
"This is particularly useful for data-scarce scenarios, as it reduces the need for API calls implied by machine translation or costly task-specific annotation for new languages.",
"Transformer-based contextualized embeddings and their multilingual counterparts such as M-BERT (Devlin et al., 2019) have become popular as off-the-shelf representations for cross-lingual transfer learning.",
"While these multilingual representations exhibit some cross-lingual capability even for languages with low lexical overlap with English, the transfer quality is reduced for languages that exhibit different typological characteristics (Pires et al., 2019).",
"The generalization of such representations has been extensively evaluated on traditional tasks such as Part-of-Speech (POS) tagging, Named Entity Recognition (NER) and Cross-lingual Document Classification (CLDC) (Ahmad et al., 2019; Wu and Dredze, 2019; Bari et al., 2020a; Schwenk and Li, 2018), with ever-growing open community annotation efforts like Universal Dependencies (Nivre et al., 2020) and CoNLL shared tasks (Tjong Kim Sang, 2002; Tjong Kim Sang and De Meulder, 2003).",
"On the other hand, cross-lingual Natural Language Understanding (NLU) 3618 tasks have gained less attention, with smaller benchmark datasets that cover a handful of languages and don't truly model linguistic variety (Conneau et al., 2018; Artetxe et al., 2020).",
"Natural Language Understanding tasks are critical for dialog systems, as they make up an integral part of the dialog pipeline.",
"Understanding and improving the mechanism behind cross-lingual transfer for natural language understanding in dialog systems require evaluations on more challenging and typologically diverse benchmarks.",
"Numerous approaches have attempted to build stronger cross-lingual representations on top of those multilingual models; however, most require parallel corpora (Wang et al., 2019; Lample and Conneau, 2019) and are biased towards high-resource and balanced setups.",
"This fuels the need for a method that doesn't require explicit cross-lingual alignment for faster adaptation to low-resource setups.",
"Meta-learning, a method for learning to learn, has found favor especially among the computer vision and speech recognition communities (Nichol et al., 2018; Triantafillou et al., 2020; Winata et al., 2020).",
"Meta-learning has been used for machine translation (Gu et al., 2018), few-shot relation classification (Gao et al., 2019), and on a variety of GLUE tasks (Dou et al., 2019).",
"Recently, Nooralahzadeh et al. (2020) apply the MAML (Finn et al., 2017) algorithm to cross-lingual transfer learning for XNLI (Conneau et al., 2018) and MLQA (Lewis et al., 2020), NLU tasks that are naturally biased towards machine translation-based solutions.",
"Nooralahzadeh et al. are able to show improvement over strong multilingual models, including M-BERT.",
"However, they mainly show the effects of meta-learning as a first step in a framework that relies on supervised fine-tuning, making it difficult to properly compare and contrast both approaches.",
"We study cross-lingual meta-transfer learning from a different perspective.",
"We distinguish between meta-learning and fine-tuning and design systematic experiments to analyze the added value of meta-learning compared to naive fine-tuning.",
"We also build our analysis in terms of more typologically diverse cross-lingual NLU tasks: Multilingual Task-Oriented Dialogue System (MTOD) (Schuster et al., 2019) and Typologically Diverse Question Answering (Ty-DiQA) (Clark et al., 2020).",
"While XNLI is a classification task, MTOD is a joint classification and sequence labelling task and is more typologically diverse.",
"TyDiQA is not a classification task, but we show how meta-learning can be applied usefully to it.",
"We also show greater performance improvements from meta-learning than fine-tuning on transfer between typologically diverse languages.",
"To the best of our knowledge, we are the first to conduct an extensive analysis applied to MTOD and TyDiQA to evaluate the quality of cross-lingual meta-transfer.",
"Our contributions are three-fold: Proposing X-METRA-ADA, 1 a language-agnostic meta-learning framework (Figure 1), and extensively evaluating it.",
"Applying X-METRA-ADA to two challenging cross-lingual and typologically diverse task-oriented dialog and QA tasks, which includes recipes for constructing appropriate meta-tasks (Section 2.3).",
"Analyzing the importance of different components in cross-lingual transfer and the scalability of our approach across different k-shot and downsampling configurations (Section 4.2).",
"We make use of optimization-based meta-learning on top of pre-trained models with two levels of adaptation to reduce the risk of over-fitting to the target language:",
"(i) meta-training from the source language to the target language(s)",
"(ii) meta-adaptation on the same target language(s) for more language-specific adaptation (Figure 1).",
"We apply our approach to two cross-lingual downstream tasks: MTOD (Section 2.1) and TyDiQA (Section 2.2).",
"We start by describing the base architectures for both tasks, before explaining how they are incorporated into our meta-learning pipeline.",
"Applying meta-learning to a task requires the construction of multiple pseudo-tasks', which are instantiated as pairs of datasets.",
"We describe this construction for our downstream tasks in Section 2.3.",
"Finally, we present our X-METRA-ADA algorithm (Section 2.4).",
"Similar to the architecture in Castellucci et al. (2019), we model MTOD's intent classification and slot filling subtasks jointly.",
"For that purpose, we 1 We release our code at: github.com/ meryemmhamdi1/meta_cross_nlu_qa .",
"use a joint text classification and sequence labeling framework with feature representation based on Transformer (Vaswani et al., 2017).",
"More specifically, given a multilingual pre-trained model, we use it to initialize the word-piece embeddings layer.",
"Then, we add on top of it a text classifier to predict the intent from the [ CLS ] token representation and a sequence labeling layer in the form of a linear layer to predict the slot spans (in BIO annotation), as shown in Figure 2.",
"We optimize parameters using the sum of both intent and CRF based slot losses.",
"Inspired by Hu et al. (2020), we apply to TyDiQA the same architecture as the original BERT fine-tuning procedure for question answering on SQuAD (Devlin et al., 2019).",
"Specifically, the input question (after prepending it with a [ CLS ] token) and the context are concatenated as a single packed sequence separated by a [ SEP ] token.",
"Then, the embeddings of the context are fed to a linear layer plus a softmax to compute the probability that each token is the START or END of the answer.",
"The whole architecture is fine-tuned by optimizing for the joint loss over the START and END predictions.",
"Any START and END positions that are outside of the scope of the context end up being truncated because of Transformer-based embeddings length limitations and are ignored during training.",
"Figure 3 illustrates the architecture.",
"Meta-learning is distinguished from fine-tuning in that the former seeks an initialization point that is maximally useful to multiple downstream learning tasks, while the latter seeks to directly optimize a downstream child' task from the initialization point of a parent' task.",
"To apply meta-learning to data scenarios that more closely fit fine-tuning, we construct multiple pseudo-tasks' by subsampling from parent and child task datasets.",
"A pseudo-task is defined as a tuple T = ( S, Q ) , where each of S and Q are labeled samples.",
"In the inner loops of meta-learning, the loss on Q from a model trained on S is used to adapt the initialization point (where Q and S are referred to as the query and support in meta-learning literature).",
"Pseudo-tasks are constructed in such a way as to make them balanced and non-overlapping.",
"We describe our approach for each task below.",
"MTOD labeled data consists of a sentence from a dialogue along with a sentence-level intent label and subsequence slot labels.",
"From the available data, we draw a number of task sets T ; each T = ( S, Q ) T consists of k intent and slot-labeled items per intent class in S and q items per class in Q .",
"Although carefully arranged to have the same number of items per class per task in each of the support and the query sets, the same task splits are used for slot prediction as well.",
"During meta-training and meta-adaptation, task batches are sampled randomly from T .",
"Unlike MTOD, QA is not a standard classification task with fixed classes; thus, it is not directly amenable to class distribution balancing across pseudo-task query and support sets.",
"To construct pseudo-tasks for QA from the available (question, context, answer) span triplet data, we use the following procedure: We draw a task T = ( S, Q ) , by first randomly drawing q triplets, forming Q .",
"For each triplet t in Q , we draw the k/q most similar triplets to t from the remaining available data, 3620 thus forming S .",
"2 For two triplets t 1 , t 2 we de-fine similarity as cos( f ( t 1 ) , f ( t 2 )) , where f ( . ) is a representation of the concatenation of the triplet elements delimited by a space; we use a cross-lingual extension to SBERT's pre-trained model (Reimers and Gurevych, 2019, 2020).",
"In the original MAML (Finn et al., 2017), in every iteration we sample a task set T from a single distribution D , and the support and query sets in a single task T would be drawn from a common space.",
"We distinguish between the distributions D meta-train and D meta-adapt , which correspond to the two levels of adaptation introduced in Section 2 and explained below in Section 2.4.",
"To enable cross-lingual transfer, we draw data for the support set of tasks in D meta-train from task data in the high-resource base language (English, in our experiments).",
"For the query set in D meta-train and for both support and query sets in D meta-adapt , we sample from task data in the language to be evaluated.",
"Following the notation described in the above sections, we present our algorithm X-METRA-ADA, our adaptation of MAML to cross-lingual transfer learning in two stages.",
"In each stage we use the procedure outlined in Algorithm 1.",
"We start by sampling a batch of tasks from distribution D .",
"For every task T j = ( S j , Q j ) , we update j over n steps using batches drawn from S j .",
"At the end of this inner loop, we compute the gradients with respect to the loss of j on Q j .",
"At the end of all tasks of each batch, we sum over all pre-computed gradients and update , thus completing one outer loop.",
"The difference between meta-train and meta-adapt stages comes down to the parameters and hyperparameters passed into Algorithm 1.",
"Meta-train : This stage is similar to classical MAML.",
"Task sets are sampled from D meta-train , which uses high-resource (typically English) data in support sets and low-resource data in the query sets.",
"The input model B is typically a pretrained multilingual downstream base model, and we use hyperparameters n = 5 , = 1e 3 and = 1e 2 for MTOD and = = 3e 5 for QA.",
"Algorithm 1 X-METRA-ADA Require: Task set distribution D , pre-trained learner B with parameters B , meta-learner M with parameters ( , , , n ) 1: Initialize B 2: while not done do 3: Sample batch of tasks T = { T 1 , T 2 , . . . T b } D 4: for all T j = ( S j , Q j ) in T do 5: Initialize j 6: for t = 1 . . . n do 7: Evaluate B j / j = j LS j T j ( B j ) 8: Update j = j B j / j 9: end for 10: Evaluate query loss LQ j T j ( B j ) and save it for outer loop 11: end for 12: Update (cid:80) bj =1 LQ j T j ( B j ) 13: end while Meta-adapt : During this stage, we ensure the model knows how to learn from examples within the target language under a low-resource regime.",
"Task sets are sampled from D meta-adapt , which uses low-resource data in both support and query sets.",
"The input model is the optimization resulting from meta-train, and we use hyperparameters n = 5 , = 1e 3 and = 1e 2 for MTOD and = = 3e 5 for QA.",
"For dialogue intent prediction, we use the Multilingual Task-Oriented Dialogue (MTOD) (Schuster et al., 2019) dataset.",
"MTOD covers 3 languages (English, Spanish, and Thai), 3 domains (alarm, reminder, and weather), 12 intent types, and 11 slot types.",
"3 We train models with the English training data ( T rain ) but for the other languages we use the provided development sets ( Dev ) to further our goals to analyze methods of few-shot transfer.",
"We evaluate on the provided test sets.",
"Moreover, we evaluate on an in-house dataset of 7 languages.",
"4 For QA, we use the Typologically Diverse QA (TyDiQA-GoldP) (Clark et al., 2020) dataset.",
"TyDiQA is a typologically diverse question answering dataset covering 11 languages.",
"Like Hu et al. (2020), we use a simplified version of the primary task.",
"Specifically, we discard questions that don't have an answer and use only the gold passage as context, keeping only the short answer and its spans.",
"This makes the task similar to XQuAD and MLQA, 3 We follow the same pre-processing and evaluation as Liu et al. (2020).",
"although unlike these tasks, the questions are written without looking at the answers and without machine translation.",
"As with MTOD, we use the English training data as T rain .",
"Since development sets are not specified for MTOD, we instead reserve 10% of the training data in each of the other languages as Dev .",
"We report on the provided test sets.",
"Statistics of datasets for both tasks can be found in Appendix A. 3.2 Evaluation In order to fairly and consistently evaluate our approach to few-shot transfer learning via meta-learning and to ablate components of the method, we design a series of experiments based on both internal and external baselines.",
"Our internal baselines ablate the effect of the X-METRA-ADA algorithm vs. conventional fine-tuning from a model trained on a high-resource language by keeping the data sets used for training constant.",
"As our specific data conditions are not reproduced in any externally reported results on these tasks, we instead compare to other reported results using English-only or entirely zero-shot training data.",
"fine-tuning/few-shot schemes: PRE : An initial model is fine-tuned on the T rain split of English only and then evaluated on new languages with no further tuning or adaptation.",
"This strawman baseline has exposure to English task data only.",
"MONO : An initial model is fine-tuned on the Dev split of the target language.",
"This baseline serves as a comparison for standard fine-tuning (FT, below), which shows the value of combining MONO and PRE.",
"FT : We fine-tune the PRE model on the Dev split of the target language.",
"This is a standard transfer learning approach that combines PRE and MONO.",
"FT w/EN : Like FT, except both the Dev split of the target language and the T rain split of English are used for fine-tuning.",
"This is used for dataset equivalence with X-METRA-ADA (below).",
"X-METRA : We use the PRE model as B for meta-train, the T rain split from English to form support sets in D meta train , and all of the Dev split of the target language to form query sets in D meta train .",
"X-METRA-ADA : We use the PRE model as B for meta-train, the T rain split from English to form support sets in D meta-train .",
"For MTOD, we use 75% of the Dev split of the target language to form query sets in D meta-train .",
"We use the remaining 25% of the Dev split of the target language for both the support and query sets of D meta-adapt .",
"For QA, we use ratios of 60% for D meta-train and 40% for D meta-adapt .",
"All models are ultimately fine-tuned versions of BERT and all have access to the same task training data relevant for their variant.",
"That is, X-METRA-ADA and PRE both see the same English T rain data and MONO, FT, and X-METRA-ADA see the same target language Dev data.",
"However, since X-METRA-ADA uses both T rain and Dev to improve upon PRE, and FT only uses Dev , we make an apples-to-apples comparison, data-wise, by including FT w/EN experiments as well.",
"External Baselines We focus mainly on transfer learning baselines from contextualized embeddings for a coherent external comparison; supervised experiments on target language data such as those reported in Schuster et al. (2019) are inappropriate for comparison because they use much more in-language labeled data to train.",
"The experiments we compare to are zero-shot in the sense that they are not trained directly on the language-specific task data.",
"However, most of these external baselines involve some strong cross-lingual supervision either through cross-lingual alignment or mixed-language training.",
"We also include machine translation baselines, which are often competitive and hard to beat.",
"Our work, by contrast, uses no parallel language data or resources beyond pretrained multilingual language models, labeled English data, and few-shot labeled target language data.",
"To the best of our knowledge, we are the first to explore cross-lingual meta-transfer learning for those benchmarks, so we only report on our X-METRA-ADA approach in addition to those baselines.",
"Cross-lingual alignment-based approaches: We use MCoVe, a multilingual version of contextualized word vectors with an autoencoder objective as reported by Schuster et al. (2019) in addition to M-BERT (Liu et al., 2020).",
"We also include XLM trained on Translation Language Modeling (TLM) + Masked Language Modeling (MLM) 3622 (Lample and Conneau, 2019) as enhanced by Transformer and mixed-training as reported by Liu et al. (2020).",
"Mixed-language training approaches : We use M-BERT + Transformer + mixed training using data from the dialogue domain: from",
"(a) human-based word selection (MLTH ) and",
"(b) attention-based word selection (MLTA ), both are reported by Liu et al. (2020).",
"Translation-based approaches : We use the zero-shot version of MMTE, the massively multilingual translation encoder by Siddhant et al. (2020) fine-tuned on intent classification.",
"We also include Translate Train (TTrain) (Schuster et al., 2019), which translates English training data into target languages to train on them in addition to the target language training data.",
"For TyDiQA-GoldP, out of the already mentioned baselines, we use M-BERT, XLM, MMTE, and TTrain (which unlike (Schuster et al., 2019) only translates English to the target language to train on it without data augmentation).",
"In addition to that we also include XLM-R as reported by Hu et al. (2020).",
"We use M-BERT (bert-base-multilingual-cased) 5 with 12 layers as initial models for MTOD and TyDiQA-GoldP in our internal evaluation.",
"We use xlm-r-distilroberta-base-paraphrase-v1 6 model for computing similarities when constructing the QA meta-dataset (Section 2.3.2).",
"Our implementation of X-METRA-ADA from scratch uses learn2learn (Arnold et al., 2020) for differentiation and update rules in the inner loop.",
"7 We use the first-order approximation option in learn2learn for updating the outer loop, also introduced in Finn et al. (2017).",
"For each model, we run for 3 to 4 different random initializations (for some experiments like PRE for TyDiQA-GoldP we use only 2 seeds respectively) and report the average and standard deviation of the best model for the few-shot language for each run.",
"We use training loss convergence as a criteria for stopping.",
"For the FT and MONO baselines, we don't have the luxury of Dev performance, since those baselines use the 5 github.com/huggingface/transformers version 3.4.0 pre-trained on 104 languages, including all languages evaluated on in this paper.",
"Dev dataset for training.",
"8 The Dev set is chosen to simulate a low-resource setup.",
"More details on the hyperparameters used can be found in Appendix B. 4 Results and Discussion 4.1 Zero-shot and Few-shot Cross-Lingual NLU and QA Model Spanish Thai Intent Acc Slot F1 Intent Acc Slot F1 External Baselines MCoVe 53.9 19.3 70.7 35.6 M-BERT 73.7 51.7 28.1 10.6 MLT H 82.9 74.9 53.8 26.1 MLT A 87.9 73.9 73.5 27.1 XLM 87.5 68.5 72.6 27.9 MMTE + 93.6 -89.6 TTrain 85.4 72.9 95.9 55.4 Zero-shot Learning PRE 70.2 38.2 45.4 12.5 Few-shot Learning MONO 82.4 6 .",
"Table 1 shows the results for cross-lingual transfer learning on MTOD comparing different baselines.",
"9 In general, PRE model performs worse than other baselines.",
"It performs less than the simplest baseline, MCoVe, when transferring to Thai with a decrease of 25 .",
"3% and 23 .",
"1% and an average cross-lingual relative loss of 4 .",
"5% and 2 .",
"1% for intent classification and slot filling respectively.",
"8 All experiments are run using Pytorch version 1.6.0, 1 GeForce RTX P8 GPU of 11MB of memory CUDA version 10.1.",
"The runtime depends on the size of the dev data but most MTOD models take around 3 hours to converge and TyDiQA models take a maximum of 10 hours training (including evaluation at checkpoints).",
"9 More results on our in-house NLU dataset can be found in Appendix C. 3623 Model Test on Arabic Bengali Finnish Indonesian Russian Swahili Telugu External Baselines M-BERT 62.2 49.3 59.7 64.8 60.0 57.5 49.6 XLM 59.4 27.2 58.2 62.5 49.2 39.4 15.5 XLM-R 67.6 64.0 70.5 77.4 67.0 66.1 70.1 MMTE 63.1 55.8 53.9 60.9 58.9 63.1 54.2 TTrain 61.5 31.9 62.6 68.6 53.1 61.9 27.4 Zero-shot Learning PRE 62.4 2 .",
"The results confirm the positive effects of cross-lingual fine-tuning; although PRE is not a very effective cross-lingual learner, fine-tuning with in-language data on top of PRE (i.e. FT) adds value over the MONO baseline.",
"Adding English data to fine-tuning (FT w/EN) is slightly harmful.",
"However, the meta-learning approach appears to make the most effective use of this data in almost all cases (Spanish slot filling is an exception).",
"We perform a pairwise two-sample t-test (assuming unequal variance) and find the results of X-METRA-ADA compared to FT on intent classification to be statistically significant with p-values of 1 .",
"5% and 2 .",
"4% for Spanish and Thai respectively, rejecting the null hypothesis with 95% confidence.",
"(a) Intent Accuracy on Spanish",
"(b) Intent Accuracy on Thai Figure 4: Ablation of the role of adaptation in X-METRA-ADA compared to X-METRA (X-METRA-ADA with the meta-training stage only).",
"X-METRA-ADA converges faster than X-METRA which in turn is better than FT for both languages.",
"More plots can be found in Appendix E. This suggests that zero-shot fine-tuning M-BERT on English only is over-fitting on English and its similar languages.",
"Using MLTA which adds more dialogue-specific mixed training helps reduce that gap for Thai on intent accuracy mainly, but not with the same degree on slot filling.",
"X-METRA-ADA outperforms all previous external baselines and fine-tuning models for both Spanish and Thai (except for slot filling on Spanish).",
"We achieve the best overall performance with an average cross-lingual cross-task increase of 3 .",
"2% over the FT baseline, 6 .",
"9% over FT w/EN, and 12 .",
"6% over MONO.",
"Among all models, MONO has the least stability as suggested by higher average standard deviation.",
"There is a tendency for X-METRA-ADA to work better for languages like Thai compared to Spanish as Thai is a truly low-resource language.",
"This suggests that pre-training on English only learns an unsuitable initialization, impeding its generalization to other languages.",
"As expected, fine-tuning on small amounts of the Dev data does not help the model generalize to new languages.",
"MONO baselines exhibit less stability than 3624 X-METRA-ADA.",
"On the other hand, X-METRA-ADA learns a more stable and successful adaptation to that language even on top of a model pre-trained on English with less over-fitting.",
"Table 2 shows a comparison of methods for TyDiQA-GoldP across seven language, evaluating using F1.",
"10 The benefits of fine-tuning and improvements from X-METRA-ADA observed in Table 1 are confirmed.",
"We also compare X-METRA-ADA to X-METRA, which is equivalent to X-METRA-ADA without the meta-adaptation phase.",
"On average, X-METRA increases by 10 .",
"8% and 1 .",
"5% over the best external and fine-tuning baseline respectively, whereas MONO results lag behind.",
"X-METRA-ADA outperforms X-METRA on average and is especially helpful on languages like Bengali and Telugu.",
"We compare X-METRA and X-METRA-ADA in more depth in Section 4.2.",
"Meta-learning significantly and consistently outperforms fine-tuning.",
"In Appendix D, we report zero-shot results for QA and notice improvements using X-METRA-ADA over FT for some languages.",
"However, we cannot claim that there is a direct correlation between the degree to which the language is low-resource and the gain in performance of X-METRA-ADA over fine-tuning.",
"Other factors like similarities of grammatical and morphological structure, and shared vocabulary in addition to consistency of annotation may play a role in the observed cross-lingual benefits.",
"Studying such correlations is beyond the scope of this paper.",
"Meta-Adaptation Role The learning curves in Figure 4 compare X-METRA-ADA, X-METRA (i.e. meta-training but no meta-adaptation), and fine-tuning, both with English and with target language data only, for both Spanish and Thai intent detection in MTOD.",
"In general, including English data in with in-language fine-tuning data lags behind language-specific training for all models, languages, and sub-tasks.",
"With the exception of slot filling on Spanish, there is a clear gap between naive fine-tuning and meta-learning, with a gain in the favor of X-METRA-ADA especially for Thai.",
"Naive fine-tuning, X-METRA, and X-METRA-ADA all start from the same checkpoint fine-tuned on English.",
"All model variants are sampled from 10 Full results using Exact Match scores too can be found in Appendix D. the same data.",
"For Spanish, continuing to use English in naive fine-tuning to Spanish reaches better performance than both variants of meta-learning for Slot filling on Spanish (see Appendix E).",
"This could be due to the typological similarity of Spanish and English, which makes optimization fairly easy for naive fine-tuning compared to Thai, which is both typologically distant and low-resource.",
"K-Shot Analysis We perform a k-shot analysis by treating the number of instances seen per class (i.e. shots') as a hyper-parameter to determine at which level few-shot meta-learning starts to outperform the fine-tuning and monolingual baselines.",
"As shown in Figure 5, it seems that while even one shot for X-METRA-ADA is better than fine-tuning on intent classification, k = q = 9 shot and k = q = 6 shot are at the same level of stability with very slightly better results for 6 shot showing that more shots beyond this level will not improve the performance.",
"While 1 shot performance is slightly below our monolingual baseline, it starts approaching the same level of performance as 3 shot upon convergence.",
"3625",
"To circumvent imperfect alignments in the cross-lingual representations, Liu et al. (2019) propose a latent variable model combined with cross-lingual refinement with a small bilingual dictionary related to the dialogue domain.",
"Liu et al. (2020) enhance Transformer-based embeddings with mixed language training to learn inter-lingual semantics across languages.",
"helps more than increasing k .",
"The gap is bigger between k = 6 q = 3 and k = 6 q = 6 especially for languages like Bengali and Telugu.",
"We can also see that k = 6 q = 3 is at the same level of performance to FT for those languages.",
"Downsampling Analysis We perform a downsampling analysis, where we gradually decrease the proportion of the overall set from which the target language is sampled used for few-shot learning in X-METRA-ADA and FT.",
"Figure 7 shows a comparison between intent accuracies and slot F1 scores between the main models X-METRA-ADA and FT on Thai.",
"We notice that as the percentage of query data increases, the gap between X-METRA-ADA and FT increases slightly, whereas the gain effect on slots is steadier.",
"This suggests that X-METRA-ADA is at the same level of effectiveness even for lower percentages.",
"Cross-lingual transfer learning Recent efforts apply cross-lingual transfer to downstream applications such as information retrieval (Jiang et al., 2020); information extraction (M'hamdi et al., 2019, Bari et al., 2020b), and chatbot applications (Lin et al., 2020, Abbet et al., 2018).",
"Upadhyay et al. (2018) and Schuster et al. (2019) propose the first real attempts at cross-lingual task-oriented dialog using transfer learning.",
"Although they show that cross-lingual joint training outperforms monolingual training, their zero-shot model lags behind machine translation for other languages.",
"However, although these approaches show promising zero-shot performance for Spanish, their learned refined alignments are not good enough to surpass machine translation baselines on Thai.",
"More recently, Hu et al. (2020) and Liang et al. (2020) introduce XTREME and XGLUE benchmarks for the large-scale evaluation of cross-lingual capabilities of pre-trained models across a diverse set of understanding and generation tasks.",
"In addition to M-BERT, they analyze models like XLM (Lample and Conneau, 2019) and Uni-coder (Huang et al., 2019).",
"Although the latter two models slightly outperform M-BERT, they need a large amount of parallel data to be pre-trained.",
"It is also not clear the extent to which massive cross-lingual supervision helps to bridge the gap to linguistically distant languages.",
"Meta-learning for NLP Previous work in meta-learning for NLP is focused on the application of first-order MAML (Finn et al., 2017).",
"Earlier work by Gu et al. (2018) extends MAML to improve low-resource languages for neural machine translation.",
"Dou et al. (2019) apply MAML to NLU tasks in the GLUE benchmark.",
"They show that meta-learning is a better alternative to multi-task learning, but they only validate their approach on English.",
"Wu et al. (2020) also use MAML for cross-lingual NER with a slight enhancement to the loss function.",
"More recently, Nooralahzadeh et al. (2020) also directly leverage MAML on top of M-BERT and XLM-R for zero-shot and few-shot XNLI and MLQA datasets.",
"Although their attempt shows that cross-lingual transfer using MAML outperforms other baselines, the degree of typological commonalities among languages plays a significant role in that effect.",
"In addition to that, their approach is an oversimplification of the n-way k-shot setup, with a one-fit-all sampling of data points for support and query and additional supervised fine-tuning.",
"In this paper, we adapt a meta-learning approach for cross-lingual transfer learning in Natural Language Understanding tasks.",
"Our experiments cover two challenging cross-lingual benchmarks: task-oriented dialog and natural questions including an extensive set of low-resource and typologically diverse languages.",
"X-METRA-ADA reaches better convergence stability on top of fine-tuning, reaching a new state of the art for most languages.",
"This work was started while the first author was a research intern at Adobe Research (Summer 2020).",
"This material is partially based upon work supported in part by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via Contract No. 2019-19051600007.",
"The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of ODNI, IARPA, or the U.S. Government.",
"The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein.",
"We thank the anonymous reviewers for their detailed comments."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"method",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"result",
"result",
"objective",
"objective",
"objective",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other"
] |
[
"Retrieval-based dialogue systems display an outstanding performance when pre-trained language models are used, which includes bidirectional encoder representations from transformers (BERT).",
"During the multi-turn response selection, BERT focuses on training the relationship between the context with multiple utterances and the response.",
"However, this method of training is insufficient when considering the relations between each utterance in the context.",
"This leads to a problem of not completely understanding the context flow that is required to select a response.",
"To address this issue, we propose a new fine-grained post-training method that reflects the characteristics of the multi-turn dialogue.",
"Specifically, the model learns the utterance level interactions by training every short context-response pair in a dialogue session.",
"Furthermore, by using a new training objective, the utterance relevance classification, the model understands the semantic relevance and coherence between the dialogue utterances.",
"Experimental results show that our model achieves new state-of-the-art with significant margins on three benchmark datasets.",
"This suggests that the fine-grained post-training method is highly effective for the response selection task.",
"1 1 Introduction Constructing a dialogue system that can naturally and consistently interact with humans is currently a popular research topic.",
"There are two approaches for the implementation of a dialogue system: generation-based and retrieval-based methods.",
"The latter approach aims to select the correct response among the response candidates.",
"In the initial multi-turn response selection, Lowe et al. (2015) proposed leveraging RNN to match the dialogue context with a response.",
"Later, with the advent of 1 https://github.com/hanjanghoon/BERT_ FP the attention mechanism (Bahdanau et al., 2015; Luong et al., 2015; Vaswani et al., 2017), multiturn response selection models that use the attention mechanism have been proposed (Zhou et al., 2018).",
"Recently, pre-trained language models, such as bidirectional encoder representations from transformers, BERT, have been applied to a variety of response selection models (Vig and Ramea, 2019; Lu et al., 2020; Gu et al., 2020), and they have shown excellent performance.",
"Recently, pre-trained language models have been widely used in several natural language processing areas, such as question answering and dialogue systems.",
"One of the best pre-trained language models, BERT (Devlin et al., 2019), is initially pre-trained on a large and general domain corpus, and then it is fine-tuned to adapt to specific tasks.",
"Since BERT is pre-trained with general data, its performance can be improved by post-training to adapt to domain-specific data.",
"Some previous studies (Xu et al., 2019; Whang et al., 2020) proposed a post-training method that can learn domain data before fine-tuning for a task.",
"In the previous studies, the models were post-trained using domain-specific task data with the same pre-training objectives as BERT, masked language model (MLM) and next sentence prediction (NSP).",
"To develop a new post-training method that is suitable for dialogue, we propose a simple but powerful fine-grained post-training method.",
"The new post-training method has two learning strategies.",
"The first is to train the model by dividing the entire dialogue into multiple short contextresponse pairs.",
"The second is to train the model with a new objective called utterance relevance classification, which classifies the relation between given utterances and the target utterance into more fine-grained labels.",
"The dialogue consists of a context that includes multiple utterances and a response with one utterance.",
"There are two advantages to learning the dialogue by dividing it into multiple new short context-response pairs, rather than learning with the entire contextresponse pair during post-training.",
"First, the model can learn the interaction between internal utterances, which is overlooked in the previous training methods.",
"The previous multi-turn response selection models focus on identifying the associated information between a context with multiple utterances and the response.",
"To understand the associated information, BERT takes the whole context as input to represent the relationship between the context and the response, instead of gradually expanding and learning the relationship between the utterances inside the context.",
"The relationship between the entire context and response can be learned through self-attention.",
"However, the relationship between the utterances in the dialogue is easily overlooked.",
"To address this issue, we divide the entire dialogue into multiple short contextresponse pairs.",
"Since each pair consists of internal utterances, the model can learn the utterance-level interactions.",
"The second advantage is that the model can capture the relationship between the utterances more accurately.",
"In general, the utterances that are related to the response are located close to the response.",
"As short context-response pairs consist only of utterances that are close to the response, more fine-grained training is possible.",
"Another strategy of fine-grained post-training is that it involves using a new training objective that is called the utterance relevance classification (URC).",
"In the case of the NSP used in BERT, the model distinguishes whether the target utterance is random or the next.",
"As mentioned by Lan et al. (2020), the model trained with the NSP can easily learn the topic prediction that distinguishes the semantic meaning of the utterances.",
"However, it lacks coherence prediction that distinguishes whether the selected utterance is consecutive.",
"In the case of sentence ordering prediction (SOP) used in Lan et al. (2020), the coherence between the utterances is well learned because the order of the two sequences is trained.",
"However, topic prediction is relatively insufficient because the two sequences are semantically similar.",
"As it is important to distinguish between semantically similar utterances in the multi-turn dialogue and determine whether the selected utterances are consecutive, we propose URC, which classifies the target utterance into three categories (random, semantically similar, next) to learn the topics and coherence.",
"1. Through short context-response pair training during fine-grained post-training, the model effectively learns the interactions between internal utterances, which can be easily overlooked in the existing methods.",
"This significantly improves the performance of response selection.",
"2. By devising the new training objective, URC, we enhance the model's capability to measure both the semantic relevance and coherence between utterances, improving the model to select the appropriate response.",
"We achieved state-of-the-art performance with a significant improvement for three benchmarks (Ubuntu, Douban, E-commerce).",
"Specifically, our model achieved an absolute improvement in R 10 @1 by 2.7%p, 0.6%p, and 9.4%p on Ubuntu Corpus V1, Douban Corpus, and E-commerce Corpus, respectively, in comparison to previous state-of-the-art methods.",
"The results indicate the effectiveness and generality of the proposed method.",
"The existing methods for building dialogue systems can be categorized into two groups: those with a retrieval-based approach (Chaudhuri et al., 2018; Tao et al., 2019; Yuan et al., 2019) and those with a generation-based approach (Wu et al., 2018; Zhou et al., 2018; Hosseini-Asl et al., 2020; Ham et al., 2020).",
"Recent studies have focused on the multi-turn retrieval dialogue system where the system selects the most appropriate response when a multi-turn dialogue context is provided.",
"Lowe et al. (2015) proposed a new benchmark dataset called the Ubuntu internet relay chat (IRC) Corpus V1 and a RNN-based baseline model.",
"Kadlec et al. (2015) suggested a dual encoder-based model that attempts to effectively encode the context and response by using LSTM and CNN as encoder.",
"With the advent of the attention mechanism (Bah-danau et al., 2015; Luong et al., 2015; Vaswani et al., 2017), models such as the deep attention matching network (Zhou et al., 2018), which applied the attention mechanism to the response selection dialogue system, have been proposed.",
"Chen and Wang (2019) adapted the natural language inference model to the response selection task.",
"Tao et al. (2019) performed a deep interaction between the context and the response through multiple interaction blocks.",
"Yuan et al. (2019) improved the performance by controlling the dialogue context information with a multi-hop selector.",
"The pre-trained language models have shown an impressive performance in the response selection (Lu et al., 2020; Gu et al., 2020; Whang et al., 2021; Xu et al., 2021).",
"One of those, BERT, is a bidirectional transformer-based encoder that has multiple layers.",
"We use the publicly opened BERT base model in which the number of layers, attention head, and size of the hidden state are 12, 12, and 768, respectively.",
"There are a variety of training objectives for the pre-trained language models.",
"BERT uses two training objectives: MLM and NSP.",
"The former randomly masks 15% of the tokens that are predicted by the model.",
"This method of training aims for the model to learn the overall contextual representation of a given text.",
"In the latter method, the model is given two sequences of text: A and B. The model is trained to determine if sequence B is the next sequence after sequence A. The model takes the input, sequences A and B, separated by the special token SEP.",
"The model uses the segment embed-dings of 0 for sequence A and 1 for sequence B. Then, by using the CLS token, the model predicts the relationship between sequences A and B. AL-BERT (Lan et al., 2020) uses sentence ordering prediction (SOP) instead of NSP as the training objectives.",
"The SOP distinguishes whether the order of sequences A and B is correct or if they have been swapped.",
"The post-training method, which helps the model understand a certain domain, was introduced in the response selection task (Whang et al., 2020; Gu et al., 2020; Humeau et al., 2020; Whang et al., 2021; Xu et al., 2021).",
"In addition to domain adaptation, the post-training method has the advantage of data augmentation because it learns the relationship between the two sequences in the dialogue session with the NSP.",
"However, the method does not reflect the conversational characteristics because it merely follows BERT's pre-training method.",
"To address this issue, we propose a novel post-training method that is suitable for a multi-turn dialogue.",
"The proposed method achieved better performance in comparison to the previous post-training.",
"Suppose that the dataset D = { ( c i , r i , y i ) } Ni =1 is a set of N triples that consist of the context c i , response r i , and ground truth label y i .",
"The context is a sequence of utterances, which is c i = { u 1 , u 2 , ..., u M } , where M is the maximum context length.",
"The j th utterance u j = { w j, 1 , w j, 2 , ..., w j,L } contains L tokens, where L is the maximum sequence length.",
"Each response, r i , is a single utterance.",
"y i { 0 , 1 } denotes the truth label of a given triple where y i = 1. This indicates that r i is the correct response for the context c i ; otherwise, y i = 0. The task is to find the matching model, g ( , ) , for the D .",
"The matching degree of c i and r i is obtained through g ( c i , r i ) for a given contextresponse pair ( c i , r i ).",
"This study is based on the binary classification to fine-tune BERT for the response selection task that analyzes the relationship between the context and response.",
"The input format ( x ) of the existing BERT model is ( [ CLS ] , sequenceA , [ SEP ] , sequenceB , [ SEP ] ), where [ CLS ] and [ SEP ] are CLS and SEP tokens, respectively.",
"To measure the matching degree of a context-response pair, we construct the input by using sequence A as a context and sequence B as a response.",
"In addition, the end of the utterance token (EOU) is placed at the end of each utterance to distinguish them in the context.",
"The input format of BERT for the response selection is as follows: x = [ CLS ] u 1 [ EOU ] ... u M [ EOU ] [ SEP ] r i [ SEP ] (1) x subsequently becomes input representation vectors through the sum of the position, segment, and token embedding.",
"The transformer block in BERT calculates the cross attention between the input representation of the context and the response through the self-attention mechanism.",
"Then, the final hidden vector of the first input token in BERT, T [ CLS ] , is used as the aggregate representation of the contextresponse pair.",
"The final score g ( c, r ) , which is the matching degree between the context and the response, is obtained by passing T [ CLS ] through a single-layer neural network.",
"where W fine is a task-specific trainable parameter for fine-tuning.",
"Eventually, the weights of the model are updated by using the cross-entropy loss function.",
"To improve the capability of selecting an appropriate response by effectively grasping multi-turn dialogue information, we propose a simple but powerful fine-grained post-training method in Figure",
"1. The fine-grained post-training method has two learning strategies.",
"The entire dialogue session is divided into multiple short context-response pairs, and URC is used as one of the training objectives.",
"Through the former strategy, the model learns the interaction of the related internal utterances of the dialogue.",
"Through URC, it learns the semantic relevance and coherence between the utterances.",
"We post-train the model by constructing multiple short context-response pairs using all utterances of the dialogue session to learn the utterance level interaction.",
"We regard every utterance as a response and its previous k utterances as a short context.",
"The short context contains fewer utterances than the average number of utterances in the dialogue sessions.",
"Each short context-response pair is trained to learn the internal utterance interactions, eventually allowing the model to understand the relationship between all the utterances in a dialogue session.",
"It also allows the model to learn the interaction of the utterances closely related to the response because the context is appropriately configured with a short length.",
"The NSP objective (Devlin et al., 2019) is inadequate for capturing the coherence between the utterances.",
"This is because NSP mainly learns the topic's semantic relevance by classifying between a random and the next utterance.",
"By using the SOP (Lan et al., 2020) as an objective function, the ability to distinguish the semantic relevance decreases because the model learns the coherence of two utterances with a similar topic.",
"To learn both the semantic relevance and the coherence in a dialogue, we propose a new training objective that is called the utterance relevance classification (URC) in Figure",
"2. The URC classifies the target utterance for a given short context into one of three labels.",
"The first label is a random utterance.",
"Secondly, an utterance, which is not the response, is randomly sampled in the same dialogue session.",
"Although utterances of the same dialogue session have a similar topic to the correct response, they are inappropriate for the coherence prediction.",
"Finally, the correct response is selected.",
"The model learns the topic prediction by performing a classification between the random utterances and correct responses, and the model makes the coherence predictions by classifying the random utterances and correct responses in the same dialogue sessions.",
"By classifying the relationship between the short context and the target utterance into three cases, the model can learn both the semantic relevance information and the coherence information of the dialogue session.",
"An overview of the fine-grained post-training (FP) method is shown in Figure",
"1. First, when given the conversation session U i = { u 1 , u 2 , ..., u M , u M +1 = r i } , we select the continuous utterances and form a short contextresponse pair S j = { u j , u j +1 , ..., u j + k 1 , u j + k } with a context length of k .",
"The model classifies the relationship between a short context sc = { u j , u j +1 , ..., u j + k 1 } and the given target utterance u t .",
"The target utterance can be one of three options: a random utterance u r , a random utterance for the same dialogue session u s , or the response u j + k , where 1 s M + 1 and j + k (cid:54) = s .",
"We denote the input sequence x for the fine-grained post-training as follows: x = [ CLS ] u j [ EOU ] ... u j + k 1 [ EOU ] [ SEP ] u t [ SEP ] (4) As an aggregate representation, T [ CLS ] is used.",
"The final score g urc ( sc, u t ) is obtained by feeding T [ CLS ] through a single-layer perceptron, and the degree of relevance between the short context and target utterance is obtained through the score.",
"To calculate the URC loss, we use the cross-entropy loss, which is formulated as follows: LURC = (cid:88) 3 (cid:88) i y i log( g urc ( sc, u t ) i ) (5) To train the proposed model, we use the MLM and URC together.",
"In the case of the MLM, we apply a dynamic masking technique proposed by RoBERTa (Liu et al., 2019), which is unlike BERT.",
"The model can learn more contextual representations because it learns by masking a random token each time instead of learning by masking a predetermined token.",
"To optimize the model, we use the sum of the cross-entropy loss of the MLM and URC, which is formulated as follows: LFP = LMLM + LURC (6) 4 Experiments 4.1 Datasets We tested our model on widely used benchmarks that include Ubuntu Corpus V1, Douban Corpus, and the E-commerce Corpus.",
"The statistics for the three datasets are presented in Table",
"1. Ubuntu Corpus The Ubuntu IRC Corpus V1 (Lowe et al., 2015) is chatting log conversations, a publicly available domain-specific dialogue dataset.",
"This dialogue data deals with Ubuntu-related topics.",
"In our study, the data proposed by Xu et al. (2017) are used.",
"The data are preprocessed with special placeholders such as num-bers, URLs, and system paths.",
"Douban Corpus Douban Corpus (Wu et al., 2017) is a Chinese open-domain dataset from the Douban group, which is a popular social networking service.",
"It consists of dyadic dialogues (i.e., a conversation between two people) that is longer than two turns.",
"The E-commerce Corpus (Zhang et al., 2018) is a Chinese multi-turn dialogue that is collected from Taobao, which is the largest e-commerce platform in China.",
"It contains real-world conversations between customers and customer service staff.",
"The corpus consists of diverse conversations such as consultations and recommendations.",
"For the fine-grained post-training, we reconstructed the three benchmark datasets.",
"Specifically, out of the one million triples in each benchmark's training set, we used 500K positive triples as dialogue sessions.",
"Since multiple short context-response pairs could be created in one dialogue session, we eventually constructed 12M, 9M, and 6M sub-context-response pairs for Ubuntu Corpus, Douban Corpus, E-commerce Corpus, respectively.",
"These sub-context-response pairs were used for the post-training.",
"Following the previous works (Tao et al., 2019; Yuan et al., 2019; Gu et al., 2020), we used recall as an evaluation metric.",
"Recall is denoted as R 10 @ k , which implies that the correct answer exists among the top k candidates out of the ten candidate responses.",
"Specifically, in the experiment, R 10 @1 , R 10 @2 , and R 10 @5 were used.",
"Apart from R 10 @ k , we also employed MAP (mean average precision), MRR (mean reciprocal rank), and P @1 (precision at one) for the Douban Corpus because the dataset may contain more than one positive response from the candidates.",
"We compared our fine-grained post-trained model, BERT-FP , with the following previous models.",
"For the initial checkpoint, we adapted the BERT base (110M) from Devlin et al. (2019).",
"Single-turn matching models: Lowe et al. (2015), Kadlec et al. (2015) proposed basic models with RNN, CNN, and LSTM.",
"SMN: Wu et al. (2017) decomposes the context-response pair into several utterance-response pairs.",
"After matching every utterance and response, the matching vector is accumulated as the final matching score.",
"DUA: Zhang et al. (2018) formulates the previous utterances into the context by using a deep utterance aggregation.",
"DAM: Zhou et al. (2018) proposed a transformer encoder-based model and calculated the matching score between the context and response through self-attention and cross-attention.",
"IoI: Through multiple interaction block chains, Tao et al. (2019) allows for deep-level matching between the utterances and responses.",
"ESIM: Chen and Wang (2019) applied the neural language inference (NLI)'s ESIM model to the response selection.",
"MSN: Yuan et al. (2019)'s model selects more relevant context utterances with a multihop selector, and it determines the degree of matching between the selected context utterances and the response.",
"BERT: A vanilla model fine-tuned to the response selection task on the pre-trained BERT base without post-training.",
"RoBERTa-SS-DA: Lu et al. (2020) proposed the speaker segmentation approach, which discriminates the different speakers and also applied dialogue augmentation.",
"BERT-DPT: Whang et al. (2020) proposed a model that applies domain post-training (DPT).",
"The model is post-trained with BERT's pre-training methods, MLM and NSP, and then fine-tuned to the response selection task.",
"BERT-VFT: Whang et al. (2020) applied the efficient variable fine-tuning (VFT) method that was proposed by Houlsby et al. (2019).",
"SA-BERT: Gu et al. (2020) incorporated speaker-aware embedding to the model; therefore, it is aware of the speaker change information.",
"BERT-SL: Xu et al. (2021) introduced four self-supervised tasks and trained the response selection model with these auxiliary tasks in a multi-task manner.",
"Table 2 shows the performance of the proposed BERT-FP that is evaluated on three benchmarks.",
"As you can see in the results, the proposed model outperformed all of the other models used as baselines.",
"In comparison to the vanilla model of BERT, our model achieved an absolute improvement in R 10 @1 by 10.3%p, 4.4%p, and 26%p on Ubuntu Corpus V1, Douban Corpus, and E-commerce Corpus, respectively.",
"Compared to BERT-DPT, our model achieved an absolute improvement of 6%p in R 10 @1 on the Ubuntu Corpus.",
"These results indicate that fine-grained post-training, which reflects the dialogue's characteristics, is superior to the previous post-training.",
"In comparison to the previous state-of-the-art models, UMSBERT + and BERT-SL, our model achieved an improved performance by a large margin in terms of all the metrics for the three benchmarks.",
"These results demonstrate that our method effectively learns the semantic relevance and coherence between the internal utterances, which enhances selection performance significantly.",
"Figure 3 shows the performance variations of BERT-FP depending on the length of the short context.",
"In this experiment, we trained the models with 10% of the training set and evaluated them with the entire test set to perform many experiments.",
"Therefore, they achieved lower performance.",
"For the Ubuntu Corpus and E-commerce Corpus, the best performance in R 10 @1 is achieved when the context length is three.",
"For Douban Corpus, we evaluated performance with MAP rather than Models R 10 @1 R 10 @2 R 10 @5 BERT-NSP 0.904 0.960 0.994 BERT-SOP 0.865 0.935 0.987 BERT-URC 0.911 0.962 0.994 Table 3: Performance according to the training objective on Ubuntu Corpus V1.",
"R 10 @1 because it may have multiple correct responses in the candidates.",
"The best performance in MAP on Douban Corpus is achieved when the context length is set to two.",
"We compared the proposed training objective (URC) with the previous training objectives (NSP, SOP).",
"Table 3 demonstrates that our training objective outperforms the other training objectives.",
"This indicates that learning both topics and the coherence between the internal utterances is important.",
"We investigated the impact of each part of the fine-grained post-training method through a series of ablation experiments on the Ubuntu Corpus in Table",
"4. The model without post-training (BERT) is used as the baseline.",
"Then, we gradually applied our methods for post-training.",
"+MLM indicates that the model is post-trained only with the MLM.",
"The _SCR suffix denotes the model that is post-trained with the short context-response pairs.",
"The comparison between +MLM and +MLM+NSP shows that the NSP during the existing post-training has little effect on the performance.",
"However, as shown in the comparison between +MLM and MLM+NSP_SCR, the NSP trained with a short context-response pair significantly improved the model performance.",
"The experimental results also showed that using URC instead of NSP enhances performance.",
"The post-training method has the effect of data augmentation.",
"However, it differs from the usual Models R 10 @1 R 10 @2 R 10 @5 BERT (Gu et al., 2020) 0.808 0.897 0.975 BERT-DA 0.880 0.946 0.990 BERT-FP 0.911 0.962 0.994 Table 5: Comparing the data augmentation on Ubuntu Corpus V1 Models R 10 @1 R 10 @2 R 10 @5 BERT (Gu et al., 2020) 0.808 0.897 0.975 BERT-FP-NF 0.862 0.933 0.986 BERT-FP 0.911 0.962 0.994 Table 6: The effectiveness of fine-grained post-training for response selection on Ubuntu Corpus V1 data augmentation method, which directly augments the data in the fine-tuning step.",
"Therefore, we compared the fine-grained post-training (BERT-FP) method with the typical data augmentation (BERT-DA) on the Ubuntu Corpus.",
"The data augmentation strategy is similar to the method used in Chen and Wang (2019).",
"We considered each utterance as a response and its previous utterances as its context.",
"The experimental results are shown in Table",
"5. BERT-FP outperforms the data augmentation model (BERT-DA) by a 3.1%p in R 10 @1 .",
"The significant improvement demonstrates the effectiveness of the proposed method in comparison to the data augmentation.",
"Our method, including post-training and fine-tuning steps, is about 2.5 times faster than BERT-DA.",
"In particular, the post-trained model takes much less time to fine-tune than BERT-DA, making them easy to adapt to various applications.",
"The Effectiveness of Fine-grained Post-Training for Response Selection Task",
"To demonstrate the effectiveness of the fine-grained post-training method for the response selection task, we compared three different models: BERT, BERT-FP, and BERT-FP-NF (no fine-tuning).",
"BERT-FP-NF is a model that was post-trained and evaluated without fine-tuning.",
"As shown in Table 6, the performance of BERT-FP-NF is close to BERT-FP, which is fine-tuned.",
"These results show that even before fine-tuning to the response selection task, our fine-grained post-training alone could measure the matching degree between the context and the response.",
"In this paper, we have proposed a new fine-grained post-training method that is suitable for the multiturn dialogue.",
"The proposed method allows the matching model to learn the semantic relevance and the coherence of the utterances in the dialogue, and it improves the model's capability to select the appropriate response.",
"The experimental results on the three benchmark datasets demonstrate our post-training method's superiority for the response selection.",
"From this, our model achieved a new state-of-the-art performance for all three benchmarks.",
"In the future, we plan to research new post-training methods that are suitable for a variety of tasks, such as question answering and dialogue generation.",
"This work was supported by the Institute for Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2020-0-00368, A Neural-Symbolic Model for Knowledge Acquisition and Inference Techniques)."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"objective",
"objective",
"other"
] |
[
"Abstract Large Pre-trained Language Models (PLMs) have become ubiquitous in the development of language understanding technology and lie at the heart of many artificial intelligence advances.",
"While advances reported for English using PLMs are unprecedented, reported advances using PLMs for Hebrew are few and far between.",
"The problem is twofold.",
"First, so far, Hebrew resources for training large language models are not of the same magnitude as their English counterparts.",
"Second, most benchmarks available to evaluate progress in Hebrew NLP require morphological boundaries which are not available in the output of PLMs.",
"In this work we remedy both aspects.",
"We present AlephBERT , a large PLM for Modern Hebrew, trained on larger vocabulary and a larger dataset than any Hebrew PLM before.",
"Moreover, we introduce a novel neural architecture that recovers the morphological segments encoded in contextualized embedding vectors.",
"Based on this new morphological component we offer an evaluation suite consisting of multiple tasks and benchmarks that cover sentence-level, word-level and sub-word level analyses.",
"On all tasks, AlephBERT obtains state-of-the-art results beyond contemporary Hebrew state-of-the-art models.",
"We make our AlephBERT model, the morphological extraction component, and the Hebrew evaluation suite publicly available, for future investigations and evaluations of Hebrew PLMs.",
"Contextualized word representations provided by models such as BERT (Devlin et al., 2019), RoBERTa (Liu et al., 2019), GPT3 (Brown et al., 2020), T5 (Raffel et al., 2020) and more, were shown in recent years to be a critical component for obtaining state-of-the-art performance on a wide range of Natural Language Processing (NLP) tasks, from surface syntactic tasks as tagging and parsing, to downstream semantic tasks as question answering, information extraction and text summarization.",
"While advances reported for English using such models are unprecedented, previously reported results using PLMs in Modern Hebrew are far from satisfactory.",
"Specifically, the BERT-based Hebrew section of multilingual-BERT (Devlin et al., 2019) (henceforth, mBERT), did not provide a similar boost in performance as observed by the English section of mBERT.",
"In fact, for several reported tasks, the results of the mBERT model are on a par with pre-neural models or neural models based on non-contextual embeddings (Tsarfaty et al., 2020; Klein and Tsarfaty, 2020).",
"An additional Hebrew BERT-based model, HeBERT (Chriqui and Yahav, 2021), has been recently released, yet without empirical evidence of performance improvements on key components of the Hebrew NLP pipeline.",
"The challenge of developing PLMs for morphologically-rich and medium-resourced languages such as Modern Hebrew is twofold.",
"First, contextualized word representations are obtained by pre-training a large language model on massive quantities of unlabeled texts.",
"In Hebrew, the size of published texts available for training is relatively small.",
"To wit, Hebrew Wikipedia (300K articles) used for training mBERT is orders of magnitude smaller compared to English Wikipedia (6M arti-cles).",
"Second, commonly accepted benchmarks for evaluating Hebrew models, via Morpho-Syntactic Tagging and Parsing (Sadde et al., 2018), or Named Entity Recognition (Bareket and Tsarfaty, 2020) require decomposition of words into morphemes , 1 which are distinct of the sub-words (a.k.a. wordpieces) provided by standard PLMs.",
"Such morphemes are as of yet not readily available in the PLMs' output embeddings.",
"input units used by the PLMs and the sub-word morphological units needed for evaluation.",
"PLMs employ sub-word tokenization mechanisms such as WordPiece or Byte-Pair Encoding (BPE) for the purposes of minimizing Out-Of-Vocabulary words (Sennrich et al., 2016).",
"These sub-word tokens are generated in a pre-processing step, without utilization of any linguistic information, and passed as input to the PLM.",
"Crucially, such word-pieces do not reflect morphological units .",
"Extracting morphological units from contextualized vectors provided by PLMs is challenging yet necessary in order to enable morphological-level evaluation of Hebrew PLMs on standard benchmarks.",
"In this paper we introduce AlephBERT , a Hebrew PLM trained on more data and a larger vocabulary than any Hebrew PLM before.",
"2 Moreover, we propose a novel architecture that extracts the morphological sub-word units implicitly encoded in the contextualized vectors outputted by PLMs.",
"Using AlephBERT and the proposed morphological extraction model we enable evaluation on all existing Hebrew benchmarks.",
"We thus present a processing and evaluation pipeline tailored to fit Morphologically Rich Languages (MRLs), i.e., covering 2 We make our PLM https://huggingface.co/ onlplab/alephbert-base and demo https://nlp.",
"sentence-level, word-level and most importantly sub-word morphological-level tasks ( Segmentation, Part-of-Speech Tagging, full Morphological Tagging, Dependency Parsing, Named Entity Recognition (NER) and Sentiment Analysis ), and present new and improved SOTA for Modern Hebrew on all of these tasks.",
"Contextualized word embedding vectors are a major driver for improved performance of deep learning models on many Natural Language Understanding (NLU) tasks.",
"Initially, ELMo (Peters et al., 2018) and ULMFit (Howard and Ruder, 2018) introduced contextualized word embedding frameworks by training LSTM-based models on massive amounts of texts.",
"The linguistic quality encoded in these models was demonstrated over 6 tasks: Question Answering, Textual Entailment, Semantic Role labeling, Coreference Resolution, Name Entity Extraction, and Sentiment Analysis.",
"The next big leap was obtained with the introduction of the GPT-1 framework by Radford and Sutskever (2018).",
"Instead of using LSTM layers, GPT is based on 12 layers of Transformer decoders with each decoder layer composed of a 768-dimensional feed-forward layer and 12 self-attention heads.",
"Devlin et al. (2019) followed along the same lines and implemented Bidirectional Encoder Representations from Transformers, or BERT in short.",
"BERT attends to the input tokens in both forward and backward directions while optimizing a Masked Language Model and a Next Sentence Prediction objective objectives.",
"BERT Benchmarks An integral part involved in developing various PLMs is providing NLU multitask benchmarks used to demonstrate the linguistic abilities of new models and approaches.",
"English BERT models are evaluated on 3 standard major benchmarks.",
"The Stanford Question Answering Dataset (SQuAD) (Rajpurkar et al., 2016) is used for testing paragraph-level reading comprehension abilities.",
"Wang et al. (2018) selected a diverse and relatively hard set of sentence and sentence-pair tasks which comprise the General Language Understanding Evaluation (GLUE) benchmark.",
"The SWAG (Situations With Adversarial Generations) dataset (Zellers et al., 2018) presents models with partial description of grounded situations to see if they can consistently predict subsequent scenarios, thus indicating abilities of commonsense reasoning.",
"When evaluating Hebrew PLMs, one of the key pitfalls is that there are no Hebrew versions for these benchmarks.",
"Furthermore, none of the suggested benchmarks account for examining the capacity of PLMs for encoding the word-internal morphological structures which are inherent in MRLs.",
"In this work we enable a generic morphological-level evaluation pipeline that is suited for PLMs of MRLs.",
"Multilingual vs. Monolingual BERT Devlin et al. (2019) produced 2 BERT models, for English and Chinese.",
"To support other languages, they trained a multilingual BERT (mBERT) model combining texts covering over 100 languages, in the hoped to benefit low-resource languages with the linguistic information obtained from languages with larger datasets.",
"In reality, however, mBERT performance on specific languages has not been as successful as English.",
"Consequently, several research efforts focused on building monolingual BERT models as well as providing language-specific evaluation benchmarks.",
"Liu et al. (2019) trained CamemBERT, a French BERT model evaluated on syntactic and semantic tasks in addition to natural language inference tasks.",
"Rybak et al. (2020) trained HerBERT, a BERT PLM for Polish.",
"They evaluated it on a diverse set of existing NLU benchmarks as well as a new dataset for sentiment analysis for the e-commerce domain.",
"Polignano et al. (2019) created Alberto, a BERT model for Italian, using a massive tweet collection.",
"They tested it on several NLU tasks subjectivity, polarity (sentiment) and irony detection in tweets.",
"In order to obtain a large enough training corpus in low-resources languages, such as Finnish (Virtanen et al., 2019) and Persian (Farahani et al., 2020), a great deal of effort went into filtering and cleaning text samples obtained from web crawls.",
"BERT for MRLs Languages with rich morphology introduce another challenge involving the identification and extraction of sub-word morphological information.",
"In many MRLs words are composed of sub-word morphological units, with each unit acting as a single syntactic unit bearing as single POS tag (mimicking words' in English).",
"Antoun et al. (2020) addressed this for Arabic, a Semitic MRLs, by pre-processing the training data using a morphological segmenter, producing morphological segments to be used for training AraBERT instead of the actual words.",
"By doing so, they were able to produce output vectors that corre-Language Oscar (duped) Size Wikipedia Articles English 2.3T 6,282,774 Russian 1.2T 1,713,164 Chinese 508G 1,188,715 French 282G 2,316,002 Arabic 82G 1,109,879 Hebrew 20G 292,201 Table 1: Corpora Size Comparison: Resource-savvy languages vs. Hebrew.",
"spond to morphological segments rather than the original space-delimited word-tokens.",
"However, this approach requires the application of the same segmenter at inference time as well, and like any pipeline approach, this setup is susceptible to error propagation.",
"This risk is magnified as words in MRLs may be morphologically ambiguous, and the predicted segments might not represent the correct interpretation of the words.",
"As a result, the quality of the PLM depends on the accuracy achieved by the segmenting component.",
"A particular novelty of this work is not making any changes to the input, letting the PLM encode morphological information associated with complete Hebrew tokens.",
"Instead, transforming the resulting contextualized word vectors into morphological-level segments via a novel neural architecture which we discuss shortly.",
"Evaluating PLMs for MRLs Across all of the above-mentioned language-specific PLMs, evaluation was performed on the word-,sentenceor paragraph-level.",
"Non examined the capacity of PLMs to encode sub-word morphological-level information which we focus on in this work.",
"Sahin et al. (2019) probed various information types encoded in embedded word vectors.",
"Similarly to us, they focused on languages with rich morphology where linguistic signals are encoded at the morphological, subword level.",
"Their work is more about explainability showing high positive correlation of probing tasks to the downstream tasks, especially for morphologically rich languages.",
"Unlike us, they assume a single POS tag and set of features per word in their probing tasks.",
"In Hebrew, Arabic and other MRLs, tokens may carry multiple POS per word, and are required to be segmented for further processing.",
"We provide a framework that extracts subword morphological units given contextualized word vectors, that enables to evaluate PLMs on morphologically-aware datasets where words can have multiple POS tags and feature-bundles.",
"Data The PLM termed AlephBERT that we provide herein is trained on a larger dataset and a larger vocabulary than any Hebrew BERT instantiation before.",
"The data we train on is listed in Table 2.",
"Concretely, we employ the following datasets for pre-training:",
"(i) Oscar: Deduplicated Hebrew portion extracted from Common Crawl via language classification, filtering and cleaning (Ortiz Surez et al., 2020).",
"(ii) Wikipedia: Texts from all of Hebrew Wikipedia, extracted using Attardi (2015).",
"(iii) Twitter: Hebrew tweets collected between 2014-09-28 and 2018-03-07.",
"We removed markers ( RT: , @ user mentions and URLs), and eliminated duplicates.",
"For data statistics, see Table 2.",
"The Hebrew portions of Oscar and Wikipedia provide us with a training-set size orders-of-magnitude smaller compared with resource-savvy languages, as shown in Table 1.",
"In order to build a strong PLM we need a considerable boost in the amount of sentences the PLM can learn from, which in our case comes form massive amounts of tweets added to the training set.",
"We acknowledge the potential inherent concerns associated with this data source (population bias, behavior patterns, bot masquerading as humans etc.) and note that we have not made any explicit attempt to identify these cases.",
"Honoring ethical and legal constraints we have not manually analyzed nor published this data source.",
"While the free form language expressed in tweets might differ significantly from the text found in Oscar and Wikipedia, the sheer volume of tweets helps us close the resource gap substantially with minimal effort.",
"3 Model We used the Transformers training framework of Huggingface (Wolf et al., 2020) and trained two different models a small model with 6 hidden layers learned from the Oscar portion of our dataset, and a base model with 12 hidden layers which was trained on the entire dataset.",
"The processing units used are wordpieces generated by training BERT tokenizers over the respective 3 For more details and an ethical discussion, see Section 8.",
"datasets with a vocabulary size of 52K in both cases.",
"Following the work on RoBERTa (Liu et al., 2019) we optimize AlephBERT with a masked-token prediction loss.",
"We deploy the default masking configuration where 15% of word piece tokens are masked.",
"In 80% of the cases, they are replaced by [MASK], in 10% of the cases, they are replaced by a random token and in the remaining cases, the masked tokens are left as is.",
"Operation To optimize GPU utilization and decrease training time we split the dataset into 4 chunks based on the number of tokens in a sentence and consequently we are able to increase batch sizes and dramatically shorten training time.",
"We trained for 5 epochs with learning rate 1e-4 followed by an additional 5 epochs with learning rate at 5e-5 for a total of 10 epochs.",
"We trained AlephBERT base over the entire dataset on an NVidia DGX server with 8 V100 GPUs which took 8 days.",
"AlephBERT small was trained over the Oscar portion only, using 4 GTX 2080ti GPUs taking 5 days in total.",
"Modern Hebrew is a Semitic language with rich morphology and complex orthography.",
"As a result, the basic processing units in the language are typically smaller than raw space-delimited tokens.",
"Subsequently, most standard evaluation tasks require knowledge of the internal morphological boundaries within the raw tokens.",
"To accommodate this granularity requirement we developed a neural model designed to produce the disambiguated morphological segments for each token in context.",
"These linguistic segmentations are distinct of the word-pieces employed by the PLM.",
"In the morphological extraction neural model, each input token is represented by (one or more) contextualized word-vectors produced by the PLM.",
"Each word-piece token is associated with a vector, and for each space-delimited token, we average the word-piece vectors.",
"We feed the resulting vector into a seq2seq model and encode the surface token as a sequence of characters using a BiLSTM, followed by a decoder that generates an output sequence of characters, using space as a special symbol signaling morphological boundaries.",
"For tasks involving both segments and labels (Part-of-Speech Tagging, Morphological-Features Tagging, Named-Entity Recognition) we expand this network in a multi-task learning setup; when generating an end-of-segment (space) symbol, the model also predicts task label, and we combine the segment-label losses.",
"The complete morphological extraction architecture is illustrated in Figure 2.",
"Goal In order to empirically gauge the effect of model size and data quantity on the quality of the language model, we compare the performance of AlephBERT (both small and base ) with all existing Hebrew BERT instantiations.",
"In this Section, we detail the tasks and evaluation metrics.",
"In the next Section, we present and analyze the results.",
"Sentiment Analysis We first report on a sentence classification task, assigning a sentence with one of three sentiment values: negative, positive, neutral.",
"Sentence-level predictions are achieved by directly fine-tuning the PLM using an additional sentence-classification head The sentence-level embedding vector representation is the one associated with the special [CLS] BERT token.",
"We used a version of the Hebrew Facebook Sentiment dataset (henceforth FB) of Amram et al. (2018) which we corrected by removing leaked samples.",
"4 We fine-tuned all models for 15 epochs with 5 different seeds, and report mean accuracy.",
"Named Entity Recognition In this setup we assume a sequence labeling task based on space-delimited word-tokens.",
"The input comprises of the sequence of words in the sentence, and the output contains BIOES tags indicating entity spans.",
"Word-level NER predictions are achieved by directly fine-tuning the PLMs using an additional token-classification head In cases where a word is split into multiple word pieces by the PLM tokenizer, we employ common practice and use the first word-piece vector.",
"We evaluate this model on two corpora.",
"(i) The Ben-Mordecai (BMC) corpus (Ben Mordecai and Elhadad, 2005), which contains 3294 sentences with 4600 entities and seven different entity categories (Date, Location, Money, Organization, Person, Percent, Time).",
"To remain compatible with the original work we train and test the models on 3 4 This version has a total of 8,465 samples and is publicly available here: https://github.com/OnlpLab/ Hebrew-Sentiment-Data 50 different splits as in Bareket and Tsarfaty (2020).",
"(ii) The Named Entities and MOrphology (NEMO) corpus 5 (Bareket and Tsarfaty, 2020) which is an extension of the SPMRL dataset with Named Entities.",
"The NEMO corpus contains 6220 sentences with 7713 entities of nine entity types (Language, Product, Event, Facility, Geo-Political Entity, Location, Organization, Person, Work-Of-Art).",
"We trained both models for 15 epochs with 5 different seeds and report mean F1 scores on entity spans.",
"Finally, to probe the PLM capacity to accurately predict word-internal structure, we test all models on five tasks that require knowledge of the internal morphology of raw words.",
"The input to all these tasks is a Hebrew sentence represented as a raw sequence of space-delimited words:",
"(i) Segmentation : Generating a sequence of morphological segments representing the basic processing units.",
"These units comply with the 2-level representation of tokens defined by UD, each unit with a single POS tag.",
"6",
"(ii) Part-of-Speech (POS) Tagging : Tagging each segment with a single POS.",
"(iii) Morphological Tagging : Tagging each segment with a single POS and a set of features.",
"Equivalent to the AllTags evaluation defined in the CoNLL18 shared task.",
"7",
"(iv) Morpheme-Based NER : Tagging each segment with a BIOES and its entity-type.",
"(v) Dependency Parsing : Use each segment as a node in the predicted dependency tree.",
"The Hebrew Section of the SPMRL Task (Sed-dah et al., 2013).",
"The Hebrew Section of the UD treebanks collection (Sadde et al., 2018) All models were trained for 15 epochs with 5 different seeds and we report two variants of mean F1 scores as described next.",
"5 Available here: https://github.com/OnlpLab/ NEMO-Corpus 6 https://universaldependencies.org/u/ overview/tokenization.html 7 https://universaldependencies.org/ conll18/results-alltags.html For tasks",
"(i)(iv) we use the morphological extraction model (Section 4) to extract the morphological segments of each word in context and also predict the labels via Multitask training.",
"For task",
"(iv) the NER task, we use the morphologically-annotated data files of the aforementioned SPMRL-based NEMO corpus (Bareket and Tsarfaty, 2020).",
"In addition to the multi-task setup described earlier, we design another setup in which we first only segment the text, and then perform fine-tuning with a token classification attention head directly applied to the PLM output for the segmented tokens (similar to the way we fine-tune the PLM for the word-based NER task described in the previous section).",
"We acknowledge that we are fine-tuning the PLM on morphological segments the model was not originally pre-trained on, however, as we shall see shortly, this seemingly unintuitive strategy performs surprisingly well.",
"For task",
"(v) we set up a dependency parsing evaluation pipeline using the standalone Hebrew parser offered by More et al. (2019) (a.k.a YAP) which was trained to produce SPMRL dependency labels.",
"The morphological information for each word (namely the segments and POS tags) is recovered by our morphological extraction model, and is used as input features for the YAP standalone dependency parser.",
"Aligned Segment The CoNLL18 Shared Task evaluation campaign 8 reports scores for segmentation and POS tagging 9 for all participating languages.",
"For multi-segment words, the gold and predicted segments are aligned by their Longest Common Sub-sequence, and only matching segments are counted as true positives.",
"We use the script to compare aligned segment and tagging scores between oracle (gold) segmentation and realistic (predicted) segmentation.",
"Aligned Multi-Set In addition to the CoNLL18 metrics, we compute F1 scores, with a slight but important difference from the shared task, as defined by More et al. (2019) and Seker and Tsarfaty (2020).",
"For each word, counts are based on multiset intersections of the gold and predicted labels ignoring the order of the segments while account-8 https://universaldependencies.org/ conll18/results.html 9 respectively referred to as 'Segmented Words' and 'UPOS' in the CoNLL18 evaluation script 51 Task NER (Word) Sentiment Corpus NEMO BMC FB Prev.",
"ing for the number of each segment.",
"Aligned mset is based on set difference which acknowledges the possible undercover of covert morphemes which is an appropriate measure of morphological accuracy.",
"Discussion To illustrate the difference between aligned segment and aligned mset , let us take for example the gold segmented tag sequence: b/IN, h/DET, bit/NOUN and the predicted segmented tag sequence b/IN, bit/NOUN .",
"According to aligned segment , the first segment ( b/IN ) is aligned and counted as a true positive, the second segment however is considered as a false positive ( bit/NOUN ) and false negative ( h/DET ) while the third gold segment is also counted as a false negative ( bit/NOUN ).",
"On the other hand with aligned multi-set both b/IN and bit/NOUN exist in the gold and predicted sets and counted as true positives, while h/DET is mismatched and counted as a false negative.",
"In both cased the total counts across words in the entire datasets are incremented accordingly and finally used for computing Precision, Recall and F1.",
"Sentence-Level Task Sentiment analysis accuracy results are provided in Table 4.",
"All BERT-based models substantially outperform the original CNN Baseline reported by Amram et al. (2018).",
"AlephBERT base is setting a new SOTA.",
"Word-Based Task On our two NER benchmarks, we report F1 scores on the word-based fine-tuned model in Table 4.",
"While we see noticeable improvements for the mBERT and HeBert variants over the current SOTA, the most significant increase is achieved by AlephBERT base , setting a new and improved SOTA on this task.",
"Specifically, we evaluate word segmentation, POS, Morphological Features, NER and dependencies compared against morphologically-labeled test sets.",
"In all cases, we use raw space-delimited tokens as input and produce morphological segments with our morphological extraction model.",
"Table 5 presents evaluation results for the SPRML dataset, compared against the previous SOTA of More et al. (2019).",
"For segmentation, POS tagging, and morphological tagging we report aligned multiset F1 scores.",
"BERT-based segmentations are similar, all scoring in the high range of 97-98 F1, which are hard to improve further.",
"10 For POS tagging and morphological features, all BERT-based models considerably outperform the previous SOTA.",
"For syntactic dependencies we report labeled and unlabeled accuracy scores of the trees generated by YAP (More et al., 2019) on our predicted segmentation.",
"Here we see impressive improvement compared to the previous SOTA of a joint morpho-syntactic framework.",
"It confirms that morphological errors early in the pipeline negatively impact downstream tasks, and highlight the importance of morphologically-driven benchmarks 10 According to error analysis, most of these errors are annotation errors or truly ambiguous cases.",
"as an integral part of PLM evaluation for MRLs.",
"All in all we see a repeating trend placing AlephBERT base first on all morphological tasks, indicating the depth of the model and a larger pretraining dataset improve the ability of the PLM to capture word-internal structure.",
"These trends are replicated on the UD Hebrew corpus, for two different evaluation metrics the Aligned MultiSet F1 Scores as in previous work on Hebrew (More et al., 2019), (Seker and Tsarfaty, 2020), and the Aligned Segment F1 scores metrics as described in the UD shared task (Zeman et al., 2018) reported in Tables 6 and 7 respectively.",
"Morpheme-Level NER results Earlier in this section we considered NER a word-level task that simply requires fine-tuning on the word level.",
"However, this setup is not accurate enough and less useful for downstream tasks, since the exact entity boundaries are often word internal (Bareket and Tsarfaty, 2020).",
"We hence report morpheme-based NER evaluation, respecting the exact boundaries of entity mentions.",
"To obtain morpheme-based labeled-span of Named Entities, we could either employ a pipeline, first predicting segmentation and then applying a fine-tuned labeling model directly on the segments , or employ a multi-task model and predict NER labels while performing segmentation.",
"assuming gold segmentation",
"(ii) a pipeline assuming predicted segmentation",
"(iii) segmentation and NER labels obtained jointly in a multi-task setup.",
"AlephBERT base consistently scores highest in all 3.",
"Looking at the Pipeline-Predicted scores, there is a clear correlation between a higher segmentation quality of a PLM and its ability to produce better NER results.",
"Moreover, the differences in NER scores are considerable (unlike the subtle differences in segmentation, POS and morphological features scores) and draw our attention to the relationship between the size of the PLM, the size of the pre-training data and the quality of the final NER models.",
"Specifically, HeBERT and AlephBERT small were both pre-trained on similar datasets and comparable vocabulary sizes (heBERT with 30K and AlephBERT-small with 52K) but HeBERT, with its 12 hidden layers, performs better compared to AlephBERT small which is composed of only 6 hidden layers.",
"It thus appears that semantic information is learned in those deeper layers, helping in both discriminating entities and improving the morphological segmentation capacity.",
"In addition, comparing AlephBERT base and HeBERT we note that they are both modeled with the same 12 hidden layer architecture the only differences between them are in the size of their vocabularies (30K vs 52K respectively) and the size of the training data (Oscar-Wikipedia vs Oscar-Wikipedia-Tweets).",
"The improvements exhibited by AlephBERT base , compared to HeBERT, suggest large amounts of training data and larger vocabulary are invaluable.",
"By exposing AlephBERT base to a substantially larger amount of text we increased the ability of the PLM to encode syntactic and semantic signals associated with Named Entities.",
"Our NER experiments further suggest that a pipeline composed of our accurate morphological segmentation model followed by AlephBERT base with a token classification head is the best strategy for generating morphologically-aware NER labels.",
"Finally, we observe that while AlephBERT excels at morphosyntactic tasks, on tasks with a more semantic flavor there is room for improvement.",
"Modern Hebrew, a morphologically-rich and medium-resourced language, has for long suffered from a gap in the resources available for NLP applications, and lower level of empirical results than observed in other, resource-rich languages.",
"This 53 work provides the first step in remedying the situation, by making available a large Hebrew PLM, named AlephBERT, with larger vocabulary and larger training set than any Hebrew PLM before, and with clear evidence as to its empirical advantages.",
"Crucially, we augment the PLM with a morphological disambiguation component that matches the input granularity of the downstream tasks.",
"Our system does not presuppose Hebrew-specific linguistic-rules, and can be transparently applied to any language for which 2-level segmentation data (i.e., the standard UD benchmarks) exists.",
"AlephBERT base obtains state-of-the-art results on morphological segmentation, POS tagging, morphological feature extraction, dependency parsing, named-entity recognition, and sentiment analysis, outperforming all existing Hebrew PLMs.",
"Our proposed morphologically-driven pipeline 11 serves as a solid foundation for future evaluation of Hebrew PLMs and of MRLs in general.",
"We follow Bender and Friedman (2018) regarding professional practice for NLP technology and address ethical issues that result from the use of data in the development of the models in our work.",
"Pre-Training Data.",
"The two initial data sources we used to pre-train the language models are Oscar and Wikipedia.",
"In using the Wikipedia and Oscar we followed standard language model training efforts, such as BERT and RoBERTa (Devlin et al., 2019; Liu et al., 2019).",
"We use the language-specific Oscar data according to the terms specified in Ortiz Surez et al. (2020) and we extract texts from language-specific Wikipedia dumps.",
"On top of that, a big portion of the data used to train AlephBERT originates from the Twitter sample stream.",
"12 As shown in Table 2, this data set includes 70M Hebrew tweets which were collected over a period of 4 years (2014 to 2018).",
"We acknowledge the potential concerns inherently associated with Twitter data (population bias, behavior patterns, bot masquerading as humans etc.) and note that we have not made any explicit attempt to identify these cases.",
"We only used the text field of the tweets and completely discard any other information included 11 Available at https://github.com/OnlpLab/ AlephBERT 12 https://developer.twitter.com/en/ docs/twitter-api/tweets/volume-streams/api-reference/get-tweets-sample-stream in the stream (such as identities, followers, structure of threads, date of publication, etc).",
"We have not made any effort to identify or filter out any samples based on user properties such as age, gender and location nor have we made any effort to identify content characteristics such as genre or topic.",
"To reduce exposure of private information we cleaned up all user mentions and URLs from the text.",
"Honoring ethical and legal constraints we have not manually analyzed nor published this data source.",
"While the free-form language expressed in tweets might differ significantly from the text found in Oscar/Wikipedia, the sheer volume of tweets helps us close the substantial resource gap.",
"Training and Evaluation Benchmarks.",
"The SPMRL (Seddah et al., 2013) and UD (Sadde et al., 2018) datasets we used for evaluating segmentation, tagging and parsing, were used to both train our morphological extraction model as well as provide us with the test data to evaluate on morphological level tasks.",
"Both datasets are publicly available and widely used in research and industry.",
"The NEMO corpus (Bareket and Tsarfaty, 2020) used to train and evaluate word and morpheme level NER is an extension of the SPMRL dataset augmented with entities and follows the same license terms.",
"The BMC dataset used for training and evaluating word-level NER was created and published by Ben Mordecai and Elhadad (2005) and it is publicly available for NER evaluation.",
"We used the sentiment analysis dataset of Amram et al. (2018) for training and evaluating AlephBERT on a sentence level task, and we follow their terms of use.",
"As mentioned, this dataset had some flows, and we describe carefully the steps we've taken to fix them before using this corpus in our experiments for internal evaluation purposes.",
"We make our in-house cleaning scripts and split information publicly available.",
"This research was funded by the European Research Council (ERC grant agreement no. 677352) and by a research grant from the Ministry of Science and Technology (MOST) of the Israeli Government, for which we are grateful."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"objective",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"other",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"objective",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"other",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other"
] |
[
"Existing continual relation learning ( CRL ) methods rely on plenty of labeled training data for learning a new task, which can be hard to acquire in real scenario as getting large and representative labeled data is often expensive and time-consuming.",
"It is therefore necessary for the model to learn novel relational patterns with very few labeled data while avoiding catastrophic forgetting of previous task knowledge.",
"In this paper, we formulate this challenging yet practical problem as continual few-shot relation learning ( CFRL ).",
"Based on the finding that learning for new emerging few-shot tasks often results in feature distributions that are incompatible with previous tasks' learned distributions, we propose a novel method based on embedding space regularization and data augmentation.",
"Our method generalizes to new few-shot tasks and avoids catastrophic forgetting of previous tasks by enforcing extra constraints on the relational embeddings and by adding extra relevant data in a self-supervised manner.",
"With extensive experiments we demonstrate that our method can significantly outperform previous state-of-the-art methods in CFRL task settings.",
"1 1 Introduction Relation Extraction (RE) aims to detect the relationship between two entities in a sentence, for example, predicting the relation birthdate in the sentence Kamala Harris was born in Oakland, California, on October 20, 1964 . for the two entities Kamala Harris and October 20, 1964 .",
"It serves as a fundamental step for downstream tasks such as search and question answering (Dong et al., 2015; Yu et al., 2017).",
"Traditionally, RE methods were built by considering a fixed static set of relations (Miwa and Bansal, 2016; Han et al., 2018a).",
"However, similar to entity recognition, RE is also an open-vocabulary problem (Sennrich et al., 2016), 1 Code and models are available at https://github.com/qcwthu/Continual_Fewshot_Relation_Learning Figure 1: Difference between Continual Relation Learning ( CRL ) and Continual Few-shot Relation Learning ( CFRL ).",
"Despite their effectiveness, one major limitation 2776 of these methods is that they all assume plenty of training data for learning new relations (tasks), which is hard to satisfy in real scenario where continual learning is desirable, as acquiring large labeled datasets for every new relation is expensive and sometimes impractical for quick deployment ( e.g., RE from news articles during the onset of an emerging event like Covid-19).",
"where the relation set keeps growing as new relation types emerge with new data.",
"A potential solution is to formalize RE as Continual Relation Learning or CRL (Wang et al., 2019).",
"In CRL , the model learns relational knowledge through a sequence of tasks, where the relation set changes dynamically from the current task to the next.",
"The model is expected to perform well on both the novel and previous tasks, which is challenging due to the existence of Catastrophic Forgetting phenomenon (McCloskey and Cohen, 1989; French, 1999) in continual learning.",
"In this phenomenon, the model forgets previous relational knowledge after learning new relational patterns.",
"Existing methods to address catastrophic forgetting in CRL can be divided into three categories: ( i ) regularization-based methods, ( ii ) architecture-based methods, and ( iii ) memory-based methods.",
"Recent work shows that memory-based methods which save several key examples from previous tasks to a memory and reuse them when learning new tasks are more effective in NLP (Wang et al., 2019; Sun et al., 2020).",
"Successful memory-based CRL methods include EAEMR (Wang et al., 2019), MLLRE (Obamuyide and Vlachos, 2019), EMAR (Han et al., 2020), and CML (Wu et al., 2021).",
"In fact, one of the main objectives of continual learning is to quickly adapt to new environments or tasks by exploiting previously acquired knowledge, a hallmark of human intelligence (Lopez-Paz and Ranzato, 2017).",
"If the new tasks are few-shot , the existing methods suffer from over-fitting as shown later in our experiments (4).",
"Considering that humans can acquire new knowledge from a handful of examples, it is expected for the models to generalize well on the new tasks with few data.",
"We regard this problem as Continual Few-shot Relation Learning or CFRL (Fig. 1).",
"Indeed, in relation to CFRL , Zhang et al. (2021), Zhu et al. (2021) and Chen and Lee (2021) recently introduce methods for incremental few-shot learning in Computer Vision.",
"Based on the observation that the learning of emerging few-shot tasks may result in distorted feature distributions of new data which are incompatible with previous embedding space (Ren et al., 2020), this work introduces a novel model based on Embedding space Regularization and Data Augmentation (ERDA) for CFRL .",
"In particular, we propose a multi-margin loss and a pairwise margin loss in addition to the cross-entropy loss to impose further relational constraints in the embedding space.",
"We also introduce a novel contrastive loss to learn more effectively from the memory data.",
"Our proposed data augmentation method selects relevant samples from unlabeled text to provide more relational knowledge for the few-shot tasks.",
"The empirical results show that our method can significantly outperform previous state-of-the-art methods.",
"In summary, our main contributions are: To the best of our knowledge, we are the first one to consider CFRL .",
"We define the CFRL problem and construct a benchmark for the problem.",
"We propose ERDA, a novel method for CFRL based on embedding space regularization and data augmentation.",
"With extensive experiments, we demonstrate the effectiveness of our method compared to existing ones and analyse our results thoroughly.",
"Conventional RE methods include supervised (Ze-lenko et al., 2002; Liu et al., 2013; Zeng et al., 2014; Miwa and Bansal, 2016), semi-supervised (Chen et al., 2006; Sun et al., 2011; Hu et al., 2020) and distantly supervised methods (Mintz et al., 2009; Yao et al., 2011; Zeng et al., 2015; Han et al., 2018a).",
"These methods rely on a predefined relation set and have limitations in real scenario where novel relations are emerging.",
"There have been some efforts which focus on relation learning without predefined types, including open RE (Shinyama and Sekine, 2006; Etzioni et al., 2008; Cui et al., 2018; Gao et al., 2020) and continual relation learning (Wang et al., 2019; Obamuyide and Vlachos, 2019; Han et al., 2020; Wu et al., 2021).",
"Continual Learning (CL) aims to learn knowledge from a sequence of tasks.",
"The main problem CL attempts to address is catastrophic forgetting (Mc-Closkey and Cohen, 1989), i.e., the model forgets previous knowledge after learning new tasks.",
"Prior methods to alleviate this problem can be mainly divided into three categories.",
"First, regularization-based methods impose constraints on the update of neural weights important to previous tasks to alleviate catastrophic forgetting (Li and Hoiem, 2017; Kirkpatrick et al., 2017; Zenke et al., 2017; Ritter et al., 2018).",
"Second, architecture-based methods dynamically change model architectures to acquire new information while remembering previous knowledge (Chen et al., 2016; Rusu et al., 2016; Fernando et al., 2017; Mallya et al., 2018).",
"Finally, memory-based methods maintain a memory to save key samples of previous tasks to prevent forgetting (Rebuffi et al., 2017; Lopez-Paz and Ranzato, 2017; Shin et al., 2017; Chaudhry et al., 2019).",
"These methods mainly focus on learning a single type of tasks.",
"More recently, researchers have considered lifelong language learning (Sun et al., 2020; Qin and Joty, 2022), where the model is expected to continually learn from different types of tasks.",
"Few-shot Learning (FSL) aims to solve tasks containing only a few labeled samples, which faces the issue of over-fitting.",
"To address this, existing methods have explored three different directions: ( i ) data-based methods use prior knowledge to augment data to the few-shot set (Santoro et al., 2016; Benaim and Wolf, 2018; Gao et al., 2020); ( ii ) model-based methods reduce the hypothesis space using prior knowledge (Rezende et al., 2016; Triantafillou et al., 2017; Hu et al., 2018); and 2777 ( iii ) algorithm-based methods try to find a more suitable strategy to search for the best hypothesis in the whole hypothesis space (Hoffman et al., 2013; Ravi and Larochelle, 2017; Finn et al., 2017).",
"Summary.",
"Existing work in CRL which involves a sequence of tasks containing sufficient training data, mainly focuses on alleviating the catastrophic forgetting of previous relational knowledge when the model is trained on new tasks.",
"The work in few-shot learning mostly leverages prior knowledge to address the over-fitting of novel few-shot tasks.",
"In contrast to these lines of work, we aim to solve a more challenging yet more practical problem CFRL where the model needs to learn relational patterns from a sequence of few-shot tasks continually.",
"CFRL involves learning from a sequence of tasks T = ( T 1 , . . . , T n ) , where every task T k has its own training set D k train , validation set D k valid , and test set D k test .",
"Each dataset D contains several samples { ( x i , y i ) } | D | i =1 , whose labels y i belong to the relation set R k of task T k .",
"In contrast to the previously addressed continual relation learning ( CRL ), CFRL assumes that except for the first task which has enough data for training, the subsequent new tasks are all few-shot , meaning that they have only few labeled instances (see Fig. 1).",
"For example, consider there are three relation learning tasks T 1 , T 2 and T 3 with their corresponding relation sets R 1 , R 2 , and R 3 , each having 10 relations.",
"In CFRL , we assume the existing task T 1 has enough training data ( e.g., 100 samples for every relation in R 1 ), while the new tasks T 2 and T 3 are few-shot with only few ( e.g., 5 ) samples for every relation in R 2 and R 3 .",
"Assuming that the relation number of each few-shot task is N and the sample number of every relation is K , we call this setup N -way K -shot continual learning.",
"The problem setup of CFRL is aligned with the real scenario, where we generally have sufficient data for an existing task, but only few labeled data as new tasks emerge.",
"The model in CFRL is expected to first learn T 1 well, which has sufficient training data to obtain good ability to extract the relation information in the sentence.",
"Then at time step k , the model will be trained on the training set D k train of few-shot task Figure 2: Our framework for CFRL .",
"T k .",
"After learning T k , the model is expected to perform well on both T k and the previous k 1 tasks, as the model will be evaluated on D k test = ki =1 D i test consisting of all known relations after learning T k , i.e., R k = ki =1 R i .",
"This requires the model to overcome the catastrophic forgetting of previous knowledge and to learn new knowledge well with very few labeled data.",
"To overcome the catastrophic forgetting problem, a memory M = (cid:8) M 1 , M 2 , ... (cid:9) , which stores some key samples of previous tasks is maintained during the learning.",
"When the model is learning T k , it has access to the data saved in memory M 1 , ..., M k 1 .",
"As there is no limit on the number of tasks, the size of memory M k is constrained to be small.",
"Therefore, the model has to select only key samples from the training set D k train to save them in M k .",
"In our CFRL setting, only one sample per relation is allowed to be saved in the memory.",
"Our framework for CFRL is shown in Fig. 2 and Alg.",
"1 describes the overall training process (see Appendix A.1 for a block diagram).",
"At time step k , given the training data D k train for the task T k , depending on whether the task is a few-shot or not, the process has four or three working modules, respectively.",
"The general learning process (3.3) has three steps that apply to all tasks.",
"If the task is a few-shot task ( k > 1 ), we apply an additional step to create an augmented training set (cid:101) D k train .",
"For the initial task ( k = 1 ), we have (cid:101) D k train = D k train .",
"For any task T k , we use a siamese model to encode every new relation r i R k into r i IR d as well as the sentences, and train the model on (cid:101) D k train to acquire relation information of the new data (3.3.2).",
"To overcome forgetting, we select the most informative sample for each relation r i R k from D k train and update the memory M k (3.3.3).",
"Finally, we combine (cid:101) D k train and M k as the training data for learning new relational patterns and 2778 Algorithm 1 Training process at time step k Require: the training set D k train and the relation set R k of the current task T k , the current memory M k 1 and the known relation set R k 1 , the model , the similarity model S , and the unlabeled text corpus C .",
"remembering previous knowledge (3.3.4).",
"We also simultaneously update the representation of all relations in R k , which involves making a forward pass through the current model.",
"The learning and updating are done iteratively for convergence.",
"For data augmentation in few-shot tasks (3.4), we select reliable samples with high relational similarity score from an unlabelled Wikipedia corpus using a fine-tuned BERT (Devlin et al., 2019), which serves as the relational similarity model S .",
"In the interests of coherence, we first present the general learning method followed by the augmentation process for few-shot learning.",
"The siamese encoder ( f ) aims at extracting generic and relation related features from the input.",
"The input can be a labeled sentence or the name of a relation.",
"We adopt two kinds of encoders: Bi-LSTM To have a fair comparison with previous work, we use the same architecture as Han et al. (2020).",
"It takes GloVe embeddings (Pen-nington et al., 2014) of the words in a given input and produces a vector representation through a Bi-LSTM (Hochreiter and Schmidhuber, 1997).",
"BERT We adopt BERT base which has 12 layers and 110M parameters.",
"As the new tasks are few-shot, we only fine-tune the 12-th encoding layer and the extra linear layer.",
"We include special tokens around the entities (#' for the head entity and @' for the tail entity) in a given labeled sentence to improve the encoder's understanding of relation information.",
"We use the [ CLS ] token features as the representation of the input sequence.",
"At time step k , to have a good understanding of the new relations, we fine-tune the model on the expanded dataset (cid:101) D k train .",
"The model f first encodes the name of each new relation r j R k into its representation r j IR d by making a forward pass.",
"Then, we optimize the parameters ( ) by minimizing a loss L new that consists of a cross entropy loss, a multi-margin loss and a pairwise margin loss.",
"where R k is the set of all known relations at step k , g ( , ) is a function used to measure similarity between two vectors ( e.g., cosine similarity or L2 distance), and a,b is the Kronecker delta function a,b = 1 if a equals b , otherwise a,b = 0 .",
"In inference, we choose the relation label that has the highest similarity with the input sentence (Eq. 8).",
"To ensure that an example has the highest similarity with the true relation, we additionally design two margin-based losses, which increase the score between an example and the true label while decreasing the scores for the wrong labels.",
"The first one is a multi-margin loss defined as: L mm = (cid:88) ( x i ,y i ) (cid:101) D k train | R k | (cid:88) j =1 ,j = t i max (cid:16) 0 , m 1 g ( f ( x i ) , r t i ) + g ( f ( x i ) , r j ) (cid:17) (2) where t i is the correct relation index in R k satisfying r t i = y i and m 1 is a margin value.",
"The L mm loss attempts to ensure intra-class compactness while increasing inter-class distances.",
"The second one is a pairwise margin loss L pm : (cid:88) ( x i ,y i ) (cid:101) D k train max(0 , m 2 g ( f ( x i ) , r t i ) + g ( f ( x i ) , r s i )) (3) 2779 where m 2 is the margin for L pm and s i = arg max s g ( f ( x i ) , r s ) s.t. s = t i , the closest wrong label.",
"The L pm loss penalizes the cases where the similarity score of the closest wrong label is higher than the score of the correct label (Yang et al., 2018).",
"Both L mm and L pm improve the discriminative ability of the model (4.4).",
"The total loss for learning on T k is defined as: L new = ce L ce + mm L mm + pm L pm (4) where ce , mm and pm are the relative weights of the component losses, respectively.",
"After training the model f with Eq.",
"(4), we use it to select one sample per new relation.",
"Specifically, for every new relation r j R k , we obtain the centroid feature c j by averaging the embeddings of all samples labeled as r j in D k train as follows.",
"where D kr j = { ( x i , y i ) | ( x i , y i ) D k train , y i = r j } .",
"Then we select the instance closest to c j from D kr j as the most informative sample and save it in memory M k .",
"Note that the selection is done from D k train , not from the expanded set (cid:101) D k train .",
"As the learning of new relational patterns may cause catastrophic forgetting of previous knowledge (see baselines in 4), our model needs to learn from the memory data to alleviate forgetting.",
"We combine the expanded set (cid:101) D k train and the whole memory data M k = kj =1 M j into (cid:101) H k to allow the model to learn new relational knowledge and consolidate previous knowledge.",
"However, the memory data is limited containing only one sample per relation.",
"To learn effectively from such limited data, we design a novel method to generate a hard negative sample set P i for every sample in M k .",
"The negative samples are generated on the fly.",
"After sampling a mini-batch B t from (cid:101) H k , we consider all memory data in B t as MB t .",
"For every sample ( x i , y i ) in MB t , we replace its head entity e hi or tail entity e ti with the corresponding entity of a randomly selected sample in the same batch B t to get the hard negative sample set P i = { ( x P i j , y i ) } | P i | j =1 .",
"margin-based contrastive loss L con as follows.",
"L con = (cid:88) ( x i , y i ) M Bt max (cid:16) 0 , m 3 g ( f ( x i ) , r t i )+ (cid:88) ( x Pij , y i ) P i g ( f ( x P i j ) , r t i ) (cid:17) (6) where t i is the relation index satisfying r t i = y i and m 3 is the margin value for L con .",
"This loss forces the model to distinguish the valid relations from the hard negatives so that the model learns more precise and fine-grained relational knowledge.",
"In addition, we also use the three losses L ce and L mm and L pm defined in 3.3.2 to update on B t .",
"The total loss on the memory data is: L mem = ce L ce + mm L mm + pm L pm + con L con (7) where ce , mm , pm and con are the relative weights of the corresponding losses.",
"Updating Relation Embeddings After training the model on (cid:101) H k for few steps, we use the memory M k to update the relation embedding r i of all known relations.",
"For a relation r i R k , we average the embeddings (obtained by making a forward pass through f ) of the relation name and memory data to obtain its updated representation r i .",
"The training of and updating of r i is done iteratively to grasp new relational patterns while alleviating the catastrophic forgetting of previous knowledge.",
"For a given input x i in D k test , we calculate the similarity between x i and all known relations, and pick the one with the highest similarity score:",
"For each few-shot task T k , we aim to get more data by selecting reliable samples from an unlabeled corpus C with tagged entities before the general learning process (3.3) begins.",
"We achieve this using a relational similarity model S and sentences from Wikipedia as C .",
"The model S (described later) takes a sentence as input and produces a normalized vector representation.",
"The cosine similarity between two vectors is used to measure the relational similarity between the two corresponding sentences.",
"A higher similarity means the two sentences are more likely to have the same relation label.",
"We propose two novel selection methods, which are complementary to each other.",
"(a) Augmentation via Entity Matching For each instance ( x i , y i ) in D k train , we extract its entity pair ( e hi , e ti ) with e hi being the head entity and e ti being the tail entity.",
"As sentences with the same entity pair are more likely to express the same relation, we first collect a candidate set Q = { (cid:101) x j } |Q| j =1 from C , where (cid:101) x j shares the same entity pair ( e hi , e ti ) with x i .",
"If Q is a non-empty set, we pair all (cid:101) x j in Q with x i , and denote each pair as (cid:101) x j , x i .",
"Then we use S to obtain a similarity score s j for (cid:101) x j , x i .",
"After getting scores for all pairs, we pick the instances (cid:101) x j with similarity score s j higher than a predefined threshold as new samples and label them with relation y i .",
"The selected instances are then augmented to D k train as additional data.",
"(b) Augmentation via Similarity Search The hard entity matching could be too restrictive at times.",
"For example, even though the sentences Harry Potter is written by Joanne Rowling and Charles Dickens is the author of A Tale of Two Cities share the same relation author , hard matching fails to find any relevance.",
"Therefore, in cases when entity matching returns an empty Q , we resort to similarity search using Faiss (Johnson et al., 2017).",
"Given a query vector q i , it can efficiently search for vectors { v j } Kj =1 with the topK highest similarity scores in a large vector set V .",
"In our case, q i is the representation of x i and V contains the representations of the sentences in C .",
"We use S to obtain these representations; the difference is that V is pre-computed while q i is obtained during training.",
"We labeled the topK most similar instances with y i and augment them to D k train .",
"Similarity Model To train S , inspired by Soares et al. (2019), we adopt a contrastive learning method to fine-tune a BERT base model on C , whose sentences are already tagged with entities.",
"Based on the observation that sentences with the same entity pair are more likely to encode the same relation, we use sentence pairs containing the same entities in C as positive samples.",
"For negatives, instead of using all sentence pairs containing different entities, we select pairs sharing only one entity as hard negatives ( i.e., pair ( x i , x j ) where e hi = e hj and e ti = e tj or e ti = e tj and e hi = e hj ).",
"We randomly sample the same number of negative samples as the positive ones to balance the training.",
"where S ( x ) is the normalized representation of x obtained from the final layer of BERT.",
"Then we optimize the parameters of S by minimizing a binary cross entropy loss L pretrain as follows.",
"where C p is a positive batch and C n is a negative batch.",
"This objective tries to ensure that sentence pairs with the same entity pairs have higher cosine similarity than those with different entities.",
"We define the benchmark and evaluation metric for CFRL before presenting our experimental results.",
"Benchmark As the benchmark for CFRL needs to have sufficient relations as well as data and be suitable for few-shot learning, we create the CFRL benchmark based on FewRel (Han et al., 2018b).",
"FewRel is a large-scale dataset for few-shot RE, which contains 80 relations with hundreds of samples per relation.",
"We randomly split the 80 relations into 8 tasks, where each task contains 10 relations ( 10-way ).",
"To have enough data for the first task T 1 , we sample 100 samples per relation.",
"All the subsequent tasks T 2 , ..., T 8 are few-shot; for each relation, we conduct 2-shot , 5-shot and 10-shot experiments to verify the effectiveness of our method.",
"In addition, to demonstrate the generalizability of our method, we also create a CFRL benchmark based on the TACRED dataset (Zhang et al., 2017) which contains only 42 relations.",
"We filter out the special relation n/a (not available) and split the remaining 41 relations into 8 tasks.",
"Except for the first task that contains 6 relations, all other tasks have 5 relations ( 5-way ).",
"Similar to FewRel, we randomly sample 100 examples per relation in T 1 and conduct 5-shot and 10-shot experiments.",
"Metric At time step k , we evaluate the model performance through relation classification accuracy on the test sets D k test = ki =1 D i test of all seen tasks {T i } ki =1 .",
"This metric reflects whether the model can alleviate catastrophic forgetting while acquiring novel knowledge well with very few data.",
"Since the model performance might be influenced by task sequences and few-shot training samples, we run every experiment 6 times each time with a different random seed to ensure a random task order and model initialization, and report the average 2781 Method Task index 1 2 3 4 5 6 7 8 SeqRun 92 .",
"The model settings are shown in Appendix A.2.",
"We compare our approach with the following baselines: SeqRun fine-tunes the model only on the training data of the new tasks without using any memory data.",
"It may face serious catastrophic forgetting and serves as a lower bound .",
"Joint Training stores all previous samples in the memory and trains the model on all data for each new task.",
"It serves as an upper bound in CRL .",
"EMR (Wang et al., 2019) maintains a memory for storing selected samples from previous tasks.",
"When training on a novel task, EMR combines the new training data and memory data.",
"EMAR (Han et al., 2020) is the state-of-the-art on CRL , which adopts memory activation and reconsolidation to alleviate catastrophic forgetting.",
"IDLVQ-C (Chen and Lee, 2021) introduces quantized reference vectors to represent previous knowledge and mitigates catastrophic forgetting by imposing constraints on the quantized vectors and embedded space.",
"It was originally proposed for image classification with state-of-the-art results in incremental few-shot learning.",
"We compare the performance of different methods using the same setting as EMAR (Han et al., 2020), which uses a Bi-LSTM encoder.",
"We also report the results with a BERT encoder.",
"settings.",
"2 From the results, we can observe that: Our proposed ERDA outperforms previous baselines in all CFRL settings, which demonstrates the superiority of our method.",
"Simply fine-tuning the model with new few-shot examples leads to rapid drops in accuracy due to severe over-fitting and catastrophic forgetting.",
"Although EMR and EMAR adopt a memory module to alleviate forgetting, their performance still decreases quickly as they require plenty of training data for learning a new task.",
"Compared with EMR and EMAR, IDLVQ-C is slightly better as it introduces quantized vectors that can better represent the embedding space of few-shot tasks.",
"However, IDLVQ-C does not necessarily push the samples from different relations to be far apart in the embedding space and the up-2 To avoid visual clutter, we report only mean scores over 6 runs in Table 1 and refer to Table 6 and Table 5 in Appendix for variance and elaborate results for different task order.",
"dating method for the reference vectors may not be optimal.",
"ERDA outperforms IDLVQ-C by a large margin through embedding space regularization and self-supervised data augmentation.",
"To verify this, we show the embedding space of IDLVQ-C and ERDA using t-SNE (Van der Maaten and Hinton, 2008).",
"We randomly choose four classes from the first task of FewRel and two classes from the new task, and visualize the test data of these classes in Fig. 4.",
"As can be seen, the embedding space obtained by ERDA shows better intra-class compactness and larger inter-class distances.",
"Unlike CRL , joint training does not always serve as an upper bound in CFRL due to the extremely imbalanced data distribution.",
"Benefiting from the ability to learn feature distribution with very few data, both ERDA and IDLVQ-C perform better than joint training in the 2-shot setting.",
"However, as the number of few-shot samples increases, the performance of IDLVQ-C falls far behind joint training, while ERDA still performs better.",
"In the 5-shot setting, ERDA could achieve better results than joint training which verifies the effectiveness of self-supervised data augmentation (more on this in 4.4).",
"Although ERDA performs worse than joint training in the 10-shot setting, its results are still much better than other baselines.",
"After learning all few-shot tasks, ERDA outperforms IDLVQ-C by 9.69 % , 12.67 % and 11.49 % in the 2-shot , 5-shot and 10-shot settings, respectively.",
"Moreover, the relative gain of ERDA keeps growing with the increasing number of new few-shot tasks.",
"This demonstrates the ability of our method in handling a longer sequence of CFRL tasks.",
"TACRED Benchmark Fig. 5 shows the 5-way 5-shot and 5-way 10-shot results on TACRED.",
"We can see that here also ERDA outperforms all other methods by a large margin which verifies the strong 2 4 6 8 Task index 40 50 60 70 80 90 100 A cc u r a c y ( % )",
"Results with BERT We show the results with BERT base of different methods on FewRel in Fig. 6 for 10-way 2-shot and 10-shot and Table 4 for 10-way 5-shot (in Appendix).",
"The results of on TACRED benchmark are shown in Fig. 7 for 5-way 5-shot and 10-shot .",
"From the results, we can observe that ERDA outperforms previous baselines in all CFRL settings with a BERT encoder.",
"We conduct several ablations to analyze the contribution of different components of ERDA on the FewRel 10-way 5-shot setting.",
"In particular, we investigate seven other variants of ERDA by removing one component at a time: ( a ) the multi-margin loss L mm , ( b ) the pairwise margin loss L pm , ( c ) the margin-based contrastive loss L con , ( d ) the whole 2-stage data augmentation module, ( e ) the entity matching method of augmentation, ( f ) the similarity search method of augmentation, and ( g ) memory.",
"From the results in Table 3, we can observe that all components improve the performance of our model.",
"Specifically, L mm yields about 1.51 % performance boost as it brings samples of the same 2783 con 0 0.01 0.02 0.05 0.1 0.2 0.5 1.0 Accuracy ( % ) 51 .",
"relation closer to each other while enforcing larger distances among different relation distributions.",
"The L pm improves the accuracy by 3.18 % , which demonstrates the effect of contrasting with the nearest wrong label.",
"The adoption of L con leads to 1.28 % improvement, which shows that generating hard negative samples for memory data can help to better remember previous relational knowledge.",
"To better investigate the influence of L con , we conduct experiments with different con and show the results in Table 2.",
"We can see that the model achieves the best accuracy of 53.38 with con = 0 .",
"02 while the accuracy is only 52.13 with con = 0 .",
"5 .",
"In addition, the performance of the variant without L con is worse than the performance of all other variants, which demonstrates the effectiveness of L con .",
"The data augmentation module improves the performance by 1.72 % as it can extract informative samples from unlabeled text which provide more relational knowledge for few-shot tasks.",
"The results of variants without entity matching or similarity search verify that the two data augmentation methods are generally complementary to each other.",
"One could argue that the data augmentation module increases the complexity of ERDA compared to other models.",
"However, astute readers can find that even without data augmentation, ERDA outperforms IDLVQ-C significantly for all tasks (compare ERDA w.o. DA' with the baselines in Table 1).",
"ERDA's Performance under CRL Although ERDA is designed for CFRL , we also evaluate the embedding space regularization (ERDA w.o. DA') in the CRL setting.",
"We sample 100 examples per relation for every task in FewRel and compare our method with the state-of-the-art method EMAR.",
"The results are shown in Fig. 8.",
"We can see that ERDA outperforms EMAR in all tasks by 1.25 -4.95 % proving that the embedding regularization can be a general method for CRL .",
"We have introduced continual few-shot relation learning ( CFRL ), a challenging yet practical problem where the model needs to learn new relational knowledge with very few labeled data continually.",
"We have proposed a novel method, named ERDA, to alleviate the over-fitting and catastrophic forgetting problems which are the core issues in CFRL .",
"ERDA imposes relational constraints in the embedding space with innovative losses and adds extra informative data for few-shot tasks in a self-supervised manner to better grasp novel relational patterns and remember previous knowledge.",
"Extensive experimental results and analysis show that ERDA significantly outperforms previous methods in all CFRL settings investigated in this work.",
"In the future, we would like to investigate ways to combine meta-learning with CFRL ."
] | [
"abstain",
"abstain",
"method",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"result",
"objective",
"method",
"objective",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"other",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"other",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"other",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"objective",
"abstain",
"objective",
"objective"
] |
[
"Despite inextricable ties between race and language, little work has considered race in NLP research and development.",
"In this work, we survey 79 papers from the ACL anthology that mention race.",
"These papers reveal various types of race-related bias in all stages of NLP model development, highlighting the need for proactive consideration of how NLP systems can uphold racial hierarchies.",
"However, persistent gaps in research on race and NLP remain: race has been siloed as a niche topic and remains ignored in many NLP tasks; most work operationalizes race as a fixed single-dimensional variable with a ground-truth label, which risks reinforcing differences produced by historical racism; and the voices of historically marginalized people are nearly absent in NLP literature.",
"By identifying where and how NLP literature has and has not considered race, especially in comparison to related fields, our work calls for inclusion and racial justice in NLP research practices.",
"Race and language are tied in complicated ways.",
"Raciolinguistics scholars have studied how they are mutually constructed: historically, colonial pow-ers construct linguistic and racial hierarchies to justify violence, and currently, beliefs about the inferiority of racialized people's language practices continue to justify social and economic exclusion (Rosa and Flores, 2017).",
"1 Furthermore, language is the primary means through which stereotypes and prejudices are communicated and perpetuated (Hamilton and Trolier, 1986; Bar-Tal et al., 2013).",
"However, questions of race and racial bias have been minimally explored in NLP literature.",
"1 We use racialization to refer the process of ascribing and prescribing a racial category or classification to an individual or group of people ...based on racial attributes including but not limited to cultural and social history, physical features, and skin color (Hudley, 2017).",
"While researchers and activists have increasingly drawn attention to racism in computer science and academia, frequently-cited examples of racial bias in AI are often drawn from disciplines other than NLP, such as computer vision (facial recognition) (Buolamwini and Gebru, 2018) or machine learning (recidivism risk prediction) (Angwin et al., 2016).",
"Even the presence of racial biases in search engines like Google (Sweeney, 2013; Noble, 2018) has prompted little investigation in the ACL community.",
"Work on NLP and race remains sparse, particularly in contrast to concerns about gender bias, which have led to surveys, workshops, and shared tasks (Sun et al., 2019; Webster et al., 2019).",
"In this work, we conduct a comprehensive survey of how NLP literature and research practices engage with race.",
"We first examine 79 papers from the ACL Anthology that mention the words race', racial', or racism' and highlight examples of how racial biases manifest at all stages of NLP model pipelines ( 3).",
"We then describe some of the limitations of current work ( 4), specifically showing that NLP research has only examined race in a narrow range of tasks with limited or no social context.",
"Finally, in 5, we revisit the NLP pipeline with a focus on how people generate data, build models, and are affected by deployed systems, and we highlight current failures to engage with people traditionally underrepresented in STEM and academia.",
"While little work has examined the role of race in NLP specifically, prior work has discussed race in related fields, including human-computer interaction (HCI) (Ogbonnaya-Ogburu et al., 2020; Rankin and Thomas, 2019; Schlesinger et al., 2017), fairness in machine learning (Hanna et al., 2020), and linguistics (Hudley et al., 2020; Motha, 2020).",
"We draw comparisons and guidance from this work and show its relevance to NLP research.",
"Our work differs from NLP-focused related work on gender bias (Sun et al., 2019), bias' generally (Blodgett et al., 2020), and the adverse impacts of language models (Bender et al., 2021) in its explicit focus on race and racism.",
"In surveying research in NLP and related fields, we ultimately find that NLP systems and research practices produce differences along racialized lines .",
"Our work calls for NLP researchers to consider the social hierarchies upheld and exacerbated by NLP research and to shift the field toward greater inclusion and racial justice (Hudley et al., 2020).",
"It has been widely accepted by social scientists that race is a social construct, meaning it was brought into existence or shaped by historical events, social forces, political power, and/or colonial conquest rather than reflecting biological or natural' differences (Hanna et al., 2020).",
"More recent work has criticized the social construction theory as circular and rooted in academic discourse, and instead referred to race as colonial constituted practices, including an inherited western, modern-colonial practice of violence, assemblage, superordination, exploitation and segregation (Saucier et al., 2016).",
"The term race is also multi-dimensional and can refer to a variety of different perspectives, including racial identity (how you self-identify), observed race (the race others perceive you to be), and reflected race (the race you believe others perceive you to be) (Roth, 2016; Hanna et al., 2020; Ogbonnaya-Ogburu et al., 2020).",
"Racial categorizations often differ across dimensions and depend on the defined categorization schema.",
"For example, the United States census considers Hispanic an ethnicity, not a race, but surveys suggest that 2/3 of people who identify as Hispanic consider it a part of their racial background.",
"2 Similarly, the census does not consider Jewish' a race, but some NLP work considers anti-Semitism a form of racism (Hasanuzzaman et al., 2017).",
"Race depends on historical and social contextthere are no ground truth' labels or categories (Roth, 2016).",
"As the work we survey primarily focuses on the United States, our analysis similarly focuses on the U.S. However, as race and racism are global constructs, some aspects of our analysis are applicable to other contexts.",
"We suggest that future studies on racialization in NLP ground their analysis in the appropriate geo-cultural context, which may result 2 https://bit.ly/3r9J1fO , https://pewrsr.",
"In this section, we introduce our primary survey datapapers from the ACL Anthology 3 and we describe some of their major findings to emphasize that NLP systems encode racial biases .",
"We searched the anthology for papers containing the terms racial', racism', or race', discarding ones that only mentioned race in the references section or in data examples and adding related papers cited by the initial set if they were also in the ACL Anthology.",
"In using keyword searches, we focus on papers that explicitly mention race and consider papers that use euphemistic terms to not have substantial engagement on this topic.",
"As our focus is on NLP and the ACL community, we do not include NLP-related papers published in other venues in the reported metrics (e.g. Table 1), but we do draw from them throughout our analysis.",
"Our initial search identified 165 papers.",
"However, reviewing all of them revealed that many do not deeply engage on the topic.",
"For example, 37 papers mention racism' as a form of abusive language or use racist' as an offensive/hate speech label without further engagement.",
"30 papers only mention race as future work, related work, or motivation, e.g. in a survey about gender bias, Non-binary genders as well as racial biases have largely been ignored in NLP (Sun et al., 2019).",
"After discarding these types of papers, our final analysis set consists of 79 papers.",
"4 Table 1 provides an overview of the 79 papers, manually coded for each paper's primary NLP task and its focal goal or contribution.",
"We determined task/application labels through an iterative process: listing the main focus of each paper and then collapsing similar categories.",
"In cases where papers could rightfully be included in multiple categories, we assign them to the best-matching one based on stated contributions and the percentage of the paper devoted to each possible category.",
"In the Appendix we provide additional categorizations of the papers 3 The ACL Anthology includes papers from all official ACL venues and some non-ACL events listed in Appendix A, as of December 2020 it included 6 , 200 papers 4 We do not discard all papers about abusive language, only ones that exclusively use racism/racist as a classification label.",
"We retain papers with further engagement, e.g. discussions of how to define racism or identification of racial bias in hate speech classifiers.",
"according to publication year, venue, and racial categories used, as well as the full list of 79 papers.",
"Next, we present examples that identify racial bias in NLP models, focusing on 5 parts of a standard NLP pipeline: data, data labels, models, model outputs, and social analyses of outputs.",
"We include papers described in Table 1 and also relevant literature beyond the ACL Anthology (e.g. NeurIPS, PNAS, Science).",
"These examples are not intended to be exhaustive, and in 4 we describe some of the ways that NLP literature has failed to engage with race, but nevertheless, we present them as evidence that NLP systems perpetuate harmful biases along racialized lines .",
"Data A substantial amount of prior work has already shown how NLP systems, especially word embeddings and language models, can absorb and amplify social biases in data sets (Bolukbasi et al., 2016; Zhao et al., 2017).",
"While most work focuses on gender bias, some work has made similar observations about racial bias (Rudinger et al., 2017; Garg et al., 2018; Kurita et al., 2019).",
"These studies focus on how training data might describe racial minorities in biased ways, for example, by examining words associated with terms like black' or traditionally European/African American names (Caliskan et al., 2017; Manzini et al., 2019).",
"Some studies additionally capture who is described, revealing under-representation in training data, sometimes tangentially to primary research questions: Rudinger et al. (2017) suggest that gender bias may be easier to identify than racial or ethnic bias in Natural Language Inference data sets because of data sparsity, and Caliskan et al. (2017) alter the Implicit Association Test stimuli that they use to measure biases in word embeddings because some African American names were not frequent enough in their corpora.",
"An equally important consideration, in addition to whom the data describes is who authored the data .",
"For example, Blodgett et al. (2018) show that parsing systems trained on White Mainstream American English perform poorly on African American English (AAE).",
"5 In a more general example, Wikipedia has become a popular data source for many NLP tasks.",
"However, surveys suggest that Wikipedia editors are primarily from white-majority countries, 6 and several initiatives have pointed out systemic racial biases in Wikipedia coverage (Adams et al., 2019; Field et al., 2021).",
"7 Models trained on these data only learn to process the type of text generated by these users, and further, only learn information about the topics these users are interested in.",
"The representativeness of data sets is a well-discussed issue in social-oriented tasks, like inferring public opinion (Olteanu et al., 2019), but this issue is also an important consideration in neutral' tasks like parsing (Waseem et al., 2021).",
"The type of data that researchers choose to train their models on does not just affect what data the models perform well for, it affects what people the models work for.",
"NLP researchers cannot assume models will be useful or function for marginalized people unless they are trained on data 5 We note that conceptualizations of AAE and the accompanying terminology for the variety have shifted considerably in the last half century; see King (2020) for an overview.",
"Data Labels Although model biases are often blamed on raw data, several of the papers we survey identify biases in the way researchers categorize or obtain data annotations.",
"For example: Annotation schema Returning to Blodgett et al. (2018), this work defines new parsing standards for formalisms common in AAE, demonstrating how parsing labels themselves were not designed for racialized language varieties.",
"Annotation instructions Sap et al. (2019) show that annotators are less likely to label tweets using AAE as offensive if they are told the likely language varieties of the tweets.",
"Thus, how annotation schemes are designed (e.g. what contextual information is provided) can impact annotators' decisions, and failing to provide sufficient context can result in racial biases.",
"Annotator selection Waseem (2016) show that feminist/anti-racist activists assign different offensive language labels to tweets than figure-eight workers, demonstrating that annotators' lived experiences affect data annotations.",
"Models Some papers have found evidence that model instances or architectures can change the racial biases of outputs produced by the model.",
"Sommerauer and Fokkens (2019) find that the word embedding associations around words like race' and racial' change not only depending on the model architecture used to train embeddings, but also on the specific model instance used to extract them, perhaps because of differing random seeds.",
"Kiritchenko and Mohammad (2018) examine gender and race biases in 200 sentiment analysis systems submitted to a shared task and find different levels of bias in different systems.",
"As the training data for the shared task was standardized, all models were trained on the same data.",
"However, participants could have used external training data or pre-trained embeddings, so a more detailed investigation of results is needed to ascertain which factors most contribute to disparate performance.",
"Model Outputs Several papers focus on model outcomes, and how NLP systems could perpetuate and amplify bias if they are deployed: Classifiers trained on common abusive language data sets are more likely to label tweets containing characteristics of AAE as offensive (Davidson et al., 2019; Sap et al., 2019).",
"Classifiers for abusive language are more likely to label text containing identity terms like black' as offensive (Dixon et al., 2018).",
"GPT outputs text with more negative sentiment when prompted with AAE -like inputs (Groenwold et al., 2020).",
"Social Analyses of Outputs While the examples in this section primarily focus on racial biases in trained NLP systems, other work (e.g. included in Social Science/Social Media' in Table 1) uses NLP tools to analyze race in society.",
"Examples include examining how commentators describe football players of different races (Merullo et al., 2019) or how words like prejudice' have changed meaning over time (Vylomova et al., 2019).",
"While differing in goals, this work is often susceptible to the same pitfalls as other NLP tasks.",
"One area requiring particular caution is in the interpretation of results produced by analysis models.",
"For example, while word embeddings have become a common way to measure semantic change or estimate word meanings (Garg et al., 2018), Joseph and Morgan (2020) show that embedding associations do not always correlate with human opinions; in particular, correlations are stronger for beliefs about gender than race.",
"Relatedly, in HCI, the recognition that authors' own biases can affect their interpretations of results has caused some authors to provide self-disclosures (Schlesinger et al., 2017), but this practice is uncommon in NLP.",
"We conclude this section by observing that when researchers have looked for racial biases in NLP systems, they have usually found them.",
"This literature calls for proactive approaches in considering how data is collected, annotated, used, and interpreted to prevent NLP systems from exacerbating historical racial hierarchies.",
"While 3 demonstrates ways that NLP systems encode racial biases, we next identify gaps and limitations in how these works have examined racism, focusing on how and in what tasks researchers have considered race.",
"We ultimately conclude that prior NLP literature has marginalized research on race and encourage deeper engagement with other fields, critical views of simplified classification schema, and broader application scope in future work (Blod-gett et al., 2020; Hanna et al., 2020).",
"The papers we surveyed suggest that research on race in NLP has used a very limited range of data sets, which fails to account for the multidimensionality of race and simplifications inherent in classification.",
"We identified 3 common data sources: 8 9 papers use a set of tweets with inferred probabilistic topic labels based on alignment with U.S. census race/ethnicity groups (or the provided inference model) (Blodgett et al., 2016).",
"11 papers use lists of names drawn from Sweeney (2013), Caliskan et al. (2017), or Garg et al. (2018).",
"Most commonly, 6 papers use African/European American names from the Word Embedding Association Test (WEAT) (Caliskan et al., 2017), which in turn draws data from Greenwald et al. (1998) and Bertrand and Mullainathan (2004).",
"10 papers use explicit keywords like Black woman', often placed in templates like I am a to test if model performance remains the same for different identity terms.",
"While these commonly-used data sets can identify performance disparities, they only capture a narrow subset of the multiple dimensions of race ( 2).",
"For example, none of them capture self-identified race.",
"While observed race is often appropriate for examining discrimination and some types of disparities, it is impossible to assess potential harms and benefits of NLP systems without assessing their performance over text generated by and directed to people of different races.",
"The corpus from Blodgett et al. (2016) does serve as a starting point and forms the basis of most current work assessing performance gaps in NLP models (Sap et al., 2019; Blodgett et al., 2018; Xia et al., 2020; Xu et al., 2019; Groenwold et al., 2020), but even this corpus is explicitly not intended to infer race.",
"Furthermore, names and hand-selected identity terms are not sufficient for uncovering model bias.",
"De-Arteaga et al. (2019) show this in examining gender bias in occupation classification: when overt indicators like names and pronouns are scrubbed from the data, performance gaps and potential allocational harms still remain.",
"Names also 8 We provide further counts of what racial categories papers use and how they operationalize them in Appendix B. generalize poorly.",
"While identity terms can be examined across languages (van Miltenburg et al., 2017), differences in naming conventions often do not translate, leading some studies to omit examining racial bias in non-English languages (Lauscher and Glavas, 2019).",
"Even within English, names often fail to generalize across domains, geographies, and time.",
"For example, names drawn from the U.S. census generalize poorly to Twitter (Wood-Doughty et al., 2018), and names common among Black and white children were not distinctly different prior to the 1970s (Fryer Jr and Levitt, 2004; Sweeney, 2013).",
"We focus on these 3 data sets as they were most common in the papers we surveyed, but we note that others exist.",
"Preotiuc-Pietro and Ungar (2018) provide a data set of tweets with self-identified race of their authors, though it is little used in subsequent work and focused on demographic prediction, rather than evaluating model performance gaps.",
"Two recently-released data sets (Nadeem et al., 2020; Nangia et al., 2020) provide crowd-sourced pairs of moreand less-stereotypical text.",
"More work is needed to understand any privacy concerns and the strengths and limitations of these data (Blodgett et al., 2021).",
"Additionally, some papers collect domain-specific data, such as self-reported race in an online community (Loveys et al., 2018), or crowd-sourced annotations of perceived race of football players (Merullo et al., 2019).",
"While these works offer clear contextualization, it is difficult to use these data sets to address other research questions.",
"Work that uses the same few data sets inevitably also uses the same few classification schemes, often without justification.",
"The most common explicitly stated source of racial categories is the U.S. census, which reflects the general trend of U.S.-centrism in NLP research (the vast majority of work we surveyed also focused on English).",
"While census categories are sometimes appropriate, repeated use of classification schemes and accompanying data sets without considering who defined these schemes and whether or not they are appropriate for the current context risks perpetuating the misconception that race is natural' across geo-cultural contexts.",
"We refer to Hanna et al. (2020) for a more thorough overview of the harms of widespread uncritical adoption of racial categories, which can in turn re-entrench systems of racial stratification which give rise to real health and social inequalities.",
"At best, the way race has been operationalized in NLP research is only capable of examining a narrow subset of potential harms.",
"At worst, it risks reinforcing racism by presenting racial divisions as natural, rather than the product of social and historical context (Bowker and Star, 2000).",
"As an example of questioning who devised racial categories and for what purpose, we consider the pattern of re-using names from Greenwald et al. (1998), who describe their data as sets of names judged by introductory psychology students to be more likely to belong to White Americans than to Black Americans or vice versa.",
"When incorporating this data into WEAT, Caliskan et al. (2017) discard some judged African American names as too infrequent in their embedding data.",
"Work subsequently drawing from WEAT makes no mention of the discarded names nor contains much discussion of how the data was generated and whether or not names judged to be white or Black by introductory psychology students in 1998 are an appropriate benchmark for the studied task.",
"While gathering data to examine race in NLP is challenging, and in this work we ourselves draw from examples that use Greenwald et al. (1998), it is difficult to interpret what implications arise when models exhibit disparities over this data and to what extent models without disparities can be considered debiased'.",
"Finally, almost all of the work we examined conducts single-dimensional analyses, e.g. focus on race or gender but not both simultaneously.",
"This focus contrasts with the concept of intersectionality , which has shown that examining discrimination along a single axis fails to capture the experiences of people who face marginalization along multiple axes.",
"For example, consideration of race often emphasizes the experience of gender-privileged people (e.g. Black men), while consideration of gender emphasizes the experience of race-privileged people (e.g. white women).",
"Neither reflect the experience of people who face discrimination along both axes (e.g. Black women) (Crenshaw, 1989).",
"A small selection of papers have examined intersectional biases in embeddings or word co-occurrences (Herbelot et al., 2012; May et al., 2019; Tan and Celis, 2019; Lepori, 2020), but we did not identify mentions of intersectionality in any other NLP research areas.",
"Further, several of these papers use NLP technology to examine or validate theories on intersectionality; they do not draw from theory on intersectionality to critically examine NLP models.",
"These omissions can mask harms: Jiang and Fellbaum (2020) provide an example using word embeddings of how failing to consider intersectionality can render invisible people marginalized in multiple ways.",
"Numerous directions remain for exploration, such as how debiasing' models along one social dimension affects other dimensions.",
"Surveys in HCI offer further frameworks on how to incorporate identity and intersectionality into computational research (Schlesinger et al., 2017; Rankin and Thomas, 2019).",
"Finally, Table 1 reveals many common NLP applications where race has not been examined, such as machine translation, summarization, or question answering.",
"9 While some tasks seem inherently more relevant to social context than others (a claim we dispute in this work, particularly in 5), research on race is compartmentalized to limited areas of NLP even in comparison with work on bias' .",
"For example, Blodgett et al. (2020) identify 20 papers that examine bias in co-reference resolution systems and 8 in machine translation, whereas we identify 0 papers in either that consider race.",
"Instead, race is most often mentioned in NLP papers in the context of abusive language, and work on detecting or removing bias in NLP models has focused on word embeddings.",
"Overall, our survey identifies a need for the examination of race in a broader range of NLP tasks, the development of multi-dimensional data sets, and careful consideration of context and appropriateness of racial categories.",
"In general, race is difficult to operationalize, but NLP researchers do not need to start from scratch, and can instead draw from relevant work in other fields.",
"While in 4 we primarily discuss race as a topic or a construct, in this section, we consider the role, or more pointedly, the absence, of traditionally underrepresented people in NLP research.",
"9 We identified only 8 relevant papers on Text Generation, which focus on other areas including chat bots, GPT2 / 3 , humor generation, and story generation.",
"As discussed in 3.2, data and annotations are generated by people, and failure to consider who created data can lead to harms.",
"In 3.2 we identify a need for diverse training data in order to ensure models work for a diverse set of people, and in 4 we describe a similar need for diversity in data that is used to assess algorithmic fairness.",
"However, gathering this type of data without consideration of the people who generated it can introduce privacy violations and risks of demographic profiling.",
"As an example, in 2019, partially in response to research showing that facial recognition algorithms perform worse on darker-skinned than lighter-skinned people (Buolamwini and Gebru, 2018; Raji and Buolamwini, 2019), researchers at IBM created the Diversity in Faces data set, which consists of 1 million photos sampled from the the publicly available YFCC-100M data set and annotated with craniofacial distances, areas and ratios, facial symmetry and contrast, skin color, age and gender predictions (Merler et al., 2019).",
"While this data set aimed to improve the fairness of facial recognition technology, it included photos collected from a Flickr, a photo-sharing web-site whose users did not explicitly consent for this use of their photos.",
"Some of these users filed a lawsuit against IBM, in part for subjecting them to increased surveillance, stalking, identity theft, and other invasions of privacy and fraud. 10 NLP researchers could easily repeat this incident, for example, by using demographic profiling of social media users to create more diverse data sets.",
"While obtaining diverse, representative, real-world data sets is important for building models, data must be collected with consideration for the people who generated it, such as obtaining informed consent, setting limits of uses, and preserving privacy, as well as recognizing that some communities may not want their data used for NLP at all (Paullada, 2020).",
"Research is additionally carried out by people who determine what projects to pursue and how to approach them.",
"While statistics on ACL conferences and publications have focused on geographic 10 https://bit.ly/3r3LuIk https://nbcnews.to/3j5hI39 IBM has since removed the Diversity in Faces data set as well as their Detect Faces public API and stopped their use of and research on facial recognition.",
"https://bit.ly/3j2Jv4i representation rather than race, they do highlight under-representation.",
"Out of 2 , 695 author affili-ations associated with papers in the ACL Anthology for 5 major conferences held in 2018 , only 5 ( 0 . 2 %) were from Africa, compared with 1 , 114 from North America ( 41 . 3 %).",
"11 Statistics published for 2017 conference attendees and ACL fellows similarly reveal a much higher percentage of people from North, Central and South Amer-ica ( 55 % attendees / 74 % fellows) than from Eu-rope, Middle East and Africa ( 19 %/ 13 %) or Asia-Pacific ( 23 %/ 13 %).",
"12 These broad regional categories likely mask further under-representation, e.g. percentage of attendees and fellows from Africa as compared to Europe.",
"According to an NSF report that includes racial statistics rather than nationality, 14% of doctorate degrees in Computer Science awarded by U.S. institutions to U.S. citizens and permanent residents were awarded to Asian students, < 4 % to Black or African American students, and 0% to American Indian or Alaska Native students (National Center for Science and Engineering Statistics, 2019).",
"13 It is difficult to envision reducing or eliminating racial differences in NLP systems without changes in the researchers building these systems.",
"One theory that exemplifies this challenge is interest convergence , which suggests that people in positions of power only take action against systematic problems like racism when it also advances their own interests (Bell Jr, 1980).",
"Ogbonnaya-Ogburu et al. (2020) identify instances of interest convergence in the HCI community, primarily in diversity initiatives that benefit institutions' images rather than underrepresented people.",
"In a research setting, interest convergence can encourage studies of incremental and surface-level biases while discouraging research that might be perceived as controversial and force fundamental changes in the field.",
"Demographic statistics are not sufficient for avoiding pitfalls like interest convergence, as they fail to capture the lived experiences of researchers.",
"Ogbonnaya-Ogburu et al. (2020) provide several examples of challenges that non-white HCI researchers have faced, including the invisible labor of representing diversity', everyday microaggres-11 http://www.marekrei.com/blog/ geographic-diversity-of-nlp-conferences/ 12 https://www.aclweb.org/portal/ content/acl-diversity-statistics 13 Results exclude respondents who did not report race or ethnicity or were Native Hawaiian or Other Pacific Islander.",
"sions, and altering their research directions in accordance with their advisors' interests.",
"Rankin and Thomas (2019) further discuss how research conducted by people of different races is perceived differently: Black women in academia who conduct research about the intersections of race, gender, class, and so on are perceived as doing service,' whereas white colleagues who conduct the same research are perceived as doing cutting-edge research that demands attention and recognition.",
"While we draw examples about race from HCI in the absence of published work on these topics in NLP, the lack of linguistic diversity in NLP research similarly demonstrates how representation does not necessarily imply inclusion.",
"Although researchers from various parts of the world (Asia, in particular) do have some numerical representation among ACL authors, attendees, and fellows, NLP research overwhelmingly favors a small set of languages, with a heavy skew towards European languages (Joshi et al., 2020) and standard' language varieties (Ku-mar et al., 2021).",
"Finally, NLP research produces technology that is used by people, and even work without direct applications is typically intended for incorporation into application-based systems.",
"With the recognition that technology ultimately affects people, researchers on ethics in NLP have increasingly called for considerations of whom technology might harm and suggested that there are some NLP technologies that should not be built at all.",
"In the context of perpetuating racism, examples include criticism of tools for predicting demographic information (Tat-man, 2020) and automatic prison term prediction (Leins et al., 2020), motivated by the history of using technology to police racial minorities and related criticism in other fields (Browne, 2015; Buolamwini and Gebru, 2018; McIlwain, 2019).",
"In cases where potential harms are less direct, they are often unaddressed entirely.",
"For example, while low-resource NLP is a large area of research, a paper on machine translation of white American and European languages is unlikely to discuss how continual model improvements in these settings in-crease technological inequality.",
"Little work on low-resource NLP has focused on the realities of structural racism or differences in lived experience and how they might affect the way technology should be designed.",
"Detection of abusive language offers an informative case study on the danger of failing to consider people affected by technology.",
"Work on abusive language often aims to detect racism for content moderation (Waseem and Hovy, 2016).",
"However, more recent work has show that existing hate speech classifiers are likely to falsely label text containing identity terms like black' or text containing linguistic markers of AAE as toxic (Dixon et al., 2018; Sap et al., 2019; Davidson et al., 2019; Xia et al., 2020).",
"Deploying these models could censor the posts of the very people they purport to help.",
"In other areas of statistics and machine learning, focus on participatory design has sought to amplify the voices of people affected by technology and its development.",
"An ICML 2020 workshop titled Participatory Approaches to Machine Learn-ing highlights a number of papers in this area (Kulynych et al., 2020; Brown et al., 2019).",
"A few related examples exist in NLP, e.g. Gupta et al. (2020) gather data for an interactive dialogue agent intended to provide more accessible information about heart failure to Hispanic/Latinx and African American patients.",
"The authors engage with healthcare providers and doctors, though they leave focal groups with patients for future work.",
"While NLP researchers may not be best situated to examine how people interact with deployed technology, they could instead draw motivation from fields that have stronger histories of participatory design, such as HCI.",
"However, we did not identify citing participatory design studies conducted by others as common practice in the work we surveyed.",
"As in the case of researcher demographics, participatory design is not an end-all solution.",
"Sloane et al. (2020) provide a discussion of how participatory design can collapse to participation-washing' and how such work must be context-specific, long-term, and genuine.",
"We conclude by synthesizing some of the observations made in the preceding sections into more actionable items.",
"First, NLP research needs to explicitly incorporate race.",
"We quote Benjamin (2019): [technical systems and social codes] operate within powerful systems of meaning that render some things visible, others invisible, and create a vast array of distortions and dangers.",
"In the context of NLP research, this philosophy implies that all technology we build works in ser-vice of some ideas or relations, either by upholding them or dismantling them.",
"Any research that is not actively combating prevalent social systems like racism risks perpetuating or exacerbating them.",
"Our work identifies several ways in which NLP research upholds racism: Systems contain representational harms and performance gaps throughout NLP pipelines Research on race is restricted to a narrow subset of tasks and definitions of race, which can mask harms and falsely reify race as natural' Traditionally underrepresented people are excluded from the research process, both as consumers and producers of technology Furthermore, while we focus on race, which we note has received substantially less attention than gender, many of the observations in this work hold for social characteristics that have received even less attention in NLP research, such as socioeconomic class, disability, or sexual orientation (Mendelsohn et al., 2020; Hutchinson et al., 2020).",
"Nevertheless, none of these challenges can be addressed without direct engagement with marginalized communities of color.",
"NLP researchers can draw on precedents for this type of engagement from other fields, such as participatory design and value sensitive design models (Friedman et al., 2013).",
"Additionally, numerous organizations already exist that serve as starting points for partnerships, such as Black in AI, Masakhane, Data for Black Lives, and the Algorithmic Justice League.",
"Finally, race and language are complicated, and while readers may look for clearer recommendations, no one data set, model, or set of guidelines can solve' racism in NLP.",
"For instance, while we draw from linguistics, Hudley et al. (2020) in turn call on linguists to draw models of racial justice from anthropology, sociology, and psychology.",
"Relatedly, there are numerous racialized effects that NLP research can have that we do not address in this work; for example, Bender et al. (2021) and Strubell et al. (2019) discuss the environmental costs of training large language models, and how global warming disproportionately affects marginalized communities.",
"We suggest that readers use our work as one starting point for bringing inclusion and racial justice into NLP.",
"We gratefully thank Hanna Kim, Kartik Goyal, Ar-tidoro Pagnoni, Qinlan Shen, and Michael Miller Yoder for their feedback on this work.",
"Z.W. has been supported in part by the Canada 150 Research Chair program and the UK-Canada Artificial Intelligence Initiative.",
"A.F. has been supported in part by a Google PhD Fellowship and a GRFP under Grant No.",
"DGE1745016.",
"This material is based upon work supported in part by the National Science Foundation under Grants No.",
"IIS2040926 and IIS2007960.",
"Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the NSF.",
"We, the authors of this work, are situated in the cultural contexts of the United States of America and the United Kingdom/Europe, and some of us identify as people of color.",
"We all identify as NLP researchers, and we acknowledge that we are situated within the traditionally exclusionary practices of academic research.",
"These perspectives have impacted our work, and there are viewpoints outside of our institutions and experiences that our work may not fully represent."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"result",
"result",
"other",
"result",
"abstain",
"result",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"method",
"abstain"
] |
[
"Most data selection research in machine translation focuses on improving a single domain.",
"We perform data selection for multiple domains at once.",
"This is achieved by carefully introducing instance-level domain-relevance features and automatically constructing a training curriculum to gradually concentrate on multi-domain relevant and noise-reduced data batches.",
"Both the choice of features and the use of curriculum are crucial for balancing and improving all domains, including out-of-domain.",
"In large-scale experiments, the multi-domain curriculum simultaneously reaches or outperforms the individual performance and brings solid gains over no-curriculum training.",
"In machine translation (MT), data selection, e.g., (Moore and Lewis, 2010; Axelrod et al., 2011), has remained as a fundamental and important research topic.",
"It has played a crucial role in domain adaptation by selecting domain-matching training examples, or data cleaning (aka denoising) by selecting high-quality examples.",
"So far, the most extensively studied scenario assumes a single domain to improve.",
"It becomes both technically challenging and practically appealing to build a large-scale multi-domain neural machine translation (NMT) model that performs simultaneously well on multiple domains at once.",
"This requires addressing research challenges such as catastrophic forgetting (Good-fellow et al., 2014) at scale and data balancing.",
"Such a model can easily find potential use cases, i.e., as a solid general service, for downstream transfer learning, for better deployment efficiency, or for transfer learning across datasets.",
"Unfortunately, existing single-domain data-selection methods do not work well for multiple domains.",
"accuracy of one domain will often hurt that of another (van der Wees et al., 2017; Britz et al., 2017), and improving model generalization across all domains by clean-data selection (Koehn et al., 2018) may not promise optimization of a particular domain.",
"Multiple aspects need to be considered for training a multi-domain model.",
"This paper presents a dynamic data selection method to multi-domain NMT.",
"Things we do differently from previous work in mixing data are the choice of instance-level features and the employment of a multi-domain curriculum that is additionally able to denoise.",
"These are crucial for mixing and improving all domains, including out-of-domain.",
"We experiment with large datasets at different noise levels and show that the resulting models meet our requirements.",
"In MT, research that is most relevant to our work is data selection and data mixing, both being concerned with how to sample examples to train an MT model, usually for domain adaptation.",
"Table 1 categorizes previous research by two aspects and shows where our work stands.",
"These two aspects are: 1. Is the method concerned with a single domain or multiple domains?",
"2. Does the method use data statically or dynamically?",
"Static data selection for a single domain.",
"Moore and Lewis (2010) select in-domain data for n-gram language model (LM) training.",
"It is later generalized by Axelrod et al. (2011) to select parallel data for training MT models.",
"Chen and Huang (2016); Chen et al. (2016) use classifiers to select domain data.",
"Clean-data selection (Koehn et al., 2019, 2018; Junczys-Dowmunt, 2018) reduces harmful data noise to improve translation quality across domains.",
"All these works select a data subset for a single domain 1 and treat the selected data as a static/flat distribution.",
"Dynamic data selection for a single domain.",
"Static selection has two shortcomings: it discards data and it treats all examples equally after selection.",
"When data is scarce, any data could be helpful, even if it is out of domain or noisy 2 .",
"Dynamic data selection is introduced to sort data from least in-domain to most in-domain.",
"Training NMT models on data sorted this way effectively takes advantage of transfer learning.",
"Curriculum learning (CL) (Bengio et al., 2009) has been used as a formulation for dynamic data selection.",
"Domain curricula (van der Wees et al., 2017; Zhang et al., 2019) are used for domain adaptation.",
"Model stacking (Sajjad et al., 2017; Freitag and Al-Onaizan, 2016) is a practical idea to build domain models.",
"CL is also used for denoising (Ku-mar et al., 2019; Wang et al., 2018a,b), and for faster convergence and improved general quality (Zhang et al., 2018; Platanios et al., 2019).",
"Wang et al. (2018a) introduce a curriculum for training efficiency.",
"In addition to data sorting/curriculum, instance/loss weighting (Wang et al., 2017; Chen et al., 2017; Wang et al., 2019b) has been used as an alternative.",
"CL for NMT represents the SOTA data-selection method, but most existing works target at a single domain, be it a specific domain or the denoising domain.",
"Static data mixing for multiple domains.",
"When mixing data from multiple domains, a fundamental challenge is to address catastrophic forgetting (Goodfellow et al., 2014)training an NMT model to focus on one domain can likely hurt another (van der Wees et al., 2017; Britz et al., 1 We treat denoising as a domain in the paper, inspired by previous works that treat data noise using domain adaptation methods, e.g., (Junczys-Dowmunt, 2018).",
"2 We refer to data regularization (using more data) and to transfer learning (fine-tuning) to exploit both data quantity and quality, the idea behind dynamic data selection.",
"See Appendix C. 2017).",
"Britz et al. (2017) learn domain-discerning (or -invariant) network representation with a domain discriminator network for NMT.",
"The methods, however, require that domain labels are available in data.",
"Tars and Fishel (2018) cluster data and tag each cluster as multi-domain NMT training data, but the method treats data in each cluster as a flat distribution.",
"Farajian et al. (2017) implement multi-domain NMT by on-the-fly data retrieval and adaptation per sentence, at increased inference cost.",
"Most existing methods (or experiment setups) have the following problems:",
"(i) They mix data statically.",
"(ii) They don't consider the impact of data noise, which is a source of catastrophic forgetting.",
"(iii) Experiments are carried out with small datasets, without separate examination on the data regularization effect.",
"(iv) They do not examine out-of-domain performamce.",
"Automatic data balancing for multi-domains.",
"(Wang et al., 2020) automatically learn to weight (flat) data streams of multi-languages (or \"do-mains\").",
"We perform dynamic data selection and regularization through a mulit-domain curriculum.",
"Automatic curriculum learning.",
"Our work falls under automatic curriculum construction (Graves et al., 2017) and is directly inspired by Tsvetkov et al. (2016), who learn to weight and combine instance-level features to form a curriculum for an embedding learning task, through Bayesian Optimization.",
"A similar idea (Ruder and Plank, 2017) is used to improve other NLP tasks.",
"Here, we use the idea for NMT to construct a multi-domain data selection scheme with various selection scores at our disposal.",
"The problem we study is connected to the more general multi-objective optimization problem.",
"Duh (2018) uses Bandit learning to tune hyper-parameters such as the number of network layers for NMT.",
"More related work.",
"Previously, catastrophic forgetting has mostly been studied in the continued-training setup (Saunders et al., 2019; Khayrallah et al., 2018), to refer to the degrading performance on the out-of-domain task when a model is fine-tuned on in-domain data.",
"This setup is a popular topic in general machine learning research (Aljundi et al., 2019).",
"Thompson et al. (2018) study domain adaptation by freezing subnetworks.",
"Our work instead addresses forgetting in the data-balancing scenario for multi-domains.",
"We use curriculum to generalize fine-tuning.",
"We first introduce curriculum learning (CL) (Ben-gio et al., 2009), which serves as a formulation for SOTA single-domain dynamic data selection and which our method is built upon and generalizes.",
"In CL, a curriculum, C , is a sequence of training criteria over training steps.",
"A training criterion, Q t ( y | x ) , at step t is associated with a set of weights, W t ( x, y ) , 3 over training sentence pairs ( x, y ) in a parallel dataset D , where y is the translation for x .",
"Q t ( y | x ) is a re-weighting of the original training distribution P ( y | x ) : Q t ( y | x ) W t ( x, y ) P ( y | x ) , ( x, y ) D (1) Hence, for T maximum training steps, C is a sequence: C = (cid:104) Q 1 , ..., Q t , ..., QT (cid:105) (2) At step t , an online learner randomly samples a data batch from Q t to fine-tune model m t 1 into m t .",
"Therefore, C corresponds to a sequence of models, (cid:104) m 1 , ..., m t , ..., M (cid:105) .",
"M is the final model that the entire curriculum has been optimizing towards.",
"Intermediate models, m t , serve as stepping stones to M , to transfer knowledge through them and regularize the training for generalization.",
"A performance metric P ( C ) evaluates M on a development or test set, after training on C .",
"3 As a preview, in our paper, W t ( x, y ) uses uniform weights over selected examples and assigns zero weights for filtered examples, similar to a mask.",
"In NMT, CL is used to implement dynamic data selection.",
"First, a scoring function (Section 4.3) is employed to measure the usefulness of an example to a domain and sort data.",
"Then mini-batch sampling, e.g., (Kocmi and Bojar, 2017), is designed to realize the weighting W t , to dynamically evolve the training criteria Q t towards in-domain.",
"Figure 1 (1)-(4) illustrates the basic idea of the curriculum we use.",
"(1) shows three sentence pairs, S 1 , S 2 , S 3 , each having three scores, respectively representing usefulness to three domains.",
"A grey-domain training curriculum, for example, relies on the data order in (2), gradually discards least useful examples according to W t ( x, y ) (Eq. 1) in Table 2 (1): At step 1, the learner uniformly samples from all examples ( W 1 ), producing model m 1 .",
"In step 2, the least-in-domain S 3 is discarded (strikethrough) by W 2 so we sample from subset { S 1 , S 2 } uniformly to reach m 2 .",
"We repeat this until reaching the final model M .",
"In this process, sampling is uniform in each step, but in-domain examples (e.g., S 1 ) are reused more over steps.",
"Similarly, we can construct the dark-domain curriculum in Figure 1 (3) and the white-domain (4).",
"The challenges in multi-domain/-task data selection lie in addressing catastrophic forgetting and data balancing.",
"In Figure 1, while curriculum (2) moves a model to the grey-domain direction, this direction may not necessarily be positively consistent with the dark domain (Figure 1 (3)), causing dropped dark-domain performance.",
"Ideally, a training example that introduces the least forgetting across all domains would have gradients that move the model in a common direction towards all domains.",
"While this may not be easily feasible by selecting a single example, we would like the intuition to work in a data batch on average.",
"Therefore, our idea is to carefully introduce D curriculum finetuneNMT f 1 ( x, y ) f N ( x, y ) ... model v 1 v N ...",
"per-example data-selection scores (called features) to measure domain sharing, intelligently weight them to balance the domains of interest, and dynamically schedule examples to trade-off between regularization and domain adaptation.",
"1. Features of an example reflect its relevance to domains.",
"2. Feature weights are jointly learned/optimized based on end model performance.",
"3. Training is dynamic, by gradually focusing on multi-domain relevant and noise-reduced data batches.",
"Furthermore, a viable multi-domain curriculum meets the following performance requirements :",
"(i) It improves the baseline model across all domains.",
"(ii) It simultaneously reaches (or outperforms) the peak performance of individual single-domain curricula.",
"Formally, for a sentence pair ( x, y ) , let f n ( x, y ) R be its n -th feature that specifies how ( x, y ) is useful to a domain.",
"Suppose we are interested in K domains and each example has N features.",
"For instance, each sentence pair of S 1 , S 2 , S 3 in Figure 1 (1) has three features ( N = 3 ), each for one domain ( K = 3 ).",
"4 We represent ( x, y ) 's features using a feature vector F ( x, y ) = 4 But N does not necessarily equal K because we can introduce multiple features for one domain or a single feature for multiple domains.",
"[ f 0 ( x, y ) , ..., f N 1 ( x, y )] .",
"Given a weight vector V = [ v 0 , ..., v N 1 ] for all sentence pairs, we compute an aggregated score f ( x, y ) = V F ( x, y ) (4) for each sentence pair and sort the entire data in increasing order.",
"We then construct a curriculum (cid:98) C ( V ) to fine-tune a warmed-up model, evaluate its performance and propose a next weight vector.",
"After several iterations/trials, the optimal weight vector V is the one with the best end performance: V = arg max VP ( (cid:98) C ( V )) (5) Figure 2 shows the framework.",
"For the process to be practical and scalable, (cid:98) C fine-tunes a warmed-up model for a small number of steps.",
"The learned V can then eventually be used for retraining a final model from scratch.",
"We design the following types of features for each training example and instantiate them in Experiments (Section 5).",
"NMT domain features ( q Z ) compute, for a pair ( x, y ) , the cross-entropy difference between two NMT models: q Z ( x, y )=log P ( y | x ; Z ) log P ( y | x ; base ) | y | (6) P ( y | x ; base ) is a baseline model with parameters base trained on the background parallel corpus, P ( y | x ; Z ) is a Z -domain model with Z by fine-tuning base on a small, Z -domain parallel corpus (cid:98) DZ with trusted quality and | y | is the length of y .",
"q Z discerns both noise and domain Z (Wang et al., 2019a).",
"Each domain Z has its own (cid:98) DZ .",
"Importantly, Grangier (2019) shows that, under the Taylor approximation (Abramowitz and Ste-gun, 1964), q Z approximates the dot product between gradient, g ( x, y ; base ) , of training example ( x, y ) and gradient, g ( (cid:98) DZ , base ) , of seed data (cid:98) DZ .",
"5 Thus an example with positive q Z likely 5 That is, according to Grangier (2019): q Z ( x, y ) | y | = log P ( y | x ; Z ) log P ( y | x ; base ) g ( x, y ; base ) (cid:62) g ( (cid:98) DZ , base ) (7) when base and Z are close, which is the case for fine-tuning: Z = base + g ( (cid:98) DZ , base ) .",
"moves a model towards domain Z .",
"For multiple domains, Z 1 , ..., ZK , selecting a batch of examples with q Z k 's all being positive would move a model towards a common direction shared across multiple domains, which alleviates forgetting.",
"The Z -domain feature q Z ( x, y ) can be easily generalized into a single multi-domain feature , q Z , for a set of domains Z : q Z ( x, y )=log P ( y | x ; Z ) log P ( y | x ; base ) | y | (8) by simply concatenating all the seed parallel corpus (cid:98) DZ from the constituent domains into (cid:98) DZ and use it to fine-tune the baseline base into Z .",
"A benefit of q Z is scalability: using a single feature value to approximate ( x, y ) 's gradient consistency with the multiple domains at once.",
"Simple concatenation means, however, domain balancing is not optimized as in Eq.",
"5.",
"NLM domain features ( d Z ) (Moore and Lewis, 2010; van der Wees et al., 2017) compute Z domain relevance of sentence x with neural language models (NLM), like q Z : d Z ( x ) = log P ( x ; Z ) log P ( x ; base ) | x | (9) where P ( x ; base ) is an NLM with parameters base trained on the x half of the background parallel data, and P ( x ; Z ) is obtained by fine-tuning P ( x ; base ) on Z -domain monolingual data.",
"Although d Z may not necessarily reflect the translation gradient of an example under an NMT model, it effectively assesses the Z -domain relevance and, furthermore, allows us to include additional larger amounts of in-domain monolingual data.",
"We do not use its bilingual version (Axelrod et al., 2011), but choose to consider only the source side, for simplicity.",
"Cross-lingual embedding similarity feature ( emb ) computes the cosine similarity of a sentence pair in a cross-lingual embedding space.",
"The embedding model is trained to produce similar representations exclusively for true bilingual sentence pairs, following Yang et al. (2019).",
"BERT quality feature ( BERT ) represents quality scores from a fine-tuned multilingual BERT model (Devlin et al., 2018).",
"We fine-tune a pre-trained BERT model 6 on a supervised dataset with positive and negative translation pairs.",
"These features compensate each other by capturing the information in a sentence pair from different aspects: NLM features capture domain.",
"NMT features additionally discern noise.",
"BERT and emb are introduced for denoising, by transfer-ing the strength of the data they are trained on.",
"All these features are from previous research and here we integrate them to solve a generalized problem.",
"Eq.",
"5 evaluates the end performance P ( (cid:98) C ( V )) of a multi-domain curriculum candidate.",
"We simply combine the validation sets from multi-domains into a single validation set to report the perplexity of the last model checkpoint, after training the model on (cid:98) C ( V ) .",
"The best multi-domain curriculum minimizes model's perplexity (or maximizes its negative per Eq. 5) on the mixed validation set.",
"We experiment with different mixing ratios.",
"We solve Eq.",
"5 with Bayesian Optimization (BayesOpt) (Shahriari et al., 2016) as the optimizer in Figure 2. BayesOpt is derivative-free and can optimize expensive black-box functions, with no assumption of the form of P .",
"It has recently become popular for training expensive machine-learning models in the AutoML paradigm.",
"It consists of a surrogate model for approximating P ( (cid:98) C ( V )) and an acquisition function for deciding the next sample to evaluate.",
"The surrogate model evaluates (cid:98) C ( V ) without running the actual NMT training, by the Gaussian process (GP) priors over functions that express assumptions about P .",
"The acquisition function depends on previous trials, as well as the GP hyper-parameters.",
"The Expected Improvement (EI) criterion (Srinivas et al., 2010) is usually used as acquisition function.",
"Algorithm 1 depicts how BayesOpt works in our setup.",
"We use Vizier (Golovin et al., 2017) for Batched Gaussian Process Bandit, but open-source implementations of BayesOpt are easily available.",
"7 .",
"We pre-compute all features for each sentence pair ( x, y ) in training data and turn its features into a single score f ( x, y ) by Eq.",
"4, given a weight vector.",
"We then construct a curriculum by instantiating its re-weighting W t ( x, y ) (Eq. 1).",
"To that end, we define a Boolean, dynamic data selection function f ( x, y ; t ) to check, at step t , if ( x, y ) D belongs to the top ( t ) -ratio examples in training data D sorted in increasing order of f ( x, y ) , (0 < 1) .",
"So f is a mask.",
"Suppose n ( t ) examples are selected by f ( x, y ; t ) , the re-weighting will then be W t ( x, y ) = 1 /n ( t ) f ( x, y ; t ) .",
"Filtered examples have zero weights and selected ones are uniformly weighted.",
"We set ( t ) = (1 / 2) t/H to decay/tighten over time 8 , controlled by the hyper-parameter H .",
"During training, f ( x, y ; t ) progressively selects higher f ( x, y ) scoring examples.",
"In implementation, we integrate f ( x, y ; t ) in the data feeder to pass only selected examples to the downstream model trainer; we also normalize f ( x, y ) offline to directly compare to ( t ) online to decide filtering.",
"As an example, the W t ( x, y ) for the multi-domain curriculum order in Figure 1 (5) can look like Table 2 (2).",
"Data and domains.",
"We experiment with two English French training datasets: the noisy ParaCrawl data 9 (290 million sentence pairs) and the WMT14 training data (38 million pairs).",
"We use SentencePiece model (Kudo, 2018) for subword segmentation with a source-target shared vocabulary of 32,000 subword units.",
"We evaluate our method with three domains: two specific domains, news and TED subtitles, and out-of-domain.",
"News domain uses the WMT14 news 7 E.g.,https://github.com/tobegit3hub/ advisor 8 When the training data is small, we can, in practice, let a model warm up before applying the schedule.",
"testset ( N14 ) for testing, and WMT12-13 for validation in early stopping (Prechelt, 1997).",
"The TED domain uses the IWSLT15 testset ( T15 ) for testing, and the IWSLT14 testset for validation.",
"Out-of-domain performance is measured by two additional testsets, patent testset ( PA ) (2000 sentences) 10 and WMT15 news discussion testset ( D15 ).",
"We report SacreBLEU 11 (Post, 2018).",
"Features.",
"NMT features use the parallel data to train the baseline NMT models.",
"The new-domain-discerning NMT feature q N uses WMT10-11 (5500 pairs) as in-domain data (cid:98) DN .",
"The TED NMT feature q T uses the TED subtitle training data (22k pairs) as in-domain data (cid:98) DT .",
"NLM features use the English half of parallel data to train the baseline NLMs.",
"The news-domain-discerning NLM feature d N uses the 28 million English sentences from WMT14.",
"The TED subtitle NLM feature d T uses the English side of IWSLT15 in-domain parallel training data.",
"The training of the cross-lingual embedding model follows Yang et al. (2019) with a 3-layer transformer (Vaswani et al., 2017) (more details in Appendix A).",
"For the BERT feature, we sample positive pairs from the same data to train the cross-lingual embedding model.",
"The negatives are generated using the cross-lingual embedding model, via 10-nearest neighbor retrieval in the embedding space, excluding the true translation.",
"We pick the nearest neighbor to form a hard negative pair with the English sentence, and a random neighbor to form another negative pair.",
"We sample 600k positive pairs and produce 1.8M pairs in total.",
"Model.",
"We use LSTM NMT (Wu et al., 2016) as our models, but with the Adam optimizer (Kingma and Ba, 2015).",
"The batch size is 10k averaged over 8 length-buckets (with synchronous training).",
"NLM/NMT features uses 512 dimensions by 3 layersNLM shares the same architecture as NMT by using dummy source sentences (Sennrich et al., 2016).",
"The final models are of 1024 dimensions by 8 layers, trained for 55 k max steps.",
"Training on WMT data uses a dropout probability of 0.2.",
"Transformer results are in Appendix B. Curriculum optimization.",
"10 Randomly sampled from www.epo.org 11 Signature: BLEU+case.mixed+numrefs.1+ smooth.exp+tok.13a+version.1.4.2",
"and the last 5 in exploitation.",
"Each trial trains for 2 k steps 12 by fine-tuning a warmed-up model with the candidate curriculum.",
"The curriculum decays ( ( t ) ) from 100% and plateaus at 20% at step 2k.",
"We simply and heuristically set a range of [0 . 0 , 1 . 0] for all feature weights.",
"We don't normalize feature values when weighting them.",
"We evaluate if the multi-domain curriculum meets requirements",
"(i) and",
"(ii) in Section 4.1.",
"B : baseline that does not use curriculum learning.",
"(cid:98) C 6 -feats : multi-domain curriculum with 6 features, d N , d T , q N , q T , BERT , emb , weights learned by BayesOpt.",
"Table 3 shows (cid:98) C 6 -feats improves B on all testsets, especially on noisy ParaCrawlrequirement",
"(i) is met.",
"It is important to note that our WMT baseline (W1) matches Wu et al. (2016) on N14, as shown by re-computed tokenized BLEU (italics).",
"We examine the following individual curricula, by training NMT models with each, respectively:",
"C d N , uses news NLM feature d N (Eq. 9).",
"C d T , uses TED subtitle NLM feature d T .",
"C q N , uses news NMT feature q N (Eq. 6).",
"C q T , uses TED NMT feature q T .",
"CBERT , uses BERT quality feature.",
"C emb , uses cross-lingual embedding feature.",
"12 2k is empirically chosen to be practical.",
"We use a number of fine-tuning trials in Eq.",
"5.",
"NMT training is expensive so we don't want a trial to tune for many steps.",
"NMT is very adaptive on domain data, so each trial does not need many steps.",
"We find no significant difference among 1k, 2k, 6k.",
"In Table 4, frame boxes mark the best BLEUs (P* or W*) per column, across P3-P7 or W3-W7.",
"The last column shows averaged BLEU over all testsets.",
"Bold font indicates C 6 -feats matches or improves W*.",
"As shown, C 6 -feats matches or slightly outperforms the per-domain curricula across testsets.",
"Therefore, (cid:98) C 6 -feats meets requirement",
"(ii).",
"Strengths and weaknesses of a feature.",
"Table 4 also reveals the relative strengths and weaknesses of each type of features.",
"The peak BLEU (in a frame box) on each testset is achieved by one of CBERT / emb , C q N and C q T , less by NLM features d N , d T .",
"This contrast seems bigger on the noisy ParaCrawl, but the NLM features do bring gains over B .",
"Overall, CBERT / emb (P5, W5) perform well, attributed to their denoising power, but lose to the NMT features (P7, W7) on T15, due to lack of explicit capturing of domain.",
"The NMT features seem to subtly compensate in domains, and the domain features in denoising, but working with other features improves the model.",
"BERT and emb features.",
"Both BERT and emb use knowledge external to the experiment setup.",
"For a fair comparison to baselines and a better understanding of them, we drop them by building Curriculum N14 T15 PA D15 Avg P2: (cid:98) C 6 -feats 37.0 38.1 48.3 35.7 39.8 P8: (cid:98) C 4 -feats 36.6 38.1 46.7 35.5 39.2 W2: (cid:98) C 6 -feats 39.3 38.8 46.1 36.1 40.1 W8: (cid:98) C 4 -feats 38.9 38.9 46.5 36.1 40.1 Table 5: BERT and emb features positively contribute to (cid:98) C 6 -feats on ParaCrawl (P).",
"Learned feature weights.",
"Figure 3 shows BayesOpt learns to weight features adaptively in (cid:98) C 6 -feats on ParaCrawl (grey) and WMT (white), respectively.",
"ParaCrawl is very noisy thus noise non-discerning features d N and d T do not have a chance to help, but their weights become stronger on the cleaner WMT training data.",
"It is surprising that BERT feature is still useful to the WMT training.",
"We hypothesize this may suggest BERT feature have additional strength to just denoising, or that data noise could be subtle and exist in cleaner data.",
"We compare BayesOpt (BO) and Random Search (RS) (Bergstra and Bengio, 2012) to solve Eq.",
"5, as well as uniform weighting (Uniform).",
"In Table 6, all improve baselines, especially on ParaCrawl (P).",
"RS does surprisingly well on ParaCrawl, but BayesOpt appears better overall.",
"13 5.3.3 Mixing validation sets Eq.",
"BLEUs.",
"For example, on ParaCrawl, when news sentences are absent from the validation set, N14 drops by 0.7 BLEU (P8 vs. P13).",
"We use the four feats as in (cid:98) C 4 -feats in this examination.",
"We simulate dynamic data selection with a random sample of 2000 pairs from the WMT data and annotate each pair by human raters with 0 (nonsense) 4 (perfect) quality scale (following Wang et al. (2018b)).",
"We sort the pairs by f ( x, y ) (Eq. 4).",
"A threshold selects a subset of pairs, for which we average the respective NMT feature values as the domain relevance.",
"Figure 4 shows that the multi-domain curriculum ( (cid:98) C 6 -feats ) learns to dynamically increase quality and multi-domain relevance.",
"Therefore, our idea (Section 4.1) works as intended.",
"Furthermore, training seems to gradually increase quality or domain in different speeds, determined by Eq.",
"5.",
"With the learned weights, we compute a weight for each example to sort data to form a curriculum.",
"Alternatively, we could weight the cross-entropy loss for that sentence during training (Wang et al., 2017; Chen et al., 2017).",
"Table 8 shows that curriculum yields improvements over weighing per-Mixing Ratio N14 T15 PA D15 Avg P11: 1.0:0.0 36.3 37.8 47.3 35.3 39.2 P12: 0.8:0.2 36.4 38.2 47.7 35.4 39.4 P8: 0.5:0.5 36.6 38.1 46.7 35.5 39.2 P13: 0.0:1.0 35.9 38.1 47.0 35.2 39.1 W11: 1.0:0.0 39.1 38.6 46.4 36.0 40.0 W12: 0.8:0.2 39.0 38.7 46.3 35.7 39.9 W8: 0.5:0.5 38.9 38.9 46.5 36.1 40.1 W13: 0.0:1.0 39.1 38.6 46.4 36.0 40.0 Table 7: Guiding multi-domain curriculum learning by mixing validation sets.",
"sentence loss, in particular on noisy training data, confirming previous findings (van der Wees et al., 2017).",
"C q N and C q T each use a small in-domain parallel dataset, but we can simply fine-tune the final models on either dataset (+N, +T) or their concatenation (+N+T).",
"Table 9 shows that (cid:98) C 6-feats can be further improved by in-domain fine-tuning 14 and that both (cid:98) C 6-feats and its fine-tuning still improve the fine-tuned baselines, in particular on ParaCrawl.",
"One potential issue with using multiple per-domain features ( q Z ( x, y ) 's in Eq.",
"6) is scores are not shared across domains and linear weighting may not capture feature dependency.",
"For example, we need two NMT features if there are two domains.",
"We replace the two NMT features, q N and q T , in (cid:98) C 4-feats with a single two-domain feature q Z = { N,T } (Eq. 8), but with the two corresponding NLM features unchanged (so the new experiment has 3 features).",
"Table 10 shows multi-domain feature contributes slightly better than linear combination of per-domain features (P19 vs. P8).",
"The per-domain features, however, have the advantage of efficient feature weighting.",
"In case of many features, learning to compress them seems to be an interesting future investigation.",
"Model N14 T15 PA D15 Avg P8: per-dom.",
"36.6 38.1 46.7 35.5 39.2 P19: multi-dom.",
"36.6 38.6 46.8 35.9 39.5 Table 10: Multi-domain/task feature (Eq. 8) seems to contribute slightly better than linear combination of multiple perdomain features (Eq. 6).",
"Existing curriculum learning research in NMT focuses on a single domain.",
"We present a multi-domain curriculum learning method.",
"We carefully introduce instance-level features and learn a training curriculum to gradually concentrate on multi-domain relevant and noise-reduced data batches.",
"End-to-end experiments and ablation studies on large datasets at different noise levels show that the multi-domain curriculum simultaneously reaches or outperforms the individual performance and brings solid gains over no-curriculum training, on in-domain and out-of-domain testsets.",
"The authors would like to thank David Grangier for Eq.",
"7 and derivation, the three anonymous reviewers for their insightful reviews, Yuan Cao for his technical suggestions, Jason Smith, Markus Freitag, Pidong Wang and Reid Pryzant for comments on an earlier draft, Quoc V. Le for suggestions in a related thread."
] | [
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"result",
"method",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"method",
"other",
"method",
"objective",
"other",
"other",
"other",
"other",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"other",
"other"
] |
[
"Automated simplification models aim to make input texts more readable.",
"Such methods have the potential to make complex information accessible to a wider audience, e.g., providing access to recent medical literature which might otherwise be impenetrable for a lay reader.",
"However, such models risk introducing errors into automatically simplified texts, for instance by inserting statements unsupported by the corresponding original text, or by omitting key information.",
"Providing more readable but inaccurate versions of texts may in many cases be worse than providing no such access at all.",
"The problem of factual accuracy (and the lack thereof) has received heightened attention in the context of summarization models, but the factuality of automatically simplified texts has not been investigated.",
"We introduce a taxonomy of errors that we use to analyze both references drawn from standard simplification datasets and state-of-the-art model outputs.",
"We find that errors often appear in both that are not captured by existing evaluation metrics, motivating a need for research into ensuring the factual accuracy of automated simplification models.",
"Simplification methods aim to make texts more readable without altering their meaning.",
"This may permit information accessibility to a wide range of audiences, e.g., non-native speakers (Yano et al., 1994), children (De Belder and Moens, 2010), as well as individuals with aphasia (Carroll et al., 1998) and dyslexia (Rello et al., 2013).",
"Simplification may also help laypeople digest technical information that would otherwise be impenetrable (Damay et al., 2006; Devaraj et al., 2021).",
"Recent work has made substantial progress by designing sequence-to-sequence neural models that translate complex sentences into simplified versions (Xu et al., 2016; Alva-Manchego et al., 2020).",
"(1) [Original] There was no difference in operating time or perioperative complication rates.",
"[Model simplified] However, there was not enough evidence to determine if there was an important difference in operative time or complication rates when compared to conventional surgery.",
"(2) [Original] All studies were associated with methodological limitations.",
"[Model simplified] All studies were of poor quality and had limitations in the way they were conducted.",
"(3) [Original] On June 24 1979 (the 750th anniversary of the village), Glinde received its town charter.",
"[Model simplified] On June 24 1979, the 750th anniversary of the village was renamed.",
"(4) [Original] Others agreed with the federal court; they started marrying people in the morning.",
"[Model simplified] Others agreed with the federal court; they started trying in morning.",
"(5) [Original] In 2014, Mary Barra became CEO of General Motors, making her the first female CEO of a major automobile company.",
"[Model simplified] Also, just one woman leads a major automobile company.",
"Omitted main subject.",
"An important but mostly overlooked aspect of automated simplificationespecially in the conditional text generation regimeis whether outputs are faithful to the inputs that they are simplifying.",
"Consider, for example, automatically simplifying medical texts (Devaraj et al., 2021): Presenting individuals with readable medical information that contains factual errors is probably worse than providing no such access at all.",
"Recent work has acknowledged factuality and faithfulness as key issues to be addressed in other conditional generation tasks like summarization (Kryscinski et al., 2020a; Maynez et al., 2020; Pagnoni et al., 2021; Goyal and Durrett, 2021), yet so far little research has thoroughly studied the kinds of errors that simplification datasets and system outputs exhibit.",
"This work seeks to close this research gap.",
"Table 1 shows examples of generated outputs from existing simplification systems, and these clearly illustrate that factuality is an issue.",
"We conduct multi-dimensional analyses based on the edit nature of simplification (Xu et al., 2015; Dong et al., 2019) and define a small typology of (poten-tial) factual errors in the context of simplification.",
"Inserting information can be useful to define jargon and provide explanatory content, but introducing irrelevant or erroneous content (hallucinating) is bad (e.g., examples 1-2 in Table 1).",
"Omitting information related to the main entity or event could lead to a change in how the text is understood (e.g., example 5 in Table 1).",
"Finally, making inappropriate substitutions can result in inconsistencies (e.g., examples 3-4 in Table 1).",
"Together these dimensions represent the precision, recall, and accuracy of information conveyed in simplified texts.",
"We collect human ratings of factuality for these aspects on two widely used simplification corpora: Wikilarge (Zhang and Lapata, 2017) and Newsela (Xu et al., 2015).",
"Automatically aligned sentences from these two datasets are typically used to train and evaluate supervised simplification systems.",
"We find that errors occur frequently in the validation and test sets of both datasets, although they are more common in Newsela (Section 6).",
"We then evaluate outputs from several modern simplification models (Zhang and Lapata, 2017; Dong et al., 2019; Martin et al., 2020; Maddela et al., 2021), as well as a fine-tuned T5 (Raffel et al., 2020) model.",
"Compared to RNN-based models, Transformer-based ones tend to have less severe deletion and substitution errors; however, the pre-trained T5 produced more hallucinations on the more abstractive Newsela dataset.",
"We find that existing quality metrics for simplification such as SARI (Xu et al., 2016) correlate poorly with factuality.",
"Although deletion errors correlate with existing semantic similarity measures, they fail to capture insertion and substitution.",
"As an initial step towards automatic factuality assessment in simplification, we train RoBERTa (Liu et al., 2019)-based classification models using our annotated data, and use synthetically generated data to supplement training.",
"We demonstrate that this is a challenging task.",
"Our code and data can be found at https://github.com/AshOlogn/Evaluating-Factuality-in-Text-Simplification.",
"Factuality (and the lack thereof) has been iden-tified as critical in recent work in unsupservised simplification (Laban et al., 2021) and medical simplification (Devaraj et al., 2021).",
"Guo et al. (2018) incorporated textual entailment into their simplification task via an auxillary loss.",
"They showed that this improved simplifications with respect to standard metrics and human assessments of output fluency, adequacy, and simplicity, but they did not explicitly evaluate the resultant factuality of outputs, which is our focus.",
"Given the paucity of prior work investigating factuality in the context of automated simplification, the most relevant thread of research to the present effort is work on measuring (and sometimes improving) the factuality in outputs from neural summarization systems.",
"Falke et al. (2019a) proposed using textual entailment predictions as a means to identify errors in generated summaries.",
"Elsewhere, Kryscinski et al. (2020a) used weak supervision heuristic transformations used to intentionally introduce factual errorsto train a model to identify inaccuracies in outputs.",
"Maynez et al. (2020) enlisted humans to evaluate hallucinations (content found in a summary but not in its corresponding input) in automatically generated outputs.",
"They report that for models trained on the XSUM dataset (Narayan et al., 2018), over 70% of summaries contain hallucinations.",
"This corroborates other recent work (Falke et al., 2019a; Wallace et al., 2021), which has also found that ROUGE is a weak gauge of factuality.",
"Wang et al. (2020a) proposed QAGS , which uses automated question-answering to measure the consistency between reference and generated summaries.",
"Elsewhere, Xu et al. (2020) proposed evaluating textual factuality independent of surface realization via Semantic Role Labeling (SRL).",
"Finally, Pagnoni et al. (2021) introduced the FRANK (meta-)benchmark for evaluating factuality metrics for summarization.",
"While FRANK is tailored towards summarization-specific error categories including discourse, our ontology broadly reflects the goal of simplification (retaining content with simpler language) from the perspective of information precision, recall, and accuracy.",
"Above we reviewed various recently proposed frameworks and methods for assessing the factual",
"accuracy of automatically-generated summaries .",
"We aim in this work to similarly codify content errors in simplification .",
"Below we describe broad categories of errors 1 we observed in simplification datasets and system outputs, and then use these to design annotation guidelines that formalize accuracy assessment (Sec-tion 5).",
"Our analysis revealed three broad categories, illustrated in Table 2: (1) Information Insertion: This occurs when information not mentioned in the complex sentence is inserted intoor hallucinated inits simplified counterpart.",
"The insertion may be as small as mentioning a proper noun not in the complex sentence, or as large as introducing a new main idea.",
"This category is similar to extrinsic hallucination in the summarization literature (Maynez et al., 2020; Goyal and Durrett, 2021).",
"(2) Information Deletion: This is when information in the complex sentence is omitted from the simplified sentence.",
"A minor example of this is the reverse of the insertion case above, where an entity is mentioned by name in the complex sentence but only by pronoun in the simplified sentence.",
"(3) Information Substitution: This is when information in the complex sentence is modified in the simplified sentence such that it changes the meaning.",
"This category is broad, encompassing both alterations to the simplified sentence that directly contradict information in the complex sentence, and those that do not.",
"Because errors can co-occur, we adopt a multidimensional labeling scheme that requires a different label to be provided for each category.",
"Each category label specifies the severity of the error: 0no/trivial change; 1nontrivial but preserves main idea; 2doesn't preserve main idea; -1 gibberish, specified in Figure",
"1. Table 1 shows 1 We adapt a graded labeling scheme based on content and meaning preservation.",
"For brevity, we use the word error as a generic term to refer to all the phenomena captured by our labeling scheme, even those that may be considered acceptable in some simplification systems.",
"level-2 examples from system outputs for insertion (examples 1-2), substitution (examples 3-4), and deletion (example 5).",
"Reference examples are discussed in Section 6.",
"Interpretation as Precision and Recall In simplification one attempts to rewrite a given complex sentence to be simpler while preserving most of the information that it contains.",
"The categories above can be interpreted as errors in information precision (the fraction of content that also appears in the complex sentence) and recall (the fraction of content in the complex sentence preserved during simplification).",
"With this interpretation, a false positive (affecting precision ) occurs when the simplified sentence contains information not present in the source, i.e., introduces a hallucination.",
"And a false negative (hindering recall ) is where the simplified sentence omits key information in the source.",
"We annotate data from the simplification datasets themselves (we will call these reference examples), as well as from model-generated text.",
"Thus we assess how the distribution of errors in the references compares to that of errors in system outputs and glean insights that might relate model architecture and training choices to the kinds of errors produced.",
"Lapata, 2017) datasets.",
"These are commonly used in the literature, and so results have been reported on these corpora for a diverse collection of models.",
"Wikilarge comprises 296K roughly-aligned sentences pairs from English Wikipedia and Simple English Wikipedia.",
"Newsela (Xu et al., 2015) consists of 96K sentence pairs extracted from a dataset of news stories rewritten at 4 reading levels by professionals.",
"To make analysis tractable in this work, we examine the simplest level for Newsela.",
"We annotated 400 pairs of (complex, simplified) sentences each from the validation and test sets for Newsela.",
"For Wikilarge, we annotated 400 pairs from the validation set and 359 from the test set (this constitutes the entire test set).",
"Simplification Models.",
"We annotated outputs generated by a collection of models on the same validation and test examples from Wikilarge and Newsela, respectively.",
"We selected a set of models intended to be representative of different architectures and training methods.",
"More specifically, for RNN-based models we considered Dress (Zhang and Lapata, 2017) and EditNTS (Dong et al., 2019).",
"Dress is an LSTM model trained using REINFORCE (Williams, 1992) to minimize a reward function consisting of meaning preservation, simplicity, and fluency terms.",
"EditNTS represents each sentence pair as a sequence of edit operations and directly learns these operations to perform simplification.",
"For Transformer-based architectures we evaluated two previously proposed models: Access (Martin et al., 2020) and ControlTS (Mad-dela et al., 2021).",
"Access trains a randomly-initialized Transformer to generate simplifications parametrized by control tokens influencing traits like lexical complexity and length compression.",
"ControlTS is a hybrid method that generates simplification candidates using grammatical rules and then applies a BERT-based (Devlin et al., 2019) paraphrasing model.",
"In addition, we also fine-tuned T5 (Raffel et al., 2020) for the simplification task, detailed in Appendix A. T5 is a Transformer-based model jointly pretrained both on unsupervised language modeling objectives and a host of supervised tasks including summarization and translation, all framed as text-to-text problems.",
"ples from datasets, and for model-generated simplifications.",
"To ensure that only annotators who understood our labeling scheme would be included, we released a qualification task consisting of 10 sentence pairs with perfect agreement among two of the authors, with detailed explanation of the labeling scheme, and required that annotators achieve at least 75% accuracy on this set.",
"After worker qualification, examples were released to only qualified workers, and each example was annotated by 3 workers.",
"The final label for each category (insertion, deletion, substitution) was set to the majority label if one existed.",
"If every annotator provided a different label for a given category, we removed this example for purposes of this category.",
"For example, if annotators provided insertion labels of { 1 , 1 , 2 } and deletion labels of { 2 , 1 , 0 } for a specific instance, then this would not be assigned a deletion label, but would receive a final insertion label of",
"1. Workers were compensated $10.00 per hour on the annotation task.",
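A minimal sketch of this aggregation rule; `labels` is one category's three annotations for a single example:

```python
from collections import Counter

def majority_label(labels):
    """Return the majority label among the 3 annotations, or None when
    all annotators disagree (the example is then dropped for this category)."""
    label, count = Counter(labels).most_common(1)[0]
    return label if count >= 2 else None

majority_label([1, 1, 2])   # -> 1
majority_label([2, 1, 0])   # -> None (no label assigned for this category)
```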
"Inter-annotator Agreement.",
"We quantified the degree of inter-annotator agreement using 3 metrics, each capturing a different dimension of labeling consistency for each category: First, we report the percentage of examples that had a well-defined majority label for each category.",
"Most annotators agreed on labels for the majority of examples (first column in Table 3), meaning that very few annotations had to be discarded for any category.",
"Because 0 was the most common label for all 3 categories, especially for the reference examples from the datasets, we also recorded the percentage of examples with majority non-zero annotations that also have a well-defined majority label.",
"For example, the labels { 0 , 1 , 2 } are majority non-zero but do not correspond to a well-defined majority label, while { 0 , 1 , 1 } satisfies both conditions.",
"Table 3 (column 2) indicates that even among examples where most annotators agree that there is an error, the majority agree on a specific label of 1, 2, or",
"-1. 7334 Category Dataset 0 1 2 -1 Insertion Wikilarge 91.1 6.3 0.3 2.3 Newsela 68.2 20.2 11.1 0.5 Deletion Wikilarge 76.2 18.0 3.5 2.3 Newsela 15.8 40.8 42.9 0.5 Substitution Wikilarge 90.1 6.7 0.9 2.3 Newsela 94.9 3.8 0.8 0.5 Table 4: Insertion, deletion, and substitution error distributions (%) in Wikilarge and Newsela test datasets.",
"We also measured Krippendorff's alpha (Krip-pendorff, 1970) with an ordinal level of measurement (assigning the -1 label a value of 3 to indicate maximum severity).",
"Dataset annotations for insertion enjoy moderate agreement ( = 0 . 425 ), those for deletion imply substantial agreement ( = 0 . 639 ), and those for substitution exhibit fair agreement ( = 0 . 200 ) (Artstein and Poesio, 2008).",
"The latter is possibly due to the clear majority label of 0 among substitution labels.",
"The % majority agreement scores indicate that although the annotation scheme involves a degree of subjectivity in distinguishing between minor and major errors, with proper screening crowdsource workers can label text pairs with our annotation scheme consistently enough so that a well-defined label can be assigned to the vast majority of examples.",
"Quantitative Analysis Table 4 reports distributions of acquired labels for information insertion, deletion, and substitution errors over the annotated reference examples.",
"Deletion errors are far more common than insertion errors in both datasets, though Wikilarge has fewer of both than Newsela.",
"This is unsurprising, as one of the motivations for introducing the Newsela dataset was that it contains shorter and less syntactically-complex simplifications.",
"Reassuringly, there were very few substitution errors found in either dataset.",
"Table 5 shows a clear positive correlation between length reduction and the severity of deletion errors present.",
"As expected, sentences are shortened more substantially in Newsela than in Wikilarge.",
"One the other hand, while Table 5 indicates that the examples with nonzero insertion labels collectively see a greater increase in length than those with no insertion errors, the mean length increase for level 2 examples is smaller than that for level",
"1. Simplifications in Newsela are more abstractive (Xu et al., 2015), i.e., simplified sentences copy fewer phrases verbatim from inputs.",
"This can be quantified via normalized edit distance (Lev-enshtein, 1965), which yielded a median of 0.46 for Newsela examples compared to the 0.38 for Wikilarge (after noise filtering described in Appendix B).",
"Table 5 indicates that on average the more erroneous the insertion or deletion, the greater the normalized edit distance between the original and simplified sentences.",
"These results suggest that while reducing sentence length and rewording can be beneficial (Klare, 1963), too much can negatively impact factuality.",
"Qualitative Analysis We also manually inspected insertion and deletion errors in both datasets, revealing clear patterns of deletion errors.",
"Label 1 deletions by definition involve omissions of nonsalient details that do not much affect the meaning of the sentence, e.g.: Original: Mayfield wrote and sang on a string of message-oriented records, including Keep on Pushing\" and People Get Ready.\"",
"Simplified: Mayfield wrote and sang on records that had a message. ( Newsela, deletion-1 ) Label 2 deletions have two common manifestations across the datasets. The first involves deletion of the main clause and subsequent promotion of a secondary clause: Original: Until you know how the sausage is made, you don't know how expensive it is to make that sausage, said Josh Updike, creative director of Rethink Leisure & Entertainment, which is working on several projects in China and elsewhere in Asia.",
"Simplified: The company is working on several projects in China and Asia.",
"( Newsela, deletion-2 )",
"Another common type of label 2 deletion involves removing a key (though often small) phrase that effectively reframes the entire sentence, e.g.: Original: You may add a passage of up to five words as a Front-Cover Text, and a passage of up to 25 words as a Back-Cover Text, to the end of the list of Cover Texts in the Modified Version .",
"Simplified: You may add a passage of up to five words as a Front-Cover Text and a passage of up to 25 words as a Back-Cover Text to the end of the list of Cover Texts.",
"( Wikilarge, deletion-2 )",
"By deleting in the Modified Version (emphasis ours), the simplified sentence erroneously states that one may add frontand back-cover passages to the list of cover texts to the unmodified version, which is implicitly forbidden in the original.",
"Because of the small number of insertion errors on Wikilarge, we were unable to identify any meaningful trends.",
"However, we observed patterns in 7335 % length change Normalized edit distance Level 0 Level 1 Level 2 Level 0 Level 1 Level 2 Insertion Wikilarge -5.0 (17.0) 22.4 (36.9) 7.1 (0.0) 0.20 (0.20) 0.55 (0.40) 0.58 (0.0) Newsela -39.4 (23.8) -19.0 (36.9) -38.3 (29.0) 0.41 (0.17) 0.51 (0.21) 0.54 (0.04) Deletion Wikilarge 2.8 (15.8) -22.3 (18.9) -35.9 (15.9) 0.19 (0.23) 0.35 (0.18) 0.39 (0.14) Newsela 1.5 (27.6) -34.8 (23.1) -49.6 (22.8) 0.34 (0.31) 0.46 (0.13) 0.53 (0.10) Table 5: % length change (left) and normalized edit distances (right) in simplified sentences in each insertion and deletion error category (mean standard deviation).",
"Newsela for both levels 1 and 2 of insertions, pertaining to quotative phrases (e.g., inserting experts said to the beginning of a sentence even though the original sentence did not mention an expert), and temporal phrases, e.g.: Original: They could not afford to pay their son's roughly $10,000 cost for classes at the University of Texas at Austin.",
"Simplified: When he grew up, they could not afford to pay $10,000 for him to go to the University of Texas at Austin.",
"( Newsela, insertion-1 )",
"We observed more contextually related errors for Newsela due to its style and its simplification process.",
"Newsela documents were edited by professionals who rewrote the entire original document, and so information inserted or deleted could move from or to adjacent sentences.",
"This preserves information for the whole document but causes problems at the sentence level.",
"Also, compared to Wikilarge, Newsela's news articles naturally involve more complex discourse (Van Dijk, 2013).",
"These factors lead to relatively underspeci-fied sentences (Li et al., 2016) in the simplified text when they are taken out-of-context during training and evaluation.",
"This observation calls for the inclusion of document context during simplification (Sun et al., 2020), or performing decontextual-ization (Choi et al., 2021) before simplifying.",
"Table 6 shows the distributions of insertion, deletion, and substitution errors annotated in system outputs.",
"2 It also shows the standard simplification evaluation metricSARI scores (Xu et al., 2016) for the annotated set.",
"For the three models that 2 DRESS only released their Wikilarge outputs; ControlTS had different data splits for Newsela.",
"reported both Wikilarge and Newsela outputs, the relative frequency of deletion errors between the two datasets appears to be preserved in model outputs, though for the RNN models errors are milder on Newsela and amplified on Wikilarge.",
"A clear relationship between dataset and system output distributions does not exist for insertion and substitution errors.",
"For Dress and EditNTS , this is due to the fact that the minor differences in insertion errors are dwarfed by the larger number of -1 (gibberish) labels assigned to Newsela outputs.",
"Interestingly, outputs from the T5 model were rarely labeled as -1 errors, so the difference in insertion errors is more apparent.",
"In the case of substitution, the Newsela outputs for Dress and T5 models show much higher rates of substitution errors than the Wikilarge outputs, despite the opposite being true for the datasets themselves.",
"EditNTS does not show the same pattern, but again, the high rate of -1 errors subsumes every other trend.",
"One possible reason for this phenomenon could be that the higher abstractiveness of Newsela encourages models to rewrite the input sentence to a greater extent and destroy the original meaning in the process.",
"In general the models produce substitution errors more frequently than are found in the dataset, meaning that they are introduced by the models themselves and not merely learned from the data.",
"Model comparisons There are a few differences in error distributions between the RNN-based and Transformer-based models, and between pretrained vs. non-pretrained Transformer models.",
"All three Transformer models have less severe deletion errors than the RNN models on Wikilarge, and in addition T5 has lower deletion error rates on Newsela.",
"Perhaps the most striking trend is that the Transformer models have far lower -1 gibberish errors than RNN-based models, even Access , which is not pre-trained on the language modeling task.",
"T5 which has been pre-trained on large amounts of dataproduced more insertion errors, while Access produced more substitution errors.",
"Quantitative Analysis We explore the relationships between the factuality annotations of system outputs and both length reduction and normalized edit distance.",
"We briefly describe our findings here and defer numerical details to Appendix C. For every model except Access , there is a clear positive correlation between the severity of deletion errors and the degree of length reduction between the complex input and generated simplification.",
"This is consistent with the trend observed for the datasets.",
"No consistent relationships between length change and levels of insertion and substitution errors are exhibited by the system outputs.",
"As in the case of length reduction, mean edit distances increase with the severity of deletion error with no consistent trends found for insertion and substitution labels.",
"Qualitative analysis We also manually inspect model outputs, detailed in Appendix D, and summarize main observations here.",
"As in the data, models also produce deletions ranging from single words and short phrases to clauses.",
"For the two RNN models, DRESS and EditNTS , level 1 errors primarily consist of shorter deletion errors, which include pronoun errors and modifiers.",
"Level 2 errors are almost always longer deletions, yet we did not observe the promotion of a subordinate clause to a main one as in the references, suggesting that models tend to follow syntactic rules more strictly.",
"For T5 , we additionally observe level 2 errors in which the model deletes a semantically critical word.",
"We observed more error variability in the other two transformer models, Access and ControlTS .",
"Models introduced varying numbers of insertion and substitution errors, but in inspection we did not observe any clear properties of these as a function of model type.",
"Relationship to SARI.",
"SARI is the most popular metric used to evaluate text simplification models (Xu et al., 2016).",
"For each model, we report Spearman's rank correlation coefficient (Spearman, 1904) between SARI and each error category.",
"As Table 7 reports, there is only a weak correlation between SARI and the prevalence of information errors, and both the direction and magnitude of the correlation are highly dependent on model and dataset.",
"This lack of correlation is unsurprising since SARI uses lexical overlap between the generated text with the reference text pair to judge simplification quality.",
"This parallels the case with ROUGE in summarization (Falke et al., 2019a; Maynez et al., 2020; Wallace et al., 2021).",
"Measures of Semantic Similarity.",
"Many existing text simplification systems attempt to address the problem of meaning preservation by using a semantic similarity score either directly in their loss/reward function or in a candidate ranking step (Zhang and Lapata, 2017; Kriz et al., 2019; Zhao et al., 2020; Maddela et al., 2021).",
"Additionally, some of these metrics have been included in recent factuality evaluation platforms in summarization (Pagnoni et al., 2021).",
"We explore the extent to which existing similarity methods detect 7337 Similarity Measure I D S Jaccard Similarity 0 .",
"information errors as outlined in our annotation scheme.",
"We consider: (1) Jaccard similarity; (2) cosine similarity between averaged GloVe (Pen-nington et al., 2014) or ELMo (Peters et al., 2018) embeddings of the original and simplified sentences; (3) cosine similarity between Sentence-BERT (Reimers and Gurevych, 2019) embeddings; and (4) BERTScore (Zhang et al., 2019).",
"As Table 8 indicates, the semantic similarity measures explored capture deletion errors quite well, while being a moderate indicator of insertion errors and a very weak one for substitution errors.",
"Since deletion and substitution errors are common in most of the models we evaluated, the results indicate that better methods are needed to detect unacceptable deletions and intrinsic hallucinations in simplification outputs.",
"Measures of Factuality.",
"As in text simplification, the most common evaluation metrics used in text summarization like ROUGE do not adequately account for the factuality of model generations with respect to the input texts (Kryscinski et al., 2019).",
"For this reason, recent works have proposed model-based metrics to automatically assess factuality (Falke et al., 2019b; Durmus et al., 2020; Wang et al., 2020b; Kryscinski et al., 2020b; Goyal and Durrett, 2020).",
"We consider the following systems: (1) FACT-CC, which is a BERT-based model trained on a synthetic dataset to classify text pairs as being factually inconsistent or not (Kryscin-ski et al., 2020b), and (2) DAE, which is another BERT-based model that classifies each dependency arc in the model output as entailing the source text or not (Goyal and Durrett, 2020).",
"More specifically, for FACT-CC we use the model's probability that each simplification example is inconsistent.",
"For DAE we use the average of the lowest k probabilities that a dependency arc in the target sentence does not entail the source for k = 1 , 3 , 5 .",
"semantic similarity like Jaccard similarity, though DAE scores correlate better with substitution errors than do FACT-CC and all evaluated measures of semantic similarity.",
"Since manual annotation is costly and time-consuming, as a first step towards large-scale evaluation, we present an initial attempt at automating factuality assessment by training a model on human annotations.",
"To supplement training, we explore methods of generating synthetic data to improve model performance.",
"We framed automatic factuality assessment as a classification task in which a separate classifier is trained for each category (Insertion, Deletion, and Substitution), for each of the levels 0, 1, and",
"2. We treat the annotations used in our previous analyses as the test set and have additional data annotated to function as the training set for this task.",
"We therefore collected a total of 1004 additional examples annotated across Wikilarge, Newsela, Access outputs on Wikilarge, and T5 outputs on Newsela and Wikilarge.",
"We fine-tuned RoBERTa (Liu et al., 2019) with a classification head.",
"Synthetic Data Generation As Table 10 indicates, the validation dataset is both small and highly imbalanced, with very few level 2 insertion and substitution errors.",
"To alleviate this issue, we experimented with a few methods of generating synthetic insertion and substitution errors on which to pretrain the model.",
"We accomplished this by modifying each of the complex sentences in the validation set.",
"To generate insertion errors, we replace names with pronouns and remove phrases from the source text to create target texts (informa-tion deletions) and then swap the source and target to produce information insertions.",
"To generate substitutions, we change numbers in the source text, negate statements, and used BERT masking to perturb information in the sentence.",
"We generated 10K examples in total; Appendix E.1 describes these 7338 Level 0 Level 1 Level 2 Category # F1 # F1 # F1 Insertion 823 87.9 104 36.6 40 30.4 Deletion 413 84.2 356 57.1 204 52.1 Substitution 810 82.7 110 19.8 33 9.5 Table 10: Annotated label counts in the training set, and F1 on the test set.",
"Training and Evaluation The model is evaluated using the F1-scores with respect to each class (0,1,2), and when selecting checkpoints during training, the average of the label 1 and 2 F1 scores is used.",
"The deletion model was trained directly on its training data, whereas the insertion and substitution models were initially pretrained on the synthetic datasets.",
"Training details are provided in Appendix E.3.",
"Results Table 10 shows the test F1 scores achieved by the three classifiers.",
"As expected, the deletion classifier achieved the best 1 and 2 F1 scores, likely due to the fact that the training dataset had plenty of level 1 and 2 deletion errors.",
"Although the insertion and substitution datasets are similarly skewed, the insertion classifier significantly outperforms the substitution one.",
"We found that using synthetic data is useful: without it, F1s for levels 1 and/or 2 are near 0 for insertion and substitution.",
"Even with data augmentation, however, detecting errors is a challenging task.",
"We have presented an evaluation of the factuality of automated simplification corpora and model outputs, using an error typology with varied degrees of severity.",
"We found that errors appear frequently in both references and generated outputs.",
"In the datasets, deletion errors are quite frequent, with Newsela containing more than Wikilarge.",
"The system outputs indicate that the models also tend to delete information, which is likely a behavior learned from the training data.",
"Model outputs contain more substitution errors than the datasets, so that behavior is probably a model bias rather than something picked up from the data.",
"Although we examined the two commonly used sentence-level datasets, factuality errors do extend to other domains and larger units of text.",
"Our initial analysis of factuality in medical text simplification (Devaraj et al., 2021) found errors of all three types, an indication that factual simplification is an open problem in such high-stake areas.",
"The details of our analysis are in Appendix F. We also found that factuality errors are not well captured by existing metrics used in simplification such as SARI (Xu et al., 2016).",
"While semantic similarity metrics correlate with deletion errors, they poorly correlate with insertion or substitution.",
"We further present an initial model for automatic factuality assessment, which we demonstrate is a challenging task.",
"This work was partially supported by NSF grants IIS-1850153, IIS-2107524, IIS-1901117, as well as the National Institutes of Health (NIH), grant R01-LM012086.",
"We also acknowledge the Texas Advanced Computing Center (TACC) at UT Austin for providing the computational resources for many of the results within this paper.",
"We are grateful to the anonymous reviewers for their comments and feedback."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"result",
"method",
"abstain",
"result",
"abstain",
"method",
"objective",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"result",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"result",
"abstain",
"objective",
"other",
"other",
"other"
] |
[
"We improve the informativeness of models for conditional text generation using techniques from computational pragmatics.",
"These techniques formulate language production as a game between speakers and listeners, in which a speaker should generate output text that a listener can use to correctly identify the original input that the text describes.",
"While such approaches are widely used in cognitive science and grounded language learning, they have received less attention for more standard language generation tasks.",
"We consider two pragmatic modeling methods for text generation: one where pragmatics is imposed by information preservation, and another where pragmatics is imposed by explicit modeling of distractors.",
"We find that these methods improve the performance of strong existing systems for abstractive summarization and generation from structured meaning representations.",
"Computational approaches to pragmatics cast language generation and interpretation as game-theoretic or Bayesian inference procedures (Gol-land et al., 2010; Frank and Goodman, 2012).",
"While such approaches are capable of modeling a variety of pragmatic phenomena, their main application in natural language processing has been to improve the informativeness of generated text in grounded language learning problems (Monroe et al., 2018).",
"In this paper, we show that pragmatic reasoning can be similarly used to improve performance in more traditional language generation tasks like generation from structured meaning representations (Figure",
"1) and summarization.",
"Our work builds on a line of learned Rational Speech Acts (RSA) models (Monroe and Potts, 2015; Andreas and Klein, 2016), in which generated strings are selected to optimize the behav-Input meaning representation ( i ): NAME [FITZBILLIES ], EATTYPE [ COFFEE SHOP ], FOOD [ENGLISH ], PRICERANGE [ CHEAP ], CUSTOMERRATING [5 OUT OF 5], AREA [RIVERSIDE ], FAMILYFRIENDLY [ YES ] Human-written A cheap coffee shop in riverside with a 5 out of 5 customer rating is Fitzbillies.",
"ior of an embedded listener model.",
"The canonical presentation of the RSA framework (Frank and Goodman, 2012) is grounded in reference resolution: models of speakers attempt to describe referents in the presence of distractors, and models of listeners attempt to resolve descriptors to referents.",
"Recent work has extended these models to more complex groundings, including images (Mao et al., 2015) and trajectories (Fried et al., 2018).",
"The techniques used in these settings are similar, and the primary intuition of the RSA framework is preserved: from the speaker's perspective, a good description is one that picks out, as discriminatively as possible, the content the speaker intends for the listener to identify.",
"Outside of grounding, cognitive modeling (Frank et al., 2009), and targeted analysis of linguistic phenomena (Orita et al., 2015), rational speech acts models have seen limited application in the natural language processing literature.",
"In this work we show that they can be extended to a distinct class of language generation problems that use as referents structured descriptions of lingustic content, or other natural language texts.",
"In accordance with the maxim of quantity (Grice, 1970) or the Q-principle (Horn, 1984), pragmatic approaches naturally correct underin-formativeness problems observed in state-of-the-art language generation systems ( S 0 in Figure 1).",
"We present experiments on two language generation tasks: generation from meaning representations (Novikova et al., 2017) and summarization.",
"For each task, we evaluate two models of pragmatics: the reconstructor-based model of Fried et al. (2018) and the distractor-based model of Cohn-Gordon et al. (2018).",
"Both models improve performance on both tasks, increasing ROUGE scores by 0.20.5 points on the CNN/Daily Mail abstractive summarization dataset and BLEU scores by 2 points on the End-to-End (E2E) generation dataset, obtaining new state-of-the-art results.",
"We formulate a conditional generation task as taking an input i from a space of possible inputs I (e.g., input sentences for abstractive summarization; meaning representations for structured generation) and producing an output o as a sequence of tokens ( o 1 , . . . , o T ) .",
"We build our pragmatic approaches on top of learned base speaker models S 0 , which produce a probability distribution S 0 ( o | i ) over output text for a given input.",
"We focus on two conditional generation tasks where the information in the input context should largely be preserved in the output text, and apply the pragmatic procedures outlined in Sec. 3 to each task.",
"For these S 0 models we use systems from past work that are strong, but may still be underinfor-mative relative to human reference outputs (e.g., Figure 1).",
"Meaning Representations Our first task is generation from structured meaning representations (MRs) containing attribute-value pairs (Novikova et al., 2017).",
"An example is shown in Figure 1, where systems must generate a description of the restaurant with the specified attributes.",
"We apply pragmatics to encourage output strings from which the input MR can be identified.",
"For our S 0 model, we use a publicly-released neural generation system (Puzikov and Gurevych, 2018) that achieves comparable performance to the best published results in Dusek et al. (2018).",
"Abstractive Summarization Our second task is multi-sentence document summarization.",
"There is a vast amount of past work on summarization (Nenkova and McKeown, 2011); recent neural models have used large datasets (e.g., Hermann et al. (2015)) to train models in both the extractive (Cheng and Lapata, 2016; Nallapati et al., 2017) and abstractive (Rush et al., 2015; See et al., 2017) settings.",
"Among these works, we build on the recent abstractive neural summarization system of Chen and Bansal (2018).",
"First, this system uses a sentence-level extractive model RNN-EXT to identify a sequence of salient sentences i (1) , . . . i ( P ) in each source document.",
"Second, the system uses an abstractive model ABS to rewrite each i ( p ) into an output o ( p ) , which are then concatenated to produce the final summary.",
"We rely on the fixed RNNEXT model to extract sentences as inputs in our pragmatic procedure, using ABS as our S 0 model and applying pragmatics to the i ( p ) o ( p ) abstractive step.",
"To produce informative outputs, we consider pragmatic methods that extend the base speaker models, S 0 , using listener models, L , which produce a distribution L ( i | o ) over possible inputs given an output.",
"Listener models are used to derive pragmatic speakers , S 1 ( o | i ) , which produce output that has a high probability of making a listener model L identify the correct input.",
"There are a large space of possible choices for designing L and deriving S 1 ; we follow two lines of past work which we categorize as reconstructor-based and distractor-based .",
"We tailor each of these pragmatic methods to both our two tasks by developing reconstructor models and methods of choosing distractors.",
"Pragmatic approaches in this category (Dusek and Jurccek, 2016; Fried et al., 2018) rely on a reconstructor listener model LR defined independently of the speaker.",
"This listener model produces a distribution LR ( i | o ) over all possible input contexts i I , given an output description o .",
"We use sequence-to-sequence or structured classification models for LR (described below), and train these models on the same data used to supervise the S 0 models.",
"The listener model and the base speaker model together define a pragmatic speaker , with output score given by: SR 1 ( o | i ) = LR ( i | o ) S 0 ( o | i ) 1 (1) where is a rationality parameter that controls how much the model optimizes for discriminative outputs (see Monroe et al. (2017) and Fried et al. (2018) for a discussion).",
"We select an output text sequence o for a given input i by choosing the highest scoring output under Eq.",
"1 from a set of candidates obtained by beam search in S 0 ( | i ) .",
"Meaning Representations We construct LR for the meaning representation generation task as a multi-task, multi-class classifier, defining a distribution over possible values for each attribute.",
"Each MR attribute has its own prediction layer and attention-based aggregation layer, which conditions on a basic encoding of o shared across all attributes.",
"See Appendix A.1 for architecture details.",
"We then define LR ( i | o ) as the joint probability of predicting all input MR attributes in i from o .",
"Summarization To construct LR for summarization, we train an ABS model (of the type we use for S 0 , Chen and Bansal (2018)) but in reverse, i.e., taking as input a sentence in the summary and producing a sentence in the source document.",
"We train LR on the same heuristically-extracted and aligned source document sentences used to train S 0 (Chen and Bansal, 2018).",
"Pragmatic approaches in this category (Frank and Goodman, 2012; Andreas and Klein, 2016; Vedan-tam et al., 2017; Cohn-Gordon et al., 2018) derive pragmatic behavior by producing outputs that distinguish the input i from an alternate distractor input (or inputs).",
"We construct a distractor (cid:101) for a given input i in a task-dependent way.",
"1 We follow the approach of Cohn-Gordon et al. (2018), outlined briefly here.",
"The base speakers we build on produce outputs incrementally, where the probability of o t , the word output at time t , is conditioned on the input and the previously generated words: S 0 ( o t | i, o <t ) .",
"Since the output is generated incrementally and there is no separate 1 In tasks such as contrastive captioning or referring expression generation, these distractors are given; for the conditional generation task, we will show that pragmatic behavior can be obtained by constructing or selecting a single distractor that contrasts with the input i .",
"listener model that needs to condition on entire output decisions, the distractor-based approach is able to make pragmatic decisions at each word rather than choosing between entire output candidates (as in the reconstructor approaches).",
"The listener LD and pragmatic speaker SD 1 are derived from the base speaker S 0 and a belief distribution p t ( ) maintained at each timestep t over the possible inputs ID : LD ( i | o <t ) S 0 ( o <t | i ) p t 1 ( i ) (2) SD 1 ( o t | i, o <t ) LD ( i | o <t ) S 0 ( o t | i, o <t ) (3) p t ( i ) S 0 ( o t | i, o <t ) LD ( i | o <t ) (4) where is again a rationality parameter, and the initial belief distribution p 0 ( ) is uniform, i.e., p 0 ( i ) = p 0 ( (cid:101) ) = 0 .",
"5 .",
"Eqs.",
"2 and 4 are normalized over the true input i and distractor (cid:101) ; Eq.",
"3 is normalized over the output vocabulary.",
"We construct an output text sequence for the pragmatic speaker SD 1 incrementally using beam search to approximately maximize Eq.",
"3.",
"Meaning Representations A distractor MR is automatically constructed for each input to be the most distinctive possible against the input.",
"We construct this distractor by masking each present input attribute and replacing the value of each non-present attribute with the value that is most frequent for that attribute in the training data.",
"For example, for the input MR in Figure 1, the distractor is NEAR [BURGERKING ].",
"Summarization For each extracted input sentence i ( p ) , we use the previous extracted sentence i ( p 1) from the same document as the distractor input (cid:101) (for the first sentence we do not use a distrac-tor).",
"This is intended to encourage outputs o ( p ) to contain distinctive information against other summaries produced within the same document.",
"For each of our two conditional generation tasks we evaluate on a standard benchmark dataset, following past work by using automatic evaluation against human-produced reference text.",
"We choose hyperparameters for our models (beam size, and parameters and ) to maximize task metrics on each dataset's development set; see Appendix A.2 for the settings used.",
"2 2 Our code is publicly available at https://github.",
"We evaluate on the E2E task of generation from meaning representations containing restaurant attributes (Novikova et al., 2017).",
"We report the task's five automatic metrics: BLEU (Papineni et al., 2002), NIST (Doddington, 2002), METEOR (Lavie and Agarwal, 2007), ROUGE-L (Lin, 2004) and CIDE r (Vedantam et al., 2015).",
"Table 1 compares the performance of our base S 0 and pragmatic models to the baseline T-Gen system (Dusek and Jurccek, 2016) and the best previous result from the 20 primary systems evaluated in the E2E challenge (Dusek et al., 2018).",
"The systems obtaining these results encompass a range of approaches: a template system (Puzikov and Gurevych, 2018), a neural model (Zhang et al., 2018), models trained with reinforcement learning (Gong, 2018), and systems using ensembling and reranking (Juraska et al., 2018).",
"To ensure that the benefit of the reconstructor-based pragmatic approach, which uses two models, is not due solely to a model combination effect, we also compare to an ensemble of two base models ( S 0 2 ).",
"This ensemble uses a weighted combination of scores of two independently-trained S 0 models, following Eq.",
"1 (with weights tuned on the development data).",
"Both of our pragmatic systems improve over the strong baseline S 0 system on all five metrics, with the largest improvements (2.1 BLEU , 0.2 NIST , 0.8 METEOR , 1.5 ROUGE-L , and 0.1 CIDE r) from the SR 1 model.",
"This SR 1 model outperforms the previous best results obtained by any system in the E2E challenge on BLEU , NIST , and CIDE r, with comparable performance on METEOR and ROUGE-L .",
"We evaluate on the CNN/Daily Mail summarization dataset (Hermann et al., 2015; Nallapati et al., 2016), using See et",
"al.'s (2017) non-anonymized preprocessing.",
"As in previous work (Chen and Bansal, 2018), we evaluate using ROUGE and METEOR .",
"Table 2 compares our pragmatic systems to the base S 0 model (with scores taken from Chen and Bansal (2018); we obtained comparable performance in our reproduction 3 ), an ensemble of two of these base models, and the best previous abstractive summarization result for each metric on this dataset (Celikyilmaz et al., 2018; Paulus et al., 2018; Chen and Bansal, 2018).",
"We also report two extractive baselines: Lead-3 , which uses the first three sentences of the document as the summary (See et al., 2017), and Inputs , the concatenation of the extracted sentences used as inputs to our models (i.e., i (1) , . . . , i ( P ) ).",
"The pragmatic methods obtain improvements of 0.20.5 in ROUGE scores and 0.21.8 METEOR over the base S 0 model, with the distractor-based approach SD 1 outperforming the reconstructor-based approach SR 1 .",
"SD 1 is strong across all metrics, obtaining results competitive to the best previous abstractive systems.",
"3 We use retrained versions of Chen and Bansal (2018)'s sentence extractor and abstractive S 0 models in all our experiments, as well as their n-gram reranking-based inference procedure, replacing scores from the base model S 0 with scores from SR 1 or SD 1 in the respective pragmatic procedures.",
"(a) Coverage ratios by attribute type for the base model S 0 and pragmatic models SR 1 and SD 1 .",
"The pragmatic models typically improve coverage ratios across attribute types when compared to the base model.",
"(b) Coverage ratios by attribute type (columns) for the base model S 0 , and for the pragmatic system SD 1 when constructing the distractor by masking the specified attribute (rows).",
"Cell colors are the degree the coverage ratio increases (green) or decreases (red) relative to S 0 .",
"The base speaker S 0 model is often underinfor-mative, e.g., for the E2E task failing to mention certain attributes of a MR, even though almost all the training examples incorporate all of them.",
"To better understand the performance improvements from the pragmatic models for E2E, we compute a coverage ratio as a proxy measure of how well content in the input is preserved in the generated outputs.",
"The coverage ratio for each attribute is the fraction of times there is an exact match between the text in the generated output and the attribute's value in the source MR (for instances where the attribute is specified).",
"4 Figure",
"2(a) shows coverage ratio by attribute category for all models.",
"The SR 1 model increases the coverage ratio when compared to S 0 across all attributes, showing that using the reconstruction model score to select outputs does lead to an increase in mentions for each attribute.",
"Coverage ratios increase for SD 1 as well in four out of six categories, but the increase is typically less than that produced by SR 1 .",
"While SD 1 optimizes less explicitly for attribute mentions than SR 1 , it still provides a potential method to control generated outputs by choosing alternate distractors.",
"Figure",
"2(b) shows coverage ratios for SD 1 when masking only a single attribute in the distractor.",
"The highest coverage ratio for each attribute is usually obtained when masking that attribute in the distractor MR (entries on the main diagonal, underlined), in particular for FAMILYFRIENDLY (FF), FOOD , PRICERANGE 4 Note that this measure roughly provides a lower bound on the model's actual informativeness for each attribute, since the measure does not assign credit for paraphrases.",
"(PR), and AREA .",
"However, masking a single attribute sometimes results in decreasing the coverage ratio, and we also observe substantial increases from masking other attributes: e.g., masking either FAMILYFRIENDLY or CUSTOMERRATING (CR) produces an equal increase in coverage ratio for the CUSTOMERRATING attribute.",
"This may reflect underlying correlations in the training data, as these two attributes have a small number of possible values (3 and 7, respectively).",
"Our results show that S 0 models from previous work, while strong, still imperfectly capture the behavior that people exhibit when generating text; and an explicit pragmatic modeling procedure can improve results.",
"Both pragmatic methods evaluated in this paper encourage prediction of outputs that can be used to identify their inputs, either by reconstructing inputs in their entirety or distinguishing true inputs from distractors, so it is perhaps unsurprising that both methods produce similar improvements in performance.",
"Future work might allow finer-grained modeling of the tradeoff between under and over -informativity within the sequence generation pipeline (e.g., with a learned communication cost model) or explore applications of pragmatics for content selection earlier in the generation pipeline.",
"Thanks to Reuben Cohn-Gordon for many helpful discussions and suggestions.",
"This work was supported by DARPA through the XAI program.",
"DF is supported by a Tencent AI Lab Fellowship."
] | [
"result",
"abstain",
"abstain",
"method",
"result",
"abstain",
"abstain",
"result",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"other",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"method",
"abstain",
"other",
"other",
"other"
] |
[
"It is a common belief that training deep transformers from scratch requires large datasets.",
"Consequently, for small datasets, people usually use shallow and simple additional layers on top of pre-trained models during fine-tuning.",
"This work shows that this does not always need to be the case: with proper initialization and optimization, the benefits of very deep transformers can carry over to challenging tasks with small datasets, including Text-to-SQL semantic parsing and logical reading comprehension.",
"In particular, we successfully train 48 layers of transformers, comprising 24 fine-tuned layers from pre-trained RoBERTa and 24 relation-aware layers trained from scratch.",
"With fewer training steps and no task-specific pre-training, we obtain the state-of-the-art performance on the challenging cross-domain Text-to-SQL parsing benchmark Spider 1 .",
"We achieve this by deriving a novel D ata-dependent T ransformer Fix ed-up date initialization scheme (DT-Fixup), inspired by the prior T-Fixup work (Huang et al., 2020).",
"Further error analysis shows that increasing depth can help improve generalization on small datasets for hard cases that require reasoning and structural understanding.",
"In recent years, large-scale pre-trained language models (Radford et al., 2019; Devlin et al., 2018; Liu et al., 2019b) trained with transformers (Vaswani et al., 2017) have become standard building blocks of modern NLP systems to help improve generalization when task-specific annotations are limited.",
"In practice, it has been found that deeper transformers generally yield better results with sufficient training data (Lan et al., 2019), Work done while the author was an intern in Borealis AI.",
"1 The code to reproduce our results can be found in: https://github.com/BorealisAI/DT-Fixup especially on tasks involving reasoning and structural understanding.",
"This suggests that additional transformer layers should be employed in conjunction with pre-trained models, instead of simple and shallow neural components, such as a classifier head, currently used by models of many NLP tasks.",
"However, the common belief in the literature is that training deep transformers from scratch requires large datasets, and few attempts have been made on small datasets, to the best of our knowledge.",
"One implication is that although extra transformer layers on top of pre-trained models should help with more challenging problems in principle, it does not work in practice due to limited training data.",
"We show that after resolving several optimization issues with the method proposed in this work, it is possible to train very deep transformers with improved generalization even on small datasets.",
"One advantage of pre-trained models is the reduced computational resources needed when fine-tuning on small datasets.",
"For instance, it allows practitioners to finetune on a single GPU and obtain strong performance on a downstream task.",
"However, the large size of pre-trained models limits the batch size that can be used in training new transformer layers on a small computational budget.",
"Despite their broad applications, training transformer models is known to be difficult (Popel and Bojar, 2018).",
"The standard transformer training approach leverages learning rate warm-up, layer normalization (Ba et al., 2016) and a large batch size, and models typically fail to learn when missing any one of these components.",
"The restricted batch size aggravates the training difficulties.",
"Even if a large batch size can be feasibly employed, poorer generalization results are often observed (Keskar et al., 2016), especially when the dataset size is only several times larger than the batch size.",
"Furthermore, many recent works noticed a performance gap in this training approach due to layer normalization (Xu et al., 2019; Nguyen and Salazar, 2019; Zhang et al., 2019a; Wang et al., 2019b; Liu et al., 2020; Huang et al., 2020).",
"Inspired by the recent T-Fixup by Huang et al. (2020), which eliminates the need for learning rate warm-up and layer normalization to train vanilla transformers, we derive a data-dependent initialization strategy by applying different analyses to address several key limitations of T-Fixup.",
"We call our method the D ata-dependent T ransformer Fix edup date initialization scheme, DT-Fixup .",
"In the mixed setup of additional yet-to-be-trained transformers on top of pre-trained models, DT-Fixup enables the training of significantly deeper transformers, and is generally applicable to different neural architectures.",
"Our derivation also extends beyond vanilla transformers to transformers with relational encodings (Shaw et al., 2018), allowing us to apply the results to one variant called relation-aware transformer (Wang et al., 2019a).",
"By applying DT-Fixup on different tasks, we show that the impression that deep transformers do not work on small datasets stems from the optimization procedure rather than the architecture.",
"With proper initialization and optimization, training extra transformer layers is shown to facilitate the learning of complex relations and structures in the data.",
"We verify the effectiveness of DT-Fixup on Spider (Yu et al., 2018), a complex and cross-domain Text-to-SQL semantic parsing benchmark, and ReColr (Yu et al., 2020b), a reading comprehension dataset requiring logical reasoning.",
"While Text-to-SQL semantic parsing is inherently different from reading comprehension, they share similar characteristics which require certain levels of reasoning and structural understanding ability.",
"Meanwhile, the sizes of both datasets are less than 10k training samples, which is tiny by deep learning standards and renders large-batch training undesirable due to poor generalization 2 .",
"On both datasets, DT-Fixup consistently outperforms the standard approach with better generalization and allows the training of significantly deeper transformer models.",
"For Spider, we successfully apply DT-Fixup to train a Text-to-SQL parser containing 48 transformer layers, with 24 relation-aware layers trained from scratch on top of 24 pre-trained layers from pre-trained RoBERTa 2 For a comparison, T-Fixup applies batch sizes of more than 1k on machine translation to stabilize the training, which would hurt the generalization significantly on our datasets whose sizes are less than 10k.",
"(Liu et al., 2019b).",
"Our parser achieves 70 .",
"9% exact match accuracy on the Spider test set, which is the state of the art at the time of writing.",
"At the same time, it requires less training steps and no task-specific pre-training as compared to the prior art (Yu et al., 2020a).",
"For ReClor, we rank the second on the public leaderboard by simply adding 4 transformer layers on top of RoBERTa.",
"Further error analysis shows that the performance improvements by increasing the depth mainly come from better generalization on the harder cases requiring reasoning and structural understanding.",
"Even the failed predictions from the deep models are more reasonable than from the shallow ones.",
"In this section, we present the necessary background by first introducing the relation-aware transformer layer, which outperforms the vanilla transformer layer with limited data by injecting additional inductive bias (Wang et al., 2019a).",
"Then, we introduce the T-Fixup technique (Huang et al., 2020) for optimizing deeper vanilla transformers and discuss why it does not directly apply in the mixed transformer optimization setup.",
"Consider a set of inputs X = [ xxx 1 , . . . ,xxx n ] where xxx i R d x .",
"A transformer , introduced by Vaswani et al. (2017), is a stack of blocks, with each block consisting of a multi-head self-attention layer , layer normalizations, a multi-layer perceptron and skip connections.",
"Each block (with one head in self-attention for notational simplicity) transforms each xxx i into yyy i R d x as follows: ij = softmax (cid:16) xxx i qqq ( xxx j kkk ) (cid:62) (cid:46)(cid:112) d z (cid:17) (1) zzz i = (cid:80) nj =1 ij xxx j vvv ; (2) y y y i = LayerNorm ( xxx i + zzz i www (cid:62) ) (3) yyy i = LayerNorm ( y y y i + MLP ( y y y i )) (4) where the softmax operation is applied across the index j , MLP is a two-layer perceptron, LayerNorm is a layer normalization (Ba et al., 2016) layer, and qqq,kkk,vvv R d x d z ,www R d x d z .",
"In order to bias the transformer toward some pre-existing relational features between the inputs, Shaw et al. (2018) described a way to represent relative position information in a self-attention layer by changing Equation 1-2 as follows: ij = softmax (cid:32) xxx i qqq ( xxx j kkk + rrr kij ) (cid:62) d z (cid:33) zzz i = (cid:80) nj =1 ij ( xxx j vvv + rrr vij ) (5) Here the rrr ij R d z terms encode the known relationship between two elements xxx i and xxx j in the input.",
"Wang et al. (2019a) adapted this framework to effectively encode the schema information using rrr ij 's for Text-to-SQL parsers, and called it relation-aware transformer (RAT).",
"Huang et al. (2020) found that the requirement for the warmup during the early stage training of the transformers comes from a combined effect of high variance in the Adam optimizer and back-propagation through layer normalization.",
"Bounding the gradient updates would reduce the variance and make training stable, which can be achieved by appropriately initializing the model weights.",
"They derived a weight initialization scheme called T-Fixup for the vanilla transformer that fully eliminates the need for layer normalization and learning rate warmup, and stabilizes the training to avoid harmful plateaus of poor generalization.",
"T-Fixup requires the inputs xxx to be Gaussian randomly initialized embeddings with variance d 12 where d is the embedding dimension.",
"Then, the input and parameters of the encoder, xxx , vvv , www in the vanilla self-attention blocks as well as the weight matrices in the MLP blocks defined in Eq.",
"1-4 are re-scaled by multiplying with a factor of 0 .",
"67 N 14 , where N are the number of transformer layers.",
"However, there are two restrictions of T-Fixup narrowing down the range of its application.",
"First, T-Fixup is only designed for vanilla transformer but not other variants like the relative position or relation-aware version described previously.",
"Second, they make the critical assumption that the inputs xxx can be freely initialized then scaled to the same magnitude as vvv , www and MLP weights.",
"This renders the method inapplicable for the mixed setup where the inputs to the yet-to-be-trained transformer layers depend on the outputs from the pretrained models.",
"The first issue can be addressed by re-deriving the scaling factor following the methodology of T-Fixup but taking into account the additional relational term.",
"However, to lift the second restriction requires changing the assumption and more dramatic modification to the analysis.",
"We now follow the analysis framework of T-Fixup (Huang et al., 2020), but derive the conditions to bound the gradient updates of the self-attention block in the presence of a pre-trained model.",
"Based on the derivation, we propose a data-dependent initialization strategy for the mixed setup of the new transformers on pre-trained encodings.",
"Our analysis applies to the general architecture type illustrated in Figure 1, where the input passes through a pre-transformer, a main transformer, and a post-transformer module before outputting.",
"The pre and post transformer modules can be any architectures that can be stably trained with Adam (Kingma and Ba, 2014), including MLP, LSTM, CNN, or a pre-trained deep transformer module which can be stably fine-tuned with a learning rate significantly smaller than the main learning rate used for the main transformer module.",
"For this work, we will just consider the case of the main transformer containing only the encoder for simplicity, while our decoder will be an LSTM which can be viewed as part of the post-transformer module.",
"Extending our analysis to include deep transformer decoder is straightforward following the framework of Huang et al. (2020).",
"We use f e to denote the pre-transformer module ( e for pre-trained encoder), and its parameters e ; similarly f o for post-transformer module ( o for output) with parameters o .",
"The main transformer module f G is a stack of L transformer blocks, each consisting of a self-attention block and a MLP block.",
"Let G l , l = 1 , . . . , 2 N denote individual self-attention or MLP layers in the blocks ( G l 's do not include the skip connections), with parameters l and let L = 2 N , f G 's parameters are denoted by G = L (cid:83) l =1 l .",
"Let the whole model with the output softmax layer(s) and all layer normalization blocks removed be denoted by f ( ; ) and the loss function by L , where are all the learnable parameters.",
"Following Huang et al. (2020), we aim to derive a condition under which, per each SGD update with learning rate , the model output changes by ( ) , i.e. (cid:107) f (cid:107) = ( ) where f = f ( ; L ) f ( ; ) .",
"By Taylor expansion, the SGD update is: f = f o o + f G G + f e e + O ( (cid:107) o (cid:107) 2 + (cid:107) G (cid:107) 2 + (cid:107) e (cid:107) 2 ) = ( f o o f o o (cid:62) L f o (cid:62) + f o f G f G G f G G (cid:62) f o f G (cid:62) L f o (cid:62) + f o f G f G f e f e e f e e (cid:62) f G f e (cid:62) f o f G (cid:62) L f o (cid:62) ) + O ( 2 ) (6) As assumed in Sec. 3.1, we can stably train f e and f o coupled with L , i.e, (cid:107) L f o (cid:107) = (cid:107) f o o (cid:107) = (cid:107) f e e (cid:107) = (cid:107) f o f G (cid:107) = (cid:107) f G f e (cid:107) = (1) , we only need to bound the magnitudes of f G G to bound the overall SGD update.",
"Since what we care is the magnitude of the update as it relates to the depth, we can assume all parameters to be scalars, i.e, qqq l ,kkk l ,vvv l ,www l ,rrr kl ,rrr vl reduce to scalars q l , k l , v l , w l , r kl , r vl R .",
"The next theorem states the condition under which, (cid:107) f G G (cid:107) is bounded by (1) , achieving the overall (cid:107) f (cid:107) = ( ) .",
"Theorem 3.1 Assuming (cid:107) xxx (cid:107) = ( ) for some (cid:29) 1 , then (cid:107) f G G (cid:107) = (1) if (cid:107) v l (cid:107) = (cid:107) w l (cid:107) = (cid:107) r vl (cid:107) = (cid:16) ((4 2 + 2 + 2) N ) 12 (cid:17) for all encoder layers l in relation-aware transformers; and (cid:107) v l (cid:107) = (cid:107) w l (cid:107) = (cid:16) (4 2 N ) 12 (cid:17) in the case of vanilla transformers.",
"The proof is in Appendix A. One important immediate observation is that our scaling as the depth N is to the power of 1 / 2 , whereas T-Fixup has a scaling with power of 1 / 4 .",
"While this theorem is all we need for deriving our DT-Fixup approach, it is not immediately intuitive.",
"So next we inspect what it takes to bound the change in a individual layer output (cid:107) G l (cid:107) to ( /L ) in each gradient update.",
"This will shine some light on the particular form of the expressions in Theorem 3.1: Theorem 3.2 Let xxxxxxxxx l = [ x l 1 , . . . , x ln ] be the input into l -th layer, and assume that (cid:107) L /G l (cid:107) = (1) , i.e. the gradient signal from the layers above is bounded, then G l = G l ( xxx l L xxx l ; l L l ) G l ( xxx l ; l ) satisfies (cid:107) G l (cid:107) = ( /L ) when for all i = 1 , . . . , n : 2 (cid:107) v l (cid:107) 2 (cid:107) x li (cid:107) 2 + 2 (cid:107) v l (cid:107)(cid:107) r vl (cid:107)(cid:107) x li (cid:107) + (cid:107) r vl (cid:107) 2 + (cid:107) w l (cid:107) 2 (1 + 2 (cid:107) x li (cid:107) 2 ) = (1 /N ) (7) for relation-aware transformers.",
"Alternatively, in the case of vannilla transformers: (cid:107) v l (cid:107) 2 (cid:107) x li (cid:107) 2 + (cid:107) w l (cid:107) 2 (cid:107) x li (cid:107) 2 = (1 /L ) (8) In this case, the proof is straightforward by taking partial derivatives of G l with respect to each parameter, and keep the terms with the lowest powers as they dominate the norm when the scale is smaller than one.",
"Appendix B gives the detailed proof.",
"The insight from this theorem is: if the input xxx l has the same norm as xxx , setting parameters v l , w l , r vl to have the same norm and solve the equations would yield the scale factors in Theorem 3.1.",
"Remark: In T-Fixup, the corresponding condition to Eq.",
"8 keeps the term (cid:107) v l (cid:107) 2 (cid:107) w l (cid:107) 2 which is dropped by ours.",
"It is due to the fact that T-Fixup assumes (cid:107) x i (cid:107) can be controlled to be the same scale as v l and w l , so the lowest power terms (which are dominating the norms here) are the quartic ( 4 th power) ones.",
"For us, (cid:107) xxx (cid:107) is treated separately by a constant to be estimated from data, so the lowest power terms are the quadratic ones in v l , w l , r vl in Eq.",
"7 and 8, and (cid:107) v l (cid:107) 2 (cid:107) w l (cid:107) 2 are dropped.",
"Another important distinction from T-Fixup is that we assume the estimated (cid:107) xxx (cid:107) to be much larger than the scale of v l and w l , unlike the case when they are also controlled to be the same scale.",
"As we will see next, these changes imply our proposed method employs more aggressive scaling for initialization as compared to T-Fixup, and the assumption that (cid:107) xxx (cid:107) has larger scale is satisfied naturally.",
"Unlike previous works (Zhang et al., 2019b; Huang et al., 2020), appropriate initialization is not enough to ensure Eq.",
"7 and 8 during the early stage of the training.",
"This is due to the fact that the input xxx often depends on the pre-trained model weights instead of being initialized by ourselves.",
"Empirically, we observe that the input norm (cid:107) xxx (cid:107) are relatively stable throughout the training but difficulty to control directly by re-scaling.",
"Based on this observation, we treat (cid:107) xxx (cid:107) as a constant and estimate it by a forward pass on all the training examples as = max j [ (cid:107) xxx j (cid:107) ] .",
"We then use this estimated in the factors of Theorem 3.1 to obtain the scaling needed for initialization.",
"Since parameters of all layers are initialized to the same scale, we drop index l for brevity in this section.",
"In practice, is on the order of 10 for pre-trained models, hence v , w and r vi are naturally two orders of magnitude smaller.",
"DT-Fixup is described as follows: Apply Xavier initialization (Glorot and Ben-gio, 2010) on all free parameters except loaded weights from the pre-training models; Remove the learning rate warm-up and all layer normalization in the transformer layers, except those in the pre-trained transformer; Forward-pass on all the training examples to get the max input norm = max j [ (cid:107) xxx j (cid:107) ] ; Inside each transformer layer, scale v, w, r v in the attention block and weight matrices in the MLP block by ( N (4 2 + 2 + 2)) 12 for relation-aware transformer layer; or scale v, w in the attention block and weight matrices in the MLP block by N 12 / (2 ) for vanilla transformer layer.",
"We first apply DT-Fixup on the task of cross-domain Text-to-SQL semantic parsing.",
"Given an unseen schema S for a database during training, our goal is to translate the natural question Q to the target SQLT .",
"The correct prediction depends on the interplay between the questions and the schema structures and the generalization over unseen schemas during inference.",
"As a result, reasoning and structural understanding are crucial to perform well on this task, especially for the more challenging cases.",
"We denote our baseline model as SQL-SP 3 and henceforth.",
"Implementation.",
"For modeling Text-to-SQL generation, we adopt the encoder-decoder framework which can be directly fit into the architecture shown in Fig. 1.",
"First, the pre-transformer module f e is a pre-trained language model which embeds the inputs Q and S into joint representations xxx i for each column, table s i S and question word q i Q respectively.",
"The joint representations are passed into a sequence of N relation-aware transformer layers.",
"The post-transformer module f o is a grammar-guided LSTM decoder, which uses the transformer output yyy i to predict the target SQL T .",
"We follow prior arts (Wang et al., 2019a; Guo et al., 2019; Yin and Neubig, 2018) to implement SQL-SP.",
"The implementation details and hyperparameter settings are described in Appendix C. Dataset.",
"We evaluate SQL-SP on Spider (Yu et al., 2018), a complex and cross-domain Text-to-SQL semantic parsing benchmark.",
"The dataset size is relatively small by deep learning standards, with only 10 , 181 questions and 5 , 693 queries covering 200 databases in 138 domains.",
"The second task where we apply DT-Fixup is multi-choice reading comprehension requiring logical reasoning.",
"Given a context, a question and four options, the task is to select the right or most suitable answer.",
"Rather than extracting relevant information from a long context, this task relies heavily on the logical reasoning ability of the models.",
"Implementation.",
"On top of the pre-trained encodings of the input context, question and options, a stack of N vanilla transformer layers are added before the final linear layer which gives the predictions.",
"The implementation details and hyper-paramter settings are described in Appendix D Dataset.",
"We evaluate on ReClor (Yu et al., 2020b), a newly curated reading comprehension dataset requiring logical reasoning.",
"The dataset contains logical reasoning questions taken from 3 SQL S emantic P arser.",
"standardized exams (such as GMAT and LSAT) that are designed for students who apply for admission to graduate schools.",
"Similar to Spider, this dataset is also small, with only 6 , 139 questions.",
"As the test set of Spider is only accessible through an evaluation server, most of our analyses are performed on the development set.",
"We use the exact match accuracy 4 on all examples following Yu et al. (2018), which omits evaluation of generated values in the SQL queries.",
"We present our results on the Spider leaderboard 5 in Table 1, where SQL-SP trained with DT-Fixup outperforms all the other approaches and 4 We use the evaluation script provided in this repo: https://github.com/taoyds/spider 5 https://yale-lily.github.io/spider achieves the new state of the art performance.",
"Notably, the top four submissions on the previous leaderboard are all occupied by models leveraging relation-aware transformers and task-specific pretraining.",
"Table 2 compares our proposed models with the publicly available works.",
"With enough training steps, our baseline model trained with the standard optimization strategy achieves the same level of performance as compared to RAT-SQL.",
"However, models trained with standard optimization strategy obtain much lower performance with the same epochs 6 of training as compared to models trained with DT-Fixup and require more training steps to achieve the best accuracy.",
"At the same time, by adding more relation-aware transformer layers, further gains can be obtained for models trained with DT-Fixup, which achieves the state-of-the-art performance without any task-specific pre-training on additional data sources.",
"As mentioned in Section 2.2, in the mixed setup, there is no way to apply T-Fixup as it was originally proposed.",
"The closest thing to compare is to drop its constraints on the inputs, but training then becomes highly unstable and fails to converge 4 times out of 5 runs.",
"These results demonstrate the necessity and effectiveness of DT-Fixup to improve and accelerate the transformer training for Text-to-SQL parsers.",
"Table 3 shows the accuracy of our best model as compared to other approaches 7 with different level of hardness defined by Yu et al. (2018).",
"We can see that a large portion of the improvement of our model comes from the medium level on both dev and test set.",
"Interestingly, while our model obtains similar performance for the extra hard level on the dev set, our model performs significantly better on the unseen test set.",
"As most of the extra 6 One epoch iterates over the whole training set once.",
"Wang et al. (2019a) trained with a batch size of 20 for 90 , 000 steps, which is around 200 epochs on the Spider training set.",
"Yu et al. (2020a) trained with a batch size of 24 for 40 , 000 steps, which is around 100 epochs on the Spider training set.",
"7 We choose the top two submissions which also report the breakdown of the accuracy on the test set.",
"hard cases involves implicit reasoning steps and complicated structures, it shows that our proposed models possess stronger reasoning and structural understanding ability, yielding better generalization over unseen domains and database schemas.",
"For ReClor, we choose the best model in Yu et al. (2020b) as the baseline which employs a linear classifier on top of RoBERTa.",
"From the results presented in Table 4, we can see that simply stacking additional vanilla transformer layers outperforms the baseline and adding DT-Fixup further improves the accuracy, which ranks the second on the public leaderboard at the time of this submission 8 .",
"The result further validates the benefit of adding extra transformer layers and the effectiveness of DT-Fixup.",
"For fair comparisons and better understanding, we conduct multiple sets of ablation with the same architecture and implementation to validate the advantages of DT-Fixup over the standard optimization strategy.",
"Note that, the batch sizes in our experiments are relatively small (16 for Spider and 24 for ReClor) due to the size of the pre-trained models, while batch sizes for masked language modelling (Liu et al., 2019b) and machine translation (Huang et al., 2020) are commonly larger than 1024 .",
"Deeper Models.",
"As we can see from Table 5, the standard optimization strategy fails completely to train deep transformers whose depths are larger than 8 on both Spider and ReClor, showing that it struggles to properly train the transformer model as the depth increases.",
"At the same time, DT-Fixup can successfully train deeper transformers up to 32 layers and consistently achieves better performance than models trained by the standard optimization strategy with the same depth on both Spider and ReClor.",
"With DT-Fixup, deep models generally 8 https://eval.ai/web/challenges/challenge-page/503/ achieve better performance than the shallow ones even there are only thousands of training examples.",
"It contradicts the common belief that increasing depth of the transformer model is helpful only when there are enough training data.",
"Faster Convergence.",
"Demonstrated by the validation curves on Spider plotted in Figure 2, models trained with DT-Fixup converges to the same level of performance much faster than models trained with the standard optimization strategy.",
"While standard optimization strategy struggles as the models become deeper, DT-Fixup can keep the model training smooth, showing that DT-Fixup can effectively accelerate the convergence of the transformer training, especially for the deep ones.",
"Batch Sizes When Dataset Size is Small.",
"As shown in Table 7, increasing batch size on Spider from 16 to 120, the average performance from five runs drops from 73.24 to 71.08 and the gap with the standard training approach becomes much narrower.",
"It empirically verifies that large-batch training has a negative impact on the generalization when the dataset size is small, confirming the need to stablize small batch training.",
"From the results on the Spider benchmark, we can see significant improvements by applying DT-Fixup and increasing the depth of the transformer model.",
"However, why and where they help Text-to-SQL semantic parsing are still unclear.",
"As an attempt to answer these questions, we investigate into the predicted results from three variants of our proposed model: Baseline , the best model ( N = 4 ) trained with the standard training approach; Shallow , a shallow model ( N = 4 ) trained with DT-Fixup; Deep , our best model ( N = 24 ) trained with DT-Fixup, which is much deeper.",
"To better understand the models' behavior, we manually examine all the failed cases predicted by these models and classify the errors into four categories: 1) Correct: equivalent in meaning but with different SQL syntax ( e.g. , ORDER BY X LIMIT 1 and SELECT MIN(X) ); 2) Column: the SQL structure is correct but there existed mispredicted columns; 3) Sketch: the SQL structure is predicted different from the ground truth, while the aligned column prediction are correct; 4) Both: there exist both sketch and column errors in the prediction.",
"Table 6 presents the overall statistics of our error analysis.",
"Due to logically equivalent N Standard DT-Fixup Spider 2 69 .",
"queries, there are a number of false negatives for all three models, confirming that the current Spider evaluation metric is not ideal.",
"At first glance, the improvements by applying DT-Fixup and increasing the depth seem to come from correcting Sketch and Both errors, while the three models make similar number of Column only errors.",
"It provides evidence that applying DT-Fixup and increasing the depth can help the transformer model handle hard examples which are mispredicted completely (errors in Both category) by the baseline model.",
"Typically, correct predictions on these hard examples require a certain level of reasoning and structural understanding ability.",
"Fine-grained Error Analysis.",
"In order to better understand the errors made, we look into the composition of error types by each model on mistaken examples common to all models, as well as on examples where at least one model is wrong.",
"In Fig. 3-4, Column means proportion with column errors ( i.e. , Column or Both ); Sketch means proportion with sketch errors ( i.e. , Sketch or Both ).",
"There are 190 examples mispredicted by all the three models and 387 examples which at least one of the three models mispredict.",
"Fig. 3-4 exclude false negatives due to equivalent logic queries, we can see the real improvements from the deep model are even more significant than what the exact match accuracy shows.",
"Furthermore, among the common mistakes to all three models, the deep model has a much smaller proportion in the sketch mistakes which usually involve more logic and structure understanding.",
"Some of column mistakes are due to missing domain knowledge or common sense, which is harder to improve without external data or knowledge.",
"This shows that even among the failed cases, deeper transformer model can make more reasonable predictions.",
"Many research efforts have been devoted to understanding the training and improving the optimization",
"optimization of the transformer models.",
"In particular, transformer models often fail to learn unless a gradual learning rate warm-up is applied at the beginning of training.",
"Chen et al. (2018); Nguyen and Salazar (2019); Wang et al. (2019b) noticed a performance gap due to layer normalization, and introduced various architecture changes as remedy.",
"Zhang et al. (2019b,a); Liu et al. (2020) proposed initialization schemes to stabilize training, allowing either to remove layer normalization or learning rate warmup.",
"Liu et al. (2019a) demonstrated the instability of the Adam optimizer during early stages of optimization.",
"Based on these results, Huang et al. (2020) proposed a weight initialization schema for the transformer that eliminates the need for layer normalization and warmup completely.",
"Despite the broad applications of the transformer model, it struggles to perform well for some NLP tasks with limited training data.",
"In this work, we propose a theoretically justified optimization strategy DT-Fixup to train deeper transformer model with improved generalization and faster convergence speed on small datasets, which is generally applicable to different neural architectures.",
"On two important tasks, Text-to-SQL semantic parsing and logical reading comprehension that require reasoning and structural understanding, applying DT-Fixup achieves SOTA or near-SOTA results by simplying using extra transformer layers on top of the pre-trained models.",
"Such observations suggest even boarder applicability of deeper transformers.",
"We thank all the anonymous reviewers and area chair for their valuable inputs."
] | [
"abstain",
"abstain",
"result",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"result",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"other",
"method",
"method",
"method",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"objective",
"abstain",
"abstain",
"other"
] |
[
"Atomic clauses are fundamental text units for understanding complex sentences.",
"Identifying the atomic sentences within complex sentences is important for applications such as summarization, argument mining, discourse analysis, discourse parsing, and question answering.",
"Previous work mainly relies on rule-based methods dependent on parsing.",
"We propose a new task to decompose each complex sentence into simple sentences derived from the tensed clauses in the source, and a novel problem formulation as a graph edit task.",
"Our neural model learns to A ccept, B reak, C opy or D rop elements of a graph that combines word adjacency and grammatical dependencies.",
"The full processing pipeline includes modules for graph construction, graph editing, and sentence generation from the output graph.",
"We introduce DeSSE, a new dataset designed to train and evaluate complex sentence decomposition, and MinWiki, a subset of MinWikiSplit.",
"ABCD achieves comparable performance as two parsing baselines on MinWiki.",
"On DeSSE, which has a more even balance of complex sentence types, our model achieves higher accuracy on the number of atomic sentences than an encoder-decoder baseline.",
"Results include a detailed error analysis.",
"Atomic clauses are fundamental text units for understanding complex sentences.",
"The ability to decompose complex sentences facilitates research that aims to identify, rank or relate distinct predications , such as content selection in summarization (Fang et al., 2016; Peyrard and Eckle-Kohler, 2017), labeling argumentative discourse units in argument mining (Jo et al., 2019) or elementary discourse units in discourse analysis (Mann and Thompson, 1986; Burstein et al., 1998; Demir et al., 2010), or extracting atomic propositions for question answering (Pyatkin et al., 2020).",
"In this work, Orig Sokuhi was born in Fujian and was ordained at 17.",
"SS2 Sokuhi was ordained at 17.",
"Figure 1: Example of a complex sentence (Orig) rewritten as two simple sentences (SS1, SS2).",
"Underlined words in the source are preserved in the same order in the two outputs, the conjunction and (red font) is dropped, and the subject Sokuhi (blue font) is copied to the second simple sentence.",
"we propose a new task to decompose complex sentences into a covering set of simple sentences, with one simple output sentence per tensed clause in the source sentence.",
"We focus on tensed clauses rather than other constituents because they are syntactically and semantically more prominent, thus more essential in downstream tasks like argument mining, summarization, and question answering.",
"The complex sentence decomposition task we address has some overlap with related NLP algorithms, but each falls short in one or more respects.",
"Elementary discourse unit (EDU) segmentation segments source sentences into a sequence of non-overlapping spans (Carlson et al., 2003; Wang et al., 2018).",
"The output EDUs, however, are not always complete clauses.",
"Text simplification rewrites complex sentences using simpler vocabulary and syntax (Zhang and Lapata, 2017).",
"The output, however, does not preserve every tensed clause in the original sentence.",
"The split-and-rephrase (SPRP) task aims to rewrite complex sentences into sets of shorter sentences, where an output sentence can be derived from non-clausal constituents in the source (Narayan et al., 2017).",
"In contrast to the preceding methods, we convert each tensed clause in a source sentence, including each conjunct in a conjoined VP, into an independent simple sentence.",
"Unlike EDU segmentation, a belief verb and its that -complement do not lead to two output units.",
"Unlike text simplification, no propositions in the source are omitted from the output.",
"Unlike SPRP, a phrase that lacks a tensed verb in the source cannot lead to a distinct sentence in the output.",
"Figure 1 shows an example complex sentence (Orig) with conjoined verb phrases and its rewrite into two simple sentences (SSs).",
"Observe that besides producing two sentences from one, thus breaking the adjacency between words, words inside the verb phrases (underlined in the figure) remain in the same linear order in the output; the single subject Sokuhi in the source is copied to the more distant verb phrase.",
"Finally, the connective and is dropped.",
"We find that most rewrites of complex sentences into simple sentences that preserve the one-to-one mapping of source tensed predication with target simple sentence involve similar operations.",
"Building on these observations, we propose a neural model that learns to A ccept, B reak, C opy or D rop elements of a special-purpose sentence graph that represents word adjacency and grammatical dependencies, so the model can learn based on both kinds of graph proximity.",
"We also introduce DeSSE ( De composed S entences from S tudents E ssays ), a new annotated dataset to support our task.",
"The rest of the paper presents two evaluation datasets, our full pipeline, and our ABCD model.",
"Experimental results show that ABCD achieves comparable or better performance than baselines.",
"1 2 Related Work Related work falls largely into parsing-based methods, neural models that rewrite, and neural segmenters.",
"Gao et al. (2019) propose a decomposition parser (DCP) that extracts VP constituents and clauses from complex sentences as part of a summarization evaluation tool.",
"Niklaus et al. (2019a) present a system (DisSim) based on parsing to extract simple sentences from complex ones.",
"Jo et al. (2020) propose seven rules to extract complete propositions from parses of complex questions and imperatives for argumentation mining.",
"Though performance of these methods depends on parser quality, they often achieve very good performance.",
"We include two whose code is available (DCP, DisSim) among our baselines.",
"SPRP models are based on encoder-decoder architectures, and the output is highly depending on the training corpus.",
"Aharoni and Goldberg (2018) present a Copy-augmented network (Copy 512 ) based on (Gu et al., 2016) that encour-1 ABCD is available at https://github.com/ serenayj/ABCD-ACL2021 .",
"ages the model to copy most words from the original sentence to the output.",
"As it achieves improvement over an earlier encoder-decoder SPRP model (Narayan et al., 2017), we include Copy 512 among our baselines.",
"Finally, recent neural EDU segmenters (Wang et al., 2018; Li et al., 2018) achieve state-of-the-art performance on a discourse relation corpus, RST-DT (Carlson et al., 2003).",
"As they do not output complete sentences, we do not include any among our baselines.",
"Our ABCD model leverages the detailed information captured by parsing methods, and the powerful representation learning of neural models.",
"As part of a larger pipeline that converts input sentences to graphs, ABCD learns to predict graph edits for a post processor to execute.",
"Here we present DeSSE, a corpus we collected for our task, and MinWiki, a modification of an existing SPRP corpus (MinWikiSplit (Niklaus et al., 2019b)) to support our aims.",
"We also give a brief description of differences in their distributions.",
"Neural models are heavily biased by the distributions in their training data (Niven and Kao, 2019), and we show that DeSSE has a more even balance of linguistic phenomena.",
"DeSSE is collected in an undergraduate social science class, where students watched video clips about race relations, and wrote essays in a blog environment to share their opinions with the class.",
"It was created to support analysis of student writing, so that different kinds of feedback mechanisms can be developed regarding sentence organization.",
"Students have difficulty with revision to address lack of clarity in their writing (Kuhn et al., 2016), such as non-specific uses of connectives, run on sentences, repetitive statements and the like.",
"These make DeSSE different from corpus with expert written text, such as Wikipedia and newspaper.",
"The annotation process is unique in that it involves identifying where to split a source complex sentence into distinct clauses, and how to rephrase each resulting segment as a semantically complete simple sentence, omitting any discourse connectives.",
"It differs from corpora that identify discourse units within sentences, such as RST-DT (Carlson et al., 2003) and PTDB (Prasad et al., 2008), because Orig : (I believe that talking about race more in a civil way can only improve our society ), || but I can see why other people may have a different opinion.",
"Rephrase 1 : I believe that talking about race more in a civil way can only improve our society.",
"Rephrase 2 : I can see why other people may have a different opinion.",
"clauses are explicitly rewritten as simple sentences.",
"It differs from split-and-rephrase corpora such as MinWikiSplit, because of the focus in DeSSE on rephrased simple sentences that have a one-to-one correspondence to tensed clauses in the original complex sentence.",
"DeSSE is also used for connective prediction tasks, as in (Gao et al., 2021).",
"2 We perform our task on Amazon Mechanical Turk (AMT).",
"In a series of pilot tasks on AMT, we iteratively designed annotation instructions and an annotation interface, while monitoring quality.",
"Figure 2 illustrates two steps in the annotation: identification of n split points between tensed clauses, and rephrasing the source into n +1 simple clauses, where any connectives are dropped.",
"The instructions ask annotators to focus on tensed clauses occurring in conjoined or subordinate structures, relative clauses, parentheticals, and conjoined verb phrases, and to exclude gerundive phrases, infinti-val clauses, and clausal arguments of verbs.",
"The final version of the instructions describes the two annotation steps, provides a list of connectives, and illustrates a positive and negative example.",
"3 The training and tests sets contains 12K and 790 examples, respectively.",
"MinWikiSplit has 203K complex sentences and their rephrased versions (Niklaus et al., 2019b).",
"It is built from WikiSplit , a text simplification dataset derived from Wikipedia revision histories (Narayan et al., 2017), modified to focus on minimal propositions that cannot be further decomposed.",
"It was designed for simplifying complex sentences into multiple simple sentences, where the simple sentences can correspond to a very wide range of structures from the source sentences, such as prepositional or adjectival phrases.",
"To best utilize this corpus for our purposes, we selected a subsample where the number of tensed verb phrases in the source sentences matches the number of rephrased propositions.",
"The resulting MinWiki corpus has an 18K/1,075 train/test split.",
"Table 1 presents prevalence of syntactic patterns characterizing complex sentences in the two datasets.",
"Four are positive examples of one-to-one correspondence of tensed clauses in the source with simple sentences in the rephrasings: discourse connectives (Disc. Conn.), VP-conjunction, clauses introduced by whsubordinating conjunctions (e.g., when, whether, how ) combined with non-restrictive relative clauses ( wh& Rel. Cl.), and restrictive relative clauses (Restric. Rel. Cl.).",
"The sixth column (negative examples) covers clausal arguments, which are often thatcomplements of verbs that express belief, speaking, attitude, emotion, and so on.",
"MinWiki has few of the latter, presumably due to the genre difference between opinion essays (DeSSE) and Wikipedia (MinWiki).",
"We formulate the problem of converting complex sentences into covering sets of simple sentences as a graph segmentation problem.",
"Each sentence is represented as a Word Relation Graph (WRG), a directed graph constructed from each input sentence with its dependency parse.",
"Every word token and its positional index becomes a WRG vertex.",
"For every pair of words, one or more edges are added as follows: a neighbor edge that indicates that the pair of words are linearly adjacent; a dependency edge that shows every pair of words connected by a dependency relation, adding critical grammatical relations, such as subject .",
"Figure 3 shows an example sentence and a sim-plified version of its WRG (edge directions are not shown, for readability).",
"Vertices are labeled with word-index pairs in red font, and edges are labeled Figure 3: Example complex sentence (Orig), ground truth output (SS 1 and SS 2), and WRG (best seen in color; edge directions and punctuation omitted for readability).",
"as ngbh for neighboring words, or with the tags corresponding to their dependency relations, such as nsubj between Sokuhi-1 and ordained-13 .",
"An edge can have both types of relation, e.g. neighbor and dependency for was-12 and ordained-13 .",
"The graph is stored as an Edge Triple Set, a set of triples with (source node, target node, label) representing each pair of words connected by an edge, as shown in Figure 3, bottom left.",
"Given a sentence and its WRG, our goal is to decompose the graph into n connected components (CC) where each CC is later rewritten as an output simple sentence.",
"To perform the graph decomposition, decisions are made on every edge triple.We define four edit types: A ccept : retain the triple in the output B reak : break the edge between a pair of words C opy : copy a target word into a CC D rop : delete the word from the output CCs A training example consists of an input sentence, and one or more output sentences.",
"If the input sentence is complex, the ground truth output consists of multiple simple sentences.",
"The next section presents the ABCD pipeline.",
"Two initial modules construct the WRG graphs for each input sentence, and the ABCD labels for the Edge Triple Sets based on the ground truth output.",
"A neural model learns to assign ABCD labels to input WRG graphs, and a final graph segmenter generates simple sentences from the labeled WRG graphs.",
"Details about the neural model are in the subsequent section.",
"The full processing pipeline consists of five ma-jor components, as shown in Figure 4.",
"Three preprocessing modules handle the WRG graph construction, conversion of graph triples to vectors, and creation of distant supervision labels for the graph.",
"The fourth component is the ABCD neural model that learns to label a WRG graph, which is described in section 6.",
"The last part of the pipeline is a post-processing module to segment WRG graphs based on the labels learned by the ABCD model, and to map each graph segment to a simple sentence.",
"Graph Constructor The first module in the system is a Graph Constructor that converts an input sentence and its dependency parse into a collection of vertices and edges.",
"It is used during training and inference.",
"It first extracts words and their indices from the input sentences of the training examples for the vertices of each WRG graph.",
"A directed edge and ngbh label is assigned to all pairs of adjacent words.",
"A directed edge and label is also assigned to every governing and dependent word pair (cf. Figure 3).",
"Edge Triples DB The Edge Triples DB, which is used during training and inference, creates vector representations for the input Edge Triples Sets for each training instance, using latent representations learned by an encoder component of the ABCD model.",
"Using the word indices, a function maps the source and target words from every triple into its hidden representation learned by the encoder, and the triple's edge label is converted into a one-hot encoding with dimension d .",
"For an edge triples set with m triples, the source and target word hidden states are each stacked into an m h matrix, and the one-hot vectors for edge labels are stacked into an m d matrix.",
"These three source, target, edge matrices that represent an Edge Triple Set are then fed into an attention layer, as discussed in section 6.",
"Distant Supervision Label Creator The expected supervision for our task is the choice of edit type for each triple, where the ground truth consists of pairs of an input sentence, and one or more output simple sentences.",
"We use distant supervision where we automatically create edit labels for each triple based on the alignment between the original input sentence and the set of output simple sentences.",
"In the Distant Supervision Label Creator Figure 4: ABCD system overview during training (top) and inference (bottom).",
"module, for every triple, we check the following conditions: if the edge is a neighbor relation, and both source and target words are in the same output simple sentence, we mark this pair with edit type A ; if the source and target words of a triple occur in different output simple sentences, the corresponding edit is B ; if the source and target are in the same output simple sentence, and the only edge is a dependency label (meaning that they are not adjacent in the original sentence), we mark this pair as C ; finally, if a word is not in any output simple sentence, we mark the corresponding type as D .",
"Graph Segmenter This module segments the graph into connected components using predicted edits, and generates the output sentences, as part of the inference pipeline.",
"There are four stages consisting of: graph segmentation, traversal, subject copying, and output rearranging.",
"In the graph segmentation stage, the module first performs actions on every triple per the predicted edit: if the edit is A , no action is taken; if the edit is B , the edge between the pair of words is dropped; given C , the edge is dropped, and the edge triple is stored in a temporary list for later retrieval; if the edit is D , the target word is dropped from the output graphs.",
"After carrying out the predicted edits, we run a graph traversal algorithm on modified edge triples to find all CCs, using a modified version of the Depth-First-Search algorithm with linear time proposed in (Tarjan, 1972; Nuutila and Soisalon-Soininen, 1994).",
"For each CC, the vertices are kept and the edges are dropped.",
"Then we enter the subject copying stage: for each source, target pair in the temporary list mentioned earlier, we copy the word to the CC containing the target.",
"Finally for every CC, we arrange all words in their linear order by indices, and output a simple sentence.",
"The ABCD model consists of three neural modules depicted in Figure 5: a sentence encoder to learn a hidden representation for the input sentence, a self-attention layer to generate attention scores on every edge label, and a classifier that generates a predicted distribution over the four edit types, based on the word's hidden representation, the edge label representation, and the attention scores.",
"The sentence representation module has two components: a word embedding look up layer based on GloVe (Pennington et al., 2014), and a bidirectional LSTM (Hochreiter and Schmidhu-ber, 1997) (see Figure 5).",
"Given an input sentence length l , and the hidden state dimension M , the output from this module is l M .",
"For a word with index i in the input sentence, we generate its hidden representation h i such that it combines the hidden states from forward and backward LSTMs, with h i RM .",
"A positional encoding function is added to the word embeddings.",
"We found this particularly helpful in our task, presumably because the same word type at different positions might have different relations with other words, captured by distinct learned representations.",
"Our experiments compare biLSTM training from scratch to use of BERT (Devlin et al., 2019), to see if pre-trained representations are helpful.",
"To utilize the learned word representations in the context of the relational information captured in the WRG graph, we send the sentence representation to the Edge Triple DB and extract representations h i and h j for the source and target words, based on indices i and j .",
"A one-hot vector with dimensionality N encodes relations between pairs of source and target words; each edge triple is thus converted into three vectors: h src , h tgt , d rel .",
"We take position-wise summation over all one hot vectors if there is more than one label on an edge.",
"Attention has been useful for many NLP tasks.",
"In our model, we adapt the multi-head self attention mechanism (Vaswani et al., 2017) to learn importance weights on types of edit operations, as shown in the middle green block in Figure 5.",
"Given m edge triples, we first stack all source vectors h src into a matrix H src , and operate the same way on h tgt and d rel to obtain H tgt and D rel , such that H src , H tgt R m M , and D rel R m N .",
"These three matrices are the input to self-attention.",
"For every head of the multi-head attention, we first obtain a feature representation with the three parameters V, K, Q mapping to sources, targets and relations, respectively, then compute a co-efficient e with a learnable parameter W e as follows: e = LeakyRelu ( W e ( V H src ; KH tgt ; QD rel )) (1) where e R m 1 .",
"Finally, we concatenate all head attentions together, and pass them through a linear layer to learn the relations between heads, and generate the final attention scores:",
"= W ( concat (( head 1 , head 2 , . . . )) (3) R m 1 .",
"The attention scores are sent to the next module to help the classifier make its decision.",
"The last component of our neural model is a classifier, as shown at the right of Figure 5.",
"To aggregate the feature representation from the previous layer, we first concatenate the three matrices H src , H tgt , D rel into one representation, and multiply the attention scores as follows: H (cid:48) = ( H src ; H tgt , D rel ) (4) An MLP layer then takes H (cid:48) as its input and generates the output distribution over the four edit types for each edge triple: Out M = Softmax ( MLP ( H (cid:48) )) (5) where Out M R m 4 .",
"As an alternative to MLP, we also investigated a bilinear classifier, which has proved efficient in capturing fine-grained differences in features for classification task (Dozat and Manning, 2017).",
"The bilinear layer first takes H src and H tgt as input and generates transposed bilinear features : output bi = H (cid:124) src WAH tgt + b (6) where WA , b are learnable parameters.",
"Then we sum the bilinear features with the MLP decisions and apply softmax on the result to get the final distribution over the four edit labels: Out B = Softmax ( output bi + MLP ( H (cid:48) )) (7) where Out B R m 4 .",
"The class balance for our task is highly skewed: the frequency of class A is much higher than the other three classes, as shown in the top portion of Table 2.",
"To mitigate the impact on training, we adopt the inverse class weighting for cross entropy loss introduced in (Huang et al., 2016).",
"With this weighting, loss is weighted heavily towards rare classes, which forces the model to learn more about the rare cases.",
"Table 2 shows the weights for four edit labels on both datasets.",
"On MinWiki, A occurs the most and has the lowest weights as 0.0167, a sharp contrast to B,C,D .",
"On DeSSE, both A and D occur frequently while B and C have lower frequency with higher weights, at 0.6266 and 0.2658.",
"DeSSE has fewer B , and more C and D than MinWiki.",
"From this perspective, MinWiki is simpler than DeSSE because there are fewer edits on rewriting the sentences.",
"This might be due to the different distributions of linguistic phenomena in the two datasets (see Table 1).",
"In the next section, we will show that ABCD shows stronger improvements on complicated edits.",
"Training details are in the appendix.",
"We carry out two intrinsic evaluations of ABCD performance on MinWiki and DeSSE.",
"Section 7.1 presents an intrinsic evaluation of ABCD variants on edit prediction, with error analysis and ablation studies.",
"Section 7.2 compares the best ABCD model with several baselines on the quality of output propositions.",
"We discuss evaluation metrics in section 7.3.",
"Results show that ABCD models show consistently good performance compared to other baseline models on both datasets.",
"We report F1 scores on all four edit types from ABCD and its model variants.",
"We compare two classifiers as mentioned in previous sections and investigate the difference between using biLSTM and BERT with fine-tuning, to see if pre-trained knowledge is useful for the task.",
"Table 3 presents results on MinWiki and DeSSE from the four model settings.",
"All models perform better on MinWiki than DeSSE, and biL-STM+bilinear shows the best performance on both, with F1 scores of 0.82 and 0.67 on MinWiki and DeSSE respectively.",
"Presumably this reflects the greater linguistic diversity of DeSSE shown in Table 1.",
"The lower performance from BERT variants indicates the pre-trained knowledge is not helpful.",
"Among the four edit types, all models have high F1 scores on A across datasets, high F1 on C for MinWiki, but not on DeSSE.",
"B and D show lower scores, and all four models report lower F1 on B than D on both datasets.",
"analysis on pairs of gold labels and predictions for B and D , using predictions from biLSTM+mlp.",
"The model does poorly on B in both datasets, compared with predictions of 36.1% for A on MinWiki, on on DeSSE, 27.42% for A and 15.18% for C .",
"The model has high agreement on D from MinWiki, but predicts 42.63% A on DeSSE.",
"We suspect that improved feature representation could raise performance; that is, pairs of words and their relations might be a weak supervision signal for B and D .",
"We conducted an ablation study on the inverse class weights mentioned in section 6 on MinWiki.",
"After removing the weights, the model fails to learn other classes and only predicts A due to the highly imbalanced label distributions, which demonstrates the benefit of weighting the loss function.",
"We also ablate positional encoding which leads to F1 scores of 0.90 for A , 0.51 for C , and 0 for both B and D , indicating the importance of positional encoding.",
"For baselines, we use Copy 512 and DisSim, which both report performance on Wikisplit in previous work.",
"We also include DCP, which relies on three rules applied to token-aligned dependency and constituency parses: DCP vp extracts clauses with tensed verb phrases; DCP sbar extracts SBAR subtrees from constituency trees; DCP recur recursively applies the preceding rules.",
"For evaluation, we use BLEU with four-grams (BL4) (Papineni et al., 2002) and BERTScore (BS) (Zhang et al., 2019).",
"We also include descriptive measures specific to our task.",
"To indicate whether a model retains roughly the same number of words as the source sentence in the target output, we report average number of tokens per simple sentence (#T/SS).",
"To capture the correspondence between the number of target simple sentences in the ground truth and model predictions, we use percentage of samples where the model predicts the correct number of simple sentences (Match #SS).",
"BL4 captures the 4-gram alignments between candidate and reference word strings, but fails to assess similarity of latent meaning.",
"BS applies token-level matching through contextualized word embeddings, therefore evaluates candidates on their word meanings.",
"For each example, we first align each simple sentence in the ground truth with a prediction, compute the pairwise BL4 and BS scores, and take the average as the score for the example.",
"A predicted output sentence with no Category MinWiki DeSSE biLSTM BERT biLSTM BERT mlp bilinear mlp bilinear mlp bilinear mlp bilinear A 0.98 0.98 0.93 0.86 0.91 0.88 0.88 0.87 B 0.48 0.48 0.41 0.36 0.34 0.42 0.31 0.28 C 0.99 0.99 0.95 0.98 0.89 0.78 0.89 0.55 D 0.80 0.84 0.39 0.75 0.49 0.54 0.45 0.45 All 0.78 0.82 0.72 0.74 0.66 0.67 0.63 0.57 Table 3: Performance (F1) of our model and its variants on MinWiki (N=1075) and DeSSE (N=790).",
"correspondent in the ground truth, or a ground truth sentence with no correspondent in the predicted, will add 0 to the numerator and 1 to the denominator of this average.",
"Table 5 presents results from the baselines and our ABCD best variant, biLSTM with two classifiers.",
"None of the models surpasses all others on both datasets.",
"All models show lower performance on DeSSE than MinWiki, again an indication that DeSSE is more challenging.",
"On MinWiki, ABCD is competitive with Copy 512 , the best performing model, with a narrow gap on Match#SS (0.65%) and BLEU4 (4.58).",
"On DeSSE, ABCD BL4 and BS surpass all baselines.",
"ABCD performance is 2.34% less than DCP recur on Match #SS, but biL-STM+mlp output sentences have an average length of 8.85, which is closer to the gold average length of 9.07, in contrast to much longer output from DCP recur of 14.16.",
"To summarize, ABCD achieves competitive results on both datasets.",
"While Table 4 presents error analysis on predictions of B that lead to an incorrect number of outputs, here we examine test sentences from both datasets where the prediction and ground truth have the same number of outputs.",
"Table 6 shows the total number of examples for MinWiki (1,075) and for the positive examples in DeSSE (DeSSE pos , 521).",
"The M columns for each dataset give the number of examples where the number of targets in the ground truth matches the number of targets predicted by the model.",
"On MinWiki, ABCD has marginally better BL4 and BS scores than Copy 512 , but Copy 512 has 7 more cases with the correct number of outputs.",
"For DeSSE, we restrict attention to the positive examples (MinWiki has no negative examples), because Copy 512 and ABCD perform equally well on the negative examples.",
"By the BL4 and BS scores on DeSSE pos , Copy 512 appears to perform much better than ABCD, but these scores are on 20 out of 521 examples (3.8%).",
"Although ABCD's scores are lower, it produces the correct number of output sentences in 47.4% of cases for the mlp, and 48.1% for the bilin.",
"Figure 6 shows three complex sentences from DeSSE with the annotated rewriting, and predicted propositions from Copy 512 and ABCD mlp .",
"Copy 512 correctly decomposes only one of the examples and copies the original input on the other two samples.",
"On the one example where Copy produces two simple sentences, it alters the sentence meaning by replacing the word genetics with the word interesting.",
"This exposes a drawback of encoder-decoder architectures on the proposition identification task, that is, the decoder can introduce words that are not in the input sentence, therefore failing to preserve the original meaning.",
"In contrast, ABCD shows good performance on all three sentences by producing the same number of simple sentences as in the annotated rewriting.",
"Especially for the third sentence, which contains an embedded clause, which has been the main mission since 9/11 , the first proposition written by the annotator is not grammatically correct, and the subject of the second proposition is a pronoun it , referring to the semantic subject Our main mission .",
"Nonetheless, ABCD generates two propositions, both of which are grammatically correct and meaning preserving.",
"In this section, we discuss limitations of ABCD to guide future work.",
"The first limitation is the low performance of ABCD on B .",
"We observe that in DeSSE, some annotators did not break the sentences appropriately.",
"We randomly selected 50 samples, and found 13 out of 50 (26%) examples Group Model MinWiki DeSSE #T Match BLEU4 BERTSc #T Match BLEU4 BERTSc /SS #SS(%) /SS #SS(%) Parsing DisSim 8.50 68.46 64.20 94.42 9.59 40.00 37.89 89.54 DCP vp 14.82 45.49 28.80 64.50 15.99 42.40 47.25 60.18 DCP sbar 19.07 17.49 19.35 49.07 17.24 44.81 48.02 59.89 DCP recur 16.30 67.90 31.78 58.08 14.16 55.63 34.44 61.37 Encoder-decoder COPY 9.37 79.26 80.96 95.96 18.13 36.20 45.91 88.71 ABCD biLSTM mlp 9.37 78.61 75.80 92.91 8.85 53.29 53.42 90.23 bilin 9.53 76.72 76.38 90.28 8.10 52.66 41.57 94.78 Table 5: Performance of baselines and our models on Minwiki test set (N=1075, #T/SS = 10.03), and DeSSE test set (N=790, #T/SS =9.07).",
"where annotators add breaks to rewrite NPs and infinitives as clauses.",
"This introduces noise into the data.",
"Another reason of lower performance on B might be attributed to the current design of ABCD that neglects sequential relations among all words.",
"Among all edge triples where it fails to assign B, 67% and 27.42% are with ngbh relations on MinWiki and DeSSE, respectively.",
"Two possibilities for improving performance to investigate are enhancements to the information in the WRG graph, and re-formulating the problem into sequence labeling of triples.",
"The second limitation pertains mainly to DeSSE.",
"In the training data, 34.7% of sentences have OOV words.",
"For example, we noticed that annotators sometimes introduced personal pronouns (e.g. he/she/they ) in their rewrites of VP-conjunction, instead of copying the subjects, or they substituted a demonstrative pronoun (e.g. this/these ) for clausal arguments.",
"This could be addressed by expanding the edit types to include the ability to INSERT words from a restricted insertion vocabulary.",
"Nevertheless, our model has a small performance gap with Copy 512 on MinWiki, and outperforms the baselines on DeSSE.",
"A third issue is whether ABCD would generalize to other languages.",
"We expect ABCD would perform well on European languages with existing dependency and constituency parsers, and with an annotated dataset.",
"We presented a new task to decompose complex sentences into simple ones, along with DeSSE, a new dataset designed for this task.",
"We proposed the neural ABCD model to predict four edits operations on sentence graphs, as part of a larger pipeline from our graph-edit problem formulation.",
"ABCD performance comes close to or outperforms the parsing-based and encoder-decoder baselines.",
"Our work selectively integrates modules to capitalize on the linguistic precision of parsing-based methods, and the expressiveness of graphs for encoding different aspects of linguistic structure, while still capitalizing on the power of neural networks for representation learning."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"objective",
"abstain",
"result",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"method",
"objective",
"objective",
"abstain",
"method"
] |
[
"Ideal point models analyze lawmakers' votes to quantify their political positions, or ideal points.",
"But votes are not the only way to express a political position.",
"Lawmakers also give speeches, release press statements, and post tweets.",
"In this paper, we introduce the text-based ideal point model ( tbip ), an unsupervised probabilistic topic model that analyzes texts to quantify the political positions of its authors.",
"We demonstrate the tbip with two types of politicized text data: U.S. Senate speeches and senator tweets.",
"Though the model does not analyze their votes or political aliations, the tbip separates lawmakers by party, learns interpretable politicized topics, and infers ideal points close to the classical vote-based ideal points.",
"One benefit of analyzing texts, as opposed to votes, is that the tbip can estimate ideal points of anyone who authors political texts, including non-voting actors.",
"To this end, we use it to study tweets from the 2020 Democratic presidential candidates.",
"Using only the texts of their tweets, it identifies them along an interpretable progressive-to-moderate spectrum.",
"Ideal point models are widely used to help characterize modern democracies, analyzing lawmakers' votes to estimate their positions on a political spectrum (Poole and Rosenthal, 1985).",
"But votes aren't the only way that lawmakers express political preferencespress releases, tweets, and speeches all help convey their positions.",
"Like votes, these signals are recorded and easily collected.",
"This paper develops the text-based ideal point model ( tbip ), a probabilistic topic model for analyzing unstructured political texts to quantify the political preferences of their authors.",
"While classical ideal point models analyze how dierent people vote on a shared set of bills, the tbip analyzes how dierent authors write about a shared set of latent topics.",
"The tbip is inspired by the idea of political framing: the specific words and phrases used when discussing a topic can convey political messages (Entman, 1993).",
"Given a corpus of political texts, the tbip estimates the latent topics under discussion, the latent political positions of the authors of texts, and how per-topic word choice changes as a function of the political position of the author.",
"A key feature of the tbip is that it is unsupervised.",
"It can be applied to any political text, regardless of whether the authors belong to known political parties.",
"It can also be used to analyze non-voting actors, such as political candidates.",
"Figure 1 shows a tbip analysis of the speeches of the 114th U.S. Senate.",
"The model lays the senators out on the real line and accurately separates them by party.",
"(It does not use party labels in its analysis.)",
"Based only on speeches, it has found an interpretable spectrumSenator Bernie Sanders is liberal, Senator Mitch McConnell is conservative, and Senator Susan Collins is moderate.",
"For comparison, Figure 2 also shows ideal points estimated from the voting record of the same senators; their language and their votes are closely correlated.",
"The tbip also finds latent topics, each one a vocabulary-length vector of intensities, that describe the issues discussed in the speeches.",
"For each topic, the tbip involves both a neutral vector of intensities and a vector of ideological adjustments that describe how the intensities change as a function of the political position of the author.",
"Illustrated in Table 1 are discovered topics about immigration, health care, and gun control.",
"In the gun control topic, the neutral intensities focus on words like gun and firearms.",
"As the author's ideal point becomes more negative, terms like gun violence and background checks increase in intensity.",
"As the author's ideal point becomes more positive, terms like constitutional rights increase.",
"ization topic models (Canny, 2004; Gopalan et al., 2015).",
"The latent variables are the ideal points of the authors, the topics discussed in the corpus, and how those topics change as a function of ideal point.",
"To approximate the posterior, we use an e-cient black box variational inference algorithm with stochastic optimization.",
"It scales to large corpora.",
"We develop the details of the tbip and its variational inference algorithm.",
"We study its performance on three sessions of U.S. Senate speeches, and we compare the tbip to other methods for scaling political texts (Slapin and Proksch, 2008; Lauderdale and Herzog, 2016a).",
"The tbip performs best, recovering ideal points closest to the vote-based ideal points.",
"We also study its performance on tweets by U.S. senators, again finding that it closely recovers their vote-based ideal points.",
"(In both speeches and tweets, the dierences from vote-based ideal points are also qualitatively inter-esting.)",
"Finally, we study the tbip on tweets by the 2020 Democratic candidates for President, for which there are no votes for comparison.",
"It lays out the candidates along an interpretable progressive-to-moderate spectrum.",
"We develop the text-based ideal point model ( tbip ), a probabilistic model that infers political ideology from political texts.",
"We first review Bayesian ideal points and Poisson factorization topic models, two probabilistic models on which the tbip is built.",
"Ideal points quantify a lawmaker's political preferences based on their roll-call votes (Poole and Rosenthal, 1985; Jackman, 2001; Clinton et al., 2004).",
"Consider a group of lawmakers voting yea or nay on a shared set of bills.",
"Denote the vote of lawmaker i on bill j by the binary variable v ij .",
"The Bayesian ideal point model posits scalar per-lawmaker latent variables x i and scalar per-bill latent variables",
". j ; (cid:17) j / .",
"It assumes the votes come from a factor model, x i (cid:24) N .0; 1/ j ; (cid:17) j (cid:24) N .0; 1/ v ij (cid:24) Bern",
".(cid:27). j C x i (cid:17) j //: (1) where",
"The latent variable x i is called the lawmaker's ideal point ; the latent variable (cid:17) j is the bill's polarity .",
"When x i and (cid:17) j have the same sign, lawmaker i is more likely to vote for bill j ; when they have opposite sign, the lawmaker is more likely to vote against it.",
"The per-bill intercept term j is called the popularity .",
"It captures that some bills are uncontroversial, where all lawmakers are likely to vote for them (or against them) regardless of their ideology.",
"Using data of lawmakers voting on bills, political scientists approximate the posterior of the Bayesian ideal point model with an approximate inference method such as Markov Chain Monte Carlo (MCMC) (Jackman, 2001; Clinton et al., 2004) or expectation-maximization (EM) (Imai et al., 2016).",
"Empirically, the posterior ideal points of the lawmakers accurately separate political parties and capture the spectrum of political preferences in American politics (Poole and Rosenthal, 2000).",
"Poisson factorization is a class of non-negative matrix factorization methods often employed as a topic model for bag-of-words text data (Canny, 2004; Cemgil, 2009; Gopalan et al., 2014).",
"Poisson factorization factorizes a matrix of doc-ument/word counts into two positive matrices: a matrix (cid:18) that contains per-document topic intensities, and a matrix that contains the topics.",
"Denote the count of word v in document d by y dv .",
"Poisson factorization posits the following probabilistic model over word counts, where a and b are hyperparameters: (cid:18) dk (cid:24) Gamma",
".a; b/ kv (cid:24) Gamma",
".a; b/ y dv (cid:24) Pois (cid:0)P k (cid:18) dk kv (cid:1) : (2) Given a matrix y , practitioners approximate the posterior factorization with variational inference (Gopalan et al., 2015) or MCMC (Cemgil, 2009).",
"Note that Poisson factorization can be interpreted as a Bayesian variant of nonnegative matrix factorization, with the so-called KL loss function (Lee and Seung, 1999).",
"When the shape parameter a is less than 1, the latent vectors (cid:18) d and k tend to be sparse.",
"Consequently, the marginal likelihood of each count places a high mass around zero and has heavy tails (Ranganath et al., 2015).",
"The posterior components are interpretable as topics (Gopalan et al., 2015).",
"The text-based ideal point model ( tbip ) is a probabilistic model that is designed to infer political preferences from political texts.",
"There are important dierences between a dataset of votes and a corpus of authored political language.",
"A vote is one of two choices, yea or nay.",
"But political language is high dimensionala lawmaker's speech involves a vocabulary of thousands.",
"A vote sends a clear signal about a lawmaker's opinion about a bill.",
"But political speech is noisythe use of a word might be irrelevant to ideology, provide only a weak signal about ideology, or change signal depending on context.",
"Finally, votes are organized in a matrix, where each one is unambiguously attached to a specific bill and nearly all lawmakers vote on all bills.",
"But political language is unstructured and sparse.",
"A corpus of political language can discuss any number of issueswith speeches possibly involving several issuesand the issues are unlabeled and possibly unknown in advance.",
"The tbip is based on the concept of political framing.",
"Framing is the idea that a communicator will emphasize certain aspects of a message implicitly or explicitly to promote a perspective or agenda (Entman, 1993; Chong and Druckman, 2007).",
"In politics, an author's word choice for a particular issue is aected by the ideological message she is trying to convey.",
"A conservative discussing abortion is more likely to use terms such as life and unborn, while a liberal discussing abortion is more likely to use terms like choice and body.",
"In this example, a conservative is framing the issue in terms of morality, while a liberal is framing the issue in terms of personal liberty.",
"The tbip casts political framing in a probabilistic model of language.",
"While the classical ideal point model infers ideology from the dierences in votes on a shared set of bills, the tbip infers ideology from the dierences in word choice on a shared set of topics.",
"The tbip is a probabilistic model that builds on Poisson factorization.",
"The observed data are word counts and authors: y dv is the word count for term v in document d , and a d is the author of the document.",
"Some of the latent variables in the tbip are inherited from Poisson factorization: the nonnegative K -vector of per-document topic intensities is (cid:18) d and the topics themselves are non-negative V -vectors k , where K is the number of topics and V is the vocabulary size.",
"We refer to as the neutral topics .",
"Two additional latent variables capture the politics: the ideal point of an author s is a real-valued scalar x s , and the ideological topic is a real-valued V -vector (cid:17) k .",
"The tbip uses its latent variables in a generative model of authored political text, where the ideological topic adjusts the neutral topicand thus the word choiceas a function of the ideal point of the author.",
"Place sparse Gamma priors on (cid:18) and , and normal priors on (cid:17) and x , so for all documents d , words v , topics k , and authors s , (cid:18) dk (cid:24) Gamma",
"For a topic k and term v , a non-zero (cid:17) kv will increase the Poisson rate of the word count if it shares the same sign as the ideal point of the author x a d , and decrease the Poisson rate if they are of opposite signs.",
"Consider a topic about gun control and suppose (cid:17) kv > 0 for the term constitution.",
"An author with an ideal point x s > 0 , say a conservative Ideology Top Words Liberal dreamers, dream, undocumented, daca, comprehensive immigration reform, deport, young, deportation Neutral immigration, united states, homeland security, department, executive, presidents, law, country Conservative laws, homeland security, law, department, amnesty, referred, enforce, injunction Liberal aordable care act, seniors, medicare, medicaid, sick, prescription drugs, health insurance Neutral health care, obamacare, aordable care act, health insurance, insurance, americans, coverage, percent Conservative health care law, obamacare, obama, democrats, obamacares, deductibles, broken promises Liberal gun violence, gun, guns, killed, hands, loophole, background checks, close Neutral gun, guns, second, orlando, question, firearms, shooting, background checks Conservative second, constitutional rights, rights, due process, gun control, mental health, list, mental illness Table 1.",
"The tbip learns topics from Senate speeches that vary as a function of the senator's political positions.",
"The neutral topics are for an ideal point of 0; the ideological topics fix ideal points at (cid:0) 1 and C 1 .",
"We interpret one extreme as liberal and the other as conservative.",
"Data is from the 114th U.S. Senate.",
"author, will be more likely to use the term con-stitution when discussing gun control; an author with an ideal point x s < 0 , a liberal author, will be less likely to use the term.",
"Suppose (cid:17) kv < 0 for the term violence.",
"Now the liberal author will be more likely than the conservative to use this term.",
"Finally suppose (cid:17) kv D 0 for the term gun.",
"This term will be equally likely to be used by the authors, regardless of their ideal points.",
"To build more intuition, examine the elements of the sum in the Poisson rate of Equation (3) and rewrite slightly to (cid:18) dk exp .",
"log kv C x a d (cid:17) kv / .",
"Each of these elements mimics the classical ideal point model in Equation (1), where (cid:17) kv now measures the polarity of term v in topic k and log kv is the intercept or popularity.",
"When (cid:17) kv and x a d have the same sign, term v is more likely to be used when discussing topic k .",
"If (cid:17) kv is near zero, then the term is not politicized, and its count comes from a Poisson factorization.",
"For each document d , the elements of the sum that contribute to the overall rate are those for which (cid:18) dk is positive; that is, those for the topics that are being discussed in the document.",
"The posterior distribution of the latent variables provides estimates of the ideal points, neutral topics, and ideological topics.",
"For example, we estimate this posterior distribution using a dataset of senator speeches from the 114th United States Senate session.",
"The fitted ideal points in Figure 1 show that the tbip largely separates lawmakers by political party, despite not having access to these labels or votes.",
"Table 1 depicts neutral topics (fixing the fitted O (cid:17) kv to be 0) and the corresponding ideological topics by varying the sign of O (cid:17) kv .",
"The topic for immigration shows that a liberal framing emphasizes Dreamers and DACA, while the conservative frame emphasizes laws and homeland security.",
"We provide more details and empirical studies in Section 5.",
"Most ideal point models focus on legislative roll-call votes.",
"These are typically latent-space factor models (Poole and Rosenthal, 1985; McCarty et al., 1997; Poole and Rosenthal, 2000), which relate closely to item-response models (Bock and Aitkin, 1981; Bailey, 2001).",
"Researchers have also developed Bayesian analogues (Jackman, 2001; Clinton et al., 2004) and extensions to time series, particularly for analyzing the Supreme Court (Martin and Quinn, 2002).",
"Some recent models combine text with votes or party information to estimate ideal points of legislators.",
"Gerrish and Blei (2011) analyze votes and the text of bills to learn ideological language.",
"Gerrish and Blei (2012) and Lauderdale and Clark (2014) use text and vote data to learn ideal points adjusted for topic.",
"The models in Nguyen et al. (2015) and Kim et al. (2018) analyze votes and floor speeches together.",
"With labeled political party aliations, machine learning methods can also help map language to party membership.",
"Iyyer et al. (2014) use neural networks to learn partisan phrases, while the models in Tsur et al. (2015) and Gentzkow et al. (2019) use political party labels to analyze dierences in speech patterns.",
"Since the tbip does not use votes or party information, it is applicable to all political texts, even when votes and party labels are not present.",
"Moreover, party labels can be restrictive because they force hard membership in one of two groups (in American politics).",
"The tbip can infer how topics change smoothly across the political spectrum, rather than simply learning topics for each political party.",
"Annotated text data has also been used to predict ideological positions.",
"Wordscores (Laver et al., 2003; Lowe, 2008) uses texts that are hand-labeled by political position to measure the conveyed positions of unlabeled texts; it has been used to measure the political landscape of Ireland (Benoit and Laver, 2003; Herzog and Benoit, 2015).",
"Ho et al. (2008) analyze hand-labeled editorials to estimate ideal points for newspapers.",
"The ideological topics learned by the tbip are also related to political frames (Entman, 1993; Chong and Druckman, 2007).",
"Historically, these frames have either been hand-labeled by annotators (Baumgartner et al., 2008; Card et al., 2015) or used annotated data for supervised prediction (Johnson et al., 2017; Baumer et al., 2015).",
"In contrast to these methods, the tbip is completely unsupervised.",
"It learns ideological topics that do not need to conform to pre-defined frames.",
"Moreover, it does not depend on the subjectivity of coders.",
"wordfish (Slapin and Proksch, 2008) is a model of authored political texts about a single issue, similar to a single-topic version of tbip .",
"wordfish has been applied to party manifestos (Proksch and Slapin, 2009; Lo et al., 2016) and single-issue dialogue (Schwarz et al., 2017).",
"wordshoal (Lauderdale and Herzog, 2016a) extends wordfish to multiple issues by analyzing a collection of labeled texts, such as Senate speeches labeled by debate topic.",
"wordshoal fits separate wordfish models to the texts about each label, and combines the fitted models in a one-dimensional factor analysis to produce ideal points.",
"In contrast to these models, the tbip does not require a grouping of the texts into single issues.",
"It naturally accommodates unstructured texts, such as tweets, and learns both ideal points for the authors and ideology-adjusted topics for the (latent) issues under discussion.",
"Furthermore, by relying on stochastic optimization, the tbip algorithm scales to large data sets.",
"In Section 5 we empirically study how the tbip ideal points compare to both of these models.",
"The tbip involves several types of latent variables: neutral topics k , ideological topics (cid:17) k , topic intensities (cid:18) d , and ideal points x s .",
"Conditional on the text, we perform inference of the latent variables through the posterior distribution",
"p. (cid:18) ; ; (cid:17) ; x j y / .",
"But calculating this distribution is intractable.",
"We rely on approximate inference.",
"We use mean-field variational inference to fit an approximate posterior distribution (Jordan et al., 1999; Wainwright et al., 2008; Blei et al., 2017).",
"Variational inference frames the inference problem as an optimization problem.",
"Set q (cid:30) .",
"(cid:18) ; ; (cid:17) ; x / to be a variational family of approximate posterior distributions, indexed by variational parameters (cid:30) .",
"Variational inference aims to find the setting of (cid:30) that minimizes the KL divergence between q (cid:30) and the posterior.",
"Minimizing this KL divergence is equivalent to maximizing the evidence lower bound ( elbo ), E q (cid:30) log",
"The elbo sums the expectation of the log joint (here broken up into the log prior and log likelihood) and the entropy of the variational distribution.",
"To approximate the tbip posterior we set the variational family to be the mean-field family.",
"The mean-field family factorizes over the latent variables, where d indexes documents, k indexes topics, and s indexes authors: q (cid:30) .",
"We use lognormal factors for the positive variables and Gaussian factors for the real variables,",
"q.(cid:18) d / D LogNormal K",
".(cid:22) (cid:18) d ; I(cid:27) 2(cid:18) d /",
"q. k / D LogNormal V",
".(cid:22) k ; I(cid:27) 2 k /",
"q.(cid:17) k / D NV",
".(cid:22) (cid:17) k ; I(cid:27) 2(cid:17) k /",
"q.x s / DN",
".(cid:22) x s ; (cid:27) 2x s /: Our goal is to optimize the elbo with respect to (cid:30) D f (cid:22) (cid:18) ; (cid:27) 2(cid:18) ; (cid:22) ; (cid:27) 2 ; (cid:22) (cid:17) ; (cid:27) 2(cid:17) ; (cid:22) x ; (cid:27) 2x g .",
"We use stochastic gradient ascent.",
"We form noisy gradients with Monte Carlo and the reparameteri-zation trick (Kingma and Welling, 2014; Rezende et al., 2014), as well as with data subsampling (Ho-man et al., 2013).",
"To set the step size, we use Adam (Kingma and Ba, 2015).",
"We initialize the neutral topics and topic intensities with a pre-trained model.",
"Specifically, we pre-train a Poisson factorization topic model using the algorithm in Gopalan et al. (2015).",
"The tbip algorithm uses the resulting factorization to initialize the variational parameters for (cid:18) d and k .",
"The full procedure is described in Appendix A. For the corpus of Senate speeches described in Section 2, training takes 5 hours on a single V o t e s S p e e c h e s T w e e t s Chuck Schumer (D-NY) Bernie Sanders (I-VT) Joe Manchin (D-WV) Susan Collins (R-ME) Jeff Sessions (R-AL) Deb Fischer (R-NE) Correlation to vote ideal points 0.88 0.94 Mitch McConnell (R-KY) Figure 2. The ideal points learned by the tbip for senator speeches and tweets are highly correlated with the classical vote ideal points.",
"NVIDIA Titan V GPU.",
"We have released open source software for Tensorflow and PyTorch.",
"1 5 Empirical studies We study the text-based ideal point model ( tbip ) on several datasets of political texts.",
"We first use the tbip to analyze speeches and tweets (separately) from U.S. senators.",
"For both types of texts, the tbip ideal points, which are estimated from text, are close to the classical ideal points, which are estimated from votes.",
"We also compare the tbip to existing methods for scaling political texts (Slapin and Proksch, 2008; Lauderdale and Herzog, 2016a).",
"The tbip performs better, finding ideal points closer to the vote-based ideal points.",
"Finally, we use the tbip to analyze a group that does not vote: 2020 Democratic presidential candidates.",
"Using only tweets, it estimates ideal points for the candidates on an interpretable progressive-to-moderate spectrum.",
"We analyze Senate speeches provided by Gentzkow et al. (2018), focusing on the 114th session of Congress (2015-2017).",
"We compare ideal points found by the tbip to the vote-based ideal point model from Equation (1).",
"(Appendix B provides details about the comparison.)",
"We use approximate posterior means, learned with variational inference, to estimate the latent variables.",
"The estimated ideal points are O x ; the estimated neutral topics are O ; the estimated ideological topics are O (cid:17) .",
"Figure 2 compares the tbip ideal points on 1 http://github.com/keyonvafa/tbip speeches to the vote-based ideal points.",
"2 Both models largely separate Democrats and Republicans.",
"In the tbip estimates, progressive senator Bernie Sanders (I-VT) is on one extreme, and Mitch McConnell (R-KY) is on the other.",
"Susan Collins (R-ME), a Republican senator often described as moderate, is near the middle.",
"The correlation between the tbip ideal points and vote ideal points is high, 0:88 .",
"Using only the text of the speeches, the tbip captures meaningful information about political preferences, separating the political parties and organizing the lawmakers on a meaningful political spectrum.",
"We next study the topics.",
"For selected topics, Table 1 shows neutral terms and ideological terms.",
"To visualize the neutral topics, we list the top words based on O k .",
"To visualize the ideological topics, we calculate term intensities for two poles of the political spectrum, x s D (cid:0) 1 and x s D C 1 .",
"For a fixed k , the ideological topics thus order the words by E kv exp .",
"(cid:0) (cid:17) kv /(cid:141) and E kv exp",
".(cid:17) kv /(cid:141) .",
"Based on the separation of political parties in Figure 1, we interpret negative ideal points as liberal and positive ideal points as conservative.",
"Table 1 shows that when discussing immigration, a senator with a neutral ideal point uses terms like immigra-tion and United States.",
"As the author moves left, she will use terms like Dreamers and DACA.",
"As she moves right, she will emphasize terms like laws and homeland security.",
"The tbip also captures that those on the left refer to health care legislation as the Aordable Care Act, while those on the right call it Obamacare.",
"Additionally, a liberal 2 Throughout our analysis, we appropriately rotate and standardize ideal points so they are visually comparable.",
"senator discussing guns brings attention to gun control: gun violence and background checks are among the largest intensity terms.",
"Meanwhile, conservative senators are likely to invoke gun rights, emphasizing constitutional rights.",
"Comparison to Wordfish and Wordshoal.",
"We next treat the vote-based ideal points as ground-truth labels and compare the tbip ideal points to those found by wordfish and wordshoal .",
"wordshoal requires debate labels, so we use the labeled Senate speech data provided by Lauderdale and Herzog (2016b) on the 111th113th Senates to train each method.",
"Because we are interested in comparing models, we use the same variational inference procedure to train all methods.",
"See Appendix B for more details.",
"We use two metrics to compare text-based ideal points to vote-based ideal points: the correlation between ideal points and Spearman's rank correlation between their orderings of the senators.",
"With both metrics, when compared to vote ideal points from Equation (1), the tbip outperforms wordfish and wordshoal ; see Table 2. Comparing to another vote-based method, dw-nominate (Poole, 2005), produces similar results; see Appendix C. 5.2 The tbip on U.S. Senate tweets We use the tbip to analyze tweets from U.S. senators during the 114th Senate session, using a corpus provided by VoxGovFEDERAL (2020).",
"Tweet-based ideal points almost completely separate Democrats and Republicans; see Figure 2. Again, Bernie Sanders (I-VT) is the most extreme Democrat, and Mitch McConnell (R-KY) is one of the most extreme Republicans.",
"Susan Collins (R-ME) remains near the middle; she is among the most moderate senators in vote-based, speech-based, and tweet-based models.",
"The correlation between vote-based ideal points and tweet-based ideal points is 0:94 .",
"We also use senator tweets to compare the tbip to wordfish (we cannot apply wordshoal because tweets do not have debate labels).",
"Again, the tbip learns closer ideal points to the classical vote ideal points; see Table 2. 5.3 Using the tbip as a descriptive tool As a descriptive tool, the tbip provides hints about the dierent ways senators use speeches or tweets to convey political messages.",
"We use a likelihood ratio to help identify the texts that influenced the tbip ideal point.",
"Consider the log likelihood of a document using a fixed ideal point Q x and fitted values for the other latent variables, ` d .",
"Ratios based on this likelihood can help point to why the tbip places a lawmaker as extreme or moderate.",
"For a document d , if ` d .",
"O x a d / (cid:0) ` d .0/ is high then that document was (statistically) influential in making O x a d more extreme.",
"If ` d .",
"O x a d / (cid:0) ` d .",
"max s .",
"O x s // or ` d .",
"O x a d / (cid:0) ` d .",
"min s .",
"O x s // is high then that document was influential in making O x a d less extreme.",
"We emphasize this diagnostic does not convey any causal information, but rather helps understand the relationship between the data and the tbip inferences.",
"Bernie Sanders (I-VT).",
"Bernie Sanders is an Independent senator who caucuses with the Democratic party; we refer to him as a Democrat.",
"Among Democrats, his ideal point changes the most between one estimated from speeches and one estimated from votes.",
"Although his vote-based ideal point is the 17th most liberal, the tbip ideal point based on Senate speeches is the most extreme.",
"We use the likelihood ratio to understand this dierence in his vote-based and speech-based ideal Bernie Sanders Elizabeth Warren Tulsi Gabbard Kamala Harris Bill de Blasio Julian Castro Kirsten Gillibrand Cory Booker Beto O'Rourke Joe Biden Pete Buttigieg Tom Steyer Tim Ryan Mike Bloomberg Amy Klobuchar Michael Bennet John Hickenlooper John Delaney Steve Bullock Figure 3. Based on tweets, the tbip places 2020 Democratic presidential candidates along an interpretable progressive-to-moderate spectrum.",
"points.",
"His speeches with the highest likelihood ratio are about income inequality and universal health care, which are both progressive issues.",
"The following is an excerpt from one such speech: The United States is the only major country on Earth that does not guarantee health care to all of our people... At a time when the rich are getting richer and the middle class is getting poorer, the Republicans take from the middle class and working families to give more to the rich and large corporations.",
"Sanders is considered one of the most liberal senators; his extreme speech ideal point is sensible.",
"That Sanders' vote-based ideal point is not more extreme appears to be a limitation of the vote-based method.",
"Applying the likelihood ratio to votes helps illustrate the issue.",
"(Here a bill takes the place of a document.)",
"The ratio identifies H.R. 2048 as influential.",
"This bill is a rollback of the Patriot Act that Sanders voted against because it did not go far enough to reduce federal surveillance capabilities (RealClearPolitics, 2015).",
"In voting nay, he was joined by one Democrat and 30 Republicans, almost all of whom voted against the bill because they did not want surveillance capabilities curtailed at all.",
"Vote-based ideal points, which only model binary values, cannot capture this nuance in his opinion.",
"As a result, Sanders' vote-based ideal point is pulled to the right.",
"Deb Fischer (R-NE).",
"Turning to tweets, Deb Fischer's tweet-based ideal point is more liberal than her vote-based ideal point; her vote ideal point is the 11th most extreme among senators, while her tweet ideal point is the 43rd most extreme.",
"The likelihood ratio identifies the following tweets as responsible for this moderation: I want to empower women to be their own best advocates, secure that they have the tools to negotiate the wages they deserve. #EqualPay FACT: 1963 Equal Pay Act enables women to sue for wage discrimination. #GetitRight #EqualPayDay The tbip associates terms about equal pay and women's rights with liberals.",
"A senator with the most liberal ideal point would be expected to use the phrase #EqualPay 20 times as much as a senator with the most conservative ideal point and women 9 times as much, using the topics in Fischer's first tweet above.",
"Fischer's focus on equal pay for women moderates her tweet ideal point.",
"Je Sessions (R-AL).",
"The likelihood ratio can also point to model limitations.",
"Je Sessions is a conservative voter, but the tbip identifies his speeches as moderate.",
"One of the most influential speeches for his moderate text ideal point, as identified by the likelihood ratio, criticizes Deferred Actions for Childhood Arrivals (DACA), an immigration policy established by President Obama that introduced employment opportunities for undocumented individuals who arrived as children: The President of the United States is giving work authorizations to more than 4 million people, and for the most part they are adults. Almost all of them are adults. Even the so-called DACA proportion, many of them are in their thirties. So this is an adult job legalization program.",
"This is a conservative stance against DACA.",
"So why does the tbip identify it as moderate?",
"As depicted in Table 1, liberals bring up DACA when discussing immigration, while conservatives emphasize laws and homeland security.",
"The fitted Ideology Top Words Progressive class, billionaire, billionaires, walmart, wall street, corporate, executives, government Neutral economy, pay, trump, business, tax, corporations, americans, billion Moderate trade war, trump, jobs, farmers, economy, economic, taris, businesses, promises, job Progressive #medicareforall, insurance companies, profit, health care, earth, medical debt, health care system, profits Neutral health care, plan, medicare, americans, care, access, housing, millions Moderate healthcare, universal healthcare, public option, plan, universal coverage, universal health care, away, choice Progressive green new deal, fossil fuel industry, fossil fuel, planet, pass, #greennewdeal, climate crisis, middle ground Neutral climate change, climate, climate crisis, plan, planet, crisis, challenges, world Moderate solutions, technology, carbon tax, climate change, challenges, climate, negative, durable Table 3. The tbip learns topics from 2020 Democratic presidential candidate tweets that vary as a function of the candidate's political positions.",
"The neutral topics are for an ideal point of 0; the ideological topics fix ideal points at (cid:0) 1 and C 1 .",
"We interpret one extreme as progressive and the other as moderate.",
"expected count of DACA using the most liberal ideal point for the topics in the above speech is 1:04 , in contrast to 0:04 for the most conservative ideal point.",
"Since conservatives do not focus on DACA, Sessions even bringing up the program sways his ideal point toward the center.",
"Although Sessions refers to DACA disapprovingly, the bag-of-words model cannot capture this negative sentiment.",
"We also analyze tweets from Democratic presidential candidates for the 2020 election.",
"Since all of the candidates running for President do not vote on a shared set of issues, their ideal points cannot be estimated using vote-based methods.",
"Figure 3 shows tweet-based ideal points for the 2020 Democratic candidates.",
"Elizabeth Warren and Bernie Sanders, who are often considered progressive, are on one extreme.",
"Steve Bullock and John Delaney, often considered moderate, are on the other.",
"The selected topics in Table 3 showcase this spectrum.",
"Candidates with progressive ideal points focus on: billionaires and Wall Street when discussing the economy, Medicare for All when discussing health care, and the Green New Deal when discussing climate change.",
"On the other extreme, candidates with moderate ideal points focus on: trade wars and farmers when discussing the economy, universal plans for health care, and technological solutions to climate change.",
"We developed the text-based ideal point model ( tbip ), an ideal point model that analyzes texts to quantify the political positions of their authors.",
"It estimates the latent topics of the texts, the ideal points of their authors, and how each author's political position aects her choice of words within each topic.",
"We used the tbip to analyze U.S. Senate speeches and tweets.",
"Without analyzing the votes themselves, the tbip separates lawmakers by party, learns interpretable politicized topics, and infers ideal points close to the classical vote-based ideal points.",
"Moreover, the tbip can estimate ideal points of anyone who authors political texts, including non-voting actors.",
"When used to study tweets from 2020 Democratic presidential candidates, the tbip identifies them along a progressive-to-moderate spectrum.",
"Acknowledgments This work is funded by ONR N00014-17-1-2131, ONR N00014-15-1-2209, NIH 1U01MH115727-01, NSF CCF-1740833, DARPA SD2 FA8750-18-C-0130, Amazon, NVIDIA, and the Simons Foundation.",
"Keyon Vafa is supported by NSF grant DGE-1644869.",
"We thank Voxgov for providing us with senator tweet data.",
"We also thank Mark Arildsen, Naoki Egami, Aaron Schein, and anonymous reviewers for helpful comments and feedback."
] | [
"abstain",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"method",
"abstain",
"result",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain"
] |
[
"State-of-the-art abstractive summarization systems often generate hallucinations ; i.e., content that is not directly inferable from the source text.",
"Despite being assumed incorrect, we find that much hallucinated content is factual, namely consistent with world knowledge.",
"These factual hallucinations can be beneficial in a summary by providing useful background information.",
"In this work, we propose a novel detection approach that separates factual from non-factual hallucinations of entities.",
"Our method utilizes an entity's prior and posterior probabilities according to pre-trained and finetuned masked language models, respectively.",
"Empirical results suggest that our approach outperforms five baselines and strongly correlates with human judgments.",
"Furthermore, we show that our detector, when used as a reward signal in an off-line reinforcement learning (RL) algorithm, significantly improves the factuality of summaries while maintaining the level of abstractiveness.",
"1 1 Introduction State-of-the-art abstractive summarization systems can generate fluent summaries with high automatic evaluation scores in terms of ROUGE (Lin, 2004).",
"However, recent studies have shown that these systems are prone to hallucinate content that is not supported by the source document (Maynez et al., 2020; Kang and Hashimoto, 2020; Durmus et al., 2020; Zhao et al., 2020; Filippova, 2020; Kryscinski et al., 2020).",
"For instance, Maynez et al. (2020) discovered that 64.1% of the summaries generated by a BERT-based abstractive summarization model on XSUM (Narayan et al., 2018a) contain hallucinations.",
"Previous studies commonly assume that hallucination is an undesirable behavior in abstractive summarization systems.",
"They investigate the 1 https://github.com/mcao516/EntFA Source : Under the proposals, 120,000 additional asylum seekers will be distributed among EU nations, with binding quotas.",
"(...)",
"Mr Juncker told the European Parliament it was not a time to take fright.",
"(...)",
"He said tackling the crisis was a matter of humanity and human dignity.",
"It is true that Europe cannot house all the misery in the world. But we have to put it into perspective. (...) Generation : European Commission President Jean-Claude Juncker has set out his proposals for dealing with the migrant crisis in a speech to MEPs, saying Europe cannot house all the misery in the world.",
"cause of model hallucination (Kang and Hashimoto, 2020; Wang and Sennrich, 2020) and propose methods that reduce the frequency of all hallucinations (Filippova, 2020; Zhao et al., 2020; Nan et al., 2021; Narayan et al., 2021).",
"Our stance in this paper is that hallucinations are not always undesirable : many factual hallucinations provide additional world knowledge that is important for summary comprehension.",
"Table 1 presents one such example from XSUM : the hallucinated content European Commission President provides additional background information on the role of Mr. Juncker .",
"Factual hallucinations refer to content that is verifiable by world knowledge but not inferable from source text.",
"We thus argue that not all hallucinations should be treated equally; in particular, factual hallucinations may be less deleterious or even potentially beneficial to to be included in a summary, as opposed to non-factual ones.",
"We propose a method to classify entities according to whether they are hallucinations and whether they are factual (if hal-lucinated).",
"We focus on entities (e.g., persons, locations, dates, cardinal numbers) because they are necessary to express the most salient pieces of in-3340 formation in a summary.",
"Moreover, entity hallucinations are common in generated summaries.",
"As we will show later in our work, about 30% of entities generated by BART (Lewis et al., 2020) on XSUM test set are hallucinated.",
"Our approach is inspired by the observation that many hallucinated entities are generated with low probabilities.",
"This observation suggests that the summarization model's confidence correlates with the factuality statuses of generated entities.",
"In other words, the uncertainty is indicative of the likelihood of whether generated entities are hallucinated and non-factual.",
"We refer to the probability of an entity being in a summary without considering the source document as its prior probability, and its probability given the document as its posterior probability.",
"Our as-sumption is that if an entity in a generated summary results in a factual error, giving the source should not provide more evidence for it, resulting in a small change in probability between the prior and the posterior.",
"Based on this assumption, we propose to use the prior and posterior probabilities as the key features in a simple classifier that predicts an entity's hallucination status and factuality.",
"Due to the lack of fine-grained hallucination annotation, we create an entity-level hallucination and factuality annotation on the XSUM dataset.",
"We evaluate our detection method on this annotated dataset as well as annotations from Maynez et al. (2020).",
"On both datasets, our approach outperforms five baseline models at identifying nonfactual hallucinations.",
"In addition, our approach has a strong correlation with the factuality scores given by human judges.",
"Besides, we show that our detector, when used as a reward signal in training neural-based summarizers with the off-line RL algorithm, significantly improves the factuality of generated summaries even when the underlying dataset is noisy.",
"Our contributions are the following: ( i ) We demonstrate that an entity's prior and posterior probabilities can be used to infer whether it is hallucinated and factual.",
"Based on this hypothesis, we propose a novel approach for entity-level hallucination detection and factuality checking.",
"Our approach outperforms five baselines from previous work on two human-annotated datasets, in addition to having a strong correlation with summary-level factuality scores given by human judges.",
"( ii )",
"We empirically demonstrate that our classifier can provide reliable reward signals for RL algorithms, leading to improved factuality while maintaining the level of abstractiveness in generated summaries.",
"( iii )",
"We create a set of entity-level hallucination annotations.",
"The correctness of summarization systems' outputs has been evaluated as one aspect of content selection in the past, for example using the Pyramid method (Nenkova and Passonneau, 2004).",
"As neural abstractive summarizers have become popular, their issues with correctness have sparked much recent work that focus specifically on model hallucinations and summary factuality (Kryscinski et al., 2020).",
"Maynez et al. (2020) conducted a large-scale human evaluation of several neural abstractive summarization systems, and found that hallucinations are common among the outputs of different summarization models.",
"Recently, many methods have been proposed to reduce model hallucination.",
"Kang and Hashimoto (2020) propose a loss truncation training algorithm that filters out noisy training samples which may lead a model to hallucinate.",
"Zhao et al. (2020) use a verification system to recognize non-factual quantities in summaries and adopt a re-ranking system to reduce the number of hallucinated quantities in the final output summary.",
"Narayan et al. (2021) use entity chains to mitigate the hallucination problem in the generation of abstractive summaries.",
"Nan et al. (2021) show that data filtering and use a summary-worthy entity classification task as an auxiliary training objective can help improve model's entity-level factuality.",
"Filippova (2020) proposed a method for controlling hallucination in data-to-text generation task.",
"They suggest that a conditional language model (CLM) will put more probability mass on a nonhallucinated entity than an unconditional language model (LM).",
"Our work differs in that we focus on both hallucination and factuality.",
"Also, our method works at the entity-level rather than the sentence-level, and is geared towards text summarization.",
"Another line of work focuses on evaluating the factual consistency of abstractive summarization",
"systems.",
"Kryscinski et al. (2020) train models on an artificially corrupted dataset for factual errors detection.",
"Cao et al. (2020) induce artificial perturbations in text to train a summary error correction system, but find that there is a large gap between such artificial perturbations and the type of hallucinations that are generated by abstractive summarizers.",
"(Goyal and Durrett, 2020) measure factual consistency by checking whether the semantic relationship manifested by individual dependency arcs in the generated summary is supported by the source document.",
"Wang et al. (2020); Dong et al. (2020); Durmus et al. (2020) measure and improve the factual consistency of summaries by asking and answering questions based on generated summaries and input documents.",
"In this section, we propose a novel detection approach that separates factual from non-factual hallucinations of entities (Section 3.2), and present a factuality-aware training framework for summarization models trained on noisy dataset (Sec-tion 3.3).",
"Let ( S, R ) be a pair of a source document and a reference summary, where S = ( s 1 , ..., s M ) is the source document with M tokens, and R = ( r 1 , ..., r L ) is the reference summary with L tokens.",
"Let G = ( g 1 , ..., g N ) be the model-generated summary with N tokens.",
"For each named entity e k , which we assume to be a span of tokens g i k , ..., g i k + | e k | 1 ( | e k | 1) starting at position i k in G , the task is to determine whether e k is hallucinated, and whether it is factual.",
"We define an entity as hallucinated if it is not directly inferable in its generated context given the input document S .",
"If an entity is hallucinated, we further classify it into two subtypes: factual hallucinations and non-factual hallucinations .",
"Factual hallucinations cannot be directly entailed in their generated context from the source document but can be based on world knowledge (see Table 1).",
"Non-factual hallucinations are entities that are neither inferable from the source nor based on world knowledge.",
"We now define the prior and posterior probabilities of an entity, which we will use to predict its",
"hallucination and factuality statuses.",
"For entity e k , we define its prior probability p prior ( e k ) as the probability of its generation by a language model that does not have access to the source text.",
"If e k spans multiple tokens, we compute its probability auto-regressively.",
"Let c k be the context of entity e k in G , excluding the tokens in e k .",
"Then: p prior ( e k ) = fP MLM ( e k | c k ) (1) = | e k | (cid:89) t =1 PMLM ( e tk | e 1 ...t 1 k , c k ) (2) which we compute using a masked language model PMLM .",
"The posterior probability p pos ( e k ) of entity e k is the conditional probability of the entity given the context and the source text: p pos ( e k ) = PCMLM ( e k | c k , S ) (3) = | e k | (cid:89) t =1 PCMLM ( e tk | e 1 ...t 1 k , c k , S ) , (4) where CMLM is a conditional masked language model.",
"CMLM is an encoder-decoder model that is trained with a masked language model objective on a parallel dataset.",
"Specifically, a CMLM predicts a target sequence T given a source text S and part of the target T masked , where T masked is the target sequence with a random entity being masked.",
"In order to correctly generate the missing part of the sentence, the model needs to condition on both T masked and S .",
"Alternatively, we can calculate the entity's posterior probability using a conditional language model (CLM) instead of a CMLM.",
"In this case, the entity's posterior probability is defined as PCLM ( e k | c e k , S ) where c e k = g 1 , ..., g i 1 .",
"Note that CLM is only conditioned on the left context.",
"Training a Discriminator To classify the hallucination and factuality statuses of a given entity, we need to train a discriminator model.",
"We use the K-Nearest Neighbors (KNN) algorithm since it requires no training and makes minimal assumptions about the form of the decision boundary, as a non-parametric method.",
"It also offers adequate interpretability.",
"The KNN classifier is trained using the prior and posterior probabilities as features on our labeled dataset.",
"Since the classifier is used for entity hallucination and factuality assessment, we refer to it as ENTFA .",
"Besides using 3342 the prior/posterior probability as features, we also add a binary overlap feature that indicates whether the entity appears in the document.",
"We train two classifiers for hallucination detection and factuality checking tasks respectively.",
"We now propose a factuality-aware training approach for summarization systems that combines our factuality assessment model with the latest offline RL technique.",
"RL for Text Generation Sequence generation of the tokens in the summary text can be viewed as a finite Markov Decision Process (MDP).",
"At each time-step t , the state s t consists of the source text x and the previously generated tokens y <t , s t = ( y <t , x ) .",
"The agent, which is the summarization model, takes an action by generating a new token a t .",
"Depending on the action taken, the agent gets a reward r t = R ( s t , a t ) and deterministically transitions to the next state s t +1 = ( y <t +1 , x ) .",
"The probability of each action (i.e., token) is specified by the policy ( a t | s t ) .",
"The goal of the agent is to maximize the discounted cumulative reward throughout the trajectory: J ( ) = E (cid:104) (cid:80) Tt =0 t r t (cid:105) .",
"When training the summarization model with human-written reference summaries, we can frame the training process as an off-line RL problem with expert demonstrations (i.e., the reference sum-maries).",
"In this setting, since we are sampling trajectories from a behavior policy, we need an importance sampling term w t to correct the gradient estimation.",
"Following Pang and He (2021)'s work, we approximate w t with ( a t | s t ) and this gives us the following gradient expression of the objective function: J ( ) = E b (cid:104) (cid:88) t =0 ( a t | s t ) log ( a t | s t ) Q ( a t , s t ) (cid:105) (5) where Q ( a t , s t ) = (cid:80) Tt = t t t r t is the estimated return from state s t and b is the behavior policy from which we draw trajectories .",
"In our case, b is the (noisy) summarization dataset.",
"Training with a Factuality-based Reward One problem in the off-line RL setting is that expert demonstrations, which in our case are the reference summaries, are often noisy and contain content that cannot be inferred from the source document.",
"The commonly used teacher forcing training encourages the model to blindly imitate the training data, which leads to model hallucination at inference time (Kang and Hashimoto, 2020).",
"To discourage the model from overfitting to the noise in the training set, we use the predictions from our classifier as factuality reward signals to guide the training of the summarization model.",
"In the off-policy learning stage, we use our factuality classifier to label all the entities in the training set.",
"If an entity is classified by our classifier as non-factual, we consider it noise and give it a negative reward r nfe .",
"For factual entities and other tokens, we use the posterior probability from a MLE-trained model as token-level rewards, as in (Pang and He, 2021).",
"Formally, we have: R ( s t , a t ) = (cid:40) r nfe , if a t is non-factual p MLE ( a t | s t ) , otherwise 4 Dataset 4.1 XENT dataset To study entity hallucination and factuality in abstractive summarization, we need annotations of entityor token-level hallucination.",
"To the best of our knowledge, there is no such dataset available.",
"Therefore, we create a dataset ourselves, which we call the XENT dataset.",
"We 2 annotate 800 summaries generated by BART, which is one of the state-of-the-art abstractive summarization models.",
"The input documents are randomly selected from XSUM test set.",
"We choose XSUM because it is more abstractive than other summarization datasets.",
"We extract 2,838 entities from the 800 generated summaries.",
"We randomly select 30% of the samples as our test set.",
"We manually labeled each entity with one of the following three tags: non-hallucinated, factual hallucination, and non-factual hallucination.",
"First, we extract entities from the given summary using automatic NER tools (Honnibal and Montani, 2017).",
"Then, we check whether each property associated with the identified entity can be directly entailed using the information from the source document.",
"If so, then the property is non-hallucinated.",
"For instance, consider the entity European Commission President Jean-Claude Juncker in Table",
"1. The last name Juncker can be directly entailed from 2 Two coauthors and three graduate students.",
"the source document.",
"Therefore, it is not a hallucination.",
"However, the first name Jean-Claude and the position information European Commission President are not mentioned in the source.",
"In the next step, we need to decide whether these information is factual or not using world knowledge.",
"This often requires external resources such as Wikipedia or Google Search.",
"In this case, European Commission President and Jean-Claude are both factual.",
"If there is no information found online to prove or disprove the hallucinated entity, it is labeled as non-factual.",
"There is a special case where the entity misrepresents information from the document.",
"For instance, the summary might include a number from the document but that number is actually related to a different event.",
"In this case, the entity is considered as an intrinsic hallucination (Maynez et al., 2020).",
"In this work, we will focus on extrinsic hallucinations, so we discarded all intrinsic hallucinations in our experiments.",
"Table 3 shows the distribution of entities by hallucination and factuality status in our labeled dataset.",
"We show an example for each hallucination type in Table",
"2. Inter-Annotator Agreement We report Fleiss's Kappa ( ) to access the reliability of agreement between annotators.",
"We compute agreement on a subset of 800 entities and obtain almost perfect agreement ( 0 . 80 1 . 00 ) with = 0 .",
"809 .",
"Following Pagnoni et al. (2021), we also report the percentage of annotators that agree with the majority class.",
"We obtain = 0 .",
"931 of annotators agreeing with the majority class on the four-category annotation which shows substantial agreement.",
"Recently, Maynez et al. (2020) released a set of factuality and hallucination annotations for XSUM .",
"For each generated summary, they labeled the hallucinated spans as well as the overall factuality of the summary.",
"Compared with our labeling approach, their annotation has a lower granularity and does not distinguish between factual and non-factual hallucination.",
"Therefore, we have to convert their dataset first before using it for evaluation.",
"To perform entity-level factuality checking on their dataset, we do the following: First, we extract entities from the annotated summaries.",
"For entities that are extracted from factual summaries, we label them as factual entities.",
"For each entity from non-factual summary, if it is inside an extrinsic hallucinated span, then we assume the entity is non-factual.",
"Otherwise the entity is labeled as a factual.",
"This process gives us a new dataset that has the same format as ours for entity-level factuality evaluation.",
"We refer to this new dataset as the MENT dataset.",
"However, it is worth pointing out that the converted dataset is noisy.",
"For instance, in Maynez et al. (2020)'s annotation, the entire generated summary is often labeled as a hallucinated span if it 3344 does not capture the meaning of the document well.",
"In this case, the hallucinated span could still contain faithful entities with respect to the source document.",
"This could result in false-positive non-factual entities after the conversion.",
"Therefore, we filter out entities in the extrinsic hallucination span that also appear in the source document.",
"We evaluate our method on entity-level hallucination and factuality classification tasks on XENT and MENT .",
"For each entity in the summary, the model predicts a hallucination label and a factuality label.",
"We will conduct factual and hallucination assessments separately for comparison with the baselines.",
"We compare our method with five baselines models, which are discussed in detail in Section 6.1.",
"In addition to entity-level classification performance, we also evaluate our methods by correlating them against human judgments of factuality.",
"Previous work has collected summary-level judgments of factuality from human annotators, which are then correlated with automatic evaluation measures applied to those summaries.",
"To apply our entity-level method, we use the lowest classifier confidence for the factual class among its entities as the factuality score for the entire summary.",
"We evaluate correlation on two datasets by Pagnoni et al. (2021) and Wang et al. (2020).",
"To evaluate our factuality-aware training approach proposed in Section 3.3, we train a summarization model with factuality rewards and evaluate model's predictions on XSUM test set.",
"To evaluate the faithfulness of generated summaries, we use automatic faithfulness evaluation tools FEQA (Durmus et al., 2020) and DAE (Goyal and Durrett, 2020) 3 .",
"We also calculate ROUGE scores, and the percentage of n -grams and percentage of entities in the generated summaries that are not found in the source document (ENFS).",
"The percentage of novel n -grams 3 In this work, we define the faithfulness of the summary as whether it is faithful with respect to the source.",
"reflects the extractiveness of summarization model.",
"Training CMLM & MLM For training the CMLM, we use both XSUM , Narayan et al. (2018b)) and the CNN/Dailymail dataset (Hermann et al., 2015) dataset.",
"To build a training corpus for CMLM, we randomly select one entity in each reference summary and mask it with a special [ MASK ] token.",
"We append a [ S ] token at the beginning of each summary.",
"The document and summary are concatenated together (separated by [ \\S ] token) as CMLM's input.",
"The training target is the reference summary without any masking.",
"If there is no specification, we use the CMLM trained on XSUM .",
"For the MLM, we use the large BART model.",
"BART is pre-trained on five different reconstruction tasks including token masking and text infilling.",
"For more experimental setup and hyper-parameter setting details, see Appendix A.3.",
"Baselines We compare with five baseline methods: (1) The overlap-based method checks the word overlap between the summary and the source document.",
"In our case, we check whether a given entity in the generated summary also exist in the source document.",
"If it does not, the entity is clas-3345 sified as both hallucinated and non-factual.",
"(2) The synonym-based baseline extends the overlap-based baseline by checking the overlap of summary synonyms and source synonyms.",
"See Zhou et al. (2020) for more details.",
"(3) The alignment-based baseline is based on the unsupervised word alignment method SimAlign by Jalili Sabet et al. (2020).",
"SimAlign extracts word alignments from similarity matrices induced from pretrained embed-dings.",
"In our task, we treat all unaligned entities in summaries as hallucinated and non-factual.",
"(4) The LM-based method is proposed by Filippova (2020).",
"The LM-based method uses LM and CLM to compute the token's prior and posterior probability.",
"In Filippova (2020)'s work, they compare the value of p prior and p pos .",
"If the generated token does not match the reference and p prior is greater than p pos , the token is classified as hallucinated.",
"Since we are evaluating the generated summary but not the reference, we modify their method to the following: if the entity is not found in the source and p prior > p pos , then the entity is classified as non-factual and hallucinated.",
"(5) Zhou et al. (2020) frame the hallucination detection task as a sequence labeling task.",
"They train a hallucination labeling model on synthetic data.",
"We adapt their model to our task by finetuning their model on XENT .",
"Evaluation Results on XENT Table 4 shows the evaluation results of our classifiers and baselines in terms of both entity factuality and hallucination status classification.",
"The results show that our approach outperforms five baselines on the factuality classification task.",
"To show that our model is statistically better than the baselines, we run a 10 -fold cross-validated paired t-test comparing our model with five baselines.",
"The results show that our model is better than the baseline models with p -value less than 3 .",
"27 e 5 .",
"On the hallucination detection task, the overlap-based and synonym-based baselines achieve relatively high accuracy.",
"However, these methods cannot distinguish between factual and non-factual hallucinations.",
"This is the reason for their performance degradation on factuality classification task.",
"For hallucination classification, the reason computing word overlap with the source does not completely solve the hallucination detection problem is that hallucination is defined based on the semantic relationship between the source and the summary.",
"There can exist words that are not in the source document but which can nevertheless be inferred from it.",
"shows the evaluation results on MENT .",
"ENTFA are learned on our annotated training set with k set to 20.",
"The performance of all models is lower on this dataset.",
"This may be due to fact that the converted dataset is noisier than the XENT dataset (see Section 4.2).",
"For the factuality classification task, our model outperforms five baseline models.",
"This demonstrates the generalizability of our approach.",
"Table 6 presents the correlation evaluation results.",
"On Pagnoni et al. (2021)'s benchmark dataset, our approach has the highest partial Pearson correlation coefficient = 0.183 ( p < 1 e 8 ).",
"On Wang et al. (2020)'s dataset (right column), our approach outperforms all other automatic metrics significantly.",
"These results indicate that our model can be used for automatic factuality evaluation of summaries at both the entity and sentence levels.",
"Baselines We compare our approach with four baselines: a teacher forcing trained summarizer (MLE), a RL-based summarizer (RL) (Pang and He, 2021) and a summarizer trained with the loss truncation technique from Kang and Hashimoto (2020).",
"We also replace our factuality assessment model ENTFA with Filippova (2020)'s approach (LM-based) for entity factuality labeling as another baseline model (see Section 3.3).",
"Table 7 shows the evaluation results on XSUM .",
"The results show that our approach outperforms all baselines with fewer non-factual entities and higher faithfulness scores.",
"Note that our approach has the lowest ENFS rate while having the highest percentage of factual hallucinations.",
"Compared with the loss truncation baseline, our method also produces more novel n -grams.",
"These show that our method does not improve the factuality of the model by simply making the model more extractive.",
"Figure 1 shows the factuality and abstractiveness trade-off curves of our model compared to the loss truncation baseline.",
"At the same level of ROUGE performance, our method can obtain a higher factuality score.",
"This further proves that our model can generate both factual and high-quality summaries compared with the loss truncation baseline.",
"To explore the effect of each feature, we conduct an ablation study by training the KNN classifier with",
"fewer features.",
"The results are illustrated in Table 8 and show that all the proposed features are useful.",
"For factuality classification, The performance w/o posterior drops significantly from 81.82 to 70.30.",
"This result suggests that the posterior probability is crucial for factuality classification.",
"For hallucination classification, the overlap-based feature has the most significant impact on model performance.",
"Figure 2 plots entities in the XENT dataset according to their prior and posterior probabilities and shows the KNN classification boundaries of ENTFA w/o overlap.",
"In Figure 2a, we find that the non-factual hallucinated entities are clustered around the origin.",
"This is in line with our expectations since non-factual hallucinations have lower prior and posterior probabilities.",
"Both factual hallucinated and non-hallucinated entities are gathered in the top area with high posterior probabilities.",
"In Figure 2b, the KNN classifier separates the factual and non-factual entities with clear boundaries.",
"A large part of the factual hallucinated entities are correctly identified by CMLMXSUM with relatively high posterior probabilities.",
"This explains our model's superior performance on factuality checking.",
"The top and right histograms in Figure 2b show the entity distribution over prior and posterior probability value respectively.",
"As shown in 2b's histogram, factual entities have significantly higher posterior probability than that of non-factual entities on average.",
"Figure 3 shows histograms of the prior and posterior probabilities of entities from MLM and CMLMXSUM , separated by their class (i.e., whether they are hallucinated and/or factual).",
"Nonhallucinated entities have higher posterior probability than factual and non-factual hallucinations on average.",
"In this paper, we investigate the hallucination and factuality problems in abstractive summarization.",
"We show that about 30% of entities generated by state-of-the-art summarization model are hallucinated.",
"More interestingly, more than half of the hallucinated entities are factual with respect to the source document and world knowledge.",
"We propose a novel method based on the entity's prior and posterior probabilities according to masked language models.",
"Our approach outperforms five base-3347 ROUGE % of novel n-gram Faithfulness ENTFA System R1 RL unigrams bigrams % ENFS FEQA DAE % Factual Ent % Factual Hal MLE 45.1 37.3 27.86 74.47 42.0 25.9 34.6 82.8 21.4 RL 45.8 37.6 28.14 74.73 43.2 25.6 33.3 82.8 21.6 LM-based 43.2 34.6 29.75 75.86 38.2 24.2 31.3 87.4 21.7 Loss trunc (c=0.3) 44.1 36.0 26.82 73.39 41.3 26.3 36.4 83.9 21.3 Loss trunc (c=0.7) 42.7 34.8 26.61 73.19 40.6 26.7 38.8 84.1 20.7 Ours ( r nfe = 2 . 0 ) 44.6 36.2 27.71 74.90 37.2 26.5 37.3 90.1 24.0 Ours ( r nfe = 4 . 0 ) 43.0 34.9 26.87 74.11 32.8 27.3 40.8 92.5 22.4 Table 7: Comparison of different summarization models.",
"line models on both factuality classification and hallucination detection tasks on human-annotated datasets.",
"In addition, using our classifier as a reward signal vastly improves the factuality of summarization systems.",
"Our approach is limited to entity-level hallucination and factuality classification.",
"In the future, we are interested in extending our work to arbitrary text spans.",
"This research was supported by the Canada CIFAR AI Chair program and Samsung Electronics.",
"We would also like to thank Compute Canada for providing us computing resources."
] | [
"abstain",
"result",
"abstain",
"objective",
"method",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"result",
"method",
"abstain",
"abstain",
"method",
"result",
"objective",
"method",
"method",
"result",
"abstain",
"result",
"objective",
"objective",
"result",
"abstain",
"objective",
"abstain",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"abstain",
"objective",
"other",
"abstain",
"result",
"method",
"objective",
"other",
"other"
] |
[
"The goal of stance detection is to identify whether the author of a text is in favor of, neutral or against a specific target.",
"Despite substantial progress on this task, one of the remaining challenges is the scarcity of annotations.",
"Data augmentation is commonly used to address annotation scarcity by generating more training samples.",
"However, the augmented sentences that are generated by existing methods are either less diversified or inconsistent with the given target and stance label.",
"In this paper, we formulate the data augmentation of stance detection as a conditional masked language modeling task and augment the dataset by predicting the masked word conditioned on both its context and the auxiliary sentence that contains target and label information.",
"Moreover, we propose another simple yet effective method that generates target-aware sentence by replacing a target mention with the other.",
"Experimental results show that our proposed methods significantly outperforms previous augmentation methods on 11 targets.",
"Nowadays, people often take to social media to express their stances toward specific targets (e.g., political figures or abortion).",
"These stances in an aggregate can provide valuable information for obtaining insight into some important events such as presidential elections.",
"The goal of the stance detection task is to determine from a piece of text whether the author of the text is in favor of, neutral or against toward a specific target (Mohammad et al., 2016; Lin et al., 2019), which indicates that all elements, the sentence, the target, and the label, are used to train a stance detection model.",
"We can further classify the task as single-target stance detection and multi-target stance detection (Kk and Can, 2020; AlDayel and Magdy, 2020) where we need to detect the stances toward two different targets simultaneously.",
"One of the biggest challenges for stance detection tasks is the scarcity of annotated data.",
"Data augmentation (DA) is an effective strategy for handling scarce data situations.",
"However, we face three main obstacles when applying the existing augmentation methods to the stance detection tasks.",
"First, existing augmentation methods do not generalize well, which means some methods are tailored to specific tasks and models, and thus difficult to be extended to the stance detection tasks.",
"Second, consider an original sample that is against to the target Legalization of Abortion in Table 1.",
"Using previous augmentation methods we may end up with the first generation example ( G 1 ) that deviates from its original meaning due to the unawareness of target and label information during augmentation.",
"Third, previous augmentation methods could generate the sentence ( G 2 ) with less diversified patterns.",
"To address these issues, we propose an augmentation method that can generate more diversified sentence ( G 3 ) that is consistent with target and label information.",
"Moreover, we expect the proposed method to generalize well to other tasks.",
"A common data augmentation strategy is based on word replacement.",
"Zhang et al. (2015) augmented a sentence by substituting the replaceable words with synonyms from WordNet (Miller, 1995).",
"However, synonym replacement can only generate limited diversified patterns.",
"Wu et al. (2019) formulated the text data augmentation as a Conditional Masked Language Modeling (C-MLM) task and proposed a Conditional BERT (CBERT) where segmentation embeddings of BERT (Devlin et al., 2019) are replaced with the annotated label during augmentation.",
"This method seems to be able to generate label-compatible sentences, yet it does not consider the target information for stance detection.",
"Moreover, CBERT can hardly be extended to other pre-trained language models that do not use segmentation embeddings in inputs, and cannot be applied to the multi-target stance detection due to the inability to encode two stance labels in segmentation embeddings.",
"Wei and Zou (2019) proposed a simple effective method that uses operations such as random deletion or swap to help train more robust model.",
"However, similar to the above methods, it fails to take target information into considerations.",
"Another commonly used strategy for augmentation is back-translation (Yu et al., 2018), however, it is less controllable and may change the target information unpredictably.",
"Inspired by the recent advances of applying auxiliary sentence to aspect-based sentiment analysis (Sun et al., 2019) and the task of recognising agreement and disagreement between stances (Xu et al., 2019), in this paper, we propose an Auxiliary Sentence based Data Augmentation (ASDA) method that generates target-relevant and label-consistent data samples based on the C-MLM task.",
"Specifically, we fine-tune a pre-trained BERTweet (Nguyen et al., 2020) model through C-MLM task in which the masked word is conditioned on both its context and the prepended auxiliary sentence that contains target and label information.",
"The same task is also adopted in the augmentation stage to generate data samples.",
"Besides, we propose a simple Target Replacement (TR) method that generates target-aware sentence by replacing a target mention in a sentence with the other.",
"Our contributions include the following:",
"1) In this paper, we propose a novel data augmentation method called ASDA.",
"As far as we know, this is the first attempt to explore the conditional data augmentation of stance detection.",
"Our proposed ASDA significantly outperforms strong baselines on three different stance detection datasets with 11 targets in total, demonstrating its effectiveness.",
"Experimental results show that prepending auxiliary sentence contributes to the performance gain;",
"2) We further propose a simple yet effective method called Target Replacement (TR) that achieves highly competitive performance even without fine-tuning during the augmentation;",
"3) Our proposed ASDA can be also employed on other baseline to help improve the performance, which indicates that ASDA is not tailored to specific model.",
"Most previous studies on stance detection focused on the detection of stance from text that contains expressions of stance towards one single target, i.e., single-target stance detection.",
"Mohammad et al. (2016) presented the SemEval-2016 dataset that contains 5 independent targets, e.g., Legalization of Abortion and Hillary Clinton.",
"Conforti et al. (2020) constructed WT-WT, a financial dataset on which the task is to detect whether two companies (e.g., Cigna and Express Scripts) will merge or not.",
"Inspired by the attention mechanism (Bahdanau et al., 2015), various target-specific attention-based approaches (Du et al., 2017; Sun et al., 2018; Wei et al., 2018b; Li and Caragea, 2019; Siddiqua et al., 2019; Sobhani et al., 2019) were proposed to connect the target with the sentence representation.",
"Moreover, gated mechanism (Dauphin et al., 2017) and BERT (Devlin et al., 2019) have drawn a lot attention these years and achieved promising performance on aspect-based sentiment analysis (Xue and Li, 2018; Huang and Carley, 2018).",
"We used the models from Du et al. (2017), Huang and Carley (2018) and Devlin et al. (2019) as strong base classifiers for our evaluation.",
"Sobhani et al. (2017) introduced the multi-target stance detection task and presented the Multi-Target stance dataset.",
"The task is to detect the stances toward two presidential candidates (e.g., Donald Trump and Ted Cruz) simultaneously.",
"They also proposed an attention-based encoder-decoder (Seq2Seq) model that predicts stance labels by focusing on different parts of a tweet.",
"Wei et al. (2018a) proposed a dynamic memory network for detecting stance.",
"We used the above three datasets (Mohammad et al., 2016; Sobhani et al., 2017; Conforti et al., 2020) for our evaluation.",
"One of the main challenges for stance detection tasks is the scarcity of annotated training data, which is costly to obtain.",
"Therefore, data augmentation becomes appealing, particularly when the training models become increasingly large.",
"Generative models are commonly used for data augmentation in previous studies, including variational autoencoders (VAE) (Kingma and Welling, 2014), generative adversarial networks (GAN) (Goodfel-low et al., 2014) and pre-trained language generation models (Anaby-Tavor et al., 2020; Li et al., 2020; Kumar et al., 2020).",
"Besides, Sennrich et al. (2016) and Yu et al. (2018) generated the data by using back-translation, which first translates the English sentence into another language (e.g., French) and then translates it back to English.",
"Another commonly used way for data augmentation is to substitute local words.",
"Zhang et al. (2015) and Wang and Yang (2015) substituted the replaceable words with synonyms from WordNet (Miller, 1995) and Word2Vec (Mikolov et al., 2013), respectively.",
"Kobayashi (2018) proposed a contextual data augmentation method.",
"A bidirectional language model is used to predict the word given the context surrounding the original word.",
"Wu et al. (2019) formulated the text data augmentation as a C-MLM task, retrofitting BERT (Devlin et al., 2019) to predict the masked word based on its context and annotated label.",
"Wei and Zou (2019) boosted the performance on text classification by using simple operations such as random deletion or insertion, and received substantial attention from the research community recently.",
"However, the augmentation methods mentioned above mostly focus on the sentence-level natural language processing tasks and the resulting augmented sentence can either change the stance toward the given target unexpectedly or generate only limited diverse patterns for stance detection tasks.",
"Suppose a given training dataset of size n is D train = { ( x i , t i , y i ) } ni =1 where x i = [ x 1 i , x 2 i , ..., x li ] is a sequence of l words, t i is the corresponding target and y i { 1 , ..., c } is the label.",
"The objective of our data augmentation task is to generate an augmented sentence x i that is consistent with the target t i and label y i .",
"Note that t i = [ t 1 i , t 2 i ] and y i = [ y 1 i , y 2 i ] for multi-target stance detection, which makes the augmentation task more challenging.",
"Previous conditional data augmentation methods such as (Wu et al., 2019) could generate target-unaware data samples and cannot be applied to the multi-target stance detection task.",
"In this paper, we propose an Auxiliary Sentence based Data Augmentation (ASDA) method that can generate target-relevant and label-consistent data samples based on the C-MLM task.",
"ASDA generates augmented sentence by predicting the masked word that is conditioned on both its context and the auxiliary sentence.",
"We propose the following method to construct the auxiliary sentence.",
"ASDA: Given a training sample E 1 , we prepend both another training sample E 2 with the same target and label as E 1 and the description sentence that contains target and label information to E 1 .",
"The complete sentence is: The authors of the following tweets are both [Label] [Target].",
"The first tweet is: E 2 .",
"The second tweet is: E 1 .",
"The sentences before E 1 are the auxiliary sentences we construct.",
"Target and Label are the target name and stance label with regard to the given training sample.",
"E 2 that contains the same target and stance label with E 1 is sampled from the training dataset.",
"Specifically, suppose we are given a training example in the SemEval-2016 dataset: We all have a duty to protect the sanctity of life.",
"Target: Legalization of Abortion ; Label: Against .",
"We can have the following masked words and auxiliary sentences in fine-tuning or augmentation stage: The authors of the following tweets are both against to legalization of abortion.",
"The first tweet is: Every human life is worth the same, and worth saving.",
"The second tweet is: We all have a [MASK] to protect the [MASK] of life.",
"Target: Legalization of Abortion ; Label: Against .",
"With the auxiliary sentence, the masked word is not only conditioned on its context in the second tweet, but also conditioned on the first tweet of same target Legalization of Abortion and label Against.",
"We expect the agreement between stances to ben-efit the data augmentation by adding a reference sentence E 2 .",
"The introduction of the E 2 not only generates more diversified samples for fine-tuning the pre-trained language model, but also provides a strong guideline to help generate target-relevant and label-compatible sentences in the augmentation stage.",
"Moreover, ASDA is not tailored to specific model because it does not rely on the model architecture, and thus can be easily extended to different language models.",
"BERTweet (Nguyen et al., 2020) is a large-scale language model pre-trained on 850M English tweets.",
"BERTweet follows the training procedure of RoBERTa (Liu et al., 2019) and uses the same model configuration with BERT-base (Devlin et al., 2019).",
"We fine-tune the pre-trained BERTweet via C-MLM on stance detection tasks.",
"The fine-tuning step is summarized in Algorithm 1.",
"Note that words of auxiliary sentence A are never masked (see Algorithm 1 lines 4-6) because we want to preserve all target and label information.",
"After fine-tuning the BERTweet on the training dataset for a few epochs, we use the well-trained model for augmentation.",
"Similar to the fine-tuning procedure as shown in Algorithm 1, for a training sentence s from D train , we randomly mask words in s and prepend the corresponding auxiliary sentence A to obtain the masked sentence s .",
"Then, the BERTweet model is used to predict the masked words and we repeat these steps over all training data to get D train .",
"The above steps can be implemented multiple times with different masked positions, and hence, different augmented samples can be generated from the original training dataset.",
"Finally, we merge the D train with D train and perform classification task on this combined dataset.",
"Besides ASDA, we propose a Target Replacement (TR) method to increase the size of the training set by replacing a target mention in a sentence with the other, which improves model robustness so that meaningful lexical patterns are learned by the model instead of learning undesirable correlation between a target and its contexts.",
"In case a target is mentioned more than once, we continue to replace the target until all targets are replaced.",
"Hash-tags and mentions that contain target information (e.g., #Cigna) are also considered for replacement.",
"Consider the following example in single-target stance detection: #CI Shareholders vote to approve merger Cigna and Express Scripts.",
"Target: Cigna and Express Scripts ; Label: Support .",
"After applying TR, we have: #ESRX Shareholders vote to approve merger Express Scripts and Cigna.",
"Target: Cigna and Express Scripts ; Label: Support .",
"CI and ESRX represent Cigna and Express Scripts, respectively.",
"TR can be also applied to the multi-target stance detection with minor changes.",
"Consider the following example: #Cruz supporters want people to think his words alone are good enough.",
"#Don-aldTrump has created jobs and businesses we need in this country.",
"Target1: Donald Trump ; Target2: Ted Cruz ; Label1: Favor ; Label2: Against .",
"TR could potentially generate contradictory content with the labels if we only replace the target mentions since the task is to detect the stances toward two different targets simultaneously.",
"Therefore, we replace the target mentions and swap the stance labels for multi-target stance detection.",
"Consider the same example as above after applying the target replacement and label swap: #DonaldTrump supporters want people to think his words alone are good enough.",
"#Cruz has created jobs and businesses we need in this country.",
"Target1: Donald Trump ; Target2: Ted Cruz ; Label1: Against ; La-bel2: Favor .",
"In this section, we first describe three stance detection datasets used for evaluation and several baseline methods of data augmentation and stance detection.",
"Then, we introduce the evaluation metrics and report the experimental results.",
"stance dataset (Sobhani et al., 2017).",
"Three stance detection datasets, the SemEval-2016 dataset (Mohammad et al., 2016), the WT-WT financial dataset (Conforti et al., 2020) and the Multi-Target election dataset (Sobhani et al., 2017), are used to evaluate the performance of augmentation methods.",
"The SemEval-2016 dataset and WT-WT dataset are both single-target stance datasets and the third dataset is a multi-target stance dataset, which contains stances toward two targets in each tweet.",
"Summary statistics of three datasets are shown in Tables 2, 3, 4, respectively.",
"SemEval-2016 SemEval-2016 is a benchmark dataset containing five different targets: Atheism, Climate Change is a Real Concern, Feminist Movement, Hillary Clinton and Legalization of Abortion.",
"The dataset is annotated for detecting whether the author is against to, neutral or in favor of a given target.",
"We split the train set in a 5:1 ratio into train and validation sets and removed the target Climate Change because of the limited and highly skewed data.",
"The test set of each target is the same as provided by the authors.",
"WT-WT WT-WT is a financial dataset and the task aims at detecting the stance toward mergers and acquisition operations between companies.",
"This dataset consists of four target pairs in the healthcare domain and each data is annotated with four labels (refute, comment, support and unre-lated).",
"Multi-Target Multi-Target stance dataset consists of three sets of tweets corresponding to target pairs: Donald Trump and Hillary Clinton, Donald Trump and Ted Cruz, Hillary Clinton and Bernie Sanders.",
"The task aims at detecting the stances (against, none or favor) toward two targets for each data.",
"We used the train, validation and test sets as provided by the authors.",
"We compare the proposed augmentation methods with the following baselines:",
"Synonym Replacement (SR): A data augmentation method that randomly replaces words with their synonyms from WordNet.",
"EDA (Wei and Zou, 2019): A simple data augmentation method that consists of four operations: synonym replacement, random deletion, random swap and random insertion.",
"BT (Yu et al., 2018): A back-translation method that first translates the English sentence into French and then translates back to English.",
"CBERT (Wu et al., 2019): A C-MLM method that generates label-compatible words by replacing the segmentation embeddings of BERT with label embeddings.",
"PGCNN (Huang and Carley, 2018): A parameterized convolutional neural network that uses target-sensitive filters and gated mechanism to incorporate the target information.",
"TAN (Du et al., 2017): An attention-based LSTM model that extracts target specific features.",
"BERT (Devlin et al., 2019): A pre-trained language model that predicts the stance by appending a linear classification layer to the hidden representation of [ CLS ] token.",
"We fine-tune the BERT-base on various stance detection tasks.",
"The proposed methods are listed as follows: Target Replacement (TR): A method that replaces target words with the other.",
"CBERT-ASDA: The CBERT that uses our proposed auxiliary sentences during fine-tuning and augmentation.",
"ASDA-base: A variation of ASDA that only prepends the description sentence to the given training sample.",
"The complete sentence is: The author of the following tweet is [Label] [Target].",
"E 1 .",
"ASDA: The full method that uses both description and reference sentences as auxiliary sentences during fine-tuning and augmentation.",
"F avg is adopted to evaluate the performance of the proposed model.",
"First, the F1-score of label Favor and Against is calculated as follows: F favor = 2 P favor R favor P favor + R favor (1) F against = 2 P against R against P against + R against (2) where P and R are precision and recall respectively.",
"After that, the F avg is calculated as: F avg = F favor + F against 2 (3) We calculate the F avg for each target.",
"The same evaluation metric was used in SemEval-2016 dataset and Multi-Target stance datasets.",
"To be consistent with the previous work, we evaluate the performance of augmentation methods on WT-WT dataset by using the same evaluation metric F avg , which is calculated by averaging the F1-scores of label Support and Refute.",
"Moreover, we get avgF 1 by calculating the average of F avg across all targets for each dataset.",
"We use the pre-trained uncased BERTweet model for fine-tuning and augmentation under the PyTorch framework.",
"When fine-tuning, the batch Method CI_ESRX AET_HUM CVS_AET ANTM_CI avgF 1 PGCNN 71.34 77.43 73.73 71.70 73.55 +SR 71.96 77.25 73.54 71.78 73.63 +EDA 70.97 77.28 73.85 71.90 73.50 +BT 71.57 77.59 74.17 71.56 73.72 +CBERT 71.57 77.31 73.53 71.59 73.50 +TR 73.51 77.85 75.42 72.57 74.84 +CBERT-ASDA 73.02 78.65 74.30 72.19 74.54 +ASDA-base 71.61 77.97 73.77 72.14 73.87 +ASDA 74.25 78.36 74.63 72.63 74.97 TAN 68.39 76.06 69.83 68.72 70.75 +SR 67.88 75.46 70.37 69.02 70.68 +EDA 68.02 75.40 69.86 69.06 70.59 +BT 67.69 75.19 70.57 67.99 70.36 +CBERT 68.55 75.75 70.89 68.88 71.02 +TR 68.02 75.85 69.66 69.10 70.66 +CBERT-ASDA 70.40 76.35 71.50 69.87 72.03 +ASDA-base 67.19 76.29 70.83 69.32 70.91 +ASDA 70.13 77.53 71.73 70.18 72.39 BERT 71.12 78.47 75.28 74.11 74.75 +SR 73.24 78.57 75.65 73.80 75.32 +EDA 73.44 78.51 75.85 73.98 75.45 +BT 72.30 77.47 75.97 73.80 74.89 +CBERT 72.83 77.99 75.49 73.66 74.99 +TR 74.17 78.80 76.30 74.24 75.88 +CBERT-ASDA 74.58 78.95 76.46 74.46 76.11 +ASDA-base 72.49 78.76 76.09 74.01 75.34 +ASDA 75.45 78.99 76.41 74.48 76.33 Table 6: Performance comparisons of applying different augmentation methods to the base model on the WT-WT stance dataset.",
"size is 32, maximum sequence length is 128, learning rate is 2e-5 and proportion of sentence to mask is 15%.",
"For classification, we train our PGCNN and TAN models using a mini-batch of 128 and the learning rate of Adam optimizer (Kingma and Ba, 2015) is 1e-3.",
"Maximum sequence length is 50 and word vectors are initialized using fastText embeddings (Bojanowski et al., 2017) with dimension 300.",
"For BERT classifier, we fine-tune the pre-trained BERT to predict the stance by appending a linear classification layer to the hidden representation of the [ CLS ] token.",
"The maximum sequence length is set to 128 and the learning rate is 2e-5.",
"We generate one augmented sentence for each training data, doubling the original train set in size for fair comparison.",
"Experimental results on SemEval-2016, WT-WT and Multi-Target datasets are shown in Tables 5, 6 and 7, respectively.",
"Bold scores are best two results for each classifier.",
"Each result is the average of ten runs with different initializations.",
"Since CBERT and TR cannot be applied to the Multi-Target and SemEval-2016 datasets, respectively, we didn't report the results of these methods.",
"First, we can observe that our proposed ASDA performs the best in avgF 1 on almost all datasets.",
"Moreover, ASDA has better performance than ASDA-base on all targets, demonstrating the effectiveness of adding reference sentences.",
"Second, CBERT can be only used in single-target stance detection tasks due to the segmentation embeddings.",
"In contrast, ASDA-base that achieves similar performance with CBERT can be applied to all datasets, which indicates that constructing auxiliary sentence contributes to the C-MLM task.",
"Third, Tables 5 and 6 show that constructing the auxiliary sentence can not only perform well on the BERTweet model, but also help improve the baseline CBERT, indicating that our proposed method is not tailored to specific masked language model.",
"Fourth, TR achieves promising improvements on WT-WT and Multi-Target datasets, outperforming the EDA in the average of avgF 1 on three classifiers by 0.61% and 1.54%, respectively.",
"Further comparison between TR and Random Swap of EDA is discussed later in this section.",
"At last, we can observe that improvements brought by the baselines are limited on three datasets, verifying that target-based stance detection tasks are more challenging.",
"We further explore the effect of the auxiliary sentence by comparing the proposed ASDA with other Prepending based Data Augmentation (PDA) (Schick and Schtze, 2020; Kumar et al., 2020) in which no description sentence is constructed and the complete sentence is: [Label] [Target] E 1 .",
"Moreover, we consider the reference sample E 2 as mentioned in Section 3.2.1 for PDA and the complete sentence is [Label] [Target] E 2 E 1 .",
"Comparison results on SemEval-2016 dataset are shown in Table",
"8. We can observe that both ASDA and PDA-ASDA show better performance over their base models, which indicates that the reference sentence contributes to the performance improvement and our proposed method is not tailored to specific auxiliary sentence.",
"We compare the proposed methods with other augmentation methods in Table",
"9. We can observe that both ASDA and TR consider the target information during augmentation.",
"However, TR cannot be applied to SemEval-2016 dataset because unlike WT-WT dataset that corresponds to the merger of two target companies, only single target is available in SemEval-2016 dataset.",
"swaps their positions.",
"However, Random Swap can potentially generate augmented sentences that contain contradictory content with the labels.",
"Since TR shares similar features with Random Swap by swapping the target mentions in some cases, we compare our proposed TR with Random Swap on WT-WT and Multi-Target datasets in Table",
"10. The results show that TR achieves better performance on 6, 5 and 4 targets for PGCNN, TAN and BERT, respectively, demonstrating the effectiveness of this method.",
"Note that TR does not perform well on the target pair Clinton-Sanders; one possible reason is that there is more target-related information in this target pair.",
"Since only target words (e.g., Hillary Clinton) are swapped in TR, target-related words like feminism and Benghazi still appear in the same position in the generated sentence, which may lead to the inconsistency of target information.",
"In this section, we present several augmented examples in Table 11 to show the effectiveness of our",
"proposed methods.",
"Synonym Replacement, Random Deletion and Random Swap of EDA are applied to the targets Feminist Movement, Cigna and Express Scripts and Donald Trump and Ted Cruz, respectively.",
"We can observe that the generated words of ASDA and TR are more consistent with the target and label information.",
"In contrast, the augmented words of baseline methods especially EDA could be incompatible with the labels of the original sentences.",
"In this paper, we presented two data augmentation methods, called ASDA and TR, for stance detection.",
"Different from the existing augmentation methods that are either unaware of target information or hard to be applied to different stance detection tasks, ASDA performs better in generating target-relevant and label-compatible sentences and can be easily applied to various tasks.",
"Results show that ASDA can not only achieve best performance on BERTweet model but also help improve the existing augmentation method such as CBERT.",
"Unlike other rule-based word replacement methods that may produce undesirable correlation between a target and its contexts, TR replaces a target mention with the other, generating qualified sentences with meaningful lexical patterns.",
"In addition, both ASDA and TR will be applicable if we need to detect the stances toward more than two targets simultaneously in the future.",
"Future work includes extending the proposed methods to various directions, e.g., argument mining, aspect-based sentiment analysis and hate-speech detection, and generating more diversified samples through conditional generation.",
"This work is partially supported by the NSF Grants IIS-1912887 and IIS-1903963.",
"We thank our reviewers for their insightful comments."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"objective",
"objective",
"objective",
"objective",
"abstain",
"objective",
"objective",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other"
] |
[
"We explore the utilities of explicit negative examples in training neural language models.",
"Negative examples here are incorrect words in a sentence, such as barks in * The dogs barks .",
"Neural language models are commonly trained only on positive examples, a set of sentences in the training data, but re-cent studies suggest that the models trained in this way are not capable of robustly handling complex syntactic constructions, such as long-distance agreement.",
"In this paper, we first demonstrate that appropriately using negative examples about particular constructions (e.g., subject-verb agreement) will boost the model's robustness on them in English, with a negligible loss of perplexity.",
"The key to our success is an additional margin loss between the log-likelihoods of a correct word and an incorrect word.",
"We then provide a detailed analysis of the trained models.",
"One of our findings is the difficulty of object-relative clauses for RNNs.",
"We find that even with our direct learning signals the models still suffer from resolving agreement across an object-relative clause.",
"Augmentation of training sentences involving the constructions somewhat helps, but the accuracy still does not reach the level of subject-relative clauses.",
"Although not directly cognitively appealing, our method can be a tool to analyze the true architectural limitation of neural models on challenging linguistic constructions.",
"Despite not being exposed to explicit syntactic supervision, neural language models (LMs), such as recurrent neural networks, are able to generate fluent and natural sentences, suggesting that they induce syntactic knowledge about the language to some extent.",
"However, it is still under debate whether such induced knowledge about grammar is robust enough to deal with syntactically challenging constructions such as long-distance subject-verb agreement.",
"So far, the results for RNN language models (RNN-LMs) trained only with raw text are overall negative; prior work has reported low performance on the challenging test cases (Mar-vin and Linzen, 2018) even with the massive size of the data and model (van Schijndel et al., 2019), or argue the necessity of an architectural change to track the syntactic structure explicitly (Wilcox et al., 2019b; Kuncoro et al., 2018).",
"Here the task is to evaluate whether a model assigns a higher likelihood on a grammatically correct sentence (1a) over an incorrect sentence (1b) that is minimally different from the original one (Linzen et al., 2016).",
"In this paper, to obtain a new insight into the syntactic abilities of neural LMs, in particular RNN-LMs, we perform a series of experiments under a different condition from the prior work.",
"Specifi-cally, we extensively analyze the performance of the models that are exposed to explicit negative examples.",
"In this work, negative examples are the sentences or tokens that are grammatically incorrect, such as (1b) above.",
"Since these negative examples provide a direct learning signal on the task at test time it may not be very surprising if the task performance goes up.",
"We acknowledge this, and argue that our motivation for this setup is to deepen understanding, in particular the limitation or the capacity of the current architectures, which we expect can be reached with such strong supervision.",
"Another motivation is engineering: we could exploit negative examples in different ways, and establishing a better way will be of practical importance toward building an LM or generator that can be robust on particular linguistic constructions.",
"The first research question we pursue is about this latter point: what is a better method to utilize negative examples that help LMs to acquire robustness on the target syntactic constructions?",
"Regarding this point, we find that adding additional token-level loss trying to guarantee a margin between log-probabilities for the correct and incorrect words (e.g., log p ( laughs | h ) and log p ( laugh | h ) for (1a)) is superior to the alternatives.",
"On the test set of Marvin and Linzen (2018), we show that LSTM language models (LSTM-LMs) trained by this loss reach near perfect level on most syntactic constructions for which we create negative examples, with only a slight increase of perplexity about 1.0 point.",
"Past work conceptually similar to us is Enguehard et al. (2017), which, while not directly exploiting negative examples, trains an LM with additional explicit supervision signals to the evaluation task.",
"They hypothesize that LSTMs do have enough capacity to acquire robust syntactic abilities but the learning signals given by the raw text are weak, and show that multi-task learning with a binary classification task to predict the upcoming verb form (singular or plural) helps models aware of the target syntax (subject-verb agreement).",
"Our experiments basically confirm and strengthen this argument, with even stronger learning signals from negative examples, and we argue this allows us to evaluate the true capacity of the current architectures.",
"In our experiments (Section 4), we show that our margin loss achieves higher syntactic performance than their multi-task learning.",
"Another relevant work on the capacity of LSTM-LMs is Kuncoro et al. (2019), which shows that by distilling from syntactic LMs (Dyer et al., 2016), LSTM-LMs can improve their robustness on various agreement phenomena.",
"We show that our LMs with the margin loss outperform theirs in most of the aspects, further strengthening the argument about a stronger capacity of LSTM-LMs.",
"The latter part of this paper is a detailed analysis of the trained models and introduced losses.",
"Our second question is about the true limitation of LSTM-LMs: are there still any syntactic constructions that the models cannot handle robustly even with our direct learning signals?",
"This question can be seen as a fine-grained one raised by Enguehard et al. (2017) with a stronger tool and improved evaluation metric.",
"Among tested constructions, we find that syntactic agreement across an object relative clause (RC) is challenging.",
"To inspect whether this is due to the architectural limitation, we train another LM on a dataset, on which we unnaturally augment sentences involving object RCs.",
"Since it is known that object RCs are relatively rare compared to subject RCs (Hale, 2001), frequency may be the main reason for the lower performance.",
"Interestingly, even when increasing the number of sentences with an object RC by eight times (more than twice of sentences with a subject RC), the accuracy does not reach the same level as agreement across a subject RC.",
"This result suggests an inherent difficulty in tracking a syntactic state across an object RC for sequential neural architectures.",
"We finally provide an ablation study to understand the encoded linguistic knowledge in the models learned with the help of our method.",
"We experiment under reduced supervision at two different levels: (1) at a lexical level, by not giving negative examples on verbs that appear in the test set; (2) at a construction level, by not giving negative examples about a particular construction, e.g., verbs after a subject RC.",
"We observe no huge score drops by both.",
"This suggests that our learning signals at a lexical level (negative words) strengthen the abstract syntactic knowledge about the target constructions, and also that the models can generalize the knowledge acquired by negative examples to similar constructions for which negative examples are not explicitly given.",
"The result also implies that negative examples do not have to be complete and can be noisy, which will be appealing from an engineering perspective.",
"The most common evaluation metric of an LM is perplexity.",
"Although neural LMs achieve impressive perplexity (Merity et al., 2018), it is an average score across all tokens and does not inform the models' behaviors on linguistically challenging structures, which are rare in the corpus.",
"This is the primary motivation to separately evaluate the models' syntactic robustness by a different task.",
"As introduced in Section 1, the task for a model is to assign a higher probability to the grammatical sentence over the ungrammatical one, given a pair of minimally different sentences at a critical position affecting the grammaticality.",
"For example, (1a) and (1b) only differ at a final verb form, and to assign a higher probability to (1a), models need to be aware of the agreement dependency between author and laughs over an RC.",
"Marvin and Linzen (2018) test set While initial work (Linzen et al., 2016; Gulordava et al., 2018) has collected test examples from naturally occurring sentences, this approach suffers from the coverage issue, as syntactically challenging examples are relatively rare.",
"We use the test set compiled by Marvin and Linzen (2018), which consists of synthetic examples (in English) created by a fixed vocabulary and a grammar.",
"This approach allows us to collect varieties of sentences with complex structures.",
"The test set is divided by the syntactic constructions appearing in each example.",
"Many constructions are different types of subject-verb agreement, including local agreement on different sentential positions (2), and non-local agreement across different types of phrases.",
"Intervening phrases include prepositional phrases, subject RCs, object RCs, and coordinated verb phrases (3).",
"(1) is an example of agreement across an object RC.",
"(2) The senators smile/*smiles.",
"and are/*is twenty three years old.",
"Previous work has shown that non-local agreement is particularly challenging for sequential neural models (Marvin and Linzen, 2018).",
"The other patterns are reflexive anaphora dependencies between a noun and a reflexive pronoun (4), and on negative polarity items (NPIs), such as ever , which requires a preceding negation word (e.g., no and none ) at an appropriate scope (5): (4) The authors hurt themselves/*himself.",
"(5) No/*Most authors have ever been popular.",
"Note that NPI examples differ from the others in that the context determining the grammaticality of the target word (No/*Most) does not precede it.",
"Rather, the grammaticality is determined by the following context.",
"As we discuss in Section 3, this property makes it difficult to apply training with negative examples for NPIs for most of the methods studied in this work.",
"All examples above (15) are actual test sentences, and we can see that since they are synthetic some may sound somewhat unnatural.",
"The main argument behind using this dataset is that even not very natural, they are still strictly grammatical, and an LM equipped with robust syntactic abilities should be able to handle them as a human would do.",
"We use the original test set used in Marvin and Linzen (2018).",
"1 See the supplementary materials of this for the lexical items and example sentences in each construction.",
"Training data Following the practice, we train LMs on the dataset not directly relevant to the test set.",
"Throughout the paper, we use an English Wikipedia corpus assembled by Gulordava et al. (2018), which has been used as training data for the present task (Marvin and Linzen, 2018; Kuncoro et al., 2019), consisting of 80M/10M/10M tokens for training/dev/test sets.",
"It is tokenized and rare words are replaced by a single unknown token, amounting to the vocabulary size of 50,000.",
"Baseline LSTM-LM Since our focus in this paper is an additional loss exploiting negative examples (Section 3), we fix the baseline LM throughout the experiments.",
"Our baseline is a three-layer LSTM-LM with 1,150 hidden units at internal layers trained with the standard cross-entropy loss.",
"Word embeddings are 400-dimensional, and input and output embeddings are tied (Inan et al., 2016).",
"Deviating from some prior work (Mar-vin and Linzen, 2018; van Schijndel et al., 2019), we train LMs at sentence level as in sequence-to-sequence models (Sutskever et al., 2014).",
"This setting has been employed in some previous work (Kuncoro et al., 2018, 2019).",
"2 Parameters are optimized by SGD.",
"For regularization, we apply dropout on word embeddings and outputs of every layer of LSTMs, with weight decay of 1.2e-6, and anneal the learning rate by 0.5 if the validation perplexity does not improve successively, checking every 5,000 mini-batches.",
"Mini-batch size, dropout weight, and initial learning rate are tuned by perplexity on the dev set of Wikipedia dataset.",
"3 Note that we tune these values for the baseline LSTM-LM and fix them across the experiments.",
"1 We use the EMNLP2018 templates in https://github.com/BeckyMarvin/LM syneval.",
"2 On the other hand, the LSTM-LM of Marvin and Linzen (2018), which is prepared by Gulordava et al. (2018), is trained at document level through truncated backpropagation through time (BPTT) (Mikolov et al., 2011).",
"Since our training regime is more akin to the task setting of syntactic evaluation, it may provide some advantage at test time.",
"3 Following values are found: mini-batch size: 128; initial learnin rate: 20.0; dropout weight on the word embedding layer and each output layer of LSTM: 0.1.",
"The size of our three-layer LM is the same as the state-of-the-art LSTM-LM at document-level (Merity et al., 2018).",
"Marvin and Linzen (2018)'s LSTM-LM is two-layer with 650 hidden units and word embeddings.",
"Comparing two, since the word embeddings of our models are smaller (400 vs. 650) the total model sizes are comparable (40M for ours vs. 39M for theirs).",
"Nonetheless, we will see in the first experiment that our carefully tuned three-layer model achieves much higher syntactic performance than their model (Section 4), being a stronger baseline to our extensions, which we introduce next.",
"Now we describe four additional losses for exploiting negative examples.",
"The first two are existing ones, proposed for a similar purpose or under a different motivation.",
"As far as we know, the latter two have not appeared in past work.",
"4 We note that we create negative examples by modifying the original Wikipedia training sentences, not sentences in the test set.",
"As a running example, let us consider the case where sentence (6a) exists in a mini-batch, from which we create a negative example (6b).",
"(6)",
"a. An industrial park with several companies is located in the close vicinity.",
"b. * An industrial park with several companies are located in the close vicinity.",
"Notations By a target word, we mean a word for which we create a negative example (e.g., is ).",
"We distinguish two types of negative examples: a negative token and a negative sentence ; the former means a single incorrect word (e.g., are ), while the latter means an entire ungrammatical sentence.",
"Binary-classification loss This is proposed by Enguehard et al. (2017) to complement a weak inductive bias in LSTM-LMs for learning syntax.",
"It is multi-task learning across the cross-entropy loss ( L lm ) and an additional loss ( L add ): L = L lm + L add , (1) where is a relative weight for L add .",
"Given outputs of LSTMs, a linear and binary softmax layers 4 The loss for large-margin language models (Huang et al., 2018) is similar to our sentence-level margin loss.",
"Whereas their formulation is more akin to the standard large-margin setting, aiming to learn a reranking model, our margin loss is simpler, just comparing two log-likelihoods of predefined positive and negative sentences.",
"predict whether the next token is singular or plural.",
"L add is a loss for this classification, only defined for the contexts preceding a target token x i : L add = (cid:88) x 1: i h log p ( num ( x i ) | x 1: i 1 ) , where x 1: i = x 1 x i is a prefix sequence and h is a set of all prefixes ending with a target word (e.g., An industrial park with several companies is ) in the training data.",
"num ( x ) { singular, plural } is a function returning the number of x .",
"In practice, for each mini-batch for L lm , we calculate L add for the same set of sentences and add these two to obtain a total loss for updating parameters.",
"As we mentioned in Section 1, this loss does not exploit negative examples explicitly; essentially a model is only informed of a key position (target word) that determines the grammaticality.",
"This is rather an indirect learning signal, and we expect that it does not outperform the other approaches.",
"Unlikelihood loss This is recently proposed (Welleck et al., 2020) for resolving the repetition issue, a known problem for neural text generators (Holtzman et al., 2019).",
"Aiming at learning a model that can suppress repetition, they introduce an unlikelihood loss, which is an additional loss at a token level and explicitly penalizes choosing words previously appeared in the current context.",
"We customize their loss for negative tokens x i (e.g., are in (6b)).",
"Since this loss is added at token-level, instead of Eq.",
"1 the total loss is L lm , which we modify as: (cid:88) x D (cid:88) x i x log p ( x i | x 1: i 1 ) + (cid:88) x i neg t ( x i ) g ( x i ) , g ( x i ) = log(1 p ( x i | x 1: i 1 )) , where neg t ( ) returns negative tokens for a target x i .",
"5 controls the weight.",
"x is a sentence in the training data D .",
"The unlikelihood loss strengthens the signal to penalize undesirable words in a context by explicitly reducing the likelihood of negative tokens x i .",
"This is a more direct learning signal than the binary classification loss.",
"a different loss, in which the likelihoods for correct and incorrect sentences are more tightly coupled.",
"As in 5 Empty for non-target tokens.",
"It may return multiple tokens sometimes, e.g., themselves { himself, herself } .",
"where is a margin value between the log-likelihood of original sentence x and negative sentences { x j } .",
"neg s ( ) returns a set of negative sentences by modifying the original one.",
"Note that we change only one token for each x j , and thus may obtain multiple negative sentences from one x when it contains multiple target tokens (e.g., she leaves there but comes back ... ).",
"6 Comparing to the unlikelihood loss, not only decreasing the likelihood of a negative example, this loss tries to guarantee a certain difference between the two likelihoods.",
"The learning signal of this loss seems stronger in this sense; however, the token-level supervision is missing, which may provide a more direct signal to learn a clear contrast between correct and incorrect words.",
"This is an empirical problem we pursue in the experiments.",
"Token-level margin loss Our final loss is a combination of the previous two, by replacing g ( x i ) in the unlikelihood loss by a margin loss: g ( x i ) = max(0 , (log p ( x i | x 1: i 1 ) log p ( x i | x 1: i 1 )) .",
"We will see that this loss is the most advantageous in the experiments (Section 4).",
"Each method employs a few additional hyperparam-eters ( for the binary classification loss, for the unlikelihood loss, and for the margin losses).",
"We preliminary select and from { 1 , 10 , 100 , 1000 } that achieve the best average syntactic performance and find = 1 and = 1000 .",
"For the two margin losses, we fix = 1 .",
"0 and = 1 .",
"0 and only see the effects of margin value .",
"6 In principle, one can cumulate this loss within a single mini-batch for L lm as we do for the binary-classification loss.",
"However, obtaining L add needs to run an LM entirely on negative sentences as well, which demands a lot of GPU memories.",
"We avoid this by separating mini-batches for L lm and L add .",
"We precompute all possible pairs of ( x , x j ) and create a mini-batch by sampling from them.",
"We make the batch size for L add (the number of pairs) as the half of that for L lm , to make the number of sentences contained in both kinds of batches equal.",
"Finally, in each epoch, we only sample at most the half mini-batches of those for L lm to reduce the total amount of training time.",
"Since our goal is to understand to what extent LMs can be sensitive to the target syntactic constructions by giving explicit supervision via negative examples, we only prepare negative examples on the constructions that are directly tested at evaluation.",
"Specifically, we mark the following words in the training data, and create negative examples: Present verb To create negative examples on subject-verb agreement, we mark all present verbs and change their numbers.",
"These two are both related to the syntactic number of a target word.",
"For binary classification we regard both as a target word, apart from the original work that only deals with subject-verb agreement (Enguehard et al., 2017).",
"We use a single common linear layer for both constructions.",
"In this work, we do not create negative examples for NPIs.",
"This is mainly for technical reasons.",
"Among four losses, only the sentence-level margin loss can correctly handle negative examples for NPIs, essentially because other losses are token-level.",
"For NPIs, left contexts do not have information to decide the grammaticality of the target token (a quantifier; no, most, etc.) (Section 2.1).",
"Instead, in this work, we use NPI test cases as a proxy to see possible negative (or positive) impacts as compensation for specially targeting some constructions.",
"We will see that in particular for our margin losses, such negative effects are very small.",
"We first see the overall performance of baseline LSTM-LMs as well as the effects of additional losses.",
"Throughout the experiments, for each setting, we train five models from different random seeds and report the average score and standard deviation.",
"The code is available at https://github.com/aistairc/lm syntax negative.",
"7 We use Stanford tagger (Toutanova et al., 2003) to find the present verbs.",
"We change the number of verbs tagged by VBZ or VBP using inflect.py (https://pypi.org/project/inflect/).",
"notice that our baseline LSTM-LM (Section 2.2) performs much better than Marvin and Linzen (2018)'s LM.",
"A similar observation is recently made by Kuncoro et al. (2019).",
"8 This suggests that the original work underestimates the true syntactic ability induced by LSTM-LMs.",
"The table also shows the results by their distilled LSTM-LM from RNNGs (Section 1).",
"Higher margin value is effective For the two types of margin loss, which margin value should we use?",
"Figure 1 reports average accuracies within the same types of constructions.",
"For both token and sentence-levels, the task performance increases along , but a too large value (15) causes a nega-8 We omit the comparison but the scores are overall similar.",
"tive effect, in particular on reflexive anaphora.",
"Increases (degradations) of perplexity are observed in both methods but this effect is much smaller for the token-level loss.",
"In the following experiments, we fix the margin value to 10 for both, which achieves the best syntactic performance.",
"Which additional loss works better?",
"We see a clear tendency that our token-level margin loss achieves overall better performance.",
"Unlikelihood loss does not work unless we choose a huge weight parameter ( = 1000 ), but it does not outperform ours, with a similar value of perplexity.",
"The improvements by binary-classification loss are smaller, indicating that the signals are weaker than other methods with explicit negative exam-0.1M 0.37M 0.5M 0.8M # ORCs 75 80 85 90 95 100 A cc u r a c y o n ' A c r o ss a n ORC ' with that (all cases) LSTM-LM margin (sent.) margin (token) 0.1M 0.37M 0.5M 0.8M # ORCs with that (animate only) 0.1M 0.37M 0.5M 0.8M # ORCs no that (all cases) 0.1M 0.37M 0.5M 0.8M # ORCs no that (animate only) Figure 2: Accuracies on Across an ORC (with and without complementizer that) by models trained on augmented data with additional sentences containing an object RC.",
"ples.",
"Sentence-level margin loss is conceptually advantageous in that it can deal with any type of sentence-level grammaticality including NPIs.",
"We see that it is overall competitive with token-level margin loss but suffers from a larger increase of perplexity (4.9 points), which is observed even with smaller margin values (Figure 1).",
"Understanding the cause of this degradation as well as alleviating it is an important future direction.",
"In Table 1, the accuracies on dependencies across an object RC are relatively low.",
"The central question in this experiment is whether this low performance is due to the limitation of current architectures, or other factors such as frequency.",
"We base our discussion on the contrast between object (7) and subject (8) RCs: (7) The authors (that) the chef likes laugh.",
"Importantly, the accuracies for a subject RC are more stable, reaching 99.8% with the token-level margin loss, although the content words used in the examples are common.",
"9 It is known that object RCs are less frequent than subject RCs (Hale, 2001; Levy, 2008), and it could be the case that the use of negative examples still does not fully alleviate this factor.",
"Here, to understand the true limitation of the current LSTM architecture, we try to eliminate such other factors as much as possible under a controlled experiment.",
"9 Precisely, they are not the same.",
"Examples of object RCs are divided into two categories by the animacy of the main subject ( animate or not), while subject RCs only contain animate cases.",
"If we select only animate examples from object RCs the vocabularies for both RCs are the same, remaining only differences in word order and inflection, as in (7, 8).",
"Setup We first inspect the frequencies of object and subject RCs in the training data, by parsing them with the state-of-the-art Berkeley neural parser (Kitaev and Klein, 2018).",
"In total, while subject RCs occur 373,186 times, object RCs only occur 106,558 times.",
"We create three additional training datasets by adding sentences involving object RCs to the original Wikipedia corpus (Sec-tion 2.2).",
"To this end, we randomly pick up 30 million sentences from Wikipedia (not overlapped to any sentences in the original corpus), parse by the same parser, and filter sentences containing an object RC, amounting to 680,000 sentences.",
"We create augmented training sets by adding a subset, or all of these sentences to the original training sentences.",
"Among the test cases about object RCs we only report accuracies on subject-verb agreement, on which the portion for subject RCs also exists.",
"This allows us to compare the difficulties of two types of RCs for the present models.",
"We also evaluate on animate only subset, which has a correspondence to the test cases for subject RCs with only differences in word order and inflection (like (7) and (8); see footnote 9).",
"Of particular interest to us is the accuracy on these animate cases.",
"We expect that the main reason for lower performance for object RCs is due to frequency, and with our augmentation the accuracy will reach the same level as that for subject RCs.",
"Results However, for both all and animate cases, accuracies are below those for subject RCs (Fig-ure 2).",
"Although we see improvements from the original score (93.7), the highest average accuracy by the token-level margin loss on the animate subset is 97.1 (with that), not beyond 99%.",
"This result indicates some architectural limitations of LSTM-LMs in handling object RCs robustly at a near perfect level.",
"Answering why the accuracy Across a PP 80 85 90 95 100 A cc u r a c y Across a SRC Across an ORC Long VP coord.",
"does not reach (almost) 100%, perhaps with other empirical properties or inductive biases (Khandel-wal et al., 2018; Ravfogel et al., 2019) is future work.",
"One distinguishing property of our margin loss, in particular token-level loss, is that it is highly lexical, making a contrast explicitly between correct and incorrect words.",
"This direct signal may make models acquire very specialized knowledge about each target word, not very generalizable one across similar words and occurring contexts.",
"In this section, to get insights into the transferability of syntactic knowledge induced by our margin losses, we provide an ablation study by removing certain negative examples during training.",
"Setup We perform two kinds of ablation.",
"For token-level ablation (-TOKEN ), we avoid creating negative examples for all verbs that appear as a target verb 10 in the test set.",
"Another is construction-level (-PATTERN ), by removing all negative examples occurring in a particular syntactic pattern.",
"We ablate a single construction at a time for PATTERN , from four non-local subject-verb dependencies (across a prepositional phrase (PP), sub-10 swim, smile, laugh, enjoy, hate, bring, interest, like, write, admire, love, know , and is .",
"ject RC, object RC, and long verb phrase (VP)).",
"11 We hypothesize that models are less affected by token-level ablation, as knowledge transfer across words appearing in similar contexts is promoted by language modeling objective.",
"We expect that construction-level supervision would be necessary to induce robust syntactic knowledge, as perhaps different phrases, e.g., a PP and a VP, are processed differently.",
"Results Figure 3 is the main results.",
"Across models, we restrict the evaluation on four nonlocal dependency constructions, which we select as ablation candidates as well.",
"For a model with -PATTERN , we evaluate only on examples of construction ablated in training (see caption).",
"To our surprise, both -TOKEN and -PATTERN have similar effects, except Across an ORC, on which the degradation by -PATTERN is larger.",
"This may be related to the inherent difficulty of object RCs for LSTM-LMs that we verified in Section 5.",
"For such particularly challenging constructions, models may need explicit supervision signals.",
"We observe lesser score degradation by ablating prepositional phrases and subject RCs.",
"This suggests that, for example, the syntactic knowledge strengthened for prepositional phrases with negative examples could be exploited to learn the syntactic patterns about 11 We identify all these cases from the parsed training data, which we prepared for the analysis in Section 5.",
"subject RCs, even when direct learning signals on subject RCs are missing.",
"We see approximately 10.0 points score degradation on long VP coordination by both ablations.",
"Does this mean that long VPs are particularly hard in terms of transferability?",
"We find that the main reasons for this drop, relative to other cases, are rather technical, essentially due to the target verbs used in the test cases.",
"See Table 2, 3, which show that failed cases for the ablated models are often characterized by the existence of either like or likes .",
"Excluding these cases (other verbs in Table 3), the accuracies reach 99.2 and 98.0 by -TOKEN and -PATTERN , respectively.",
"These verbs do not appear as a target verb in the test cases of other tested constructions.",
"This result suggests that the transferability of syntactic knowledge to a particular word may depend on some characteristics of that word.",
"We conjecture that the reason for weak transferability to likes and like is that they are polysemous; e.g., in the corpus, like is much more often used as a preposition and being used as a present tense verb is rare.",
"This type of issue due to frequency may be one reason for lessening the transferability.",
"In other words, like can be seen as a challenging verb to learn its usage only from the corpus, and our margin loss helps for such cases.",
"Our results with explicit negative examples are overall positive.",
"We have demonstrated that models exposed to these examples at training time in an appropriate way will be capable of handling the targeted constructions at near perfect level except a few cases.",
"We found that our new token-level margin loss is superior to the other approaches and the remaining challenging cases are dependencies across an object relative clause.",
"Object relative clauses are known to be harder for a human as well, and our results may indicate some similarities in the sentence processing behaviors by a human and RNN, though other studies also find some dissimilarities between them (Linzen and Leonard, 2018; Wilcox et al., 2019a).",
"The difficulty of object relative clauses for RNN-LMs has also been observed in the prior work (Marvin and Linzen, 2018; van Schijndel et al., 2019).",
"A new insight provided by our study is that this difficulty holds even after alleviating the frequency effects by augmenting the target structures along with direct supervision signals.",
"This indicates that RNNs might inherently suffer from some memory limitation like a human subject, for which the difficulty of particular constructions, including center-embedded object relative clauses, are known to be incurred due to memory limitation (Gibson, 1998; Demberg and Keller, 2008) rather than purely frequencies of the phenomena.",
"In terms of language acquisition, the supervision provided in our approach can be seen as direct negative evidence (Marcus, 1993).",
"Since human learners are known to acquire syntax without such direct feedback we do not claim that our proposed learning method itself is cognitively plausible.",
"One limitation of our approach is that the scope of negative examples has to be predetermined and fixed.",
"Alleviating this restriction is an important future direction.",
"Though it is challenging, we believe that our final analysis for transferability, which indicates that the negative examples do not have to be complete and can be noisy, suggests a possibility of a mechanism to induce negative examples themselves during training, perhaps relying on other linguistic cues or external knowledge.",
"We would like to thank Naho Orita and the members of Computational Psycholinguistics Tokyo for their valuable suggestions and comments.",
"This paper is based on results obtained from projects commissioned by the New Energy and Industrial Technology Development Organization (NEDO)."
] | [
"objective",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"abstain",
"result",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"result",
"result",
"method",
"abstain",
"method",
"result",
"abstain",
"result",
"method",
"abstain",
"abstain",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"other",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"result",
"objective",
"objective",
"result",
"abstain",
"objective",
"abstain",
"result",
"objective",
"abstain",
"abstain",
"method",
"other",
"other"
] |
[
"Propaganda can be defined as a form of communication that aims to influence the opinions or the actions of people towards a specific goal; this is achieved by means of well-defined rhetorical and psychological devices.",
"Propaganda, in the form we know it today, can be dated back to the beginning of the 17th cen-tury.",
"However, it is with the advent of the Internet and the social media that it has started to spread on a much larger scale than before, thus becoming major societal and political issue.",
"Nowadays, a large fraction of propaganda in social media is multimodal, mixing textual with visual content.",
"With this in mind, here we propose a new multi-label multimodal task: detecting the type of propaganda techniques used in memes.",
"We further create and release a new corpus of 950 memes, carefully annotated with 22 propaganda techniques, which can appear in the text, in the image, or in both.",
"Our analysis of the corpus shows that understanding both modalities together is essential for detecting these techniques.",
"This is further confirmed in our experiments with several state-of-the-art multimodal models.",
"Social media have become one of the main communication channels for information dissemination and consumption, and nowadays many people rely on them as their primary source of news (Perrin, 2015).",
"Despite the many benefits that social media offer, sporadically they are also used as a tool, by bots or human operators, to manipulate and to mislead unsuspecting users.",
"Propaganda is one such communication tool to influence the opinions and the actions of other people in order to achieve a predetermined goal (IPA, 1938).",
"Propaganda is not new.",
"It can be traced back to the beginning of the 17th century, as reported in (Margolin, 1979; Casey, 1994; Martino et al., 2020), where the manipulation was present at public events such as theaters, festivals, and during games.",
"In the current information ecosystem, it has evolved to computational propaganda (Wool-ley and Howard, 2018; Martino et al., 2020), where information is distributed through technological means to social media platforms, which in turn make it possible to reach well-targeted communities at high velocity.",
"We believe that being aware and able to detect propaganda campaigns would contribute to a healthier online environment.",
"Propaganda appears in various forms and has been studied by different research communities.",
"There has been work on exploring network structure, looking for malicious accounts and coordinated inauthentic behavior (Cresci et al., 2017; Yang et al., 2019; Chetan et al., 2019; Pacheco et al., 2020).",
"In the natural language processing community, propaganda has been studied at the document level (Barron-Cedeno et al., 2019; Rashkin et al., 2017), and at the sentence and the fragment levels (Da San Martino et al., 2019).",
"There have also been notable datasets developed, including ( i ) TSHP-17 (Rashkin et al., 2017), which consists of document-level annotation labeled with four classes (trusted, satire, hoax, and propaganda); ( ii ) QProp (Barron-Cedeno et al., 2019), which uses binary labels (propaganda vs. non-propaganda), and ( iii ) PTC (Da San Martino et al., 2019), which uses fragment-level annotation and an inventory of 18 propaganda techniques.",
"While that work has focused on text, here we aim to detect propaganda techniques from a multimodal perspective.",
"This is a new research direction, even though large part of propagandistic social media content nowadays is multimodal, e.g., in the form of memes.",
"They can easily become viral, and thus it is important to detect malicious ones quickly, and also to understand the nature of propaganda, which can help human moderators, but also journalists, by offering them support for a higher level analysis.",
"Figure 1 shows some examples of memes 1 and propaganda techniques.",
"Example",
"(a) applies transfer , using symbols (hammer and sickle) and colors (red), that are commonly associated with communism, in relation to the two Republicans shown in the image; it also uses Name Calling ( traitors , Moscow Mitch , Moscow's bitch ).",
"The meme in",
"(b) uses both Smears and Glittering Generalities .",
"The one in",
"(c) expresses Smears and suggest that Joe Biden's campaign is only alive because of mainstream media.",
"The examples in the second row show some less common techniques.",
"Example",
"(d) uses Appeal to authority to give credibility to a statement that rich politicians are crooks, and there is also a Thought-terminating clich e used to discourage critical thought on the statement in the form of the phrase WE KNOW , thus implying that the Clintons are crooks, which is also Smears .",
"1 In order to avoid potential copyright issues, all memes we show in this paper are our own recreation of existing memes, using images with clear licenses.",
"Then, example",
"(e) uses both Appeal to (Strong) Emotions and Flag-waving as it tries to play on patriotic feelings.",
"Finally, example",
"(f) has Reduction ad hitlerum as Ilhan Omars' actions are related to such of a terrorist (which is also Smears ; moreover, the word HATE expresses Loaded language ).",
"The above examples illustrate that propaganda techniques express shortcuts in the argumentation process, e.g., by leveraging on the emotions of the audience or by using logical fallacies to influence it.",
"Their presence does not necessarily imply that the meme is propagandistic.",
"Thus, we do not annotate whether a meme is propagandistic (just the propaganda techniques it contains), as this would require, among other things, to determine its intent.",
"Our contributions can be summarized as follows: We formulate a new multimodal task: propaganda detection in memes, and we discuss how it relates and differs from previous work.",
"We develop a multi-modal annotation schema, and we create and release a new dataset for the task, consisting of 950 memes, which we manually annotate with 22 propaganda techniques.",
"2 2 The corpus and the code used in our experiments are available at https://github.com/di-dimitrov/ propaganda-techniques-in-memes .",
"We perform manual analysis, and we show that both modalities (text and images) are important for the task.",
"We experiment with several state-of-the-art textual, visual, and multimodal models, which further confirm the importance of both modalities, as well as the need for further research.",
"Computational Propaganda Computational propaganda is defined as the use of automatic approaches to intentionally disseminate misleading information over social media platforms (Woolley and Howard, 2018).",
"The information that is distributed over these channels can be textual, visual, or multi-modal.",
"Of particular importance are memes, which can be quite effective at spreading multimodal propaganda on social media platforms (Diresta, 2018).",
"The current information ecosystem and virality tools, such as bots, enable memes to spread easily, jumping from one target group to another.",
"As of present, attempts to limit the spread of such memes have focused on analyzing social networks and looking for fake accounts and bots to reduce the spread of such content (Cresci et al., 2017; Yang et al., 2019; Chetan et al., 2019; Pacheco et al., 2020).",
"Textual Content Most research on propaganda detection has focused on analyzing textual content (Barron-Cedeno et al., 2019; Rashkin et al., 2017; Da San Martino et al., 2019; Martino et al., 2020).",
"Rashkin et al. (2017) developed the TSHP-17 corpus, which uses document-level annotation and is labeled with four classes: trusted , satire , hoax , and propaganda .",
"TSHP-17 was developed using distant supervision, i.e., all articles from a given news outlet share the label of that outlet.",
"The articles were collected from the English Gigaword corpus and from seven other unreliable news sources.",
"Among them two were propagandistic.",
"They trained a model using word n -gram representation with logistic regression and reported that the model performed well only on articles from sources that the system was trained on.",
"Barron-Cedeno et al. (2019) developed a new corpus, QProp , with two labels: propaganda vs. non-propaganda.",
"They also experimented on TSHP-17 and QProp corpora, where for the TSHP-17 corpus, they binarized the labels: propaganda vs. any of the other three categories.",
"They performed massive experiments, investigated writing style and readability level, and trained models using logistic regression and SVMs.",
"Their findings confirmed that using distant supervision, in conjunction with rich representations, might encourage the model to predict the source of the article, rather than to discriminate propaganda from non-propaganda.",
"Similarly, Habernal et al. (2017, 2018) developed a corpus with 1.3k arguments annotated with five fallacies, including ad hominem , red herring , and irrelevant authority , which directly relate to propaganda techniques.",
"A more fine-grained propaganda analysis was done by Da San Martino et al. (2019).",
"They developed a corpus of news articles annotated with 18 propaganda techniques.",
"The annotation was at the fragment level, and enabled two tasks: ( i ) binary classification given a sentence in an article, predict whether any of the 18 techniques has been used in it; ( ii ) multi-label multi-class classification and span detection task given a raw text, identify both the specific text fragments where a propaganda technique is being used as well as the type of the technique.",
"On top of this work, they proposed a multi-granular deep neural network that captures signals from the sentence-level task and helps to improve the fragment-level classifier.",
"Subsequently, a system was developed and made publicly available (Da San Martino et al., 2020).",
"Multimodal Content Previous work has explored the use of multimodal content for detecting misleading information (Volkova et al., 2019), deception (Glenski et al., 2019), emotions and propaganda (Abd Kadir et al., 2016), hateful memes (Kiela et al., 2020; Lippe et al., 2020; Das et al., 2020), antisemitism (Chandra et al., 2021) and propaganda in images (Seo, 2014).",
"Volkova et al. (2019) proposed models for detecting misleading information using images and text.",
"They developed a corpus of 500,000 Twitter posts consisting of images labeled with six classes: disinformation, propaganda, hoaxes, conspiracies, clickbait, and satire.",
"Then, they modeled textual, visual, and lexical characteristics of the text.",
"Glenski et al. (2019) explored multilingual multimodal content for deception detection.",
"They had two multi-class classification tasks: ( i ) classifying social media posts into four categories (propaganda, conspiracy, hoax, or clickbait), and ( ii ) classifying social media posts into five categories (disinformation, propaganda, conspiracy, hoax, or clickbait).",
"Multimodal hateful memes have been the target of the popular Hateful Memes Challenge, which the participants addressed using fine-tuned state-of-art multi-modal transformer models such as ViLBERT (Lu et al., 2019), Multimodal Bitransformers (Kiela et al., 2019), and VisualBERT (Li et al., 2019) to classify hateful vs. not-hateful memes (Kiela et al., 2020).",
"Lippe et al. (2020) explored different early-fusion multimodal approaches and proposed various methods that can improve the performance of the detection systems.",
"Our work differs from the above research in terms of annotation, as we have a rich inventory of 22 fine-grained propaganda techniques, which we annotate separately in the text and then jointly in the text+image, thus enabling interesting analysis as well as systems for multi-modal propaganda detection with explainability capabilities.",
"Propaganda comes in many forms and over time a number of techniques have emerged in the literature (Torok, 2015; Miller, 1939; Da San Martino et al., 2019; Shah, 2005; Abd Kadir and Sauf-fiyan, 2014; IPA, 1939; Hobbs, 2015).",
"Different authors have proposed inventories of propaganda techniques of various sizes: seven techniques (Miller, 1939), 24 techniques Weston (2018), 18 techniques (Da San Martino et al., 2019), just smear as a technique (Shah, 2005), and seven techniques (Abd Kadir and Sauffiyan, 2014).",
"We adapted the techniques discussed in (Da San Martino et al., 2019), (Shah, 2005) and (Abd Kadir and Sauffiyan, 2014), thus ending up with 22 propaganda techniques.",
"Among our 22 techniques, the first 20 are used for both text and images, while the last two Appeal to (Strong) Emotions and Transfer are reserved for labeling images only.",
"Below, we provide the definitions of these techniques, which are included in the guidelines the annotators followed (see appendix A.2) for more detal.",
"1. Loaded language: Using specific words and phrases with strong emotional implications (ei-ther positive or negative) to influence an audience.",
"2. Name calling or labeling: Labeling the object of the propaganda campaign as something that the target audience fears, hates, finds undesirable or loves, praises.",
"3. Doubt: Questioning the credibility of someone or something.",
"4. Exaggeration / Minimisation: Either representing something in an excessive manner: making things larger, better, worse (e.g., the best of the best , quality guaranteed ) or making something seem less important or smaller than it really is (e.g., saying that an insult was actually just a joke).",
"5. Appeal to fear / prejudices: Seeking to build support for an idea by instilling anxiety and/or panic in the population towards an alternative.",
"In some cases, the support is built based on preconceived judgements.",
"6. Slogans: A brief and striking phrase that may include labeling and stereotyping.",
"Slogans tend to act as emotional appeals.",
"7. Whataboutism: A technique that attempts to discredit an opponent's position by charging them with hypocrisy without directly disproving their argument.",
"8. Flag-waving: Playing on strong national feeling (or to any group; e.g., race, gender, political preference) to justify or promote an action or an idea.",
"9. Misrepresentation of someone's position (Straw man): Substituting an opponent's proposition with a similar one, which is then refuted in place of the original proposition.",
"10. Causal oversimplification: Assuming a single cause or reason when there are actually multiple causes for an issue.",
"This includes transferring blame to one person or group of people without investigating the complexities of the issue.",
"11. Appeal to authority: Stating that a claim is true simply because a valid authority or expert on the issue said it was true, without any other supporting evidence offered.",
"We also include here the special case where the reference is not an authority or an expert, which is referred to as Testimonial in the literature.",
"12. Thought-terminating cliche: Words or phrases that discourage critical thought and meaningful discussion about a given topic.",
"They are typically short, generic sentences that offer seemingly simple answers to complex questions or that distract the attention away from other lines of thought.",
"13. Black-and-white fallacy or dictatorship: Presenting two alternative options as the only possibilities, when in fact more possibilities exist.",
"As an the extreme case, tell the audience exactly what actions to take, eliminating any other possible choices (Dictatorship).",
"14. Reductio ad hitlerum: Persuading an audience to disapprove an action or an idea by suggesting that the idea is popular with groups hated in contempt by the target audience.",
"It can refer to any person or concept with a negative connotation.",
"15. Repetition: Repeating the same message over and over again, so that the audience will eventually accept it.",
"16. Obfuscation, Intentional vagueness, Confusion: Using words that are deliberately not clear, so that the audience may have their own interpretations.",
"For example, when an unclear phrase with multiple possible meanings is used within an argument and, therefore, it does not support the conclusion.",
"17. Presenting irrelevant data (Red Herring): Introducing irrelevant material to the issue being discussed, so that everyone's attention is diverted away from the points made.",
"18. Bandwagon Attempting to persuade the target audience to join in and take the course of action because everyone else is taking the same action. 19. Smears: A smear is an effort to damage or call into question someone's reputation, by propounding negative propaganda.",
"It can be applied to individuals or groups.",
"20. Glittering generalities (Virtue): These are words or symbols in the value system of the target audience that produce a positive image when attached to a person or an issue.",
"21. Appeal to (strong) emotions: Using images with strong positive/negative emotional implications to influence an audience.",
"22. Transfer: Also known as association , this is a technique that evokes an emotional response by projecting positive or negative qualities (praise or blame) of a person, entity, object, or value onto another one in order to make the latter more acceptable or to discredit it.",
"We collected memes from our own private Facebook accounts, and we followed various Facebook public groups on different topics such as vaccines, politics (from different parts of the political spec-trum), COVID-19, gender equality, and more.",
"We wanted to make sure that we have a constant stream of memes in the newsfeed.",
"We extracted memes at different time frames, i.e., once every few days for a period of three months.",
"We also collected some old memes for each group in order to make sure we covered a larger variety of topics.",
"We annotated the memes using the 22 propaganda techniques described in Section 3 in a multilabel setup.",
"The motivation for multilabel annotation is that the content in the memes often expresses multiple techniques, even though such a setting adds complexity both in terms of annotation and of classification.",
"We also chose to consider annotating spans because the propaganda techniques can appear in the different chunk(s), which is also in line with recent research (Da San Martino et al., 2019).",
"We could not consider annotating the visual modality independently because all memes contain the text as part of the image.",
"The annotation team included six members, both female and male, all fluent in English, with qualifi-cations ranging from undergrad, to MSc and PhD degrees, including experienced NLP researchers; this helped to ensure the quality of the annotation.",
"No incentives were provided to the annotators.",
"The annotation process required understanding the textual and the visual content, which poses a great challenge for the annotator.",
"Thus, we divided it into five phases, as discussed below and as shown in Figure 2. Among these phases there were three stages, ( i ) pilot annotations to train the annotators to recognize the propaganda techniques, ( ii ) inde-pendent annotations by three annotators for each meme (phase 2 and 4), ( iii ) consolidation (phase 3 and 5), where the annotators met with the other three team members, who acted as consolidators, and all six discussed every single example in detail (even those for which there was no disagreement).",
"We chose PyBossa 3 as our annotation platform as it provides the functionality to create a custom annotation interface that can fit our needs in each phase of the annotation.",
"Phase 1 is about filtering some of the memes according to our guidelines, e.g., low-quality memes, and such containing no propaganda technique.",
"We automatically extracted the textual content using OCR, and then post-edited it to correct for potential OCR errors.",
"We filtered and edited the text manually, whereas for extracting the text, we used the Google Vision API.",
"4 We presented the original meme and the extracted text to an annotator, who had to filter and to edit the text in phase 1 as shown in Figure 2. For filtering and editing, we defined a set of rules, e.g., we removed hard to understand, or low-quality images, cartoons, memes with no picture, no text, or for which the textual content was strongly dominant and the visual content was minimal and uninformative, e.g., a single-color background.",
"More details about filtering and editing are given in Appendix A.1.1 and A.1.2.",
"In phase 2, we presented the edited textual content of the meme to the annotators as shown in Figure 2. We asked the annotators to identify the propaganda techniques in the text and to select the corresponding text spans for each of them.",
"Phase 3 is the consolidation step of the annotations from phase 2 as shown in Figure 2. This phase was essential for ensuring the quality, and it further served as an additional training opportunity for the entire team, which we found very useful.",
"Step 4 is multimodal meme annotation, i.e., considering both the textual and the visual content in the meme.",
"In this phase, we show the meme, the post-edited text, and the consolidated propaganda labels from phase 3 (text only) to the annotators, as shown in phase 4 from Figure 2. We intentionally provided the consolidated text labels to the annotators in this phase because we wanted them to focus on the techniques that require the presence of the image rather than to reannotate those from the text.",
"5 4.1.5 Phase 5: Multimodal Consolidation This is the consolidation phase for Phase 4; the setup is like for the consolidation at Phase 3, as shown in Figure 2. Note that, in the majority of the cases, the main reason why two annotations of the same meme might differ was due to one of the annotators not spotting some of the techniques, rather than because there was a disagreement on what technique should be chosen for a given textual span or what the exact boundaries of the span for a given technique instance should be.",
"In the rare cases in which there was an actual disagreement and no clear conclusion could be reached during the discussion phase, we resorted to discarding the meme (there were five such cases in total).",
"5 Ideally, we would have wanted to have also a phase to annotate propaganda techniques when showing the image only; however, this is hard to do in practice as the text is embedded as part of the pixels in the image.",
"We assessed the quality of the annotations for the individual annotators from phases 2 and 4 (thus, combining the annotations for text and images) to the final consolidated labels at phase 5, following the setting in (Da San Martino et al., 2019).",
"Since our annotation is multilabel, we computed Krippendorff's , which supports multi-label agreement computation (Artstein and Poesio, 2008; Passon-neau, 2006).",
"The results are shown in Table 1 and indicate moderate to perfect agreement (Landis and Koch, 1977).",
"After the filtering in phase 1 and the final consolidation, our dataset consists of 950 memes.",
"The maximum number of sentences per meme is 13, but most memes comprise only very few sentences, with an average of 1.68.",
"The number of words ranges between 1 and 73 words, with an average of 17.79 11.60.",
"In our analysis, we observed that some propaganda techniques were more textual, e.g., Loaded Language and Name Calling , while others, such as Transfer , tended to be more image-related.",
"Table 2 shows the number of instances of each technique when using unimodal (text only, i.e., after phase",
"3) vs. multimodal (text + image, i.e., after phase 5) annotations.",
"Note also that a total of 36 memes had no propaganda technique annotated.",
"We can see that the most common techniques are Smears , Loaded Language , and Name calling/Labeling , covering 63%, 51%, and 36% of the examples, respectively.",
"These three techniques also form the most common pairs and triples in the dataset as shown in Table 3. We further show the distribution of the number of propaganda techniques per meme in Figure 3. We can see that most memes contain more than one technique, with a maximum of 8 and an average of 2.61.",
"Table 2 shows that the techniques can be found both in the textual and in the visual content of the meme, thus suggesting the use of multimodal learning approaches to effectively exploit all information available.",
"Note also that different techniques have different span lengths.",
"For example, Loaded Language is about two words long, e.g., violence , mass shooter , and coward .",
"However, techniques such as Whataboutism need much longer spans with an average length of 22 words.",
"Among the learning tasks that can be defined on our corpus, here we focus on the following one: given a meme, find all the propaganda techniques used in it, both in the text and in the image, i.e., predict the techniques as per phase 5.",
"We used two nave baselines.",
"First, a Random baseline, where we assign a technique uniformly at random.",
"Second, a Majority class baseline, which always predicts the most frequent class: Smears .",
"Unimodal: text only.",
"For the text-based unimodal experiments, we used BERT (Devlin et al., 2019), which is a state-of-the-art pre-trained Transformer, and fastText (Joulin et al., 2017), which can tolerate potentially noisy text from social media as it is trained on word and character n -grams.",
"Unimodal: image.",
"For the image-based unimodal experiments, we used ResNet152 (He et al., 2016), which was successfully applied in a related setup (Kiela et al., 2019).",
"Multimodal: unimodally pretrained For the multimodal experiments, we trained separate models on the text and on the image, BERT and ResNet152, respectively, and then we combined them using",
"(a) early fusion Multimodal Bitransformers (MMBT) (Kiela et al., 2019),",
"(b) middle fusion (feature concatenation), and",
"(c) late fusion (com-bining the predictions of the models).",
"For middle fusion, we took the output of the second-to-last layer of ResNet-152 for the visual part and the output of the [CLS] token from BERT, and we fed them into a multilayer network.",
"Multimodal: joint models.",
"We further experimented with models trained using a multimodal objective.",
"In particular, we used ViLBERT (Lu et al., 2019), which is pretrained on Conceptual Captions (Sharma et al., 2018), and Visual BERT (Lin et al., 2014), which is pretrained on the MS-COCO dataset (Lin et al., 2014).",
"We split the data into training, development, and testing with 687 (72%), 63 (7%), and 200 (21%) examples, respectively.",
"Since we are dealing with a multi-class multi-label task, where the labels are imbalanced, we chose micro-average F 1 as our main evaluation measure, but we also report macro-average F 1 .",
"We used the Multimodal Framework (MMF) (Singh et al., 2020).",
"We trained all models on Tesla P100-PCIE-16GB GPU with the following manually tuned hyper-parameters (on dev): batch size of 32, early stopping on the validation set optimizing for F1-micro, sequence length of 128, AdamW as an optimizer with learning rate of 5e-5, epsilon of 1e-8, and weight decay of 0.01.",
"All reported results are averaged over three runs with random seeds.",
"The average execution time for BERT was 30 minutes, and for the other models it was 55 minutes.",
"Table 4 shows the results for the models in Section 5.1.",
"Rows 1 and 2 show a random and a majority class baseline, respectively.",
"Rows 3-5 show the results for the unimodal models.",
"While they all outperform the baselines, we can see that the model based on visual modality only, i.e., ResNet-152 (row 3), performs worse than models based on text only (rows 4-5).",
"This might indicate that identifying the techniques in the visual content is a harder task than in texts.",
"Moreover, BERT significantly outperforms fastText, which is to be expected as it can capture contextual representation better.",
"Rows 6-8 present results for multimodal fusion models.",
"The best one is BERT + ResNet-152 (+2 points over fastText + ResNet-152).",
"We observe that early fusion models (rows 7-8) outperform late fusion ones (row 6).",
"This makes sense as late fusion is a simple mean of the results of each modality, while early fusion has a more complex architecture and trains a separate multi-layer perceptron for the visual and for the textual features.",
"We can also see that both mid-fusion models (rows 7-8) improve over the corresponding text-only ones (rows 3-5).",
"Finally, looking at the results in rows 9-11, we can see that each multimodal model consistently outperforms each of the unimodal models (rows 1-8).",
"The best results are achieved with ViLBERT CC (row 10) and VisualBERT COCO (row 11), which use complex representations that combine the textual and the visual modalities.",
"Overall, we can conclude that multimodal approaches are necessary to detect the use of propaganda techniques in memes, and that pretrained transformer models seem to be the most promising approach.",
"We have proposed a new multi-class multi-label multimodal task: detecting the type of propaganda techniques used in memes.",
"We further created and released a corpus of 950 memes annotated with 22 propaganda techniques, which can appear in the text, in the image, or in both.",
"Our analysis of the corpus has shown that understanding both modalities is essential for detecting these techniques, which was further confirmed in our experiments with several state-of-the-art multimodal models.",
"In future work, we plan to extend the dataset in size, including with memes in other languages.",
"We further plan to develop new multi-modal models, specifically tailored to fine-grained propaganda detection, aiming for deeper understanding of the semantics of the meme and of the relation between the text and the image.",
"A number of promising ideas have been already tried by the participants in a shared task based on this data at SemEval-2021 (Dimitrov et al., 2021), which can serve as an inspiration when developing new models.",
"User Privacy Our dataset only includes memes and it does not contain any user information.",
"Biases Any biases found in the dataset are unintentional, and we do not intend to do harm to any group or individual.",
"We note that annotating propaganda techniques can be subjective, and thus it is inevitable that there would be biases in our gold-labeled data or in the label distribution.",
"We address these concerns by collecting examples from a variety of users and groups, and also by following a well-defined schema, which has clear definitions.",
"Our high inter-annotator agreement makes us con-fident that the assignment of the schema to the data is correct most of the time.",
"Misuse Potential We ask researchers to be aware that our dataset can be maliciously used to unfairly moderate memes based on biases that may or may not be related to demographics and other information within the text.",
"Intervention with human moderation would be required in order to ensure this does not occur.",
"Intended Use We present our dataset to encourage research in studying harmful memes on the web.",
"We believe that it represents a useful resource when used in the appropriate manner.",
"This research is part of the Tanbih mega-project, 6 which is developed at the Qatar Computing Research Institute, HBKU, and aims to limit the impact of fake news, propaganda, and media bias by making users aware of what they are reading."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"result",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"other",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"other",
"result",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"other",
"method",
"method",
"method",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"objective",
"method",
"result",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"other"
] |
[
"In this paper, we aim to learn associations between visual attributes of fonts and the verbal context of the texts they are typically applied to.",
"Compared to related work leveraging the surrounding visual context, we choose to focus only on the input text as this can enable new applications for which the text is the only visual element in the document.",
"We introduce a new dataset, containing examples of different topics in social media posts and ads, labeled through crowd-sourcing.",
"Due to the subjective nature of the task, multiple fonts might be perceived as acceptable for an input text, which makes this problem challenging.",
"To this end, we investigate different end-to-end models to learn label distributions on crowd-sourced data and capture inter-subjectivity across all annotations.",
"In visual designs, textual information requires the use of fonts with different properties.",
"Whether it is books, magazines, flyers, ads or social media posts, different typefaces are commonly used to express non-verbal information and add more dimensions to the text.",
"An appropriate font usually embodies information about character, context and usage of the design (Doyle and Bottomley, 2006).",
"This motivates us to explore font associations with regular users in a crowd-sourced setting.",
"In other words, we investigate how users relate fonts to different characteristics of the input text.",
"Current font selection interfaces such as O'Donovan et al. (2014) and commercial online services (e.g., MyFonts 1 and Typekit 2 ) assist users in selecting fonts by taking into account font similarity.",
"However, they do not consider the verbal 1 www.myfonts.com 2 https://fonts.adobe.com/ context of the input text.",
"Having a better understanding of the input text, users can benefit from a font recommendation system during authoring, saving time and avoiding tedious exploration of long lists of fonts.",
"Most graphic designers agree that there is no strict or universally-accepted rule for choosing fonts.",
"Different social and personal factors can be involved in typeface selection, which makes this process subjective.",
"However, there seems to be enough agreement among human opinions to build reasonably effective models of font properties (O'Donovan et al., 2014; Shinahara et al., 2019).",
"Several empirical studies have directly explored the relationship between fonts and texts (Shinahara et al., 2019; Henderson et al., 2004; Mackiewicz, 2007).",
"For example, Brumberger (2003a) indicates that readers have strong opinions about the appropriateness of particular typefaces for particular text passages, and they can differentiate typeface/text mismatches.",
"In this study, we aim to model for the first time the associations between visual font attributes and textual context, with the final goal of better font recommendation during text composition.",
"Our main contributions are:",
"1) We propose and formulate a new task: font recommendation from written text.",
"2) We introduce a new dataset, Short Text Font Dataset , containing a variety of text examples annotated with ten different representative fonts.",
"3) We compare different end-to-end models that exploit contextual and emotional representations of the input text to recommend fonts.",
"These models are able to capture inter-subjectivity among all annotations by learning label distributions during the training phase.",
"We show that emotional representations can be successfully used to capture the underlying characteristics of sentences to suggest proper fonts.",
"Font-related studies have been extensively explored in graphic design literature.",
"Shinahara et al. (2019) performed an empirical study on collections of book titles and online ads, showcasing trends relating typographic design and genre.",
"Several previous studies have attempted to associate personality traits and fonts (O'Donovan et al., 2014; Brumberger, 2003b; Juni and Gross, 2008; Mackiewicz and Moeller, 2005; Amare and Manning, 2012).",
"They support the idea of typefaces consistently perceived to have particular personas, emotions, or tones.",
"More recently, FontLex (Kulahcioglu and De Melo, 2018) was the first to find the association between fonts and words by utilizing font-emotion and word-emotion relationships.",
"Instead of focusing on independent words, our proposed model suggests fonts by considering the broader context of the whole text.",
"Task Subjectivity In some tasks, aggregated annotations always correspond to the correct answer (Brew et al., 2010).",
"Therefore, to fully utilize the crowd's knowledge, different approaches have been proposed to aggregate labels, from simply applying majority voting to more sophisticated strategies to assess annotators' reliability (Yang et al., 2018; Srinivasan and Chander, 2019; Rodrigues et al., 2014).",
"All of these methods rely on the assumption that only one answer is correct and should be considered as ground truth (Nguyen et al., 2016).",
"Whereas in tasks like ours, sentiment analysis (Brew et al., 2010) or facial expression (Barsoum et al., 2016), the answer is likely to be more subjective due to its non-deterministic nature (Urkullu et al., 2019).",
"We follow previous studies that successfully employed label distribution learning to handle ambiguity in the annotations (Geng et al., 2013; Shirani et al., 2019; Yang et al., 2015).",
"The proposed dataset includes 1,309 short text instances from Adobe Spark 3 .",
"The dataset is a collection of publicly available sample texts created by different designers.",
"It covers a variety of topics found in posters, flyers, motivational quotes and advertisements.",
"4 3 https://spark.adobe.com.",
"4 The dataset along with the annotations can be found online: https://github.com/RiTUAL-UH/ Font-prediction-dataset Choice of Fonts A vast number of fonts and typefaces are used in contemporary printed literature.",
"To narrow down the task, we had a font expert select a set of 10 display fonts that cover a wide range of trending styles.",
"These fonts display enough differentiation in visual attributes and typical use cases to cover the topics in our text samples.",
"Figure 1 shows several examples from the dataset, each rendered with the most congruent font (font with the highest agreement).",
"Annotation Process In an MTurk experiment, we asked nine annotators to label each sample text by selecting their top three fonts (Figure 2).",
"Workers were asked to choose suitable fonts after read-ing the sentence.",
"We included carefully-designed quality questions in 10 percent of the hits to monitor the quality of our labeling.",
"We also needed to ensure workers selected fonts based on the comprehension of the text rather than just personal preference.",
"Therefore, we removed the annotations of workers who selected the same font more than 90 percent of the time, resulting in six to eight annotations per instance (we removed instances with fewer than six annotations).",
"As we mentioned earlier, we asked annotators to rank their top three font choices for each text in our dataset.",
"We decided to treat the first, second, and third choices differently as they represent the workers' priorities.",
"Therefore, we give the highest weight to the first choices (1.0) and lower weights (0.6) and (0.3) to the second and third choices, respectively.",
"Figure 3 shows three examples with label distributions over 10 fonts.",
"By comparing the label distributions of these examples, we can observe that formal' fonts like F0, F2, and F5 are often selected in business contexts (left).",
"mod-ern/display' fonts like F1, F3, and F8 are favored in more casual settings (center), and script' fonts like Figure 2: A text sample from the dataset rendered using the available 10 fonts for labelling.",
"We observe that some fonts are more popular than others.",
"Figure 4 shows the average label distribution over all instances.",
"F3, F2, and F1 are the most popular, while F4, F8, and F9 are the least popular among all 10 fonts.",
"Statistics The dataset contains 8,211 tokens.",
"The mean and standard deviation number of tokens per instance is 6.27 and 4.65, ranging from 1 to 27 tokens.",
"We obtained a Fleiss kappa agreement (Fleiss, 1971) of 0.348 by taking into account all three choices.",
"This value is reasonable for a task such as this since previous subjective tasks have also reported low inter-rater agreement scores (Salminen et al., 2018; Alonso et al., 2014).",
"We split up the data randomly into training (70%), development (10%) and test (20%) sets for further experimentation and evaluation.",
"Task Definition Given a piece of text X , we want to determine which font(s) y = { y 0 , ...y 9 } are more appropriate or congruent with the properties of the input text.",
"We formulate this problem as a ranking problem where the model assigns each font a real value d xy , representing the degree to which y describes X .",
"In other words, d xy represents the degree of congruency of font y with input X .",
"The values for all the labels are summed up to 1 to fully describe the instance (Geng, 2016).",
"We explore transfer learning from pre-trained models to improve the performance of our task.",
"We investigate four different deep learning-based architectures to learn font distributions of examples in our dataset.",
"Inspired by previous works, which supported the relationship between font and emotion (Section 2), we compare the effectiveness of emotional embeddings in our models to contextual embeddings like BERT.",
"5 GloVe-BiLSTM Model In this model, we use GloVe embeddings (Pennington et al., 2014) as input and a BiLSTM layer to encode word sequence information in forward and backward directions.",
"Subsequently, we pass the encoded-words to two dense layers for prediction.",
"NRC Model Similar to the GloVe-BiLSTM Model, this model is LSTM-based.",
"The difference is that instead of GloVe embeddings, we use the emotional representations of words from NRC 5 The implementation is available online: https:// github.com/RiTUAL-UH/Font_LDL_2020 Figure 5: Font-Emoji Pearson Correlation Coefficient Heatmap Emotion (Mohammad and Turney, 2013), Intensity (Mohammad, 2018b) and Valence, Arousal, and Dominance (VAD) (Mohammad, 2018a) lexicons as input to the model.",
"To efficiently look up the emotion value of words, we search for the stemmed and synonym versions of out-of-vocabulary words.",
"BERT Model We use pre-trained BERT sequence classification model (Devlin et al., 2018) to obtain contextual embeddings as features.",
"Then the output is fed to two dense layers yielding the class predictions.",
"We implement our model based on the Hugging Face's BERT implementation (Wolf et al., 2019).",
"Emoji Model In this model, we use the DeepMoji pre-trained model (Felbo et al., 2017) to generate emoji vectors by encoding the text into 2304-dimensional feature vectors.",
"We treat these features as embedding and pass them to the model with two dense layers.",
"Deepmoji 6 is a sentence-level model containing rich representations of emotional content which is trained on a 1,246 million tweet corpus in the emoji prediction task.",
"The Kullback-Leibler Divergence (KL-DIV) (Kull-back and Leibler, 1951) is used as the loss function to train the models.",
"KL-DIV measures how the predicted probability distribution is different from the ground truth probability distribution.",
"To train all the models, we use Adam optimizer (Kingma and Ba, 2014) to optimize the model parameters.",
"We run all models over four runs with different random seeds and report the averaged score to ensure stability.",
"The reported test results correspond to models with the best accuracy on the validation set.",
"Font Recall (FR) Less popular fonts could be underrepresented by the models.",
"Therefore we need an evaluation metric that measures the performance of models in learning individual labels.",
"Since we are dealing with an unbalanced dataset, motivated by evaluation methodology used in previous recommendation systems like Kar et al. (2018); Carneiro et al. (2007), we compute Font Recall, i.e. the average recall per font, to measure the performance of the models in learning individual labels.",
"F-score For each instance X from the test set, we select the top k = { 1 , 3 and 5 } fonts with the highest probabilities from both ground truth and prediction distributions.",
"Then we compute weighted averaged F1-score for each k .",
"Note that there are many cases where two or more fonts have the exact same probability.",
"In this case, if the model predicts either one of the labels, we consider it as a correct answer in both metrics.",
"Table 1 compares different models in terms of five evaluation settings.",
"The first two columns of the results show FR for the top 3 and 5 fonts.",
"The other three columns show F-score for the top 1, 3 and 5 fonts.",
"Comparing to the Majority Baseline, the results from the Emoji and BERT models are statistically significant under paired t-test with 95% confidence interval.",
"Although the BERT model performs slightly better than the rest, the Emoji model performs just as well, which suggests two things: (1) the font recommendation task is highly related to what emojis represent and",
"2) a simpler model like Emoji model can perform similarly to a complex solution like BERT.",
"We analyze the reason behind the effectiveness of the Emoji model by visualizing the Font-Emoji Pearson Correlation Coefficient Heatmap (Figure",
"5) in the training set.",
"Interestingly, fonts F4 and F9 with a Script' style are highly correlated by Heart' and Love' emojis.",
"Also, F3 with a Play-ful' style is negatively correlated with emojis with discomfort and mild irritation expressions.",
"Data Augmentation A well-established technique for automatic data augmentation is leveraging machine translation to find meaning-equivalent phrases in a single language (Mallinson et al., 2017; Coulombe, 2018).",
"To mitigate the highly imbal-anced class distribution in our data set, we tried overand under-sampling techniques.",
"We selected examples with high values in underrepresented classes and translated them to four non-English languages using Google Translate 7 .",
"We then translated these examples back to English, resulting in 170 more examples.",
"We also removed 50 instances with high values in the popular classes.",
"We observed that the data augmentation process has marginal improvements (up to 1%) in some models.",
"We leave the exploration of more sophisticated data augmentation approaches for future work.",
"In this paper, we associated font with written text and tackle the problem of font recommendation from the input text.",
"We collected more than 1,300 short written texts and annotated them with ten fonts.",
"We formulated this task as a ranking problem and compared different models based on emotional and contextual representations that exploit label distribution learning to predict fonts.",
"The current approach covers a fixed number of fonts, but it can be extended to support a larger set of fonts.",
"For example, we can use font similarity techniques and enable users to pick a group of 7 https://cloud.google.com/translate/docs/apis fonts, or to provide increased flexibility for the fonts available to users.",
"This research began during an internship at Adobe Research, and was sponsored in part by Adobe Research.",
"We thank the reviewers for their thoughtful comments and efforts towards improving our work.",
"We also thank Tim Brown for his help with font set selection."
] | [
"objective",
"objective",
"objective",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"objective",
"method",
"abstain",
"result",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"method",
"abstain",
"abstain",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"method",
"method",
"method",
"abstain",
"other",
"other",
"other",
"other"
] |
[
"Abstract The use of subword-level information (e.g., characters, character n-grams, morphemes) has become ubiquitous in modern word representation learning.",
"Its importance is attested especially for morphologically rich languages which generate a large number of rare words.",
"Despite a steadily increasing interest in such subword-informed word representations , their systematic comparative analysis across typologically diverse languages and different tasks is still missing.",
"In this work, we deliver such a study focusing on the variation of two crucial components required for subword-level integration into word representation models:",
"1) segmentation of words into subword units, and",
"2) subword composition functions to obtain final word representations.",
"We propose a general framework for learning subword-informed word representations that allows for easy experimentation with different segmentation and composition components, also including more advanced techniques based on position embeddings and self-attention.",
"Using the unified framework, we run experiments over a large number of subword-informed word representation configurations (60 in total) on 3 tasks (gen-eral and rare word similarity, dependency parsing, fine-grained entity typing) for 5 languages representing 3 language types.",
"Our main results clearly indicate that there is no one-size-fits-all configuration, as performance is both languageand task-dependent.",
"We also show that configurations based on unsupervised segmentation (e.g., BPE, Morfessor) are sometimes comparable to or even outperform the ones based on supervised word segmentation.",
"Word representations are central to a wide variety of NLP tasks (Collobert et al., 2011; Chen and Manning, 2014; Jia and Liang, 2016; Ammar et al., 2016; Goldberg, 2017; Peters et al., 2018;",
"Kudo, 2018, inter alia ).",
"Standard word representation models are based on the distributional hypothesis (Harris, 1954) and induce representations from large unlabeled corpora using word co-occurrence statistics (Mikolov et al., 2013; Pennington et al., 2014; Levy and Goldberg, 2014).",
"However, as pointed out by recent work (Bojanowski et al., 2017; Vania and Lopez, 2017; Pinter et al., 2017; Chaudhary et al., 2018; Zhao et al., 2018), mapping a finite set of word types into corresponding word representations limits the capacity of these models to learn beyond distributional information, which leads to several fundamental limitations.",
"The standard approaches ignore the internal structure of words, that is, the syntactic or semantic composition from subwords or morphemes to words, and are incapable of parameter sharing at the level of subword units.",
"Assigning only a single vector to each word causes the data sparsity problem, especially in resource-poor settings where huge amounts of training data cannot be guaranteed.",
"The issue is also prominent for morphologically rich languages (e.g., Finnish) with productive morphological systems that generate a large number of infrequent/rare words (Gerz et al., 2018).",
"Although potentially useful information on word relationships is hidden in their internal subword-level structure , 1 subword-agnostic word representation models do not take these structure features into account and are effectively unable to represent rare words accurately, or unseen words at all.",
"Therefore, there has been a surge of interest in subword-informed word representation architectures aiming to address these gaps.",
"A large number of architectures has been proposed in related research, and they can be clustered over the two main axes (Lazaridou et al., 2013; Luong et al., 2013; 1 For example, nouns in Finnish have 15 cases and 3 plural forms; Spanish verbs may contain over 40 inflected forms, sharing the lemma and taking up standard suffixes.",
"Qiu et al., 2014; Cotterell and Schutze, 2015; Wi-eting et al., 2016; Avraham and Goldberg, 2017; Vania and Lopez, 2017; Pinter et al., 2017; Cotterell and Schutze, 2018).",
"First, the models differ in the chosen method for segmenting words into subwords .",
"The methods range from fully supervised approaches (Cotterell and Schutze, 2015) to e.g. unsupervised approaches based on BPE (Heinzerling and Strube, 2018).",
"Second, another crucial aspect is the subword composition function used to obtain word embeddings from the embeddings of each word's constituent subword units.",
"Despite a steadily increasing interest in such subword-informed word representations, their systematic comparative analysis across the two main axes, as well as across typologically diverse languages and different tasks is still missing.",
"2 In this work, we conduct a systematic study of a variety of subword-informed word representation architectures that all can be described by a general framework illustrated by Figure 1.",
"The framework enables straightforward experimentation with prominent word segmentation methods (e.g., BPE, Morfessor, supervised segmentation systems) as well as subword composition functions (e.g., addition, self-attention), resulting in a large number of 2 A preliminary study of Vania and Lopez (2017) limits its focus on the use of subwords in the language modeling task.",
"different subword-informed configurations .",
"3 Our study aims at providing answers to the following crucial questions: Q1) How generalizable are subword-informed models across typologically diverse languages and across different downstream tasks?",
"Do different languages and tasks require different configurations to reach peak performances or is there a single best-performing configuration?",
"Q2) How important is it to choose an appropriate segmentation and composition method?",
"How effective are more generally applicable unsupervised segmentation methods?",
"Is it always better to resort to a supervised method, if available?",
"Q3) Is there a difference in performance with and without the full word representation?",
"Can more advanced techniques based on position embeddings and self-attention yield better task performance?",
"We evaluate subword-informed word representation configurations originating from the general framework in three different tasks using standard benchmarks and evaluation protocols:",
"1) general and rare word similarity and relatedness,",
"2) dependency parsing, and",
"3) fine-grained entity typing for 5 languages representing 3 language families (fusional, introflexive, agglutinative).",
"We show that different tasks and languages indeed require diverse subword-informed configurations to reach peak performance: this calls for a more careful language-and task-dependent tuning of configuration components.",
"We also show that more sophisticated configurations are particularly useful for representing rare words, and that unsupervised segmentation methods can be competitive to supervised segmentation in tasks such as parsing or fine-grained entity typing.",
"We hope that this paper will provide useful points of comparison and comprehensive guidance for developing next-generation subword-informed word representation models for typologically diverse languages.",
"The general framework for learning subword-informed word representations, illustrated by Figure 1, is introduced in 2.1.",
"We then describe its main components: segmentation of words into subword units ( 2.2), subword and position embeddings ( 2.3), and subword embedding composition 3 Following a similar work on subword-agnostic word embedding learning (Levy et al., 2015), our system design choices resulting in different configurations can be seen as a set of hyper-parameters that also have to be carefully tuned for each language and each application task.",
"where ( w ) is a deterministic function that segments w into an ordered sequence of its constituent subword units S w = ( s w i ) n 1 , with s w i S being a subword type from the subword vocabulary S of size |S| .",
"Optionally, some segmentation methods can also generate a sequence of the corresponding morphotactic tags T w = ( t w i ) n 1 .",
"Alone or together with T w , S w is embedded into a sequence of subword representations S w = ( s w i ) n 1 from the subword embedding matrix W s R |S| d , where d is the dimensionality of subword embeddings.",
"Another optional step is to obtain a sequence of position embeddings P w = ( p w i ) n 1 : they are taken from the position embedding matrix W p R p d , where p is the maximum number of the unique positions.",
"P w can interact with S w to compute the final representations for subwords R w = ( r w i ) n 1 (Vaswani et al., 2017).",
"f is a composition function taking R w as input and outputting a single vector w as the word embedding of w .",
"For the distributional word-level training, similar to prior work (Bojanowski et al., 2017), we adopt the standard skip-gram with negative sampling ( SGNS ) (Mikolov et al., 2013) with bag-of-words contexts.",
"However, we note that other distributional models can also be used under the same framework.",
"Again, following Bojanowski et al. (2017), we calculate the word embedding w t R d for each target word w t using the formulation from Eq.",
"(1), and parametrize context words with another word embedding matrix W c R |V| d , where |V| is the size of word vocabulary V .",
"Supervised Morphological Segmentation We use CHIPMUNK (Cotterell et al., 2015) as a representative supervised segmentation system, proven to provide a good trade-off between accuracy and speed.",
"4 It is based on semi-Markov conditional ran-4 http://cistern.cis.lmu.de/chipmunk ( dishonestly ) CHIPMUNK ( dis , honest , ly ) ( prefix , root , suffix ) Morfessor ( dishonest , ly ) BPE ( dish , on , est , ly ) Table 1: Segmentations of the word dishonestly .",
"dom fields (Sarawagi and Cohen, 2005).",
"For each word, apart from generating S w , it also outputs the corresponding morphotactic tags T w .",
"5 In 2.3 we discuss how to incorporate information from T w into subword representations.",
"Morfessor Morfessor (Smit et al., 2014) denotes a family of generative probabilistic models for unsupervised morphological segmentation used, among other applications, to learn morphologically-aware word embeddings (Luong et al., 2013).",
"BPE Byte Pair Encoding (BPE; Gage (1994)) is a simple data compression algorithm.",
"It has become a de facto standard for providing subword information in neural machine translation (Sennrich et al., 2016).",
"The input word is initially split into a sequence of characters, with each unique character denoted as a byte.",
"BPE then iteratively replaces the most common pair of consecutive bytes with a new byte that does not occur within the data, and the number of iterations can be set in advance to control the granularity of the byte combinations.",
"An example output for all three methods is shown in Table 1.",
"Note that a standard practice in subword-informed models is to also insert the entire word token into S w (Bojanowski et al., 2017).",
"6 This is, however, again an optional step and we evaluate configurations with and without the inclusion of the word token in S w .",
"The next step is to encode S w (or the tuple ( S w , T w ) for CHIPMUNK) to construct a sequence of subword representations S w .",
"Each row of the subword embedding matrix W s is simply defined as the embedding of a unique subword.",
"For CHIPMUNK, we define each row in W s as the concatenation of the subword s and its predicted tag t .",
"We also test 5 In our experiments, we use only basic information on affixes such as prefixes and suffixes, and leave the integration of fine-grained information such as inflectional and derivational affixes as future work.",
"6 We only do the insertion if | S w | > 1 .",
"For CHIPMUNK, a generic tag word is added to the sequence T w .",
"CHIPMUNK configurations without the use of T w to analyze its contribution.",
"7 After generating S w , an optional step is to have a learnable position embedding sequence P w further operate on S w to encode the order information.",
"Similar to W s , the definition of the position embedding matrix W p also varies: for Morfessor and BPE, we use the absolute positions of subwords in the sequence S w , whereas for CHIPMUNK morphotactic tags are encoded directly as positions.",
"Finally, following prior work (Gehring et al., 2017; Mikolov et al., 2018), we use addition and element-wise multiplication between each subword vector s from S w and the corresponding position vector p from P w to compute each entry r for the final sequence of subword vectors R w : r = s + p or r = s (cid:12) p .",
"A composition function f is then applied to the sequence of subword embeddings R w to compute the final word embedding w .",
"We investigate three composition functions:",
"1) addition,",
"2) single-head and",
"3) multi-head self-attention (Vaswani et al., 2017; Lin et al., 2017).",
"8 Addition is used in the original fastText model of Bojanowski et al. (2017), and remains a strong baseline for many tasks.",
"However, addition treats each subword with the same importance, ignoring semantic composition and interactions among the word's constituent subwords.",
"Therefore, we propose to use a self-attention mechanism, that is, a learnable weighted addition as the composition function on subword sequences.",
"To the best of our knowledge, we are the first to apply a self-attention mechanism to the problem of subword composition.",
"Composition Based on Self-Attention Our self-attention mechanism is inspired by Lin et al. (2017).",
"It is essentially a multilayer feed-forward neural network without bias term, which generates a weight matrix for the variable length input R w : H w = tanh ( W h 1 R Tw ) (3) A w = softmax ( W h 2 H w ) (4) 7 The extra information on tags should lead to a more expressive model resolving subword ambiguities.",
"For instance, the subword post in postwar and noun post are intrinsically different: the former is the prefix and the later is the root.",
"Each row of A w is a weight vector for rows of R w , which models different aspects of semantic compositions and interactions.",
"For the single-head self-attention, A w degenerates to a row vector as the final attention vector a w .",
"For the multihead self-attention, we average the rows of A w to generate a w .",
"9 Finally, w is computed as the weighted addition of subword embeddings from R w : w = (cid:80) w n w 1 a w i r w i .",
"We train different subword-informed model configurations on 5 languages representing 3 morphological language types: English ( EN ), German ( DE ), Finnish ( FI ), Turkish ( TR ) and Hebrew ( HE ), see Table 3.",
"We then evaluate the resulting subword-informed word embeddings in three distinct tasks:",
"1) general and rare word similarity and relatedness,",
"2) syntactic parsing, and",
"3) fine-grained entity typing.",
"The three tasks have been selected in particular as they require different degrees of syntactic and semantic information to be stored in the input word embeddings, ranging from a purely semantic task (word similarity) over a hybrid syntactic-semantic task of entity typing to syntactic parsing.",
"Subword-Informed Configurations We train a large number of subword-informed configurations by varying the segmentation method ( 2.2), subword embeddings W s , the inclusion of position embeddings W p and the operations on W s ( 2.3), and the composition functions f ( 2.4).",
"The configurations are based on the following variations of 9 We have also experimented with adding an extra transformation layer over attention matrix to generate the attention vector, but without any performance gains.",
"constituent components: (1) For the segmentation , we test a supervised morphological system CHIPMUNK ( sms ), Morfessor ( morf ) and BPE ( bpe ).",
"A word token can be optionally inserted into the subword sequence S w for all three segmentation methods ( ww ) or left out ( w); (2) We can only embed the subword s for morf and bpe , while with sms we can optionally embed the concatenation of the subword and its morphotactic tag s : t ( st ); 10 (3) We test subword embedding learning without position embeddings ( p), or we integrate them using addition ( pp ) or element-wise multiplication ( mp ); (4) For the composition function function f , we experiment with addition ( add ), single head self-attention ( att ), and multi-head self-attention ( mtx ).",
"Table 2 provides an overview of all components used to construct a variety of subword-informed configurations used in our evaluation.",
"The variations of components from Table 2 yield 24 different configurations in total for sms , and 18 for morf and bpe .",
"We use pretrained CHIPMUNK models for all test languages except for Hebrew, as Hebrew lacks gold segmentation data.",
"Following Vania and Lopez (2017), we use the default parameters for Morfessor, and 10k merge operations for BPE across languages.",
"We use available BPE models pre-trained on Wikipedia by Heinzerling and Strube (2018).",
"11 Two well-known word representation models, which can also be described by the general framework from Figure 1, are used as insightful baselines: the subword-agnostic SGNS model (Mikolov et al., 2013) and fastText ( FT ) 12 (Bojanowski et al., 2017).",
"FT computes the target word embedding using addition as the composition function, while the segmentation is straightforward: the model simply generates all character n-grams of length 3 to 6 and adds them to S w along with the full word.",
"Training Setup Our training data for all languages is Wikipedia.",
"We lowercase all text and replace all digits with a generic tag # .",
"The statistics of the training corpora are provided in Table 3.",
"All subword-informed variants are trained on the same data and share the same parameters for the SGNS model.",
"13 Further, we use ADAGRAD (Duchi 10 Once st is applied, we do not use position embeddings anymore, because the morphotactic tags are already encoded in subword embeddings, i.e., st and pp are mutually exclusive.",
"11 https://github.com/bheinzerling/bpemb 12 https://github.com/facebookresearch/ fastText 13 We rely on the standard choices: 300 -dimensional sub-Typology Language #tokens #types Fusional English ( EN ) 600M 900K German ( DE ) 200M 940K Agglutinative Finnish ( FI ) 66M 600K Turkish ( TR ) 52M 300K Introflexive Hebrew ( HE ) 90M 410K Table 3: Statistics of our Wikipedia training corpora.",
"et al., 2011) with a linearly decaying learning rate, and do a grid search of learning rate and batch size for each on the German 14 WordSim-353 data set ( WS ; Leviant and Reichart (2015)).",
"The hyperparameters are then fixed for all other languages and evaluation runs.",
"Finally, we set the learning rate to 0.05 for sms and bpe , and 0.075 for morf , and the batch size to 1024 for all the settings.",
"Word Similarity and Relatedness These standard intrinsic evaluation tasks test the semantics of word representations (Pennington et al., 2014; Bojanowski et al., 2017).",
"The evaluations are performed using the Spearman's rank correlation score between the average of human judgement similarity scores for word pairs and the cosine similarity between two word embeddings constituting each word pair.",
"We use Multilingual SimLex-999 ( SIMLEX ; Hill et al. (2015); Leviant and Reichart (2015); Mrksic et al. (2017)) for English, German and Hebrew, each containing 999 word pairs annotated for true semantic similarity.",
"We further evaluate embeddings on FinnSim-300 ( FS 300) produced by Venekoski and Vankka (2017) for Finnish and AnlamVer ( AN ; Ercan and Yldz (2018)) for Turkish.",
"We also run experiments on the WordSim-353 test set ( WS ; Finkelstein et al. (2002)), and its portions oriented towards true similarity ( WS-SIM ) and broader relatedness ( WS-REL ) portion for English and German.",
"Finally, to analyze the importance of subword information for learning embeddings of rare words, we evaluate on the recently released CARD-660 dataset ( CARD ; Pilehvar et al. (2018)) for English, annotated for true semantic similarity.",
"word and word embeddings, 5 training epochs, the context window size is 5, 5 negative samples, the subsampling rate of 10 5 , and the minimum word frequency is 5.",
"14 German has moderate morphological complexity among the five languages, so we think the hyperparameters tuned on it could be applicable to other languages.",
"Dependency Parsing Next, we use the syntactic dependency parsing task to analyze the importance of subword information for syntactically-driven downstream applications.",
"For all test languages, we rely on the standard Universal Dependencies treebanks (UD v2.2; Nivre et al. (2016)).",
"We use subword-informed word embeddings from different configurations to initialize the deep biaffine parser of Dozat and Manning (2017) which has shown competitive performance in shared tasks (Dozat et al., 2017) and among other parsing models (Ma and Hovy, 2017; Shi et al., 2017; Ma et al., 2018).",
"15 We use default settings for the biaffine parser for all experimental runs Fine-Grained Entity Typing The task is to map entities, which could comprise more than one entity token, to predefined entity types (Yaghoobzadeh and Schutze, 2015).",
"It is a suitable semi-semantic task to test our subword models, as the subwords of entities usually carry some semantic information from which the entity types can be inferred.",
"For example, Lincolnshire will belong to /location/county as -shire is a suffix that strongly indicates a location.",
"We rely on an entity typing dataset of Heinzerling and Strube (2018) built for over 250 languages by obtaining entity mentions from Wikidata (Vrandecic and Krotzsch, 2014) and their associated FIGER-based entity types (Ling and Weld, 2012): there only exists a one-to-one mapping between the entity and one of the 112 FIGER types.",
"We randomly sample the data to obtain a train/dev/test split with the size of 60k/20k/20k for all languages.",
"For evaluation we extend the RNN-based model of Heinzerling and Strube (2018), where they stacked all the subwords of entity tokens into a flattened sequence: we use the hierarchical embedding composition instead.",
"For each entity token, we first compute its word embeddings with our subword configurations, 16 then feed the word embeddings of entity tokens to a bidirectional LSTM with 2 hidden layers of size 512, followed by a projection layer which predicts the entity type.",
"15 https://github.com/tdozat/Parser-v2 16 Although it is true that case information can be very important to the task, we conform to Heinzerling and Strube (2018) lowercasing all letters.",
"To get a better grasp of the overall performance without overloading the tables, we focus on reporting two best configurations and the worst configuration for each task and language from the total of 60 configurations, except for Hebrew with 36 configurations, where there is no gold segmentation data for training sms model.",
"We also analyze the effects of different configurations on different tasks based on language typology.",
"The entire analysis revolves around the key questions Q1-Q3 posed in the introduction.",
"The reader is encouraged to refer to the supplementary material for the complete results.",
"Tables 4, 5, and 6 summarize the main results on word similarity and relatedness, dependency parsing and entity typing, respectively.",
"In addition, the comparisons of different configurations across tasks and language types are shown in Figure 2 (as well as Figure 3 to 7 in the supplementary ma-terial).",
"There, we center the comparison around two crucial components: segmentation and composition.",
"The value in each pixel block is the percentage rank of the row configuration minus that of column configuration.",
"We compute such percentage ranks by performing three levels of averaging over:",
"1) all related datasets for the same task;",
"2) all sub-configurations that entail the configuration in question;",
"3) all languages from the same language types.",
"Q1.",
"Tasks and Languages Regarding the absolute performance of our subword-informed configurations, we notice that they outperform SGNS and FT in 3/5 languages on average, and for 8/13 datasets on word similarity and relatedness.",
"The gains are more prominent over the subword-agnostic SGNS model and for morphologically richer languages such as Finnish and Turkish.",
"The results on the two other tasks are also very competitive, with strong performance reported especially on the entity typing task.",
"This clearly indicates the importance of integrating subword-level information into word vectors.",
"A finer-grained comparative analysis shows that best-performing configurations vary greatly across different languages.",
"The comparative analysis across tasks also suggests that there is no single configuration that outperforms the others in all three tasks, although certain patterns in the results emerge.",
"For instance, the supervised segmentation ( sms ) is very useful for word similarity and relatedness (seen in Figure 4).",
"This result is quite intuitive: sms is trained according to the readily available gold standard morphological segmentations.",
"However, sms is less useful for entity typing, where almost all best-performing configurations are based on morf (see also Figure 4).",
"This result is also interpretable: morf is a conservative segmenter that captures longer subwords, and is not distracted by short and nonsensical subwords (like bpe ) that have no contribution to the prediction.",
"17 The worst configurations on word similarity are based on bpe : its aggressive segmentation often results in non-interpretable or nonsensical subwords which are unable to recover useful semantic information.",
"Due to the same reason, the results in all three tasks indicate that the best configurations with bpe are always coupled with ww , and the worst are obtained with w(i.e., without the inclusion of the full word).",
"The results on parsing reveal similar performances for a spectrum of heterogeneous configurations.",
"In other words, while the chosen configuration is still important, its impact on performance of state-of-the-art dependency parsers (Kiperwasser and Goldberg, 2016; Dozat and Manning, 2017) is decreased, as such parsers are heavily parametrized multi-component methods (e.g., besides word embeddings they rely on biLSTMs, intra-sentence attention, character representations).",
"18 Therefore, a larger space of subword-informed configurations for word representations leads to optimal or near-optimal results.",
"However, sms seems to yield highest scores on average in agglutinative languages (see also Figure 2).",
"Figure 2 clearly demonstrates that, apart from entity typing heatmaps (the third row) which show very similar trends over different language types, the patterns for different tasks and language types tend to vary in general.",
"Similarly, other figures in the supplementary material also show diverging trends across languages representing different language types.",
"17 For example, sms and bpe both split Val-berg ( /location/city ) and Robert Valberg ( /person/actor ) with a suffix berg.",
"Since berg could represent both a place or a person, it is not useful alone as a suffix to predict the correct entity type, whereas morf does not split the word and makes the prediction solely based on surrounding entity tokens.",
"18 We also experimented with removing token features such as POS tags and character embeddings in some settings, but we observed similar trends in the final results.",
"segmentation and composition components.",
"The crucial components for word similarity are sms and ww , and sms is generally better than morf and bpe in fusional and agglutinative languages (see Figure 4 and 7).",
"The presence of ww is desired in this task as also found by Bojanowski et al. (2017): it enhances the information provided by the segmentation.",
"As discussed before, the best configurations with bpe are always coupled with ww , and the worst with w.",
"ww is less important for the more conservative morf where the information stored in ww can be fully recovered from the generated subwords.",
"Interestingly, pp and mp do not have positive effects on this task for fusional and introflexive languages, but they seem to resonate well with agglutinative languages, and they are useful for the two other tasks (seen in Figure 6).",
"In general, position embeddings have shown potential benefits in all tasks, where they selectively emphasize or filter subwords according to their positions.",
"pp is extremely useful in entity typing for all languages, because it indicates the root position.",
"Concerning composition functions, add still remains an extremely robust choice across tasks and languages.",
"Surprisingly, the more sophisticated self-attention composition prevails only on a handful of datasets: compare the results with add vs. att and mtx .",
"In fact, the worst configurations mostly use att and mtx (see also Figure 5).",
"In sum, our results suggest that, when unsure, add is by far the most robust choice for the subword composition function.",
"Further, morphotactic tags encoded in subword embeddings ( st ) seem to be only effective combined with self-attention in word similarity and relatedness.",
"These findings call for further investigation in future work, along with the inclusion of finer-grained morphotactic tags into the proposed modeling framework.",
"Further Discussion A recurring theme of this study is that subword-informed configurations are largely taskand language-dependent.",
"We can extract multiple examples from the reported results affirming this conjecture.",
"For instance, in fusional and agglutinative languages mp is critical to the model on dependency parsing, while for Hebrew, an introflexive language, mp is among the most detrimental components on the same task.",
"Further, for Turkish word similarity bpe.ww outperforms sms.ww : due to affix concatenation in Turkish, sms produces many affixes with only syntactic functions that bring noise to the task.",
"Interestingly, SGNS performs well in Hebrew on parsing and word similarity: it shows that it is still difficult for linear segmentation methods to capture non-concatenative morphology.",
"Finally, fine-tuning subword-informed representations seems especially beneficial for rare word semantics: our best configuration outperforms FT by 0.111 on CARD , and even surpasses all the state-of-the-art models on the rare word similarity task, as reported by Pilehvar et al. (2018).",
"We hope that our findings on the CARD dataset will motivate further work on building more accurate representations for rare and unseen words (Bhatia et al., 2016; Herbelot and Baroni, 2017; Schick and Schutze, 2018) by learning more effective and more informed components of subword-informed configurations.",
"We have presented a general framework for learning subword-informed word representations which has been used to perform a systematic analysis of 60 different subword-aware configurations for 5 typologically diverse languages across 3 diverse tasks.",
"The large space of presented results has allowed us to analyze the main properties of subword-informed representation learning: we have demonstrated that different components of the framework such as segmentation and composition methods, or the use of position embeddings, have to be carefully tuned to yield improved performance across different tasks and languages.",
"We hope that this study will guide the development of new subword-informed word representation architectures.",
"Code is available at: https://github.",
"com/cambridgeltl/sw_study .",
"This work is supported by the ERC Consolidator Grant LEXICAL (no 648909).",
"We thank the reviewers for their insightful comments, and Roi Reichart for many fruitful discussions."
] | [
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"result",
"result",
"objective",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"other",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"objective",
"other",
"abstain",
"other",
"other"
] |
[
"Recently Graph Neural Network (GNN) has been used as a promising tool in multi-hop question answering task.",
"However, the unnecessary updations and simple edge constructions prevent an accurate answer span extraction in a more direct and interpretable way.",
"In this paper, we propose a novel model of Breadth First Reasoning Graph (BFR-Graph), which presents a new message passing way that better conforms to the reasoning process.",
"In BFR-Graph, the reasoning message is required to start from the question node and pass to the next sentences node hop by hop until all the edges have been passed, which can effectively prevent each node from over-smoothing or being updated multiple times unnecessarily.",
"To introduce more semantics, we also define the reasoning graph as a weighted graph with considering the number of co-occurrence entities and the distance between sentences.",
"Then we present a more direct and interpretable way to aggregate scores from different levels of granularity based on the GNN.",
"On HotpotQA leaderboard, the proposed BFR-Graph achieves state-of-the-art on answer span prediction.",
"Typical Question Answering (QA) or Reading Comprehension (RC) task aims at exploring a desired answer through a single evidence document or paragraph.",
"Recently, a more challenging multi-hop QA task, where we need to reason over multiple paragraphs to find the answer, is gradually catching attention.",
"One example from HotpotQA dataset (Yang et al., 2018) is shown in Fig.",
"1. One method for achieving multi-hop QA is to concatenate all the paragraphs together and treat it as a typical single-hop QA task (Yang et al., 2018), then existing QA techniques can be applied.",
"Although multi-hop QA can be solved to some extent, * Corresponding author.",
"Graph Neural Networks (GNN) is a natural way to represent the solving procedure of multi-hop QA.",
"For instance, nodes in GNN represent sen-tences/entities in the paragraphs, and from the up-dation through edges we can get interactive message between them, which is similar to the process of reasoning.",
"Thus, a more reasonable method is to construct GNN to simulate the reasoning process among multiple paragraphs (Ding et al., 2019; Qiu et al., 2019; Tu et al., 2020).",
"Promising performance has been reported in methods that designed different type of nodes or edges for GNN(De Cao et al., 2019; Tu et al., 2019, 2020; Fang et al., 2020) and the features generated from GNN has also been combined with those from the context encoder in a latent way (Qiu et al., 2019; Fang et al., 2020).",
"Despite of the success that GNN achieves in multi-hop QA, new problems associated to GNN arise.",
"Firstly, current approaches update all the nodes, including some unnecessary ones, together within each layer, which may lead the nodes to converge to similar values and lose the discriminating ability for GNN with more layers (Kipf and Welling, 2017).",
"Secondly, although different types of edges have been designed for GNN, there is no more fine-grained distinction between edges of the same type, without considering the other relational information between sentences.",
"Thirdly, existing methods only latently fuse the hidden representations of GNN and context encoder, without contributing to the answer span extraction in a direct and interpretable way.",
"To solve the aforementioned issues, we proposed a novel model of Breadth First Reasoning Graph (BFR-Graph) to effectively adapt GNN to multihop QA.",
"The proposed BFR-Graph is a weighted graph in which the weight of an edge is computed based on other relational information (e.g., co-occurrence entities and distance) of the connected sentences.",
"Inspired by the Human reasoning mechanism and the Breadth First Search algorithm, in BFR-Graph the reasoning message starts from the question and passes to the next sentence nodes hop by hop until all the edges have been passed, effectively preventing each node from updating multiple times or being updated unnecessarily.",
"Then the reasoning result from BFR-Graph is converted to the sentence scores and paragraph scores, contributing to the answer span extraction.",
"Specifically, the final answer span probability is the sum of the score of answer span, the sentence and the paragraph, in both of which the answer is located.",
"Experiment results shows that our methods make GNN more powerful in multi-hop QA and achieves state-of-the-art on answer span prediction of HotpotQA.",
"The contributions of this paper are summarized as follows: We propose BFR-Graph for multi-hop QA, which is more in line with reasoning process than existing GNNs.",
"The reasoning message starts at the question and then reasons to the next sentences hop by hop.",
"Our BFR-Graph is a weighted graph, considering the number of co-occurrence entities and the distance between sentences.",
"BFR-Graph, multi-score mechanism is used for answer span extraction in a more direct and interpretable way.",
"Serval multi-hop QA datasets have been proposed such as WikiHop (Welbl et al., 2018) and HotpotQA (Yang et al., 2018).",
"WikiHop provides candidate answers for selection while HotpotQA needs to find an answer span over all paragraphs.",
"Based on these datasets, several categories of multi-hop QA approaches were proposed.",
"Yang et al. (2018) proposed a baseline method based on RNNs and Min et al. (2019) decomposed the multi-hop question into simpler single-hop subquestion that can be answered by existing single-hop RC models.",
"To better utilize multiple paragraphs, Nishida et al. (2019) proposed Query Focused Extractor to sequentially summarize the context and Asai et al. (2020) used a recurrent retrieval approach that learns to sequentially retrieve evidence paragraphs.",
"Moreover, reasoning has also been conducted in multi-hop QA.",
"Jiang and Bansal (2019) designed a neural modular network to perform unique types of reasoning; Chen et al. (2020) presented extra hop attention that can naturally hops across the connected text sequences.",
"Qiu et al. (2019) regards the task as a two-stage task including paragraph selection and downstream model.",
"and Tu et al. (2020) further proposed a pairwise learning-to-rank loss for better interaction between paragraphs.",
"Although the aforementioned methods are specifically designed for multi-hop QA with different structures, they lack an explicit scheme to show the reasoning process.",
"Recently GNNs such as Graph Convolution Networks (Kipf and Welling, 2017) and Graph Attention Networks (Velickovic et al., 2018) show enhancement in multi-hop QA because the GNN-based methods are more intuitive and explicit.",
"Entity-GCN (De Cao et al., 2019) considered different type of edges and Tu et al. (2019) further built a heterogeneous graph with multiple types of nodes and edges for different granularity levels of information.",
"Besides, Ding et al. (2019) coordinated implicit extraction and explicit reasoning through a GNN inspired by the dual process theory in cognitive science, and Tu et al. (2020) built a Figure 2: Diagram of our system.",
"GNN model for reasoning over sentence, which is summarized over token representations based on a mixed attentive pooling mechanism.",
"Furthermore, more complex graphs is also designed.",
"Qiu et al. (2019) proposed a Dynamically Fused Graph Network to explore along the entity graph dynamically and finds supporting entities from the context.",
"Fang et al. (2020) created a hierarchical graph for different levels of granularity to aggregate clues from scattered texts across multiple paragraphs.",
"However, GNNs in these methods update all the nodes together, including some unnecessary ones.",
"To solve the aforementioned issues, we propose a novel model of Breadth First Reasoning Graph (BFR-Graph) for multi-hop QA.",
"Different from existing GNN-based methods, BFR-Graph introduces new restrictions on the message passing: the message only starts from the question and then passes to the latter sentence nodes hop by hop.",
"Besides, our graph is constructed as a weighted graph considering the co-occurrence entities and distance between sentences.",
"Moreover, multi-score answer prediction is designed to take advantage of the reasoning result from BFR-Graph.",
"In short, we propose breadth first reasoning on the weighted graph and then combine multi-level scores for answer prediction in the framework of multi-task joint training.",
"The diagram of our system is shown in Fig.",
"2. Given multiple paragraphs, we first filter out irrelevant paragraph with paragraph selection (Sec. 3.1) and then use a BERT for context encoding (Sec. 3.2).",
"A weighted graph is constructed (Sec. 3.3) to reason over sentences (Sec. 3.4) and calculate the sentence score and paragraph score.",
"Finally, we use multi-score mechanism to predict the answer span (Sec. 3.5).",
"Although multiple candidate paragraphs are given for answering the question, not all of them are useful (i.e., relevant to the question).",
"Following Qiu et al. (2019), we retrieve N useful paragraphs for each question through a straightforward way.",
"Each candidate paragraph is concatenated with the question ([CLS] + question + [SEP] + paragraph + [SEP]) and fed into a BERT (Devlin et al., 2019) for binary classification.",
"After training procedure, we select paragraphs with topN score as the useful paragraphs, which are then concatenated together as context C .",
"Following Qiu et al. (2019), we concatenate each question Q and its corresponding context C , and feed them into a BERT followed by a bi-attention layer (Seo et al., 2017) to obtain the encoded representations of question and context.",
"The output is denoted as: H = { h 0 , , h L 1 } RL d , (1) where L is the length of the input sequence (con-catenating question and context), and d is the output dimension of bi-attention layer (also the dimension of BERT).",
"sen-Figure 3: Message passing procedure of BFR-Graph and typical GNN.",
"Active node is the node that is reachable for its neighbors while the quiet one is on the contrary.",
"Active edge is the passable edge while the quiet one is on the contrary.",
"where s starti , s endi are the start and end position of the sentence i respectively, L s i is the length of sentence i .",
"Note that the question is also a sentence.",
"Then using the method in Rei and Sgaard (2019), we get sentence representation: s i = L s (cid:88) k =0 ik S seqi [ k, :] R d , (3) where ik is the weight on the k -th token of sentence i , obtained from a two-layer MLP (Multi-Layer Perceptron) with output size",
"= 1. 3.3 Weighted Graph Construction The nodes in our weighted graph represent question Q and sentences in context C .",
"To better exploit complex relational information between sentences, two types of correlation are defined: positive correlation and negative correlation.",
"Although they can be designed in many ways, now we illustrate our design: (1) Positive correlation: an edge is added if the nodes representing the sentences i and j have n ( n 1) of the same named entities, and the weight of the edge is: w ij = 1 1 + e n + K 1 .",
"(2) Negative correlation: otherwise, an edge is added if the two nodes are originally from the same paragraph, and the weight of the edge is: w ij = 1 1 + e d + K 2 , (5) where d is the distance of the two sentences (e.g., d = 1 if the sentence is immediately followed by the other sentence in a paragraph, d = 2 if there is a sentence between them, etc.).",
"K 1 and K 2 are hyperparameters.",
"To simplify our design, we treat our graph as a homogeneous graph, which contains single type of nodes and edges.",
"When we reason over paragraphs to answer a question, we start from the question and find the next sentence hop by hop.",
"For a GNN where nodes represent sentences, the following message passing is unnecessary and may suppress the disturbance from useless nodes: (1) from the latter node to the former node, (2) a node haven't received the message from question but it updates other nodes.",
"To prevent each node from being updated multiple times unnecessarily, the reasoning message in our BFR-Graph starts from the question node and passes to the next nodes hop by hop until all the edges have been passed.",
"Note that a node is allowed to update multiple times, depending on whether the connected edges have all been passed.",
"Fig. 3 visually shows the difference between BFR-Graph and typical GNN.",
"Specifically, a node i is updated by node j when the following conditions are met simultaneously: (1) node i and node j are neighbors, (2) node j is active, i.e., it is updated last layer, (3) the edge between node i and node j haven't been passed previously.",
"The overall message passing procedure of BFR-Graph is illustrated in Algorithm",
"1. Inspired by Graph Attention Networks (Velickovic et al., 2018), the updating function (or message passing function) is defined as: s (cid:48) i = LeakyRelu( (cid:88) j N (cid:48) i ij s j W ) , (6) ij = exp( f ( s i , s j )) w ij (cid:80) k N (cid:48) i exp( f ( s i , s k )) w ik , (7) Figure 4: Multi-score answer prediction.",
"where N (cid:48) i is the set of reachable neighbors for node i , calculated with Algorithm",
"1. f ( s i , s j ) = s i W 1 W 2 s j is for calculating the attention score between node i and j .",
"W , W 1 and W 2 are learnable parametres.",
"w ij is the weight of the edge ( i, j ) , described in the Sec. 3.3.",
"For clarity, s (cid:48) is written as s in following contents.",
"The answer in HotpotQA dataset is a span from the context.",
"Existing works only calculate the span probability on the output of encoder (e.g., BERT) or additionally concatenate the GNN's hidden output.",
"Differently, we use a more interpretable method by calculating the sentence score and paragraph score obtained from the GNN.",
"An example is shown in Fig. 4.",
"1 to obtain the score value.",
"Then, we calculate the sentence score corresponding to each node in GNN: sent ( s i ) = MLP 3 ( s i ) .",
"where s p j i is the representation of the i -th sentence in paragraph p j , L p j is the number of sentences in paragraph p j .",
"Max( ) is a max-pooling layer with pooling size = L p j 1 , which can also be done by taking the maximum hidden value on each dimension over all the sentence nodes.",
"Finally, the probability of y -th word in context being the start of the answer span is determined by: p start ( y ) = softmax( (cid:48) start ( y )) , (12) (cid:48) start ( y ) = start ( y ) + sent ( s i ) + para ( p j ) , (13) where the y -th word is located in sentence s i and paragraph p j .",
"And the probability of y -th word in context being the end of the answer span can be calculated similarly.",
"In addition to the answer span prediction, there are other two training tasks in HotpotQA.",
"One is the answer type prediction task: some answers cannot be retrieved from the context, but are Yes or No, so finally there are three type of answers (e.g., span, Yes and No).",
"We use a global-max-pooling similar with",
"Eq.(11) to compress all the nodes in the GNN and predict the answer type through a two-layer MLP.",
"The other task is to predict whether a sentence in the context is a support sentence (or called supporting fact in some papers) that is an evidence to the answer.",
"Following previous works (Tu et al., 2020), we use the output of the GNN to predict the supporting sentences with a two-layer MLP.",
"The tasks in HotpotQA are jointly performed through multi-task learning, and the loss function is: L = LCE ( y start , y start ) + LCE ( y end , y end )+ 1 LCE ( y type , y type ) + 2 LBCE ( y sp , y sp ) , (14) where LCE and LBCE denote the cross entropy and binary cross entropy loss respectively.",
"y start denotes the logits of start position from",
"Eq.(12) and y start is the label.",
"Similarly, y type and y sp are the logits of answer type prediction and supporting sentence prediction respectively.",
"The HotpotQA dataset (Yang et al., 2018) is the first explainable multi-hop QA dataset with sentence-level evidence supervision.",
"Each sample in the dataset contains 2 gold paragraphs and 8 distracting paragraphs.",
"Three tasks are included for evaluation: (1) answer span prediction (denoted as Ans) that extracts a span in the paragraphs or generate Yes/No; (2) supporting sentences prediction (denoted as Sup) that determines which sentences are evidences to the answer; (3) joint prediction (denoted as Joint).",
"We submit our model to HotpotQA official leaderboard 1 and carry out ablation studies on the dev-set.",
"We also apply the main idea of BFR-Graph to the WikiHop dataset (Welbl et al., 2018), which provides candidate answers for selection while HotpotQA dataset needs to find an answer span over all paragraphs.",
"Implementation details can be found in Appendix A. 4.2 Results The experimental result on HotpotQA dataset is shown in Table",
"1. As a reading comprehension task, the performance of answer prediction should be emphasized.",
"Our model improves 0.84% Ans-EM (Exact Match) than HGN-large, becoming the first model to break through 70% and achieving state-of-the-art on answer span prediction.",
"On supporting sentence prediction and joint prediction, our model shows a close performances to HGN-large, possibly because this paper is based on the standard GNN (homogeneous graph) for simple clarifica-tion, and we just plan to prove that our algorithm can improve the performance of GNN.",
"Existing GNN methods mostly constructed elaborate graphs for more granular expression of nodes, while our BFR-Graph solve the problem from another novel perspective.",
"Thus, BFR-Graph is universal and can be easily applied to existing promising models (e.g., HGN) to get better results, which provides a promising direction for future research.",
"We also compare our model with two state-of-the-art GNN models (i.e., SAE and HGN), shown in Table",
"2. Both of them need to set the number of GNN layers manually while BFR-Graph can pass through all the connected nodes automatically with 1 https://hotpotqa.github.io/ Model Ans Sup Joint EM F1 EM F1 EM F1 Official baseline (Yang et al., 2018) 45.60 59.02 20.32 64.49 10.83 40.16 QFE (Nishida et al., 2019) 53.86 68.06 57.75 84.49 34.63 59.61 DFGN (Qiu et al., 2019) 56.31 69.69 51.50 81.62 33.62 59.82 LQR-Net (Grail et al., 2020) 60.20 73.78 56.21 84.09 36.56 63.68 SAE-large (Tu et al., 2020) 66.92 79.62 61.53 86.86 45.36 71.45 C2F-reader (Shao et al., 2020) 67.98 81.24 60.81 87.63 44.67 72.73 HGN-large (Fang et al., 2020) 69.22 82.19 62.76 88.47 47.11 74.21 BFR-Graph 70.06 82.20 61.33 88.41 45.92 74.13 Table 1: Results on HotpotQA leaderboard.",
"an extremely low risk of over-smoothing (Kipf and Welling, 2017).",
"SAE and HGN set a fixed types of edges, which is still not fine-grained enough, while BFR-Graph define different weights (can up to different weights depends on the dataset) to distinguish nodes in a finer granularity.",
"Furthermore, we can easily observe scores from GNN in an intuitive way in BFR-Graph.",
"Besides, Table 3 shows the results on WikiHop dev-set.",
"When we add the breadth first reasoning graph and weights to Longformer (Beltagy et al., 2020), the performance is slightly improved, showing that our method have the ability for better reasoning.",
"In this section, we carry out ablation studies on HotpotQA dev-set.",
"Table 4 shows the results of our full model and that without breadth first reasoning, weights, and multi-score.",
"It indicates that our methods obviously improve the performance of GNN.",
"Table 5 shows the result by gradually replace the BFR-Graph layers with standard GNN layers.",
"In detail, r/p 1 layer denotes replacing the first layer with a standard GNN layer, r/p 2 layers denotes the same operation for the first and second layers, etc..",
"We observe that the more layers to be replaced, the more severely the result drops.",
"And when we replace 4 layers, the joint F1 drops at about 6%, meaning that it causes over-smoothing.",
"It also re-flects the severe problem of typical GNN: if it have more layers, over-smoothing is caused; if it have less layers, it cannot achieve long-path reasoning.",
"To further analyze why this particular approach of message passing in a breadth first reasoning fashion should result in better reasoning, we propose to calculate how many useful messages the answer sentence node received from supporting senteences: precision = N sp & rcv N rcv , recall = N sp & rcv N sp , where N rcv denotes how many nodes' massages the answer sentence node received, N sp denotes the number of supporting sentence (containing the question sentence here), and N sp & rcv denotes how many supporting nodes' massages the answer sentence node received.",
"The above-mentioned precision, recall and corresponding F1 on dev-set is shown in Table 6, where the typical GNN is a 2-layer GNN following previous works.",
"With breadth first reasoning, the answer Ans F1 Sup F1 Joint F1 full model 81.82 88.80 73.98 bfr&ws&ms 80.72 87.77 72.20 Table 4: General ablation study for our full model.",
"sentence could receive messages from supporting sentences with a higher precision, meaning that it can focus on useful sentences and eliminate invalid distractions.",
"Since the restrictions on message passing in breadth first reasoning, it leads to a decrease in recall.",
"However, it is hard to draw a PR curve or get different precision-recall results because this is not a binary classification task as we generally understand.",
"But fortunately, BFR-Graph shows a higher F1 than the typical GNN.",
"Table 7 (top) presents the results with and without the weights in the GNN.",
"-ent denotes removing the weights (we set the weights = 0.5 rather than simply remove them) and dist denotes removing the distance weights.",
"When we remove the weights, although the answer F1 rises slightly, the supporting F1 falls to a greater extent.",
"This shows that the proposed weights is beneficial to the supporting sentences prediction, which is directly predicted from the GNN nodes.",
"To our understanding, our model enhances the discrimination of edges by setting weights for them, and inevitably reduces the robustness of model.",
"Fortunately, by designing",
"Eqs.(4) and (5), the quantitative error will not cause the weight to increase or decrease sharply, and is still able to distinguish Precision Recall F1 typical GNN 37.62 95.61 52.89 BFR-Graph 59.44 83.49 63.08 Table 6: Message passing in different style.",
"For multi-score, we evaluate how the result changes if this particular way of exploiting GNN's output is replaced by traditional way.",
"In Table 7 (bottom), -sent and -para denote removing multi-score for sentence and paragraph respectively.",
"It indicates that both the addition of sentence scores and paragraph scores are beneficial to the performance.",
"We also analyze the complexities of BFR-Graph and typical GNN, which is simply shown in Table 8.",
"Firstly, in each layer of our BFR-Graph, only several nodes are updated by active nodes, so the number of nodes to be updated in a BFR-Graph layer is less than or equal to that in a typical GNN ( N update N ).",
"Secondly, for a node in a layer of BFR-Graph, it is only updated by its reachable nodes (i.e., active neighbors), so the number of reachable nodes for a node in a BFR-Graph layer is also less than or equal to that in typical GNN ( M reach M ).",
"Therefore, breadth first reasoning leads to lower complexity.",
"For GPU parallel training, we also show the actual cost of time per epoch.",
"BFR-Graph cost 158.6 minutes per epoch, while a 2-layer and 3-layer typical GNN costs 157.5 and 165.6 minutes respectively.",
"We find that BFR-Graph is always 4 layers in HotpotQA dataset, and it can even cost less time than a 3-layer typical GNN and is close to a 2-layer typical GNN.",
"In this paper, we proposed a novel GNN model of BFR-Graph.",
"Specifically, the reasoning message starts from the question node and passes to the next sentences node hop by hop until all the edges have been passed.",
"We also construct the reasoning graph as a weighted graph and present a more interpretable way to aggregate scores of different levels from GNN.",
"On HotpotQA leaderboard, BFR-Graph achieved state-of-the-art on answer span prediction.",
"This work is partially supported by National Natural Science Foundation of China (Grants no. 61772568), Guangdong Basic and Applied Basic Research Foundation (Grant no. 2019A1515012029), and Youth science and technology innovation talent of Guangdong Special Support Program."
] | [
"abstain",
"abstain",
"objective",
"abstain",
"method",
"method",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"other",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"abstain",
"other"
] |
[
"Modeling what makes a request persuasive eliciting the desired response from a reader is critical to the study of propaganda, behavioral economics, and advertising.",
"Yet current models can't quantify the persuasiveness of requests or extract successful persuasive strategies.",
"Building on theories of persuasion, we propose a neural network to quantify persuasiveness and identify the persuasive strategies in advocacy requests.",
"Our semi-supervised hierarchical neural network model is supervised by the number of people persuaded to take actions and partially supervised at the sentence level with human-labeled rhetorical strategies.",
"Our method outperforms several baselines, uncovers persuasive strategiesoffering increased interpretability of persuasive speech and has applications for other situations with document-level supervision but only partial sentence supervision.",
"Crowdfunding platforms are a popular way to raise funds for projects.",
"For example, Kiva, a peer-to-peer lending platform, has crowd-funded more than a million loans, totaling over $1 billion since 2005.",
"Kickstarter, another online crowdfunding platform, successfully funded 110,270 projects with a total of over 2 billion dollars.",
"Yet most projects still suffer from low success rates.",
"How can we help requesters craft persuasive and successful pitches to convince others to take actions?",
"Persuasive communication has the potential to shape and change people's attitudes and behaviors (Hovland et al., 1953), and has been widely researched in various fields such as social psychology, marketing, behavioral economics, and political campaigning (Shrum et al., 2012).",
"One of the Equal contribution.",
"most influential theories in the advertising literature is Chaiken's systematic-heuristic dual processing theory, which suggests that people process persuasive communication by evaluating the quality of arguments or by relying on inferential rules.",
"Some such heuristic rules are commonly used in consumer behaviors; commercial websites may highlight the limited availability of their items In high demand only 2 left on our site! or emphasize the person in authority Speak to our head of saleshe has over 15 years' experience selling properties to attract potential consumers.",
"Although numerous studies on persuasion have been conducted (Chaiken, 1980), we still know little about the way how persuasion functions in the wild and how it can be modeled computationally.",
"In this work, we utilize neural-network based methods to computationally model persuasion in requests from crowdfunding websites.",
"We build on theoretical models of persuasion to operational-ize persuasive strategies and ensure generalizability.",
"We propose to identify the persuasive strategy employed in each sentence in each request.",
"However, constructing a large dataset with persuasion strategies labeled at the sentence level is time-consuming and expensive.",
"Instead, we propose to use a small amount of hand-labeled sentences together with a large number of requests automatically labeled at the document level by the number of persuaded support actions.",
"Our model is a semi-supervised hierarchical neural network that iden-tifies the persuasive strategies employed in each sentence, where the supervision comes from the overall persuasiveness of the request.",
"We propose that the success of requests could have substantive explanatory power to uncover their persuasive strategies.",
"We also introduce an annotated corpus with sentence-level persuasion strategy labels and document-level persuasiveness labels, to facilitate future work on persuasion.",
"Experiments show that our semi-supervised model outperforms several baselines.",
"We then apply this automated model to unseen requests from different domains and obtain nuanced findings of the importance of different strategies on persuasion success.",
"Our model can be useful in any situation in which we have exogenous document-level supervision, but only small amounts of expensive human-annotated sentence labels.",
"Computational argumentation has received much recent attention (Ghosh et al., 2016; Stab and Gurevych, 2017; Peldszus and Stede, 2013; Stab et al., 2018; Ghosh et al., 2014).",
"Most work has either identified the arguments in news articles (Sar-dianos et al., 2015) or user-generated web content (Habernal and Gurevych, 2017; Musi et al., 2018), or classified argument components (Zhang and Litman, 2015) into claims and premises, supporting and opposing claims, or backings, rebuttals and refutations .",
"For example, Stab and Gurevych (2014) proposed structural, lexical, syntactic and contextual features to identify convincing components of Web arguments including claim, major claim, and premise.",
"Similarly, Zhang and Litman (2015) studied student essay revisions and classified a set of argumentative actions associated with successful writing such as warrant/reasoning/backing, rebut-tal/reservation, and claims/ideas.",
"Habernal and Gurevych (2016) investigated the persuasiveness of arguments in any given argument pair using bidirectional LSTM.",
"Hidey et al., (2017) utilized the persuasive modesethos, logos, pathosto model premises and the semantic types of argument components in an online persuasive forum.",
"While most computational argumentation focuses on the relational support structures and factual evidence to make claims, persuasion focuses more on language cues aimed at shaping, reinforcing and changing people's opinions and beliefs.",
"How language changes people's attitudes and behaviors have received less attention from the computational community than argumentation, although there have been important preliminary work (Persing and Ng, 2017; Carlile et al., 2018).",
"Farra et al., (2015) built regression models to predict essay scores based on features extracted from opinion expressions and topical elements.",
"Chatterjee et al., (2014) used verbal descriptors and para-verbal markers of hesitation to predict speak-ers' persuasiveness on website housing videos of product reviews.",
"When looking at persuasion in the context of online forum discussions (Wei et al., 2016), Tan et al., (2016) found that on the Change My View subreddit, interaction dynamics such as the language interplay between opinion holders and other participants provides highly predictive cues for persuasiveness.",
"Using the same dataset, Wel et al., (2016) extracted a set of textual information and social interaction features to identify persuasive posts.",
"Recently, Pryzant et al., (2017) introduced a neural network with an adversarial objective to select text features that are predictive of some outcomes but decorrelated with others and further analyzed the narratives highlighted by such text features.",
"Further work extended the model to induce narrative persuasion lexicons predictive of enrollment from course descriptions and sales from product descriptions (Pryzant et al., 2018a), and the efficacy of search advertisements (Pryzant et al., 2018b).",
"Similar to their settings, we use the outcomes of a persuasive description to supervise the learning of persuasion tactics, and our model can similarly induce lexicons associated with successful narrative persuasion by examining highly attentional words associated with persuasion outcomes.",
"Our work differs both in our semi-supervised method and also because we explicitly draw on the theoretical literature to model the persuasion strategy for each sentence in requests, allowing requests to have multiple persuasion strategies; our induced lexicons can thus be very specific to different persuasion strategies.",
"Other lines of persuasion work predict the success of requests on peer-to-peer lending or crowdfunding platforms, and mainly exploit request attributes like project description (Greenberg et al., 2013), project videos (Dey et al., 2017), and social predictors such as the number of backers (Et-ter et al., 2013) or specific types of project updates (Xu et al., 2014).",
"Among them, only a few investigated the effect of language on the success of requests.",
"Althoff et al., (2014) studied donations in Random Acts of Pizza on Reddit, using the social relations between recipient and donor plus linguistic factors to predict the success of these altruistic requests.",
"Based on a corpus of 45K crowd-funded projects, Mitra and Gilbert (2014) found that 9M phrases commonly present in crowd-funding have reasonable predictive power in accounting for variance around successful funding, suggesting that language does exhibit some general principles of persuasion.",
"Although this prior work offers predictive and insightful models, most studies chose their persuasion labels or variables without reference to a taxonomy of persuasion techniques nor to a principled method of choosing them.",
"Some exceptions include Yang and Kraut (2017), Dey et al., (2017), and Rosenthal and McKeown (2017).",
"For example, Yang and Kraut (2017) looked at the effectiveness of a set of persuasive cues in Kiva requests and found that certain heuristic cues are positively correlated with lenders' contribution.",
"Inspired by these prior work, we operational-ize persuasive strategies based on theories of persuasion and aim to learn local structures/labels of sentences based on the global labels of para-graphs/requests.",
"Our task is different from most previous work on semi-supervised learning for NLP (Liang, 2005; Yang et al., 2017) that focuses on the setting with partial data labels.",
"While in computer vision, there is a lot of prior work in using image global labels to uncover local pixel level labels and bounding boxes of objects (Oquab et al., 2015; Pinheiro and Collobert, 2015), the investigation of this task in NLP, to the best of our knowledge, is novel and could potentially have much broader applications.",
"We situate this research within the team forums of Kiva 1 , the largest peer-to-peer lending website.",
"These self-organized lending teams are built around common interests, school affiliation or location.",
"In such teams, members can post messages in their team discussion board to persuade other members to lend to a particular borrower.",
"One such message is shown in Figure 1. A borrower, Sheila, posted a message on Kiva to request loans for woman-led group.",
"As highlighted in the fig-ure, she made use of several persuasion strategies such as commitment, concreteness, and impact to render her request more persuasive.",
"We define the persuasiveness score of a request message as the number of team members (in log-scale) who read the message and make loans to the mentioned borrower.",
"We then regard this overall persuasiveness of messages as high-level supervision for training our model to determine which persuasion strategy 1 https://www.kiva.org/ Figure 1: An anonymized advocating message that persuaded 5 members to lend to the mentioned borrower.",
"Numerous studies have investigated the basic principles that govern getting compliance from people (Cialdini and Garde, 1987; Petty et al., 1983).",
"In this work, we utilized Chiaken's 1980 systemic-heuristic model of social information processing, which suggests that people process persuasive requests by assessing the quality of arguments (sys-tematic processing) or by relying on heuristic rules (heuristic processing).",
"Building on that, we first borrow several commonly used heuristic principles (Cialdini and Garde, 1987) that are also suitable for our context as below.",
"Scarcity states that people tend to value an item more as soon as it becomes rare, distinct or limited.",
"For example, take the use of ex-pire'in this message: This loan is going to expire in 35 mins... . The principle of Emotion says that making messages full of emotional valence and arousal affect (e.g., describing a miserable situation or a happy moment) can make people care and act, e.g., The picture of widow Bunisia holding one of her children in front of her meager home brings tears to my eyes.. , similar to Sentiment and Politeness used by Althoff et al., (2014) and Tan et al., (2016), and Pathos used by Hidey et al., (2017). Commitment states that once we make a choice, we will encounter pressures that cause us to respond in ways that justify our earlier decision, and to convince others that we have made the correct choice. Here it could be mentioning their contribution in the message, e.g., I loaned to her already. Social Identity refers to people's self-concept of their membership in a social group, and people have an affinity for their groups over others, similar to name mentions in Rosenthal and McKeown (2017). Thus if a loan request comes from their own groups, they are more likely to contribute, such as For those of you in our team who love bread, here is a loan about bakery. Concreteness refers to providing concrete facts or evidence, such as She wishes to have a septic tank and toilet, and is 51% raised and needs $825 , similar to Claim and Evidence (Zhang et al., 2016; Stab and Gurevych, 2014)), Evidentiality (Althoff et al., 2014), and Logos (Hidey et al., 2017). We also propose a new strategy to capture importance or impact on these requests: Impact and Value emphasizes the importance or bigger impact of this loan, such as ... to grow organic rice. Then, she can provide better education for her daughter . Note that other persuasion tactics such as Reciprocity feel obligated to return something after receiving something of value from another and Authority comply with the requests of authority in an unthinking way to guide their decisions are also widely used in persuasive communication. However, in this context, we did not observe enough instances of them. 5 Semi-supervised Neural Net Given a message M = { S 0 , S 1 , ..., SL } consisting of L sentences that the author posted to advocate for a loan, our task is to predict the persuasion strategies p i employed in each sentence S i , i [0 , L ] . However, purely constructing a large-scale dataset that contains such labels of sentence-level persuasion strategy is often time-consuming and expensive. Instead, we propose to utilize a small amount of labeled and a large amount of unlabeled data. We design a semi-supervised hierarchical neural network to identify the persuasive strategies employed in each sentence, where the supervision comes from the sentence-level labels g in a small portion of data and the overall persuasiveness scores y of messages. The overall architecture of our method is shown in Figure 2. 5.1 Sentence Encoder Given a sentence S i with words w i,j , j [0 , l ] and l is the sentence length, a GRU (Bahdanau Figure 2: The overall model architecture. The blue part describes the sentence encoder. Sentences with labels of persuasion strategies are highlighted with dark blue like p 1 . The orange part shows the document encoder. et al., 2014) is used to incorporate contextual cues of words into hidden state h i,j . This GRU reads the sentence S i from w i, 1 to w i,l and encodes each word w i,j with its context into hidden state h i,j : h i,j = GRU ( W e w i,j , h i,j 1 ) , j [0 , l ] . 
(1) where W e is the word embedding matrix. To learn the characteristic words associated with the persuasive strategy in a sentence, we apply an attention mechanism (Bahdanau et al., 2014; Yang et al., 2016). The representation of those words are then aggregated to form the sentence vector s i . We formulated this word level attention as follows: u i,j = tanh( W w h i,j + b w ) (2) i,j = exp( u (cid:124) i,j u w ) (cid:80) k exp( u (cid:124) i,k u w ) (3) s i = (cid:88) j i,j h i,j (4) where u w is a context vector that queries the characteristic words associated with different persuasion strategies. It is randomly initialized and jointly learned from data. 5.2 Latent Persuasive Strategies We assume that each sentence instantiates only one type of persuasion strategy. For example, a sentence She is 51% raised and needs $825 in 3 days employs Scarcity , trying to emphasize limited time availability. We propose to use the high level representation of each sentence to predict the latent variable: p i = softmax ( W v s i + b v ) (5) 5.3 Document Encoder After obtaining the sentence vector p i , we can get a document vector in a similar way: h i = GRU ( p i , h i 1 ) , i [0 , L ] (6) where L denotes the number of sentences in a message. Similarly, we introduce an attention mechanism to measure the importance of each sentence and its persuasion strategy via a context vector u s : u i = tanh( W s h i + b s ) (7) i = exp( u (cid:124) i u s ) (cid:80) k exp( u (cid:124) k u s ) (8) v = (cid:88) i i h i (9) 5.4 Semi-Supervised Learning Objective The document vector v is a high-level representation of the document and can be used as a set of features for predicting y , the persuasiveness of a message, i.e., how many team members will make loans to the project mentioned in this message. We also include a context vector c to further assist the prediction of making loans. For instance, c could represent the number of team members in a team, the total amount of money contributed by this team in the past, etc. y = W f [ v, c ] + b f (10) We then can use the mean squared error between the predicted and ground truth persuasiveness as training loss. To take advantage of the labeled subset that has sentence level annotation of persuasive strategies, we reformulate this problem as a semi-supervised learning task: l = (cid:88) d CL ( y d y d ) 2 (cid:88) g i log p i (11) + (1 ) (cid:88) d (cid:48) CU ( y d (cid:48) y d (cid:48) ) 2 (12) Here, CL refers to the document corpus with sentence level persuasion labels. CU denotes those without any sentence labels. g i refers to the persuasion strategy in sentence S i , and p i is predicted by our model. and are used as re-weight factors to trade off the penalization and reward introduced by different components. 6 Experiment 6.1 Dataset Our collaboration with Kiva provided us access to all public data dumps of the team discussion forums on Kiva. Here we only focused on messages that have explicit links because in most cases, members need to include the loan link to better direct others to a specific loan or borrower. After removing messages that do not contain any links, we obtained 41,666 messages that contain loan advocacy. We used Amazon's Mechanical Turk (MTurk) to construct a reliable, hand-coded dataset to obtain the persuasion strategy label for each sentence. To increase annotation quality, we required Turkers to have a United States location with 98% approval rate for their previous work on MTurk. 
Since messages often contain different numbers of sentences, which might be associated with different sets of persuasion strategies, we sampled 200 messages for each fixed message length ranging from one sentence to six sentences, in order to guarantee that our hand-coded dataset reasonably represents the data. Messages with at most six sentences accounted for 89% percentages among all messages in our corpus. Each sentence in a message was labeled by two Mechanical Turk Master Workers 2 . To assess the reliability of the judges' ratings, we computed the intra-class correlation (ICC), and obtained an overall ICC score of 0.524, indicating moderate agreement among annotators (Cicchetti, 1994).",
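A condensed PyTorch sketch of the hierarchical model and semi-supervised objective described above (Eqs. 1-11); all names, dimensions, and the 1e-9 smoothing term are our illustrative choices, not the released implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HierarchicalPersuasionNet(nn.Module):
    def __init__(self, vocab_size, d=128, h=256, n_strategies=7, ctx_dim=5):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, d)          # word embeddings W_e
        self.word_gru = nn.GRU(d, h, batch_first=True)  # Eq. (1)
        self.word_att = nn.Linear(h, h)                 # W_w, b_w in Eq. (2)
        self.u_w = nn.Parameter(torch.randn(h))         # word context vector
        self.strategy = nn.Linear(h, n_strategies)      # W_v, b_v in Eq. (5)
        self.sent_gru = nn.GRU(n_strategies, h, batch_first=True)  # Eq. (6)
        self.sent_att = nn.Linear(h, h)                 # W_s, b_s in Eq. (7)
        self.u_s = nn.Parameter(torch.randn(h))         # sentence context vector
        self.out = nn.Linear(h + ctx_dim, 1)            # W_f, b_f in Eq. (10)

    def forward(self, sents, ctx):
        # sents: (n_sents, n_words) word ids of one message; ctx: (ctx_dim,)
        H, _ = self.word_gru(self.emb(sents))                           # (S, W, h)
        a = F.softmax(torch.tanh(self.word_att(H)) @ self.u_w, dim=1)  # Eqs. (2)-(3)
        s = (a.unsqueeze(-1) * H).sum(dim=1)                            # Eq. (4)
        p = F.softmax(self.strategy(s), dim=-1)                         # Eq. (5)
        Hd, _ = self.sent_gru(p.unsqueeze(0))                           # Eq. (6)
        ad = F.softmax(torch.tanh(self.sent_att(Hd)) @ self.u_s, dim=1)  # Eqs. (7)-(8)
        v = (ad.unsqueeze(-1) * Hd).sum(dim=1).squeeze(0)               # Eq. (9)
        y_hat = self.out(torch.cat([v, ctx]))                           # Eq. (10)
        return p, y_hat

def semi_supervised_loss(y_hat, y, p=None, g=None, alpha=0.5, beta=1.0):
    """Eq. (11): weighted MSE on all messages, plus cross-entropy on the
    small labeled subset (g is None for unlabeled messages)."""
    mse = (y_hat.squeeze() - y) ** 2
    if g is None:
        return (1 - alpha) * mse
    return alpha * mse + beta * F.nll_loss(torch.log(p + 1e-9), g)
```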
"The distribution for each persuasion strategy in the annotated corpus is described in the blue line in Figure 3. We assigned a persuasion label to a sentence if two annotators gave consistent labels for it, and filtered out sentences that annotators disagreed on the label.",
"In the final annotated corpus, there were 1200 messages, with 2898 sentences.",
"The average number of sentences is 2.4 and the average number of words per sentence is 17.3.",
"For predicting the persuasive strategy in each sentence, we randomly 2 https://www.mturk.com/worker/help: What-is-a-Mechanical-Turk-Master-Worker Figure 3: The distribution of each persuasion strategy in the annotation corpus and in the whole unlabeled corpus after prediction.",
"split 80% of this annotated corpus as the training set (2271 sentences in 1060 messages), 10% as the validation set (322 sentences in 70 messages), and 10% as the testing set (305 sentences in 70 messages).",
"To further utilize supervision from the persuasiveness score of each message, we merged 1060 documents with sentence labels and 40,466 unlabeled messages, using it as the final training set for training semi-supervised models.",
"We split documents into sentences and tokenize each sentence using Stanford's CoreNLP (Man-ning et al., 2014).",
"Words appearing less than 5 times were replaced with a special UNK token.",
"We trained the hyperparameters of the models on the validation set using Adam (Kingma and Ba, 2014).",
"Specifically, we set the word embedding dimension to be 128, where the word embeddings are initialized randomly, and GRU dimension to be 256.",
"The learning rate is set to be 5e-5.",
"The balancer is the ratio of labeled data in a batch of training data.",
"The balancer is selected via grid search, searching in a set of (5, 10, 20, 50, 100), resulting in =10.",
"We propose several baselines to predict the sentence level persuasion strategies for comparison with our model.",
"(1) SVM + BoW is a SVM classi-fier with RBF kernel using bag-of-words features (one-hot).",
"(2) GRU uses the hidden state at the last word as features to classify persuasive strategies, a special case of our SH Net model without the supervision from the overall persuasiveness scores.",
"(3) bi-GRU uses bi-directional GRU.",
"H Net is a hierarchical GRU for classifying strategies with the supervision from the overall persuasiveness scores as shown in Figure 2, but it only adopts all the annotated messages.",
"We denote our semi-supervised hierarchical model as SH Net (Semi Hierarchical Net), which utilizes both annotated messages and unlabeled corpus.",
"Semi-Att Net builds on SH Net by incorporating both word-level and sentence-level attention.",
"In addition to the textual cues in the advocation message, persuasive requests also depend on the context.",
"We introduced a set of contextual descriptors into our semi-supervised hierarchical network, denoted as SH-Att Plus Net .",
"Such features include the number of borrowers in this message, the number of team members in a team, the total amount of money contributed by this team, the number of messages ever posted in the discussion board of this team, and the amount of money requested in this loan.",
"6.3 Results We evaluated the baselines and our hierarchical neural network models using accuracy, macro-averaged F1 score, macro-averaged precision and macro-averaged recall, as well as RMSE for evaluating the message level persuasiveness score prediction.",
"As we can see in Table 1, when predicting the persuasive strategies (6 types of persuasive strategies plus an Other strategy), BoW + SVM gives a performance of 0.347 and a macro F1 of 0.229.",
"A direct neural network GRU boosted the accuracy to 0.518, demonstrating the effectiveness of neural networks for sentence classification.",
"When bi-directional contextual information is used, the sentence level prediction performance is 52.1%.",
"Our hierarchical neural network achieved an accuracy of 48.2% and a macro F1 of 0.432.",
"When incorporating the whole corpus of unlabeled messages, our semi-supervised neural network achieved an accuracy of 56.1% (16.4% improvement over H Net ).",
"This indicates that our semi-supervised model effectively takes advantage of the supervision from the small amount of labeled data and the overall persuasiveness scores.",
"Moreover, we noticed that this semi-supervised neural network not only helps predict the sentence level persuasion strategies, but also assist the prediction of messages' overall persuasiveness with a 9% RMSE decrease.",
"Semi-Att outperformed SH Net with an accuracy of 56.9%, and a macro F1 score of 0.518.",
"Although the improvement from attention is minor (but signif-icant), it's important for visualizing associations between words, persuasion strategies and persua-Evaluating Sentence Level Strategies Doc Level Model Accuracy Macro F1 Macro Precision Macro Recall RMSE SVM (RBF) + BoW 0.347 0.229 0.364 0.167 GRU 0.518 0.479 0.479 0.479 -bi-GRU 0.521 0.440 0.445 0.436 Hierarchical Net (H Net) 0.482 0.432 0.430 0.432 1.15 Semi Net (SH Net) * 0.561 0.513 0.504 0.522 1.05 Semi-Att Net * 0.569 0.518 0.512 0.534 1.04 Semi-Att Plus Net 0.552 0.513 0.515 0.512 0.87 Table 1: Results of different models.",
"sion outcomes.",
"Interestingly, incorporating contextual descriptors did not help the prediction of persuasion strategies.",
"However, such contextual information strongly predicted the overall persuasiveness, decreasing RMSE to 0.84 from 1.04.",
"Strategy-Level Performance: We also report the accuracy per persuasion strategy category via Semi-Att , SH Net and simple GRU in Figure 4. It seems that overall neural models are better at capturing persuasion strategies such as concreteness, identity and scarcity.",
"This might be because people are concrete by using specific terms such as numbers or entities that are easy to model.",
"Simi-Strategy Top Ranked Keywords Commitment joined, lenders, loaning, lend, loan just, join, loaned, made, lent Concreteness women, married, old, heads, year-old money, sells, years, business, number Emotion hard, thank, better, grief, great maybe, help, please, thanks, happy Identity promotion, shall, captain, form, number spirits, lenders, member, team Impact improve, new, better, products, money to, use, business, more, order Scarcity minutes, there's, now, soon, go expire, hours, days, number, left Table 2: Top ranked keywords for persuasion strategies lar principles might also occur for social identity and scarcity where the use of words such as we , our and expire , left can reveal a lot about the persuasion strategies.",
"Different Percentage of Labeled Data: To fig-ure out the importance of supervision from mas-sages' overall persuasiveness scores, we experiment on SH Net with all the labeled messages.",
"To this end, we include all the labeled messages, and vary the percentage of unlabeled corpus from 0%, 25%, 50%, 75%, to 100%, in Figure 5",
"(a).",
"We found that as the amount of unlabeled messages increases, the accuracy of sentence level prediction increases as well, which further validates the effectiveness of the semi-supervised setting for persuasion strategy prediction.",
"Similarly, to investigate the predictive power introduced by the sentence level labels, we also vary the percentage of labeled messages from 25%, 50%, 75%, to 100% when including the whole unlabeled corpus, as shown in Figure 5",
"(b).",
"As expected, having more training data about sentence-level annotation increases the prediction performance.",
"Overall, these experiments demonstrate the effectiveness of semi-supervised models for predicting sentence level persuasion strategies.",
"This enables us Scarcity 5 days left $3475 needed .",
"to obtain sentence level labels for any given paragraphs by using a small amount of labeled data.",
"To validate whether our semi-supervised model captures characteristic words and sentences in requests, we visualize the attention in a sentence in Figure 6.",
"We show the predicted persuasion label for each sentence in a message in red (left-most columns in Figure",
"6(a) and",
"6(b)), with the color scale indicating its learned attention weight.",
"Word-level attention is highlighted in blue (re-maining columns).",
"As we can see in Figure",
"6(b), our model places emphasis on Scarcity , and highlights words such as left and day that carry the scarcity meaning.",
"Similarly, in the second message 5 days left 3475 needed our model first labeled the sentence as Scarcity , and then picked words such as days and left .",
"Sentences predicted as Concreteness seem to contain specific entities and concepts such as business , her , and home .",
"For Impact , our model accurately localizes words like in order to and cover .",
"To demonstrate that our model can learn representative words associated with different persuasion strategies, we show the 10 highest-scoring words from sentences with different labels in Table 2. Interestingly, Commitment is highly associated with words such as made and loan .",
"Explicit mentions of thanks and hard were found in sentences with Emotional labels.",
"Sentences that emphasize their team as a whole were labeled as Identity .",
"Overall, this validates that our model is able to select informative words associated with different persuasion strategies.",
"For further illustration, we visualized the attention weight distributions of different persuasion strategies.",
"Since the number of sentences inside each message is intertwined with attention weights, we only plotted the distributions for mes-Figure 7: Attention weight distributions of persuasion strategies in requests with 2-3 sentences.",
"sages with two or three sentences in Figure 7.",
"We observed that Scarcity , Identity , and Impact seem to play a relatively more important role for influ-encing the success of requests, whereas Emotional language, Commitment and Concreteness seem to concentrate more on the lower weight ranges.",
"After applying the semi-supervised hierarchical neural network to the unlabeled 40552 messages, we obtained their sentence-level persuasive strategies usages.",
"We showed the distribution of each persuasive strategy in the whole corpus in Figure 3, as described by the orange line.",
"To further investigate how important each persuasive strategy is for convincing others to make loans, in this section, we present results on which of them are predictive via linear regression.",
"All variables are standardized before entering the regression model.",
"We controlled for the number of team members in a team, the total amount of money contributed by this team, the number of messages posted in the discussion board of this team.",
"Since those variables are highly correlated with each other, we av-Persuasion Kiva RAOP Strategy (Coef.) (Odds ratio) Concreteness 0.041*** 1.111*** Commitment -0.015** 1.062 Emotional 0.030*** 1.145*** Identity 0.087*** 1.104** Impact/Value 0.024*** 1.084* Scarcity -0.076*** 1.118*** Table 3: The influences of different persuasive strategies on request success on Kiva and RAOP.",
"eraged them into a single variable to capture these team level attributes.",
"We also controlled for the amount of money the borrower requested.",
"We represented each message as a 6-dimensional vector to capture the amount of each persuasive strategy, which is calculated by selecting the maximum probability associated with each strategy from all sentences in this message.",
"The persuasive strategy features significantly improve the model fit, as indicated by a 11.8% improvement in adjusted R-squared from 0.152 to 0.170.",
"To demonstrate the generalizability of our persuasion strategies and the resulted semi-supervised model, we also applied our Semi-Att model to 5671 textual requests for pizza from the Reddit community Random Acts of Pizza (RAOP).",
"Specifically, we used the data released by Althoff et al., (2014) where each request asked for a free pizza and the outcome whether its author received a pizza or not was provided in the dataset.",
"Via Semi-Att , we were able to obtain the persuasive strategy used in each sentence of each request.",
"Similarly, we built a logistic regression model to predict whether a request will receive the pizza or not, controlling for the community age of the requester, the number of subred-dits the requester participated in, his/her number of posts as well as the votes (upvotes downvotes) this requester had received.",
"As shown in the column of RAOP in Table 3, concreteness is significantly correlated with success on both datasets.",
"This demonstrated that providing more evidence might help readers know the situation better, consistent with the effect of Evidentiality in Althoff et al., (2014).",
"Similarly, making the request full of emotions ( =0.030, Odds ratio (OR) =1.145), mentioning the similarity between potential readers and the requester ( =0.087, OR =1.104), and talking about the potential impact and value for others ( =0.024, OR =1.084) are all significantly associated with increases in the persuasiveness of these requests across two contexts.",
"In contrast, highlighting the urgency of the requests and emphasizing existing contribution to loans ( =-0.015) negatively correlate with request success ( =-0.076) on Kiva, confirming prior work (Yang and Kraut, 2017).",
"This communicates to us that some of those loans might have expired before others read the request and took action given the limited time available, or it could be that members thought their actions might not help if the remaining money needed is high and the time left is low, different from the limited-time offer tactics widely used in commercial advertising.",
"To sum up, the two analyses demonstrated that certain persuasive strategies such as Identity and Impact are consistently effective across two datasets, whereas Scarcity and Commitment contribute differently and need to be used with caution for different contexts.",
"In this work, we operationalized a set of persuasive strategies widely used in micro-lending platforms based on theories of persuasion, and developed an annotated corpus for identifying persuasion strategies.",
"We designed a semi-supervised hierarchical neural network to identify the persuasive strategies contained in loan requests.",
"Results show that our model improves accuracy considerably.",
"We also showed how different persuasive strategies contribute to request success.",
"In the future, we plan to build a richer taxonomy of persuasion strategies and incorporate additional neural architectures such as variational autoencoders to better represent sentences in each message to further assist the modeling of persuasiveness.",
"Beyond the text, images and even audios may provide additional insights on the successes of persuasive requests.",
"Our model also has important applications to other domains, such as in computational advertisements, micro-funding platforms and political campaigns.",
"The authors would like to thank Jason Eisner for his help at the brainstorm stage and insightful followup suggestions, and the anonymous reviewers for their helpful comments.",
"Diyi Yang was supported by Facebook Fellowship."
] | [
"abstain",
"abstain",
"objective",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"method",
"method",
"objective",
"abstain",
"objective",
"method",
"objective",
"abstain",
"result",
"result",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"method",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"other",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"method",
"result",
"objective",
"method",
"abstain",
"abstain",
"other",
"other"
] |
[
"Program induction for answering complex questions over knowledge bases (KBs) aims to decompose a question into a multi-step program, whose execution against the KB produces the final answer.",
"Learning to induce programs relies on a large number of parallel question-program pairs for the given KB.",
"However, for most KBs, the gold program annotations are usually lacking, making learning difficult.",
"In this paper, we propose the approach of program transfer , which aims to leverage the valuable program annotations on the rich-resourced KBs as external supervision signals to aid program induction for the low-resourced KBs that lack program annotations.",
"For program transfer, we design a novel two-stage parsing framework with an efficient ontology-guided pruning strategy.",
"First, a sketch parser translates the question into a high-level program sketch, which is the composition of functions.",
"Second, given the question and sketch, an argument parser searches the detailed arguments from the KB for functions.",
"During the searching, we incorporate the KB ontology to prune the search space.",
"The experiments on ComplexWebQuestions and WebQuestionSP show that our method outperforms SOTA methods significantly, demonstrating the effectiveness of program transfer and our framework.",
"Our codes and datasets can be obtained from https://github.com/ THU-KEG/ProgramTransfer .",
"Answering complex questions over knowledge bases (Complex KBQA) is a challenging task requiring logical, quantitative, and comparative reasoning over KBs (Hu et al., 2018; Lan et al., 2021).",
"Recently, the program induction (PI) paradigm, which gains increasing study in various areas (Lake et al., 2015; Neelakantan et al., 2017; Wong et al., Corresponding Author Program Relate Find Relate FilterConcept FilterConcept Find Question What Barcelonafacility whereFC Barcelonaplays canI visit? Answer Camp Nou What And FilterConcept touristattraction Relate tourist attractions Find Barcelona Find FC Barcelona Relate arenastadium FilterConcept sportsfacility Figure 1: An example question, the corresponding program, and the answer. The left side is the sketch, and the right side is the complete program, with dotted boxes denoting arguments for functions. 2021), emerges as a promising technique for Complex KBQA (Liang et al., 2017; Saha et al., 2019a; Ansari et al., 2019).",
"Given a KB, PI for Complex KBQA aims to decompose a complex question into a multi-step program, whose execution on the KB produces the answer.",
"Fig. 1 presents a complex question and its corresponding program whose functions take KB elements ( i.e. , entities, relations and concepts) as arguments.",
"E.g. , the relation tourist attractions is the argument of function Relate .",
"For most KBs, the parallel question-program pairs are lacking because such annotation is both expensive and labor-intensive.",
"Thus, the PI models have to learn only from question-answer pairs.",
"Typically, they take the answers as weak supervision and search for gold programs with reinforcement learning (RL) (Saha et al., 2019b; Liang et al., 2017; Ansari et al., 2019).",
"The combinatorial explosion in program space, along with extremely sparse rewards, makes the learning challenging.",
"Abundant attempts have been made to improve the stability of RL algorithms with pseudo-gold programs (Liang et al., 2017), noise-stabilizing wrapper (Ansari et al., 2019), or auxiliary rewards (Saha et al., 2019b).",
"Despite promising results, they re-8128 quire significant human efforts to develop carefully-designed heuristics or are constrained to relatively simple questions.",
"Recently, for several KBs, there emerge question-program annotation resources (Johnson et al., 2017; Cao et al., 2022).",
"Thanks to the supervision signals ( i.e. , program annotation for each question), the PI models on these rich-resourced KBs achieve impressive performance for even extremely complex questions, and are free from expert engineering.",
"Intuitively, leveraging these supervision signals to aid program induction for low-resourced KBs with only weak-supervision signals ( i.e. , question-answer pairs) is a promising direction.",
"In this paper, we formalize it as Program Transfer .",
"In practice, program transfer is challenging due to the following reasons:",
"(a) Domain Heterogeneity .",
"The questions and KBs across domains are both heterogeneous due to language and knowledge diversity (Lan et al., 2021).",
"It is hard to decide what to transfer for program induction.",
"(b) Unseen KB Elements.",
"The coverage of source KB is limited, e.g. , KQA Pro in (Cao et al., 2022) covers only 3.9% relations and 0.24% concepts of Wikidata.",
"Thus, most elements in the massive scale target KB are not covered in the source.",
"(c) Huge Search Space.",
"The search space of function arguments depends on the scale of target KB.",
"For realistic KBs containing millions of entities, concepts and relations, the huge search space is unmanageable.",
"To address the above problems, we propose a novel two-stage parsing framework with an efficient ontology-guided pruning strategy.",
"First, we design a sketch parser to parse the question into a program sketch (the left side in Fig. 1), which is composed of functions without arguments.",
"As Baroni (2019) points out, the composition of functions well captures the language compositionality.",
"Translation from questions to sketches is thus relevant to language compositional structure and independent of KB structure.",
"Therefore, our sketch parser can transfer across KBs.",
"Second, we design an argument parser to fill in the detailed arguments (typi-cally KB elements) for functions in the sketch.",
"It retrieves relevant KB elements from the target KB and ranks them according to the question.",
"Specifically, it identifies KB elements with their label descriptions and relies on language understanding to resolve unseen ones.",
"We further propose an ontology-guided pruning strategy, which introduces high-level KB ontology to prune the candidate space for the argument parser, thus alleviating the problem of huge search space.",
"Specifically, the sketch parser is implemented with a Seq2Seq model with the attention mechanism.",
"The argument parser identifies elements through semantic matching and utilizes pre-trained language models (Devlin et al., 2019) for language understanding.",
"The high-level ontology includes the domain and range of relations and entity types.",
"In evaluation, we take the Wikidata-based KQA Pro as the source, Freebase-based ComplexWebQuestions and WebQuestionSP as the target domain datasets.",
"Experimental results show that our method improves the F1 score by 14.7% and 2.5% respectively, compared with SOTA methods that learn from question-answer pairs.",
"Our contributions include:",
"(a) proposing the approach of program transfer for Complex KBQA for the first time;",
"(b) proposing a novel two-stage parsing framework with an efficient ontology-guided pruning strategy for program transfer;",
"(c) demonstrating the effectiveness of program transfer through extensive experiments and careful ablation studies on two benchmark datasets.",
"KBQA .",
"KBQA aims to find answers for questions expressed in natural language from a KB, such as Freebase (Bollacker et al., 2008), DBpedia (Lehmann et al., 2015) and Wikidata (Vran-decic and Krtzsch, 2014).",
"Current methods for KBQA can be categorized into two groups: 1) semantic parsing based methods (Berant et al., 2013; Yih et al., 2015; Cheng et al., 2017; Liang et al., 2017; Ansari et al., 2019), which learn a semantic parser that converts questions into intermediate logic forms which can be executed against a KB; 2) information retrieval based methods (Bor-des et al., 2014; Xu et al., 2016; Miller et al., 2016; Zhang et al., 2018; Sun et al., 2018, 2019; Shi et al., 2021), which retrieve candidate answers from the topic-entity-centric subgraph and then rank them according to the questions.",
"Recently, semantic parsing for KBQA has gained increasing research attention because the methods are effective and more interpretable.",
"Multiple kinds of logical forms have been proposed and researched, such as SPARQL (hommeaux, 2011), DCS (Liang, 2013), -calculus (Artzi et al., 2013), query graph (Yih et al., 2015), program (Liang et al., 2017).",
"PI aims to convert questions into 8129 programs, and is in line with semantic parsing.",
"Cross-domain Semantic Parsing.",
"Cross-domain semantic parsing trains a semantic parser on some source domains and adapts it to the target domain.",
"Some works (Herzig and Berant, 2017; Su and Yan, 2017; Fan et al., 2017) pool together examples from multiple datasets in different domains and train a single sequence-to-sequence model over all examples, sharing parameters across domains.",
"However, these methods rely on annotated logic forms in the target domain.",
"To facilitate low-resource target domains, (Chen et al., 2020) adapts to target domains with a very limited amount of annotated data.",
"Other works consider a zero-shot semantic parsing task (Givoli and Reichart, 2019), decoupling structures from lexicons for transfer.",
"However, they only learn from the source domain without further learning from the target domain using the transferred prior knowledge.",
"In addition, existing works mainly focus on the domains in OVERNIGHT (Wang et al., 2015), which are much smaller than large scale KBs such as Wikidata and Freebase.",
"Considering the complex schema of large scale KBs, transfer in ours setting is more challenging.",
"Knowledge Bases .",
"Knowledge base describes concepts, entities, and the relations between them.",
"It can be formalized as KB = {C , E , R , T } .",
"C , E , R and T denote the sets of concepts, entities, relations and triples respectively.",
"Relation set R can be formalized as R = { r e , r c } R l , where r e is instanceOf , r c is subClassOf , and R l is the general relation set.",
"T can be divided into three disjoint subsets: (1) instanceOf triple set T e = { ( e, r e , c ) | e E , c C} ; (2) subClassOf triple set T c = { ( c i , r c , c j ) | c i , c j C} ; (3) relational triple set T l = { ( e i , r, e j ) | e i , e j E , r R l } .",
"Program .",
"Program is composed of symbolic functions with arguments, and produces an answer when executed against a KB.",
"Each function defines a basic operation on KB and takes a specific type of argument.",
"For example, the function Relate aims to find entities that have a specific relation with the given entity.",
"Formally, a program y is denoted as (cid:10) o 1 [ arg 1 ] , , o t [ arg t ] , , o | y | [ arg | y | ] (cid:11) , o t O , arg t E C R .",
"Here, O is a pre-defined function set, which covers basic reasoning opera-Function ArgumentType Argument Description Find entity FC Barcelona Find the specific KB entity Relate relation arena stadium Find the entities that hold a specific relation with the given entity FilterConcept concept sports facility Find the entities that belong to a specific concept And -Return the intersection of two entity sets Table 1: Function examples.",
"tions over KBs (Cao et al., 2022).",
"According to the argument type, O can be devided into four disjoint subsets: O = OE OC OR O , representing the functions whose argument type is entity, concept, relation and empty respectively.",
"Table 1 gives some examples of program functions.",
"Program Induction .",
"Given a KB , and a complex natural language question x = (cid:10) w 1 , w 2 , , w | x | (cid:11) , it aims to produce a program y that generates the right answer z when executed against KB .",
"Program Transfer .",
"In this task, we have access to the source domain data S = (cid:10) KBS , DS (cid:11) , where DS contains pairs of question and program { ( x Si , y Si ) } n S i =1 ; and target domain data T = (cid:10) KBT , DT (cid:11) , where DT contains pairs of question and answer { ( x Ti , z Ti ) } n T i =1 .",
"We aim at learning a PI model to translate a question x for KBT into program y , which produces the correct answer when executed on KBT .",
"As mentioned in the introduction, to perform program transfer for Complex KBQA, we need to address three crucial problems: (1) What to transfer when both questions and KBs are heterogeneous?",
"(2) How to deal with the KB elements unseen in the external annotations?",
"(3) How to prune the search space of input arguments to alleviate the huge search space problem?",
"In this section, we introduce our two-stage parsing framework with an ontology-guided pruning strategy, which is shown in Fig. 2.",
"(1) Sketch Parser : At the first stage, we design a sketch parser f s to parse x into a program sketch y s = (cid:10) o 1 , , o t , o | y | (cid:11) , which is a sequence of functions without arguments.",
"The sketch parsing process can be formulated as y s = f s ( x ) .",
"Translation from question to sketch is relevant to language compositionality, and irrelevant to KB structure.",
"Therefore, the sketch parser can generalize across KBs.",
"(2) Argument Parser : At the second stage, we design an argument parser f a to retrieve the argument arg t from a candidate pool P for each function o t , which can be formulated as arg t = f a ( x, o t , P ) .",
"Here, the candidate pool P contains the relevant elements in KBT , including concepts, entities, and relations.",
"In a real KB, the candidate pool is usually huge, which makes searching and learning from answers very hard.",
"Therefore, we propose an ontology-guided pruning strategy, which dynamically updates the candidate pool and progressively reduces its search space.",
"In the following we will introduce the implementation details of our sketch parser (Section 4.1), argument parser (Section 4.2) and training strategies (Section 4.3).",
"The sketch parser is based on encoder-decoder model (Sutskever et al., 2014) with attention mechanism (Dong and Lapata, 2016).",
"We aim to estimate p ( y s | x ) , the conditional probability of sketch y s given input x .",
"It can be decomposed as: p ( y s | x ) = | y s | (cid:89) t =1 p ( o t | o <t , x ) , (3) where o <t = o 1 , ..., o t 1 .",
"Specifically, our sketch parser comprises a question encoder that encodes the question into vectors and a sketch decoder that autoregressively outputs the sketch step-by-step.",
"The details are as follows: Question Encoder.",
"We utilize BERT (Devlin et al., 2019) as the encoder.",
"Formally, x , ( x 1 , , x i , , x | x | ) = BERT ( x ) , (4) where x R d is the question embedding, and x i R d is the hidden vector of word x i .",
"d is the hidden dimension.",
"Sketch Decoder.",
"We use Gated Recurrent Unit (GRU) (Cho et al., 2014), a well-known variant of RNNs, as our decoder of program sketch.",
"The decoding is conducted step by step.",
"After we have predicted o t 1 , the hidden state of step t is computed as: h t = GRU ( h t 1 , o t 1 ) , (5) where h t 1 is the hidden state from last time step, o t 1 = [ W ] o t 1 denotes the embedding 8131 corresponding to o t 1 in the embedding matrix W R |O| d .",
"We use h t as the attention key to compute scores for each word in the question based on the hidden vector x i , and compute the attention vector c t as: i = exp( x T i h t ) (cid:80) | x | j =1 exp( x T j h t ) , c t = | x | (cid:88) i =1 i x i .",
"(6) The information of h t and c t are fused to predict the final probability of the next sketch token: g t = h t + c t , p ( o t | o <t , x ) = [ Softmax ( MLP ( g t ))] o t , (7) where MLP (short for multi-layer perceptron) projects d -dimensional feature to |O| -dimension, which consists of two linear layers with ReLU activation.",
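A minimal PyTorch sketch of one decoding step (Eqs. 5-7); the function signature, names, and dimensions are our illustrative choices:

```python
import torch
import torch.nn.functional as F

def decode_step(gru_cell, W, mlp, h_prev, prev_op, X):
    """One step of the sketch decoder (Eqs. 5-7).

    gru_cell: torch.nn.GRUCell(d, d); W: (|O|, d) function-embedding matrix;
    mlp: maps a d-dim vector to |O| logits; h_prev: (d,) previous hidden state;
    prev_op: int id of o_{t-1}; X: (n_words, d) BERT vectors of the question.
    """
    h_t = gru_cell(W[prev_op].unsqueeze(0), h_prev.unsqueeze(0)).squeeze(0)  # Eq. (5)
    alpha = F.softmax(X @ h_t, dim=0)      # attention over question words, Eq. (6)
    c_t = alpha @ X                        # attention vector c_t, Eq. (6)
    g_t = h_t + c_t                        # fused representation
    probs = F.softmax(mlp(g_t), dim=-1)    # distribution over functions, Eq. (7)
    return probs, g_t, h_t
```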
"In the above section, the sketch is obtained with a sketch parser.",
"In this section, we will introduce our argument parser, which aims to retrieve the argument arg t from the target KB for each function o t in the sketch.",
"To reduce the search space, it retrieves arguments from a restricted candidate pool P , which is constructed with our ontology-guided pruning strategy.",
"In the following, we will introduce the argument retrieval process and the candidate pool construction process.",
"Argument Retrieval .",
"Specifically, we take g t in Equation 7 as the context representation of o t , learn vector representation P i R d for each candidate P i , and calculate the probability for P i based on g t and P i .",
"Candidate P i is encoded with the BERT encoder in Equation 4, which can be formulated as: P i = BERT ( P i ) .",
"P i is the i th row of P .",
"The probability of candidate arg t is calculated as: p ( arg t | x, o t , P ) = [ Softmax ( Pg t )] arg t .",
"Candidate Pool Construction .",
"In the following, we will introduce the KB ontology first.",
"Then, we will describe the rationale of our ontology-guided pruning strategy and its implementation details.",
"ontology.",
"Specifically, a relation r comes with a domain dom ( r ) C and a range ran ( r ) C .",
"An entity e comes with a type type ( e ) = { c | ( e, instanceOf , c ) T } .",
"For example, as shown in Fig. 2, sports team owner dom ( teams owned ) , sports team ran ( teams owned ) , and sports team type ( Baltimore Ravens ) .",
"The rationale of our pruning is that the arguments for program functions are mutually constrained according to the KB ontology.",
"Therefore, when the argument arg t for o t is determined, the possible candidates for { o i } | y s | i = t +1 will be adjusted.",
"For example, in Fig. 2, when Relate takes teams owned as the argument, the candidate pool for the next FilterConcept is constrained to the range of relation teams owned , thus other concepts ( e.g. , time zone ) will be excluded from the candidate pool.",
"In practice, we propose a set of ontology-oriented operators to adjust the candidate pool P step-by-step.",
"Specifically, we define three ontology-oriented operators C ( e ) , R ( r ) , D ( c ) , which aim to find the type of entity e , the range of relation r , and the relations whose domain contains c .",
"Furthermore, we use the operators to maintain an entity pool PE , a relation pool PR and a concept pool PC .",
"When arg t of o t is determined, we will update PE , PR , and PC using C ( e ) , R ( r ) , D ( c ) .",
"We take one of the three pools as P according to the argument type of o t .",
"The detailed algorithm is shown in Appendix.",
"We train our model using the popular pretrain-finetune paradigm.",
"Specifically, we pretrain the parsers on the source domain data DS = (cid:8)(cid:0) x Si , y Si (cid:1)(cid:9) n S i =1 in a supervised way.",
"After that, we conduct finetuning on the target domain data DT = (cid:8)(cid:0) x Ti , z Ti (cid:1)(cid:9) n T i =1 in a weakly supervised way.",
"Pretraining in Source Domain.",
"Since the source domain data provides complete annotations, we can directly maximize the log-likelihood of the golden sketch and golden arguments: L pretrain = (cid:88) ( x S ,y S ) D S (cid:18) log p ( y S s | x S ) + | y s | (cid:88) t =1 log p ( arg St | x S , o St , P ) (cid:19) .",
"(10) 8132 Finetuning in Target Domain.",
"At this training phase, questions are labeled with answers while programs remain unknown.",
"The basic idea is to search for potentially correct programs and optimize their corresponding probabilities.",
"Specifically, we propose two training strategies: Hard-EM Approach.",
"At each training step, hard-EM generates a set of possible programs with beam search based on current model parameters, and then executes them to find the one whose answers have the highest F1 score compared with the gold.",
"Let y T denote the best program, we directly maximize p ( y T | x T ) like Equation 10.",
"Reinforcement learning (RL).",
"It formulates the program generation as a decision making procedure and computes the rewards for sampled programs based on their execution results.",
"We take the F1 score between the executed answers and golden answers as the reward value, and use REINFORCE (Williams, 1992) algorithm to optimize the parsers.",
"Source Domain.",
"KQA Pro (Cao et al., 2022) provides 117,970 question-program pairs based on a Wikidata (Vrandecic and Krtzsch, 2014) subset.",
"Target Domain.",
"We use WebQuestionSP (We-bQSP) (Yih et al., 2016) and ComplexWebQuestions (CWQ) (Talmor and Berant, 2018) as the target domain datasets for two reasons: (1) They are two widely used benchmark datasets in Complex KBQA; (2) They are based on a large-scale KB Freebase (Bollacker et al., 2008), which makes program transfer challenging.",
"Specifically, WebQSP contains 4,737 questions and is divided into 2,998 train, 100 dev and 1,639 test cases.",
"CWQ is an extended version of WebQSP which is more challenging, with four types of questions: composition (44.7%), conjunction (43.6%), comparative (6.2%), and superlative (5.4%).",
"CWQ is divided into 27,639 train, 3,519 dev and 3,531 test cases.",
"We use the Freebase dump on 2015-08-09 1 , from which we extract the type of entities, domain and range of relations to construct the ontology.",
"The average domain, range, type size is 1.43 per relation, 1.17 per relation, 8.89 per entity respectively.",
"Table 2 shows the statistics of the source and target domain KB.",
"The target domain KB contains much more KB elements, and most of them are uncovered by the source domain.",
"In our experiments, we select representative models that learn from question-answer pairs as our baselines.",
"They can be categorized into three groups: program induction methods, query graph generation methods and information retrieval methods.",
"Existing program induction methods search for gold programs with RL.",
"They usually require human efforts or are constrained to simple questions.",
"NSM (Liang et al., 2017) uses the provided entity, relation and type annotations to ease the search, and can solve relatively simple questions.",
"NPI (Ansari et al., 2019) designs heuristic rules such as disallowing repeating or useless actions for efficient search.",
"Existing query graph generation methods generate query graphs whose execution on KBs produces the answer.",
"They use entity-level triples as search guidance, ignoring the useful ontology.",
"TEXTRAY (Bhutani et al., 2019) uses a decompose-execute-join approach.",
"QGG (Lan and Jiang, 2020) incorporates constraints into query graphs in the early stage.",
"TeacherNet (He et al., 2021) utilizes bidirectional searching.",
"Existing information retrieval methods directly construct a question-specific sub-KB and then rank the entities in the sub-KB to get the answer.",
"GraftNet (Sun et al., 2018) uses heuristics to create the subgraph and uses a variant of graph convolutional networks to rank the entities.",
"PullNet (Sun et al., 2019) improves GraftNet by iteratively constructing the subgraph instead of using heuristics.",
"Besides, we compare our full model Ours with Ours -f , Ours -p , Ours -pa , Ours -o , which denotes our model without finetuning, without pretraining, without pretraining of argument parser, and without our ontology-guided pruning strategy respectively.",
"Following prior works (Berant et al., 2013; Sun et al., 2018; He et al., 2021), we use F1 score and Hit@1 as the evaluation metrics.",
"Since questions in the datasets have multiple answers, F1 score reflects the coverage of predicted answers better.",
"We used the bert-base-cased model of Hugging-Face 2 as our BERT encoder with the hidden dimension d 768.",
"The hidden dimension of the sketch decoder d was 1024.",
"We used AdamW (Loshchilov and Hutter, 2019) as our optimizer.",
"We searched the learning rate for BERT paramters in { 1e-4, 3e-5, 1e-5 } , the learning rate for other parameters in { 1e-3, 1e-4, 1e-5 } , and the weight decay in { 1e-4, 1e-5, 1e-6 } .",
"According to the performance on dev set, we finally used learning rate 3e-5 for BERT parameters, 1e-3 for other parameters, and weight decay 1e-5.",
"As shown in Table 3, our model achieves the best performance on both WebQSP and CWQ.",
"Especially on CWQ, we have an absolute gain of 14.7% in F1 and 9.3% in Hit@1, beating previous methods by a large margin.",
"Note that CWQ is much more challenging than WebQSP because it includes more compositional and conjunctional questions.",
"Previous works mainly suffer from the huge search 2 https://github.com/huggingface/transformers space and sparse training signals.",
"We alleviate these issues by transferring the prior knowledge from external annotations and incorporating the ontology guidance.",
"Both of them reduce the search space substantially.",
"On WebQSP, we achieve an absolute gain of 2.5% and 0.3% in F1 and Hit@1, respectively, demonstrating that our model can also handle simple questions well, and can adapt to different complexities of questions.",
"Note that our F1 scores are higher than the corresponding Hit@1.",
"This is because we just randomly sampled one answer from the returned answer set as the top 1 without ranking them.",
"We utilize beam search to generate multiple possible programs and evaluate their performance.",
"Table 4 shows the highest F1 score in the top-k generated programs, where top-1 is the same as Table 3.",
"We can see that the best F1 in the top-10 programs is much higher than the F1 of the top-1 ( e.g. , with an absolute gain 10.4% for WebQSP and 6.3% for CWQ).",
"This indicates that a good re-ranking method can further improve the overall performance of our model.",
"We leave this as our future work.",
"Pretraining: As shown in Table 3, when comparing Ours -pa with Ours, the F1 and Hit@1 on CWQ drop by 4 .",
"2% and 3 .",
"8% respectively, which indicates that the pretraining for the argument parser is necessary.",
"Ours -p denotes the model without pretraining for neither sketch parser nor argument parser.",
"We can see that its results are very poor, achieving just about 3% and 2% on WebQSP and CWQ, indicating that the pretraining is essential, especially for the sketch parser.",
"Finetuning: Without finetuning on the target data, i.e. , in Ours -f , performance drops a lot compared with the complete model.",
"For example, F1 and Hit@1 on CWQ drop by 12.8% and 12.9% respectively.",
"It indicates that finetuning is necessary for the model's performance.",
"As shown in Table 2, most of the relations and concepts in the target domain are uncovered by the source domain.",
"Due to 8134 the semantic gap between source and target data, the prior knowledge must be properly transferred to the target domain to bring into full play.",
"Ontology: We implemented Ours -o by removing ontology from KB and removing FilterConcept from the program.",
"Comparing Ours -o with Ours, the F1 and Hit@1 on CWQ drops by 2.9% and 3.4% respectively, which demonstrates the importance of ontology-guided pruning strategy.",
"We calculated the search space size for each compositional and conjunctive question in the dev set of CWQ, and report the average size in Table 5.",
"The statistics shows that, the average search space size of Ours is only 0.26% and 3.2% of that in Ours -o for the two kinds of questions.",
"By incorporating the ontology guidance, Ours substantially reduces the search space.",
"Hard-EM v.s. RL: For both WebQSP and CWQ, training with Hard-EM achieves better performance.",
"For RL, we simply employed the REINFORCE algorithm and did not implement any auxiliary reward strategy since this is not the focus of our work.",
"The sparse, delayed reward causes high variance, instability, and local minima issues, making the training hard (Saha et al., 2019b).",
"We leave exploring more complex training strategies as our future work.",
"Fig. 3 gives a case, where our model parses an question into multiple programs along with their probablility scores and F1 scores of executed answers.",
"Given the question The person whose education institution is Robert G. Cole Junior-Senior High School played for what basketball teams? , we show the programs with the largest, 2-nd largest Find R.G.C. High School Relate [inv] educationinstitution Fil.Con.",
"and 10-th largest possibility score.",
"Both of the top-2 programs get the correct answer set and are semantically equivelant with the question, while the 10-th best program is wrong.",
"Error Analysis We randomly sampled 100 error cases whose F1 score is lower than 0.1 for manual inspection.",
"The errors can be summarized into the following categories: (1) Wrong relation (53%): wrongly predicted relation makes the program wrong, e.g. , for question What language do people in the Central Western Time Zone speak? , our model predicts the relation main country , while the ground truth is countries spoken in ; (2) Wrong concept (38%): wrongly predicted concept makes the program wrong, e.g. , for the question What continent does the leader Ovadia Yosel live in? , our model predicted the concept location , whereas the ground truth is continent .",
"(3) Model limitation (9%): Handling attribute constraint was not considered in our model, e.g. , for the question Who held his governmental position from before April 4, 1861 and influenced Whitman's poetry? , the time constraint April 4, 1861 cannot be handled.",
"In this parper, we propose program transfer for Complex KBQA for the first time.",
"We propose a novel two-stage parsing framework with an efficient ontology-guided pruning strategy.",
"First, a sketch parser translates a question into the program, and then an argument parser fills in the detailed 8135 arguments for functions, whose search space is restricted by an ontology-guided pruning strategy.",
"The experimental results demonstrate that our program transfer approach outperforms the previous methods significantly.",
"The ablation studies show that our two-stage parsing paradigm and ontology-guided pruning are both effective.",
"This work is founded by the National Key Research and Development Program of China (2020AAA0106501), the Institute for Guo Qiang, Tsinghua University (2019GQB0003), Huawei Noah's Ark Lab and Beijing Academy of Artificial Intelligence."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"objective",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"method",
"objective",
"objective",
"abstain",
"objective",
"result",
"other"
] |
[
"We consider the problem of learning distributed representations for entities and relations of multi-relational data so as to predict missing links therein.",
"Convolutional neural networks have recently shown their superiority for this problem, bringing increased model expressiveness while remaining parameter efficient.",
"Despite the success, previous convolution designs fail to model full interactions between input entities and relations, which potentially limits the performance of link prediction.",
"In this work we introduce ConvR, an adaptive convolutional network designed to maximize entity-relation interactions in a convolutional fashion.",
"ConvR adaptively constructs convolution filters from relation representations, and applies these filters across entity representations to generate convolutional features.",
"As such, ConvR enables rich interactions between entity and relation representations at diverse regions, and all the convolutional features generated will be able to capture such interactions.",
"We evaluate ConvR on multiple benchmark datasets.",
"Experimental results show that: (1) ConvR performs substantially better than competitive baselines in almost all the metrics and on all the datasets; (2) Compared with state-of-the-art convolutional models, ConvR is not only more effective but also more efficient.",
"It offers a 7% increase in MRR and a 6% increase in Hits@10, while saving 12% in parameter storage.",
"Multi-relational data refers to directed graphs whose nodes correspond to entities and edges different types of relations between entities.",
"An edge of the form ( subject , relation , object ) indicates that there exists a specific relation between the subject and object entities.",
"Learning with multi-relational data plays a pivotal role in many application domains, ranging from social networks or recommender systems to large-scale knowledge bases (KBs) (Bordes et al., 2013; Jenatton et al., 2012).",
"This work focuses on modeling multi-relational data from KBs, with the aim of predicting missing facts on KBs, a challenging task known as link prediction in statistical relational learning (SRL) (Getoor and Taskar, 2007).",
"Various SRL techniques (Nickel et al., 2016a) have been proposed for this task, among which vector space embedding models (Wang et al., 2017) are gaining increasing attention due to their superior performance and potential scalability.",
"The key idea there is to learn and operate on latent features (embeddings) of entities and relations, so as to uncover non-trivial connectivity patterns in multi-relational data.",
"Previous works of this kind tend to adopt shallow, simple models to extract latent features, e.g., the translation based models (Bordes et al., 2013; Wang et al., 2014) or the bilinear models and their variants (Jenatton et al., 2012; Yang et al., 2015; Trouillon et al., 2016).",
"Using these simple models allows one to easily handle large-scale KBs, but usually at the cost of learning less expressive features.",
"In fact, such simple models typically generate a single feature with each entry of the embeddings.",
"The only way to increase the number of features (and thus their expressiveness) is to increase the embedding size (Dettmers et al., 2018).",
"This potentially limits the performance of link prediction with a given number of parameters.",
"To increase model expressiveness, there emerge some deeper, more complicated designs, in particular those on the basis of neural network architectures (Socher et al., 2013; Bordes et al., 2014; Dong et al., 2014; Schlichtkrull et al., 2017a).",
"Such approaches, however, often have more parameters and are prone to overfit, at least on the (relatively small) benchmark datasets used by the scientific community (Nickel et al., 2016a).",
"Recently, Dettmers et al. (2018) devised ConvE, 979 a multi-layer convolutional network which enables expressive feature learning while remaining highly parameter efficient.",
"Given a subject-relation-object triple ( s, r, o ) , ConvE first reshapes the vector representation of the subject s and that of the relation r into 2D matrices, and then concatenates the two matrices and feeds them into a 2D convolutional layer to extract higher-level, non-linear features, as illustrated in Figure",
"1(a).",
"The resultant convolutional features are finally projected and matched with the vector representation of the object o via an inner product.",
"Note that by sliding across the embeddings using small-sized filters, the convolution operator can easily generate much more features without increasing the embedding size.",
"As such, ConvE offers increased expressiveness and achieves competitive performance in link prediction.",
"Nevertheless, despite its success, ConvE is still insufficient to fully capture the interactions between input entities and relations, which has long been recognized as crucial for modeling multi-relational data (Nickel et al., 2011; Garca-Durn et al., 2014; Trouillon et al., 2016).",
"In ConvE, the (reshaped) representations of input entities and relations are simply stacked together and fed into a convolutional layer.",
"Although 2D convolution is better than 1D convolution in modeling entity-relation interactions, typical 2D convolution with global filters on such a stacked matrix, however, can only model interactions around the concatenation line (Dettmers et al., 2018).",
"Consider the example in Figure",
"1(a), where two matrices of size 3 3 are formed after reshaping, stacked, and fed as input to a convolutional layer.",
"Convolving across the input with a global filter of size 2 2 will then be able to model interactions only in the regions where the two matrices adjoin (e.g., the region outlined in red).",
"That means, only a small proportion of the output convolutional features (20% in this example, striped with orange and blue) will effectively capture entity-relation interactions, and the vast majority others will be entityor relation-independent.",
"This poses potential negative impacts on the link prediction task.",
"This paper, aiming at maximizing the interactions between input entities and relations, introduces ConvR, an adaptive convolutional network specifically designed for multi-relational data.",
"As illustrated in Figure",
"1(b), the key idea of ConvR is to facilitate convolution across entity representations with its filters adaptively constructed from relation representations.",
"Such adaptive convolution will model the interactions between the two types of input not only more naturally but also more effectively.",
"Specifically, given a triple, the vector representation of the subject is reshaped and fed as input to a convolutional layer, while that of the relation is split and reshaped into a set of filters.",
"ConvR then convolves across the input with these filters, enabling each filter (a part of the relation representation) to interact with diverse regions of the input (the entity representation).",
"Through this adaptive convolution process, all the features generated will be able to capture entity-relation interactions (striped with orange and blue in Figure",
"1(b)).",
"These convolutional features are finally projected and matched with the representation of the object.",
"Besides being more effective, adaptive convolution enables potentially more efficient modeling (in terms of the number of parameters).",
"Compared with ConvE (Figure",
"1(a)), ConvR (Figure",
"1(b)) needs no global filters and generates smaller feature maps, making the follow-up projection layer roughly half as large as that of ConvE.",
"The idea of adaptive convolution, in fact, is rather generic for the multi-relational scenario.",
"By splitting and reshaping relation vectors, ConvR can be easily generalized to other paradigms such as 1D or 3D convolution, not restricted to the 2D setting.",
"To facilitate a direct and fair comparison to ConvE where only 2D convolution is considered and tested, this paper takes the 2D setting as an example, and shows the superiority of ConvR over ConvE in this setting.",
"We will investigate higher dimensional convolution in our future work.",
"Our contributions are as follows.",
"(1) We propose a novel adaptive convolution model for learning with multi-relational data.",
"Our approach, ConvR, takes full advantage of entity-relation interactions in a convolutional fashion, while still remaining highly parameter efficient.",
"(2) We evaluate ConvR in the link prediction task on KBs and achieve very promising results on multiple benchmark datasets, including not only the popular WN18 and FB15K (Bordes et al., 2013), but also the more difficult WN18RR (Dettmers et al., 2018) and FB15K-237 (Toutanova and Chen, 2015).",
"(3) We systematically compare the efficiency and effectiveness of ConvR and ConvE on FB15K-237, showing that ConvR can perform substantially better with a good variety of parameter settings.",
"In particular, it offers a 7% 980 Global filters Reshape Stack Convolve Embeddings (cid:256) Image (cid:257) Feature maps s r",
"(b) ConvR: Convolution with relation-specfic filters.",
"increase in MRR and a 6% increase in Hits@10, with the total parameter number only 88% as large as that of ConvE.",
"We consider multi-relational data represented as a graph, which can also be formalized as a set of subject-relation-object triples G = { ( s, r, o ) } E R E .",
"Here, E is the set of entities, and R the set of relations.",
"Each triple ( s, r, o ) is composed of a subject entity s E , a relation r R , and an object entity o E , indicating that there exists a relation of type r between the two entities s and o .",
"Such triples are also called facts in knowledge bases (KBs).",
"We follow (Dettmers et al., 2018) and formalize link prediction on multi-relational data as a pointwise learning to rank problem, where the objective is to learn a scoring function : E R E R .",
"For any input triple ( s, r, o ) , the higher the score ( s, r, o ) , the more likely the triple is true.",
"Various statistical relational learning (SRL) techniques have been proposed for this task.",
"See (Nickel et al., 2016a) for a thorough review of such techniques, with their application on large-scale KBs.",
"This paper focuses on vector space embedding models, a branch of SRL with superior performance and potential scalability.",
"Given an input triple ( s, r, o ) , a model of this kind first maps the entities s, o and relation r to their distributed representations (i.e., embeddings), usually vectors s , r , o R d for efficient learning.",
"A score is then defined for the triple by operating on these distributed representations, i.e., ( s, r, o ) = ( s , r , o ) .",
"A great many approaches of this kind have been devised in the last few years, where a key difference is the designing of the scoring function ( s , r , o ) .",
"See (Wang et al., 2017) for a recent survey.",
"This section presents ConvR, an adaptive convolutional network specifically designed for learning with multi-relational data.",
"The key idea of ConvR is to facilitate convolution across entity representations with its filters adaptively constructed from relation representations, so as to maximize the interactions between the two types of input.",
"Figure",
"1(b) provides a simple illustration of this idea.",
"In the rest of this section, we detail the ConvR model, discuss parameter learning of it, and show its advantages over ConvE, a convolutional network achieving promising results in multi-relational link prediction (Dettmers et al., 2018).",
"To facilitate a direct and fair comparison to ConvE, we focus on the 2D setting.",
"But the idea can be easily generalized to other convolution paradigms.",
"The ConvR model Given a triple ( s, r, o ) , ConvR maps the two entities s, o to vectors s , o R d e , and the relation r to vector r R d r , where d e and d r are the embedding size of entities and relations, respectively.",
"Then, the subject vector s is reshaped into a 2D matrix S R d he d we (where d e = d he d we ) and fed as input to a convolutional layer.",
"As shown in (Dettmers et al., 2018), using 2D rather than 1D convolution would be able to extract more feature interactions and increase model expressiveness.",
"The relation vector r is further split into blocks r (1) , , r ( c ) with equal size, where each r ( (cid:2) ) R d r /c is reshaped into a 2D convolution filter R ( (cid:2) ) R h w .",
"Here, c is the number of filters, h and w the height and width of each filter, and d r = chw .",
"Figure",
"1(b) gives a simple example of this reshaping process, where a subject vector of length 9 is reshaped into a 3 3 matrix, and a relation vector of length 8 is split and reshaped into 981 s r = Adaptive convolution Feature maps s r s r s r = RS (1) Figure 2: A simple illustration of adaptive convolution, where a 2 2 filter R (1) (constructed from the first half of the relation vector r ) is convolved across a 3 3 input S (reshaped from the subject vector s ), generating a feature map of size 2 2 .",
"After reshaping, ConvR convolves across the input S using these adaptively constructed, relation-specific filters.",
"For each filter R ( (cid:2) ) , a convolutional feature map C ( (cid:2) ) R ( d he h +1) ( d we w +1) will be generated, with the mn -th entry calculated as: c ( (cid:2) ) m,n = f (cid:2)(cid:3) i,j s m + i 1 ,n + j 1 r ( (cid:2) ) i,j (cid:4) , (1) where f ( ) is a non-linear function, e.g., ReLU (Krizhevsky et al., 2012).",
"Figure 2 visualizes how such a feature map could be generated by convolving across the input with a relation-specific filter (the first equality sign = ), and how each entry of the feature map could be calculated with the original entity and relation vectors (the second equality sign = ).",
"We can see that the adaptive convolution paradigm is quite effective in modeling entity-relation interactions.",
"It enables rich interactions between input entity and relation representations at diverse regions, and all the convolutional features generated will be able to capture such interactions.",
"Finally, to compute the triple score ( s, r, o ) , we flatten the convolutional feature maps C (1) , , C ( c ) and stack them into a single vector c .",
"This vector is then projected into R d e by a fully-connected layer, and matched with the object vector o with an inner product, i.e., ( s, r, o ) = f ( Wc + b ) (cid:2) o , (2) where W R d e c ( d he h +1)( d we w +1) and b R d e are parameters of the fully-connected layer, and f ( ) is again a non-linear function.",
"1 During reshaping we consider the most natural ordering of the embedding entries.",
"That means, a lengthx vector is reshaped into a y z matrix ( x = yz ) such that the first row of the matrix comes from the first z entries of the vector, the second row from the second z entries, and the y -th row from the last z entries.",
"Parameter learning For learning the model parameters, we follow (Dettmers et al., 2018) and use 1-to-many scoring to speed-up training and evaluation.",
"Unlike traditional 1-to-1 scoring which takes a triple ( s, r, o ) as input and directly scores it, 1-to-many scoring takes ( s, r ) as input and scores it against all candidate objects o E simultaneously, generating a score vector p s,r R |E| .",
"Each dimension of this score vector corresponds to an entity o E , calculated as p s,ro = ( ( s, r, o )) , where ( s, r, o ) is the triple score defined in Eq.",
"(2) and ( x ) = 1 1+ e x the sigmoid function.",
"For each input ( s, r ) , we minimize the following cross-entropy loss: L ( s,r )= 1 |E| (cid:3) o E y s,ro log( p s,ro )+ (1 y s,ro )log(1 p s,ro ) , (3) where y s,ro is a binary label.",
"We have y s,ro = 1 if ( s, r, o ) is a valid triple and y s,ro = 0 otherwise.",
"During optimization, we use dropout (Srivastava et al., 2014) to prevent overfitting.",
"Specifically, we use dropout on the reshaped subject representations, the convolutional feature maps, and the projected vectors after the fully-connected layer.",
"We also use batch normalization (Ioffe and Szegedy, 2015) on these representations to stabilize and speed up convergence.",
"We use Adam (Kingma and Ba, 2014) optimizer and label smoothing (Szegedy et al., 2016) as suggested by ConvE.",
"Advantages over ConvE The most prominent advantage of ConvR over ConvE is its high ability to model entity-relation interactions in a convolutional fashion, which is crucial for learning with multi-relational data.",
"ConvE, which convolves across stacked entity-relation representations with global filters, can only model interactions between 982 the two types of input around the concatenation line, and only a small proportion of the convolutional features would be able to capture such interactions (see Figure",
"1(a)).",
"ConvR, by contrast, enables entity-relation interactions at diverse regions, and all the convolutional features are able to capture such interactions (see Figure",
"1(b) and Figure 2).",
"Besides being more effective, ConvR might potentially be more efficient (in terms of the number of parameters).",
"ConvE has a space complexity of O ( d |E| + d |R| + chw + cd (2 d h h + 1)( d w w + 1)) , where d |E| is to store entity vectors, d |R| relation vectors, chw the c global filters with size h w , cd (2 d h h +1)( d w w +1) the projection matrix in the fully-connected layer, and d = d h d w .",
"As entity and relation representations need to be stacked in ConvE, they are usually of the same size, say d .",
"In ConvR, convolution filters are adaptively constructed from relation vectors, so there is no need for global filters.",
"Also, the input of the convolutional layer will be half-sized, generating smaller feature maps, and hence requires a smaller fully-connected layer.",
"The space complexity of ConvR would be O ( d e |E| + d r |R| + cd e ( d he h +1)( d we w + 1)) .",
"Although it could be possible to use different configuration of those common arguments in the two methods (e.g., different number of filters or entity vectors with different size), which may result in different memory cost, we empirically show that ConvR can perform substantially better than ConvE with a good variety of configurations, even those with fewer parameters (see the section Parameter efficiency of ConvR for details).",
"In this section, we evaluate ConvR against competitive baselines in the link prediction task on multiple benchmark KBs.",
"We also investigate parameter efficiency of ConvR against ConvE to further show its superiority.",
"Datasets We use four datasets for our experiments.",
"The first two are the popular WN18 and FB15k, both released by (Bordes et al., 2013).",
"2 WN18 is a subset of WordNet for lexical relationships between words, and FB15k a subgraph of Freebase for generic facts.",
"In most cases WN18 2 https://everest.hds.utc.fr/doku.php?",
"and FB15k encode a relation and its inverse relation at the same time.",
"That means, once a fact is observed, there are usually two distinct triples created for it, e.g., ( s, hyponym , o ) and ( o, hypernym , s ) , or ( s, director-of , o ) and ( o, directed-by , s ) .",
"As pointed out by (Toutanova and Chen, 2015) and (Dettmers et al., 2018), encoding inverse relations might suffer from test leakage, i.e., for each test triple ( s, r, o ) , it is likely to find its inverse ( o, r 1 , s ) in the training set.",
"To avoid this test leakage issue, we further use WN18RR (Dettmers et al., 2018), 3 a subset of WN18 with inverse relations removed, and FB15k-237 (Toutanova and Chen, 2015), 4 a filtered version of FB15k with both inverse and duplicate relations removed.",
"Table 1 summarizes the statistics of the four datasets, where the training sets are used for parameter learning, the validation sets for hyperparameter tuning, and the test sets for evaluation.",
"Evaluation protocol We adopt the ranking process proposed in (Bordes et al., 2013) for evaluation.",
"For each triple ( s, r, o ) in the test set, we replace the subject s with every entity e E , and calculate a score for the corrupted triple ( e, r, o ) .",
"Then we sort these scores in descending order to get the rank of the correct subject s .",
"Since corrupted triples may also be valid, we remove those that already exist in either the training, validation, or test set during ranking, i.e., the filtered setting as called by (Bordes et al., 2013).",
"This whole procedure is repeated while replacing the object o .",
"We aggregate over all test triples, and report the mean reciprocal rank (MRR) and the proportion of correct entities ranked in the top n (Hits@n), with n = 1 , 3 , 10 .",
"3 https://github.com/TimDettmers/ConvE/ blob/master/WN18RR.tar.gz 4 https://www.microsoft.com/en-us/",
"download/details.aspx?id=52312 983 Dataset d e c h w 1 2 3 FB15k 200 100 3 3 0.1 0.4 0.2 WN18 200 100 3 3 0.4 0.3 0.3 FB15k-237 100 100 5 5 0.3 0.2 0.3 WN18RR 200 200 3 3 0.2 0.2 0.5 Table 2: Optimal configurations of ConvR on the four datasets.",
"size to 128, initial learning rate to 0.001, and label smoothing coefficient to 0.1.",
"Other hyper-parameters are selected with grid search on the validation set.",
"Specifically, we tune the entity embedding size d e { 100 , 200 } , filter number c { 50 , 100 , 150 , 200 } , and filter size h w { 3 3 , 4 4 , 5 5 } .",
"All dropout ratios, i.e., 1 on reshaped subject representations, 2 on convolutional feature maps, and 3 on projected vectors after the fully-connected layer, are tuned in { 0 .",
"1 , 0 .",
"2 , 0 .",
"3 , 0 .",
"4 , 0 .",
"5 } .",
"On each dataset, we choose the optimal configuration with the highest MRR on the validation set within 1000 epochs, and report its performance on the test set.",
"Table 2 lists the optimal configurations of ConvR on the four datasets.",
"Methods that use (relatively) simple operations in vector space to model multi-relational data, including TransE (Bordes et al., 2013), DistMult (Yang et al., 2015) and its re-implementation (Kadlec et al., 2017), HolE (Nickel et al., 2016b), ComplEx (Trouillon et al., 2016), ANALOGY (Liu et al., 2017), TorusE (Ebisu and Ichise, 2017), Gaifman (Niepert, 2016), KBGAN (Cai and Wang, 2017), KBLRN (Garcia-Duran and Niepert, 2018), and Node+LinkFeat (Toutanova and Chen, 2015).",
"Methods that further introduce multi-layer structures and non-linearity, in particular those based on neural networks, including R-GCN (Schlichtkrull et al., 2017a), Neural LP (Yang et al., 2017), ConvE (Dettmers et al., 2018), and ConvKB (Nguyen et al., 2018).",
"Table 3 reports the results on WN18 and FB15k, and Table 4 the results on WN18RR and FB15k-237.",
"On all the four datasets, the results for the baselines are taken directly from previous literature to avoid re-implementation bias.",
"Since not all baselines have their results reported on all the four datasets, we cannot make the two sets of baselines compared in Table 3 and Table 4 exactly the same.",
"From the results, we can see that: (1) On WN18 and FB15k, ConvR performs better than or at least as well as the baselines in almost all the metrics.",
"(2) Compared to ConvE, it offers a 5% increase in MRR, a 7% increase in Hits@1, and a 2% increase in Hits@10 on FB15k.",
"(3) On the more difficult WN18RR and FB15k-237, ConvR consistently outperforms most of the baselines, except for MRR score of ConvKB on FB15k-237.",
"However, on WN18RR ConvR outperforms ConvKB on all known metrices, especially MRR.",
"This discrepancy may be attributed to ConvKB's initialization with TransE on FB15k-237.",
"(4) Compared to ConvE, it offers a 3% increase in MRR, a 14% increase in Hits@1, a 12% increase in Hits@10 on WN18RR, and an 11% increase in MRR, a 9% increase in Hits@1, an 8% increase in Hits@10 on FB15k-237.",
"We further investigate parameter efficiency of ConvR against ConvE on FB15k-237.",
"Specifically, we tune the number of filters c { 20 , 40 , 60 , 80 , 100 } and the filter size h w { 2 2 , 3 3 , 4 4 , 5 5 } , fix the other hyper-parameters to their optimal configurations (see Table 2 for details), and show how the performance of ConvR (on the test set) will change as the number of parameters varies.",
"For comparison, we directly show the performance and parameter efficiency of the optimal ConvE model, as reported in (Dettmers et al., 2018).",
"The results are given in Table",
"5. 5 From the results, we can see that: (1) The parameter number of ConvR steadily grows as the filter number c and filter size h w increase, but the performance does not change much.",
"That means, ConvR might achieve relatively good (though not best) performance with a potentially small number 5 Note that some results reported here are even better than those reported in Table",
"4. This is because in Table 4 we determine optimal configurations according to MRR on the validation set, which may not necessarily lead to best performance on the test set.",
"of parameters.",
"(2) ConvR consistently and substantially outperforms the best performing ConvE with all the configurations listed in Table",
"5. (3) In particular, even the most efficient configuration (i.e., c = 20 and h w = 2 2 ) offers a 7% increase in MRR and a 6% increase in Hits@10, with its parameter number only 88% as large as that of ConvE.",
"Link prediction is a crucial task for knowledge bases (KBs).",
"A good variety of statistical relational learning techniques have been proposed for this task (Nickel et al., 2016a), among which vector space embedding models are most particular due to their superior performance and potential scal-ability.",
"Early works of this kind tend to employ simple vector space operations for link prediction.",
"For example, TransE (Bordes et al., 2013) takes relations as translations between subject and object entities.",
"DistMult (Yang et al., 2015) uses multilinear dot product to characterize three-way interactions among subjects, relations, and objects.",
"ComplEx (Trouillon et al., 2016) further generalizes DistMult to complex vector space.",
"Using simple models allows one to easily handle large-scale KBs, but usually at the cost of less model expressiveness (Dettmers et al., 2018).",
"HolE (Nickel et al., 2016b) tries to increase model expressiveness while keeping simplicity.",
"It uses cross-correlation, i.e., the 985 h w = 2 2 h w = 3 3 h w = 4 4 h w = 5 5 ConvE 1.89M | 0.32 | 0.49 ConvR, c = 20 1.67M | 0.342 | 0.520 1.68M | 0.342 | 0.522 1.72M | 0.342 | 0.522 1.78M | 0.342 | 0.522 ConvR, c = 40 1.87M | 0.345 | 0.526 1.90M | 0.344 | 0.524 1.97M | 0.348 | 0.529 2.09M | 0.347 | 0.526 ConvR, c = 60 2.07M | 0.347 | 0.525 2.11M | 0.348 | 0.530 2.22M | 0.350 | 0.529 2.40M | 0.350 | 0.527 ConvR, c = 80 2.27M | 0.347 | 0.528 2.32M | 0.348 | 0.532 2.47M | 0.350 | 0.532 2.71M | 0.348 | 0.528 ConvR, c = 100 2.47M | 0.348 | 0.531 2.54M | 0.348 | 0.527 2.72M | 0.349 | 0.527 3.02M | 0.350 | 0.528 Table 5: Parameter efficiency on FB15k-237.",
"inverse of circular convolution, to match subject and object entities, which has some similarity to our work.",
"But HolE is not a typical neural network architecture.",
"It does not learn multiple layers of non-linear features, and hence is less expressive than our approach.",
"A more direct way of increasing model expressiveness is to employ deeper, more complicated neural network architectures, e.g., multi-layer per-ceptron (Dong et al., 2014), semantic matching energy networks (Bordes et al., 2014), and neural tensor networks (Socher et al., 2013).",
"This kind of approaches, however, often have more parameters and are prone to overfit (Nickel et al., 2016a).",
"(Dettmers et al., 2018) recently devised ConvE, a multi-layer convolutional network which offers increased model expressiveness while remaining highly parameter efficient.",
"After that, (Nguyen et al., 2018) propose ConvKB that explores the global relationships among same dimensional entries of the entity and relation embeddings.",
"However, neither of them models the interactions between various positions of entities and relations.",
"R-GCN (Schlichtkrull et al., 2017a) is another convolutional network designed for KBs, generalized from GCN (Kipf and Welling, 2016) for uni-relational data.",
"But the convolution of R-GCN is conducted in a message passing manner, quite different from our work.",
"Convolutional neural networks have been successfully applied to a wide variety of domains, ranging from speech or visual recognition (Abdel-Hamid et al., 2014; Krizhevsky et al., 2012) to natural language processing (Collobert et al., 2011).",
"Similar ideas of using adaptive or dynamic convolutional filters have been studied before (Lee et al., 2010; Jia et al., 2016; Kang et al., 2017).",
"But most of such works focus on image or video processing.",
"This work focuses on multi-relational data and devises an adaptive convolution paradigm particularly suitable for this scenario.",
"In this paper, we propose ConvR, an adaptive convolutional network specially designed for learning with multi-relational data.",
"In contrast to previous work which convolves across stacked representations with global filters, ConvR adaptively constructs convolution filters from relation representations, and applies these filters across entity representations to generate convolutional features.",
"This adaptive convolution paradigm enables rich interactions between entity and relation representations at diverse regions, and all convolutional features generated in this way will be able to capture such interactions.",
"Experimental results on multiple benchmark knowledge bases show that ConvR achieves significant and consistent improvements against a variety of baselines.",
"In particular, it is not only more effective but also more efficient than state-of-the-art convolutional models, offering a 7% increase in MRR and a 6% increase in Hits@10, while saving 12% in parameter storage.",
"As future work, we plan to devise convolutional paradigms that can maximize interactions not only between subject entities and relations, but also between object entities and relations.",
"In ConvR, we use 1-to-many scoring to speed up training and evaluation.",
"As a side effect, object representations can only interact with a hidden vector (output of the fully-connected layer) via an inner product, which potentially limits the performance of ConvR.",
"It is worth investigating modeling these interactions while keeping the merit of fast training and evaluation.",
"We would like to thank all the anonymous reviewers for their insightful and valuable suggestions, which help to improve the quality of this paper.",
"This work is supported by the National Natural Science Foundation of China (grant No. 61876223)."
] | [
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"objective",
"objective",
"method",
"result",
"result",
"abstain",
"abstain",
"abstain",
"objective",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"other",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"other"
] |
[
"Exploiting natural language processing in the clinical domain requires de-identification, i.e., anonymization of personal information in texts.",
"However, current research considers de-identification and downstream tasks, such as concept extraction, only in isolation and does not study the effects of de-identification on other tasks.",
"In this paper, we close this gap by reporting concept extraction performance on automatically anonymized data and investigating joint models for de-identification and concept extraction.",
"In particular, we propose a stacked model with restricted access to privacy-sensitive information and a multitask model.",
"We set the new state of the art on benchmark datasets in English (96.1% F1 for de-identification and 88.9% F1 for concept extraction) and Spanish (91.4% F1 for concept extraction).",
"In the clinical or biomedical domain, natural language processing (NLP) could significantly improve the efficiency and effectiveness of processes.",
"For example, the extraction of structured information from clinical narratives can help in decision making or drug repurposing (Mari-mon et al., 2019).",
"However, the automatic processing of documents with privacy-sensitive content is restricted due to the necessity of applying anonymization techniques.",
"Text anonymization, also called de-identification, aims at detecting and replacing protected health information (PHI), 1 such as patient names, age and phone numbers, as shown in the upper part of Figure 1.",
"Recent studies show that automatic de-identification leads to 1 PHI types are typically defined by governments, for instance in the Health Insurance Portability and Accountability Act (HIPAA) of the United States.",
"AGEPROBLEM Patrick died of myocardial infarction (MI) at 76.",
"PHI terms concept Figure 1: Sentence with annotations of the two tasks.",
"promising results (Stubbs and Uzuner, 2015; Marimon et al., 2019).",
"A severe limitation of current approaches in research, however, is that de-identification is typically addressed in isolation but not together with a downstream task, such as concept extraction (CE) from medical texts (Gonzalez-Agirre et al., 2019; Uzuner et al., 2011).",
"Instead, the downstream task models are trained and evaluated on the non-anonymized data, and it remains unclear how de-identification affects their performance in real-world settings.",
"In this paper, we argue that to evaluate the effectiveness of NLP in the medical domain, the tasks of de-identification and information extraction should be analyzed together.",
"Our contributions are as follows: We close this gap and analyze the effect of de-identification on clinical concept extraction.",
"Moreover, we consider the two tasks jointly and propose two end-to-end models: A multitask model that shares the input representation across tasks, and a stacked model that trains a pipeline of de-identification and concept extraction in an end-to-end manner.",
"For the stacked model, we propose to use a masked embedding layer to restrict the access of the concept detector to privacy-sensitive information and train it on an anonymized version of the data.",
"To make the model differentiable, we use the Gumbel softmax trick (Maddison et al., 2017; Jang et al., 2017).",
"We conduct experiments on clinical benchmark datasets in English and Spanish.",
"Our results indicate that de-identification does not affect CE models negatively, but has even a slight positive effect",
"on the results, probably because de-identification homogenizes the input for CE.",
"Modeling both tasks jointly leads to better results than treating de-identification as a pure preprocessing step, resulting in new state-of-the-art performance for CE.",
"For future research, we publish our code.",
"2 2 Related Work While many works propose joint training for other NLP tasks (i.a., Finkel and Manning, 2009; Miwa and Sasaki, 2014), including multitask learning (i.a., Collobert and Weston, 2008; Klerke et al., 2016; Sgaard and Goldberg, 2016) and stacking of pipeline components (i.a., Miwa and Bansal, 2016), we are to the best of our knowledge the first to combine de-identification with an information extraction task.",
"In this section, we report related work in those two fields.",
"The increasing importance of de-identification is reflected in the number of shared tasks (Uzuner et al., 2007; Stubbs and Uzuner, 2015; Marimon et al., 2019).",
"State-of-the-art methods for de-identification typically rely on recurrent neural networks (RNNs) (Dernoncourt et al., 2016; Lange et al., 2019b; Kajiyama et al., 2018).",
"Feutry et al. (2018) and Friedrich et al. (2019) create pseudo-de-identified text representations with adversarial training.",
"In particular, they replace personal information, such as names, by other names.",
"Zhao et al. (2018) augment the training data by creating more general text skeletons, e.g., by replacing rare words, such as names, by a 2 https://github.com/boschresearch/ joint_anonymization_extraction special unknown token.",
"Compared to these works, we exploit the advantages of both approaches and replace personal information by their class names as placeholders.",
"This approach is not only common for de-identification (Johnson et al., 2016), but also for relation extraction where entities are often either replaced by their type or enriched with type information (i.a., Zhang et al., 2017; Miwa and Sasaki, 2014).",
"We further motivate our choice in Section 3.2.",
"Another difference to the above mentioned works is that we do not augment the training data for our de-identification model.",
"Analogously, there have been a series of shared tasks for information extraction in the clinical and biomedical domain (Uzuner et al., 2011; Sun et al., 2013; Krallinger et al., 2015; Gonzalez-Agirre et al., 2019).",
"Models for these tasks often either rely on hand-crafted features (Leaman et al., 2015; Xu et al., 2012) or RNNs (Hemati and Mehler, 2019; Korvigo et al., 2018; Tourille et al., 2018).",
"Newman-Griffis and Zirikly (2018) study the performance of RNNs for medical named entity recognition in the context of patient mobility and find that they benefit from domain adaption.",
"In contrast to previous work, we investigate the usage of de-identified texts as input for clinical concept extraction models and propose to jointly model de-identification and concept extraction.",
"In this section, we present our systems for the two individual tasks and our proposed joint models.",
"Figure 2 shows the respective architectures.",
"We model both document anonymization (ANON) and clinical concept extraction (CE) as sequence labeling problems and apply a bidirectional long short-term memory (BiLSTM) network (Hochre-iter and Schmidhuber, 1997) with a conditional random field (CRF) output layer (Lafferty et al., 2001), similar to Lample et al. (2016).",
"In recent works on clinical de-identification and CE, this architecture was shown to be very promising (Mari-mon et al., 2019; Gonzalez-Agirre et al., 2019).",
"Each token is represented with a concatenation of different pre-trained language-specific embeddings: byte-pair-encoding (Heinzerling and Strube, 2018), fastText (Bojanowski et al., 2017) and FLAIR (Akbik et al., 2018).",
"For Spanish, we also include multilingual BERT embeddings (De-vlin et al., 2019).",
"Further, we include the following domain-specific embeddings: clinical BERT for English pre-trained on discharge summaries (Alsentzer et al., 2019) and clinical fastText for Spanish pre-trained on the Spanish E-health corpus from the Scielo archive (Soares et al., 2019).",
"To assess the effects of de-identification on CE, we first apply the de-identification model to anonymize the CE dataset and then evaluate the CE model on the anonymized data.",
"We refer to this approach as PIPELINE model (see Figure 2a).",
"For anonymization, we replace each detected privacy-sensitive term with a placeholder of its PHI type, i.e., there is one placeholder per type.",
"This replacement choice has advantages over the alternatives described in Section",
"2. Compared to replacing personal information with alternative names, it leads to a more general text and thus, homogenizes the input for the downstream-task classifier.",
"Compared to replacing all personal information with the same token, the resulting text is more specific, allowing the downstream-task classifier to take into account which kind of personal information was mentioned.",
"Thus, the approach is a trade-off between more homogeneous input and more fine-grained information for the downstream-task classifier.",
"Instead of using a sequential pipeline, we propose to train both tasks jointly.",
"For this, we test two approaches: a multitask model and a stacked model.",
"In the MULTITASK model (Figure 2b), the weights up to the BiLSTM layer are shared across both tasks.",
"For each task, we add a task-specific hidden layer with ReLU activation and a CRF output layer.",
"Note that in this architecture, the CE model has access to the original, privacy-sensitive data.",
"We also propose a STACKED model (Figure 2c), where only the de-identification part has access to the privacy-sensitive information.",
"The access of the CE part is restricted by a masked embedding layer as described in the following.",
"Masked Embedding Layer.",
"The masked embedding layer ensures that the CE model does not have access to privacy-sensitive information by replacing the input embeddings of privacy-sensitive tokens by PHI-class embeddings which are randomly initialized and fine-tuned during training.",
"This is depicted in Figure",
"3. Gumbel Softmax Trick.",
"The masked embedding layer requires a discrete output from the de-identification part.",
"In order to ensure that the model stays fully differentiable, we use the Gumbel softmax trick (Maddison et al., 2017; Jang et al., 2017).",
"It approximates categorical samples with a continuous distribution on the simplex and computes gradients for backpropagation with the reparameterization trick.",
"The Gumbel softmax function has the following form: y k = exp((log k + G k ) / ) (cid:80) Ki =1 exp((log i + G i ) / ) (1) with 1 , ... K being the unnormalized output scores from the de-identification layer, G 1 , ..., GK being i.i.d samples drawn from Gumbel(0, 1) and being a temperature.",
"For 0 , the distribution becomes identical to the categorical distribution.",
"The masked embedding layer takes the output of the Gumbel softmax (i.e., an anonymization label) and if the label is a PHI class and requires anonymization, the masked embedding layer uses English (i2b2) Spanish ANON CE ANON CE # classes 24 3 22 3 train (# tokens) 45,793 16,315 15,903 8,068 dev (# tokens) 5,088 -8,277 3,748 test (# tokens) 32,587 27,625 7,966 3,930 Table 1: Dataset statistics.",
"the respective PHI class embedding vector, otherwise it uses the original embedding vector.",
"In this section, we describe the datasets used in our experiments, and training details for our models.",
"Finally, we present our results and analysis.",
"We evaluate our models on corpora from the clinical domain in English and Spanish.",
"For English, we use the data from the i2b2 2010 CE task (Uzuner et al., 2011) and the i2b2 2014 de-identification task (Stubbs and Uzuner, 2015).",
"For Spanish, we use the MEDDOCAN (Marimon et al., 2019) corpus for de-identification and the PharmaCoNER corpus (Gonzalez-Agirre et al., 2019) for CE.",
"As PharmaCoNER is a subset of MEDDOCAN, we have both gold-standard concept and de-identification annotations for this data.",
"Data Preprocessing.",
"We used the preprocessing scripts from Alsentzer et al. (2019) for the English i2b2 corpora and the Spanish Clinical Case Corpus tokenizer (Intxaurrondo, 2019) for both Spanish corpora.",
"We noticed that the Spanish tokenizer sometimes merges multi-word expressions into a single token joined with underscores for contiguous words.",
"As a result, some tokens cannot be aligned with the corresponding entity annotations.",
"To address this, we split those tokens into their components in a postprocessing step.",
"Table 1 shows statistics about corpora sizes.",
"Hyperparameters.",
"The embeddings have 300 (byte-pair-encoding), 300 (fastText) and 4,048 (FLAIR) dimensions.",
"For English, we use clinical BERT embeddings with 768 dimensions which are constructed by averaging the last four layers with the scalar mix operation proposed by Liu et al. (2019).",
"We concatenate all embeddings to one input vector, resulting in a total input dimensionality of 5,416.",
"Analogously, we use multilingual BERT (768 dim.) and domain-specific fastText embed-Models English Spanish Yang and Garibaldi (2015) 96.0 Zhao et al. (2018) 94.0 Alsentzer et al. (2019) 93.0 Lange et al. (2019b) 97.0 Hassan et al. (2019) 96.3 Perez et al. (2019) 96.0 Our PIPELINE (ANON only) 96.1 96.8 Our STACKED 95.9 96.8 Our MULTITASK 95.2 96.7 Table 2: F1 results for de-identification.",
"dings (100 dim.) for Spanish, resulting in 5,516 input dimensions.",
"For the LSTM, we use 256 hidden units per direction.",
"The task-specific hidden layer of the multitask model has 128 units.",
"Training.",
"For training, we use stochastic gradient descent with a learning rate of 0.1 and a batch size of 32 sentences.",
"The learning rate is halved after 3 consecutive epochs without improvements on the development set.",
"For the joint models, we pretrain the anonymization part for 3 epochs and, then, use a higher initial learning rate of 0.2 for the concept extraction part.",
"We perform early stopping on the development set.",
"If no development set was provided by the corpus (i2b2 2010 corpus), we held out 10% of the training set as development set.",
"Note that we use the same hyperparameters for all our models and all tasks, which were tuned on the Spanish concept extraction data.",
"Evaluation.",
"We train each model with three random initializations and report F1 for exact matching for the best model in all experiments.",
"We perform statistical significance testing to check if our joint models are better than the PIPELINE model.",
"We use paired permutation testing with 2 20 permutations and a significance level of 0.05.",
"Table 2 shows that the de-identification component of our PIPELINE model which was trained on the single task of de-identification sets the new state of the art on English and performs comparable on Spanish.",
"The performance difference to our prior work (Lange et al., 2019b) is due to a slightly different set of input embeddings.",
"However, we found no statistically significant differences to that model.",
"The de-identification performance of STACKED is comparable to the PIPELINE model, the de-identification performance of MULTITASK Models No PHI English Spanish de Bruijn et al. (2010) no 85.2 Xu et al. (2012) no 84.9 Alsentzer et al. (2019) no 87.7 Sun and Yang (2019) no 89.2 Lange et al. (2019a) no 88.6 Our PIPELINE yes 88.0 89.6 Our STACKED yes 88.7 90.0 Our MULTITASK no 88.9 90.3 Xiong et al. (2019) no 91.1 Stoeckel et al. (2019) no 90.5 Our MULTITASK no 91.4 Table 3: F1 results for concept extraction.",
"is slightly lower, however, we only found statistically significant differences for the MULTITASK model for English.",
"The results for our concept extraction models in comparison to state of the art are shown in Table",
"3. We set the new state of the art on both languages.",
"While in the PIPELINE setting, the CE performance is slightly lower (as it has been trained on the non-anonymized texts but is evaluated on the de-identification output), training de-identification and CE jointly leads to considerable improvements for both STACKED and MULTITASK with statistically significant differences for both models in English and for MULTITASK also in Spanish.",
"Especially the results of STACKED in comparison to PIPELINE shows that end-to-end training of the two steps is promising, while still preserving privacy aspects during model training by restricting internal access to PHI tokens.",
"The performance of each embedding used in our experiments is shown in Table",
"4. As mentioned before, we did not include multilingual BERT embeddings for English, but show their results for completeness.",
"Finally, we analyze the impact of de-identification on CE.",
"The results for training and testing our CE model on different inputs (non-anonymized vs. anonymized) are shown in Table",
"5. We restrict our analysis to Spanish since the data is labeled with both de-identification and concept information (see Section 4.1).",
"Thus, we can also investigate the difference between gold and predicted de-identification labels.",
"The CE model benefits from being trained and evaluated on anonymized data (lines 4-6).",
"However, it hurts to train on non-anonymized data and evaluate on predicted de-identification labels (line 1 vs. 2) and vice versa (line 1 vs. 3).",
"This supports our motivation that it is necessary to investigate anonymization and downstream applications together.",
"The difference of training on gold vs. predicted de-identification labels (lines 4-6) is only marginal, suggesting that state-of-the-art de-identification systems are good enough to be used in such settings.",
"In this paper, we close the gap and consider de-identification of clinical text together with concept extraction, a possible downstream application.",
"We investigate the effects of de-identification on concept extraction and show that it positively influ-ences the concept extraction performance.",
"We propose two models to learn both tasks jointly, a multitask model and a stacked model, and set the new state of the art on medical concept extraction benchmark datasets for English and Spanish.",
"We would like to thank the members of the BCAI NLP&KRR research group and the anonymous reviewers for their helpful comments."
] | [
"abstain",
"abstain",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"objective",
"other"
] |
[
"We study the problem of multilingual masked language modeling, i.e. the training of a single model on concatenated text from multiple languages, and present a detailed study of several factors that influence why these models are so effective for cross-lingual transfer.",
"We show, contrary to what was previously hypothesized, that transfer is possible even when there is no shared vocabulary across the monolingual corpora and also when the text comes from very different domains.",
"The only requirement is that there are some shared parameters in the top layers of the multi-lingual encoder.",
"To better understand this result, we also show that representations from monolingual BERT models in different languages can be aligned post-hoc quite effectively, strongly suggesting that, much like for non-contextual word embeddings, there are universal latent symmetries in the learned embedding spaces.",
"For multilingual masked language modeling, these symmetries are automatically discovered and aligned during the joint training process.",
"Multilingual language models such as mBERT (De-vlin et al., 2019) and XLM (Lample and Conneau, 2019) enable effective cross-lingual transfer it is possible to learn a model from supervised data in one language and apply it to another with no additional training.",
"Recent work has shown that transfer is effective for a wide range of tasks (Wu and Dredze, 2019; Pires et al., 2019).",
"These work speculates why multilingual pretraining works (e.g. shared vocabulary), but only experiment with a single reference mBERT and is unable to systematically measure these effects.",
"In this paper, we present the first detailed empirical study of the effects of different masked lanEqual contribution.",
"guage modeling (MLM) pretraining regimes on cross-lingual transfer.",
"Our first set of experiments is a detailed ablation study on a range of zero-shot cross-lingual transfer tasks.",
"Much to our surprise, we discover that language universal representations emerge in pretrained models without the requirement of any shared vocabulary or domain similarity, and even when only a subset of the parameters in the joint encoder are shared.",
"In particular, by systematically varying the amount of shared vocabulary between two languages during pretraining, we show that the amount of overlap only accounts for a few points of performance in transfer tasks, much less than might be expected.",
"By sharing parameters alone, pretraining learns to map similar words and sentences to similar hidden representations.",
"To better understand these effects, we also analyze multiple monolingual BERT models trained independently.",
"We find that monolingual models trained in different languages learn representations that align with each other surprisingly well, even though they have no shared parameters.",
"This result closely mirrors the widely observed fact that word embeddings can be effectively aligned across languages (Mikolov et al., 2013).",
"Similar dynamics are at play in MLM pretraining, and at least in part explain why they aligned so well with relatively little parameter tying in our earlier experiments.",
"This type of emergent language universality has interesting theoretical and practical implications.",
"We gain insight into why the models transfer so well and open up new lines of inquiry into what properties emerge in common in these representations.",
"They also suggest it should be possible to adapt pretrained models to new languages with little additional training and it may be possible to better align independently trained representations without having to jointly train on all of the (very large) unlabeled data that could be gathered.",
"For example, concurrent work has shown that a pretrained MLM model can be rapidly fine-tuned to another language (Artetxe et al., 2019).",
"This paper offers the following contributions: We provide a detailed ablation study on crosslingual representation of bilingual BERT.",
"We show parameter sharing plays the most important role in learning cross-lingual representation, while shared BPE, shared softmax and domain similarity play a minor role.",
"We demonstrate even without any shared sub-words (anchor points) across languages, crosslingual representation can still be learned.",
"With bilingual dictionary, we propose a simple technique to create more anchor points by creating synthetic code-switched corpus, ben-efiting especially distantly-related languages.",
"We show monolingual BERTs of different language are similar with each other.",
"Similar to word embeddings (Mikolov et al., 2013), we show monolingual BERT can be easily aligned with linear mapping to produce crosslingual representation space at each level.",
"Language Model Pretraining Our work follows in the recent line of language model pretraining.",
"ELMo (Peters et al., 2018) first popularized representation learning from a language model.",
"The representations are used in a transfer learning setup to improve performance on a variety of downstream NLP tasks.",
"Follow-up work by Howard and Ruder (2018); Radford et al. (2018) further improves on this idea by fine-tuning the entire language model.",
"BERT (Devlin et al., 2019) signifi-cantly outperforms these methods by introducing a masked-language model and next-sentence prediction objectives combined with a bi-directional transformer model.",
"The multilingual version of BERT (dubbed mBERT) trained on Wikipedia data of over 100 languages obtains strong performance on zero-shot cross-lingual transfer without using any parallel data during training (Wu and Dredze, 2019; Pires et al., 2019).",
"This shows that multilingual representations can emerge from a shared Transformer with a shared subword vocabulary.",
"Crosslingual language model (XLM) pretraining (Lam-ple and Conneau, 2019) was introduced concurrently to mBERT.",
"On top of multilingual masked language models, they investigate an objective based on parallel sentences as an explicit crosslingual signal.",
"XLM shows that cross-lingual language model pretraining leads to a new state of the art on XNLI (Conneau et al., 2018), supervised and unsupervised machine translation (Lample et al., 2018).",
"Other work has shown that mBERT outperforms word embeddings on token-level NLP tasks (Wu and Dredze, 2019), and that adding character-level information (Mulcaire et al., 2019) and using multi-task learning (Huang et al., 2019) can improve cross-lingual performance.",
"Alignment of Word Embeddings Researchers working on word embeddings noticed early that embedding spaces tend to be shaped similarly across different languages (Mikolov et al., 2013).",
"This inspired work in aligning monolingual embeddings.",
"The alignment was done by using a bilingual dictionary to project words that have the same meaning close to each other (Mikolov et al., 2013).",
"This projection aligns the words outside of the dictionary as well due to the similar shapes of the word embedding spaces.",
"Follow-up efforts only required a very small seed dictionary (e.g., only numbers (Artetxe et al., 2017)) or even no dictionary at all (Conneau et al., 2017; Zhang et al., 2017).",
"Other work has pointed out that word embeddings may not be as isomorphic as thought (Sgaard et al., 2018) especially for distantly related language pairs (Patra et al., 2019).",
"Ormazabal et al. (2019) show joint training can lead to more isomorphic word embeddings space.",
"Schuster et al. (2019) showed that ELMo embeddings can be aligned by a linear projection as well.",
"They demonstrate a strong zero-shot crosslingual transfer performance on dependency parsing.",
"Wang et al. (2019) align mBERT representations and evaluate on dependency parsing as well.",
"hypothesize that similar to word embedding spaces, language-universal structures emerge in pretrained language models.",
"While computing word embedding similarity is relatively straightforward, the same cannot be said for the deep contextualized BERT models that we study.",
"Recent work introduces ways to measure the similarity of neural network activation between different layers and different models (Laakso and Cottrell, 2000; Li et al., 2016; Raghu et al., 2017; Morcos et al., 2018; Wang et al., 2018).",
"For example, Raghu et al. (2017) use canonical correlation analysis (CCA) and a new method, singular vector canonical correlation analysis (SVCCA), to show that early layers converge faster than upper layers in convolutional neural networks.",
"Kudugunta et al. (2019) use SVCCA to investigate the multilingual representations obtained by the encoder of a massively multilingual neural machine translation system (Aha-roni et al., 2019).",
"Kornblith et al. (2019) argues that CCA fails to measure meaningful similarities between representations that have a higher dimension than the number of data points and introduce the centered kernel alignment (CKA) to solve this problem.",
"They successfully use CKA to identify correspondences between activations in networks trained from different initializations.",
"We study a standard multilingual masked language modeling formulation and evaluate performance on several different cross-lingual transfer tasks, as described in this section.",
"Our multilingual masked language models follow the setup used by both mBERT and XLM.",
"We use the implementation of Lample and Conneau (2019).",
"Specifically, we consider continuous streams of 256 tokens and mask 15% of the input tokens which we replace 80% of the time by a mask token, 10% of the time with the original word, and 10% of the time with a random word.",
"Note the random words could be foreign words.",
"The model is trained to recover the masked tokens from its context (Taylor, 1953).",
"The subword vocabulary and model parameters are shared across languages.",
"Note the model has a softmax prediction layer shared across languages.",
"We use Wikipedia for training data, preprocessed by Moses (Koehn et al., 2007) and Stanford word segmenter (for Chinese only) and BPE (Sen-nrich et al., 2016) to learn subword vocabulary.",
"During training, we sample a batch of continuous streams of text from one language proportionally to the fraction of sentences in each training corpus, exponentiated to the power 0 .",
"7 .",
"Pretraining details Each model is a Transformer (Vaswani et al., 2017) with 8 layers, 12 heads and GELU activiation functions (Hendrycks and Gimpel, 2016).",
"The output softmax layer is tied with input embeddings (Press and Wolf, 2017).",
"The embeddings dimension is 768, the hidden dimension of the feed-forward layer is 3072, and dropout is 0.1.",
"We train our models with the Adam optimizer (Kingma and Ba, 2014) and the inverse square root learning rate scheduler of Vaswani et al. (2017) with 10 4 learning rate and 30k linear warmup steps.",
"For each model, we train it with 8 NVIDIA V100 GPUs with 32GB of memory and mixed precision.",
"It takes around 3 days to train one model.",
"We use batch size 96 for each GPU and each epoch contains 200k batches.",
"We stop training at epoch 200 and select the best model based on English dev perplexity for evaluation.",
"We consider three NLP tasks to evaluate performance: natural language inference (NLI), named entity recognition (NER) and dependency parsing (Parsing).",
"We adopt the zero-shot cross-lingual transfer setting, where we (1) fine-tune the pretrained model on English and (2) directly transfer the model to target languages.",
"We select the model and tune hyperparameters with the English dev set.",
"We report the result on average of best two set of hyperparameters.",
"Fine-tuning details We fine-tune the model for 10 epochs for NER and Parsing and 200 epochs for NLI.",
"We search the following hyperparam-eter for NER and Parsing: batch size { 16 , 32 } ; learning rate { 2e-5 , 3e-5 , 5e-5 } .",
"For XNLI, we search: batch size { 4 , 8 } ; encoder learning rate { 1.25e-6 , 2.5e-6 , 5e-6 } ; classifier learning rate { 5e-6 , 2.5e-5 , 1.25e-4 } .",
"We use Adam with fixed learning rate for XNLI and warmup the learning rate for the first 10% batch then decrease linearly to 0 for NER and Parsing.",
"We save checkpoint after each epoch.",
"NLI We use the cross-lingual natural language inference (XNLI) dataset (Conneau et al., 2018).",
"The task-specific layer is a linear mapping to a softmax classifier, which takes the representation of the first token as input.",
"NER We use WikiAnn (Pan et al., 2017), a silver NER dataset built automatically from Wikipedia, for English-Russian and English-French.",
"For English-Chinese, we use CoNLL 2003 English (Tjong Kim Sang and De Meulder, 2003) and a Chinese NER dataset (Levow, 2006), with realigned Chinese NER labels based on the Stanford word segmenter.",
"We model NER as BIO tagging.",
"The task-specific layer is a linear mapping to a softmax Figure 1: On the impact of anchor points and parameter sharing on the emergence of multilingual representations.",
"classifier, which takes the representation of the first subword of each word as input.",
"We report span-level F1.",
"We adopt a simple post-processing heuristic to obtain a valid span, rewriting standalone I-X into B-X and B-X I-Y I-Z into B-Z I-Z I-Z , following the final entity type.",
"We report the span-level F1.",
"Parsing Finally, we use the Universal Dependencies (UD v2.3) (Nivre, 2018) for dependency parsing.",
"We consider the following four treebanks: English-EWT, French-GSD, Russian-GSD, and Chinese-GSD.",
"The task-specific layer is a graph-based parser (Dozat and Manning, 2016), using representations of the first subword of each word as inputs.",
"We measure performance with the labeled attachment score (LAS).",
"We hypothesize that the following factors play important roles in what makes multilingual BERT multilingual: domain similarity, shared vocabulary (or anchor points), shared parameters, and language similarity.",
"Without loss of generality, we focus on bilingual MLM.",
"We consider three pairs of languages: English-French, English-Russian, and English-Chinese.",
"Multilingual BERT and XLM are trained on the Wikipedia comparable corpora.",
"Domain similarity has been shown to affect the quality of crosslingual word embeddings (Conneau et al., 2017), but this effect is not well established for masked language models.",
"We consider domain difference by training on Wikipedia for English and a random subset of Common Crawl of the same size for the other languages ( Wiki-CC ).",
"We also consider a model trained with Wikipedia only ( Default ) for comparison.",
"The first group in Tab.",
"1 shows domain mismatch has a relatively modest effect on performance.",
"XNLI and parsing performance drop around 2 points while NER drops over 6 points for all languages on average.",
"One possible reason is that the labeled WikiAnn data for NER consists of Wikipedia text; domain differences between source and target language during pretraining hurt performance more.",
"Indeed for English and Chinese NER, where neither side comes from Wikipedia, performance only drops around 2 points.",
"Anchor points are identical strings that appear in both languages in the training corpus.",
"Translingual words like DNA or Paris appear in the Wikipedia of many languages with the same meaning.",
"In mBERT, anchor points are naturally preserved due to joint BPE and shared vocabulary across languages.",
"Anchor point existence has been suggested as a key ingredient for effective cross-lingual transfer since they allow the shared encoder to have at least some direct tying of meaning across different languages (Lample and Conneau, 2019; Pires et al., 2019; Wu and Dredze, 2019).",
"However, this effect 40 60 ACC Default Wiki-CC No anchors Default anchors Extra anchors Sep Emb Sep L1-3 Sep L1-6 Sep Emb + L1-3 Sep Emb + L1-6 En-Fr XNLI 0 20 40 60 F1 En-Zh NER 0 20 40 LAS En-Ru Parsing Figure 3: Cross-lingual transfer of bilingual MLM on three tasks and language pairs under different settings.",
"We present a controlled study of the impact of anchor points on cross-lingual transfer performance by varying the amount of shared subword vocabulary across languages.",
"Instead of using a single joint BPE with 80k merges, we use language-specific BPE with 40k merges for each language.",
"We then build vocabulary by taking the union of the vocabulary of two languages and train a bilingual MLM ( Default anchors ).",
"To remove anchor points, we add a language prefix to each word in the vocabulary before taking the union.",
"Bilingual MLM ( No anchors ) trained with such data has no shared vocabulary across languages.",
"However, it still has a single softmax prediction layer shared across languages and tied with input embeddings.",
"As Wu and Dredze (2019) suggest there may also be correlation between cross-lingual performance and anchor points, we additionally increase anchor points by using a bilingual dictionary to create code switch data for training bilingual MLM ( Extra anchors ).",
"For two languages, (cid:96) 1 and (cid:96) 2 , with bilingual dictionary entries d (cid:96) 1 ,(cid:96) 2 , we add anchors to the training data as follows.",
"For each training word w (cid:96) 1 in the bilingual dictionary, we either leave it as is (70% of the time) or randomly replace it with one of the possible translations from the dictionary (30% of the time).",
"We change at most 15% of the words in a batch and sample word translations from PanLex (Kamholz et al., 2014) bilingual dictionaries, weighted according to their translation quality 1 .",
"The second group of Tab.",
"1 shows cross-lingual transfer performance under the three anchor point conditions.",
"Anchor points have a clear effect on performance and more anchor points help, especially in the less closely related language pairs (e.g. English-Chinese has a larger effect than English-French with over 3 points improvement on NER and XNLI).",
"However, surprisingly, effective transfer is still possible with no anchor points.",
"Com-1 Although we only consider pairs of languages, this procedure naturally scales to multiple languages, which could produce larger gains in future work.",
"paring no anchors and default anchors, the performance of XNLI and parsing drops only around 1 point while NER even improve 1 points averaging over three languages.",
"Overall, these results show that we have previously overestimated the contribution of anchor points during multilingual pretraining.",
"Concurrently, Karthikeyan et al. (2020) similarly find anchor points play minor role in learning cross-lingual representation.",
"Given that anchor points are not required for transfer, a natural next question is the extent to which we need to tie the parameters of the transformer layers.",
"Sharing the parameters of the top layer is necessary to provide shared inputs to the task-specific layer.",
"However, as seen in Figure 1, we can progressively separate the bottom layers 1:3 and 1:6 of the Transformers and/or the embedding layers (including positional embeddings) ( Sep Emb ; Sep L1-3 ; Sep L1-6 ; Sep Emb + L1-3 ; Sep Emb + L1-6 ).",
"Since the prediction layer is tied with the embeddings layer, separating the embeddings layer also introduces a language-specific softmax prediction layer for the cloze task.",
"Additionally, we only sample random words within one language during the MLM pretraining.",
"During fine-tuning on the English training set, we freeze the language-specific layers and only fine-tune the shared layers.",
"The third group in Tab.",
"1 shows cross-lingual transfer performance under different parameter sharing conditions with Sep denote which layers is not shared across languages.",
"Sep Emb (effec-tively no anchor point) drops more than No anchors with 3 points on XNLI and around 1 point on NER and parsing, suggesting have a cross-language softmax layer also helps to learn cross-lingual representations.",
"Performance degrades as fewer layers are shared for all pairs, and again the less closely related language pairs lose the most.",
"Most notably, the cross-lingual transfer performance drops to random when separating embeddings and bottom 6 layers of the transformer.",
"However, reasonably strong levels of transfer are still possible without tying the bottom three layers.",
"These trends suggest that parameter sharing is the key ingredient that enables the learning of an effective cross-lingual representation space, and having language-specific capacity does not help learn a language-specific encoder for cross-lingual representation.",
"Our hypothesis is that the representations that the models learn for different languages are similarly shaped and models can reduce their capacity budget by aligning representations for text that has similar meaning across languages.",
"Finally, in contrast to many of the experiments above, language similarity seems to be quite important for effective transfer.",
"Looking at Tab.",
"1 column by column in each task, we observe performance drops as language pairs become more distantly related.",
"Using extra anchor points helps to close the gap.",
"However, the more complex tasks seem to have larger performance gaps and having language-specific capacity does not seem to be the solution.",
"Future work could consider scaling the model with more data and cross-lingual signal to close the performance gap.",
"Summarised by Figure 3, parameter sharing is the most important factor.",
"More anchor points help but anchor points and shared softmax projection parameters are not necessary for effective crosslingual transfer.",
"Joint BPE and domain similarity contribute a little in learning cross-lingual representation.",
"To better understand the robust transfer effects of the last section, we show that independently trained monolingual BERT models learn representations that are similar across languages, much like the widely observed similarities in word embedding spaces.",
"In this section, we show that independent monolingual BERT models produce highly similar representations when evaluated at the word level (5.1.1), contextual word-level (5.1.2), and sentence level (5.1.3) .",
"We also plot the cross-lingual similarity of neural network activation with center kernel alignment (5.2) at each layer.",
"We consider five languages: English, French, German, Russian, and Chinese.",
"To measure similarity, we learn an orthogonal mapping using the Procrustes (Smith et al., 2017) approach:",
"with U VT = SVD ( Y XT ) , where X and Y are representation of two monolingual BERT models, sampled at different granularities as described below.",
"We apply iterative normalization on X and Y before learning the mapping (Zhang et al., 2019).",
"In this section, we align both the non-contextual word representations from the embedding layers, and the contextual word representations from the",
"hidden states of the Transformer at each layer.",
"For non-contextualized word embeddings, we define X and Y as the word embedding layers of monolingual BERT, which contain a single embedding per word (type).",
"Note that in this case we only keep words containing only one subword.",
"For contextualized word representations, we first encode 500k sentences in each language.",
"At each layer, and for each word, we collect all contextualized representations of a word in the 500k sentences and average them to get a single embedding.",
"Since BERT operates at the subword level, for one word we consider the average of all its subword embeddings.",
"Eventually, we get one word embedding per layer.",
"We use the MUSE benchmark (Con-neau et al., 2017), a bilingual dictionary induction dataset for alignment supervision and evaluate the alignment on word translation retrieval.",
"As a baseline, we use the first 200k embeddings of fastText (Bojanowski et al., 2017) and learn the mapping using the same procedure as 5.1.",
"Note we use a subset of 200k vocabulary of fastText, the same as BERT, to get a comparable number.",
"We retrieve word translation using CSLS (Conneau et al., 2017) with K=10.",
"In Figure 4, we report the alignment results under these two settings.",
"Figure 4a shows that the subword embeddings matrix of BERT, where each subword is a standalone word, can easily be aligned with an orthogonal mapping and obtain slightly better performance than the same subset of fastText.",
"Figure 4b shows embeddings matrix with the average of all contextual embeddings of each word can also be aligned to obtain a decent quality bilingual dictionary, although underperforming fastText.",
"We notice that using contextual representations from higher layers obtain better results compared to lower layers.",
"BERT models in contextual setting, and evaluate performance on cross-lingual transfer for NER and parsing.",
"We take the Transformer layers of each monolingual model up to layer i , and learn a mapping W from layer i of the target model to layer i of the source model.",
"To create that mapping, we use the same Procrustes approach but use a dictionary of parallel contextual words, obtained by running the fastAlign (Dyer et al., 2013) model on the 10k XNLI parallel sentences.",
"For each downstream task, we learn task-specific layers on top of i -th English layer: four Transformer layers and a task-specific layer.",
"We learn these on the training set, but keep the first i pretrained layers freezed.",
"After training these task-specific parameters, we encode (say) a Chinese sentence with the first i layers of the target Chinese BERT model, project the contextualized representations back to the English space using the W we learned, and then use the task-specific layers for NER and parsing.",
"In Figure 5, we vary i from the embedding layer (layer 0) to the last layer (layer 8) and present the results of our approach on parsing and NER.",
"We also report results using the first i layers of a bilingual MLM (biMLM).",
"2 We show that aligning monolingual models (MLM align) obtain relatively good performance even though they perform worse than bilingual MLM, except for parsing on English-French.",
"The results of monolingual alignment generally shows that we can align contextual representations of monolingual BERT models with a simple linear mapping and use this approach for crosslingual transfer.",
"We also observe that the model obtains the highest transfer performance with the middle layer representation alignment, and not the last layers.",
"The performance gap between monolingual MLM alignment and bilingual MLM is higher in NER compared to parsing, suggesting the syntactic information needed for parsing might be easier to align with a simple mapping while entity information requires more explicit entity alignment.",
"In this case, X and Y are obtained by average pooling subword representation (excluding special token) of sentences at each layer of monolingual BERT.",
"We use multi-way parallel sentences from XNLI for alignment supervision and Tatoeba (Schwenk et al., 2019) for evaluation.",
"2 In Appendix A, we also present the same alignment step with biMLM but only observed improvement in parsing.",
"Figure 6 shows the sentence similarity search results with nearest neighbor search and cosine similarity, evaluated by precision at 1, with four language pairs.",
"Here the best result is obtained at lower layers.",
"The performance is surprisingly good given we only use 10k parallel sentences to learn the alignment without fine-tuning at all.",
"As a reference, the state-of-the-art performance is over 95%, obtained by LASER (Artetxe and Schwenk, 2019) trained with millions of parallel sentences.",
"These findings demonstrate that both word-level, contextual word-level, and sentence-level BERT representations can be aligned with a simple orthogonal mapping.",
"Similar to the alignment of word embeddings (Mikolov et al., 2013), this shows that BERT models are similar across languages.",
"This result gives more intuition on why mere parameter sharing is sufficient for multilingual representations to emerge in multilingual masked language models.",
"Based on the work of Kornblith et al. (2019), we examine the centered kernel alignment (CKA), a neural network similarity index that improves upon canonical correlation analysis (CCA), and use it to measure the similarity across both monolingual and bilingual masked language models.",
"The linear CKA is both invariant to orthogonal transformation and isotropic scaling, but are not invertible to any linear transform.",
"The linear CKA similarity measure is defined as follows: CKA ( X, Y ) = (cid:107) YTX (cid:107) 2 F ( (cid:107) XTX (cid:107) F (cid:107) YTY (cid:107) F ) , B ili n g u a l M o n o li n g u a l R a n d o m L0 L1 L2 L3 L4 L5 L6 L7 L8 AVER 0.76 0.75 0.52 0.75 0.77 0.6 0.74 0.74 0.58 0.75 0.71 0.58 0.73 0.66 0.6 0.69 0.58 0.52 0.64 0.48 0.44 0.48 0.24 0.32 0.55 0.4 0.3 0.68 0.59 0.5 en-en' B ili n g u a l M o n o li n g u a l R a n d o m 0.61 0.65 0.46 0.74 0.71 0.55 0.71 0.7 0.52 0.73 0.7 0.53 0.73 0.64 0.55 0.72 0.59 0.48 0.71 0.5 0.41 0.67 0.34 0.31 0.62 0.4 0.28 0.69 0.58 0.46 en-fr B ili n g u a l M o n o li n g u a l R a n d o m 0.66 0.64 0.46 0.76 0.7 0.54 0.72 0.69 0.52 0.73 0.69 0.54 0.73 0.63 0.56 0.74 0.6 0.49 0.7 0.52 0.42 0.6 0.39 0.31 0.64 0.43 0.28 0.7 0.59 0.46 en-de B ili n g u a l M o n o li n g u a l R a n d o m 0.56 0.56 0.42 0.67 0.65 0.5 0.64 0.63 0.47 0.65 0.64 0.48 0.65 0.61 0.5 0.64 0.56 0.44 0.63 0.5 0.37 0.6 0.34 0.29 0.5 0.39 0.26 0.62 0.54 0.41 en-ru B ili n g u a l M o n o li n g u a l R a n d o m 0.56 0.6 0.44 0.65 0.67 0.51 0.61 0.65 0.49 0.59 0.64 0.5 0.58 0.6 0.52 0.59 0.56 0.46 0.57 0.51 0.39 0.5 0.37 0.3 0.51 0.4 0.27 0.57 0.56 0.43 en-zh Figure 7: CKA similarity of mean-pooled multi-way parallel sentence representation at each layers.",
"where X and Y correspond respectively to the matrix of the d -dimensional mean-pooled (excluding special token) subword representations at layer l of the n parallel source and target sentences.",
"In Figure 7, we show the CKA similarity of monolingual models, compared with bilingual models and random encoders, of multi-way parallel sentences (Conneau et al., 2018) for five languages pair: English to English (cid:48) (obtained by back-translation from French), French, German, Russian, and Chinese.",
"The monolingual en (cid:48) is trained on the same data as en but with different random seed and the bilingual en-en (cid:48) is trained on English data but with separate embeddings matrix as in 4.3.",
"The rest of the bilingual MLM is trained with the Default setting.",
"We only use random encoder for non-English sentences.",
"Figure 7 shows bilingual models have slightly higher similarity compared to monolingual models with random encoders serving as a lower bound.",
"Despite the slightly lower similarity between monolingual models, it still explains the alignment performance in 5.1.",
"Because the measurement is also invariant to orthogonal mapping, the CKA similarity is highly correlated with the sentence-level alignment performance in Figure 6 with over 0.9 Pearson correlation for all four languages pairs.",
"For monolingual and bilingual models, the first few layers have the highest similarity, which explains why Wu and Dredze (2019) finds freezing bottom layers of mBERT helps cross-lingual transfer.",
"The similarity gap between monolingual model and bilingual model decrease as the languages pair become more distant.",
"In other words, when languages are similar, using the same model increase representation similarity.",
"On the other hand, when languages are dissimilar, using the same model does not help representation similarity much.",
"Future work could consider how to best train multilingual models covering distantly related languages.",
"In this paper, we show that multilingual representations can emerge from unsupervised multilingual masked language models with only parameter sharing of some Transformer layers.",
"Even without any anchor points, the model can still learn to map representations coming from different languages in a single shared embedding space.",
"We also show that isomorphic embedding spaces emerge from monolingual masked language models in different languages, similar to word2vec embedding spaces (Mikolov et al., 2013).",
"By using a linear mapping, we are able to align the embedding layers and the contextual representations of Transformers trained in different languages.",
"We also use the CKA neural network similarity index to probe the similarity between BERT Models and show that the early layers of the Transformers are more similar across languages than the last layers.",
"All of these effects were stronger for more closely related languages, suggesting there is room for significant improvements on more distant language pairs."
] | [
"method",
"objective",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"abstain",
"result",
"abstain",
"method",
"result",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"result",
"objective",
"objective",
"result",
"result",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"method",
"result",
"abstain"
] |
[
"It is popular that neural graph-based models are applied in existing aspect-based sentiment analysis (ABSA) studies for utilizing word relations through dependency parses to facilitate the task with better semantic guidance for analyzing context and aspect words.",
"However, most of these studies only leverage dependency relations without considering their dependency types, and are limited in lacking efficient mechanisms to distinguish the important relations as well as learn from different layers of graph based models.",
"To address such limitations, in this paper, we propose an approach to explicitly utilize dependency types for ABSA with type-aware graph convolutional networks (T-GCN), where attention is used in T-GCN to distinguish different edges (relations) in the graph and attentive layer ensemble is proposed to comprehensively learn from different layers of T-GCN.",
"The validity and effectiveness of our approach are demonstrated in the experimental results, where state-of-the-art performance is achieved on six English benchmark datasets.",
"Further experiments are conducted to analyze the contributions of each component in our approach and illustrate how different layers in T-GCN help ABSA with quantitative and qualitative analysis.",
"1 1 Introduction Aspect-based sentiment analysis (ABSA) processes fine-grained sentiment polarities towards specific aspects, where in many cases, it is required to identify different sentiments for multiple aspects in the same context.",
"For example, in the sentence The drink menu is limited but the wines are excellent. , the sentiment polarity towards drink menu is negative while that towards wines is positive; an * Equal contribution.",
"Corresponding author.",
"ABSA system may predict wrong if it fails to capture the important contextual information for each aspects.",
"Therefore, to model such contextual information, neural models (e.g., Bi-LSTM and Transformer (Vaswani et al., 2017)) have been widely used for ABSA and demonstrated to be useful for this task (Wang et al., 2016; Tang et al., 2016a; Chen et al., 2017; Ma et al., 2017; Fan et al., 2018).",
"As a further enhancement of encoding contextual information for ABSA, there are studies (Sun et al., 2019; Huang and Carley, 2019; Zhang et al., 2019a) using graph convolutional networks (GCN) to learn from a graph that is often built over the dependency parsing results of the input texts.",
"As a result, the GCN models are able to learn from distant word-word relations that are more helpful to ABSA.",
"However, GCN models used in these studies are limited by omitting the information carried in dependency types and treating all word-word relations in the graph equally, therefore unimportant relations may not be distinguished and mislead ABSA accordingly.",
"For example, Figure 1 illustrates an example sentence with an aspect highlighted in red, where the aspect word menu is connected with three others words, i.e., the , drink , and limited .",
"The connection between menu and limited could be the most important one since its dependency type, i.e., nsubj , suggests that menu is the nominal subject of limited , which strongly guides sentiment analysis towards menu .",
"In this case, if the dependency type is not modeled, one may not be able to leverage such beneficial information.",
"In addition, although previous GCN models learn such Figure 2: The overall architecture of our approach with an example sentence-aspect pair input (the aspect words dink menu are in boldface) from a sentence.",
"word-word relations by multiple GCN layers, they only use the output from the last layer for ABSA, where the encodings from intermediate layers are omitted and some essential information may be lost because different context information are modeled across layers.",
"Thus an appropriate approach is required to enhance current GCN models for ABSA.",
"In this paper, we propose a type-aware graph convolutional networks (T-GCN) with multiple layers to enhance ABSA by incorporating both word relations and their dependency types to comprehensively learn from dependency parsing results.",
"Specifically, we firstly obtain the dependency parsing results of the input texts through off-the-shelf toolkits, then build the graph over the dependency tree with each edge labeled by the corresponding dependency type between the two connected words, later apply an attention mechanism to the graph to weight all edges according to their contributions to the task, and finally use attentive layer ensemble to weight and combine the contextual information learned from different GCN layers.",
"In doing so, our proposed T-GCN model can not only model word-word relations and their dependency types, but also distinguish the important contextual information from such relations to enhance ABSA.",
"Experiments on six English benchmark datasets are conducted to evaluate the proposed model, where the results illustrate its effectiveness and state-of-the-art performance is observed over previous studies on all datasets.",
"We also perform further analysis to investigate the contribution of each component (i.e., type-aware graph, attention for edges, and attentive layer ensemble) in our approach, and illustrate how different layers in T-GCN helps ABSA with quantitative and qualitative studies.",
"Given an input sentence X = x 1 , x 2 , , x n and the aspect terms A X ( A is usually a sub-string of X ), the conventional ABSA approaches often take the sentence-aspect pair as the input and predicts A 's sentiment polarity b y (Tang et al., 2016b; Ma et al., 2017; Xue and Li, 2018; Hazarika et al., 2018; Fan et al., 2018; Huang and Carley, 2018; Tang et al., 2019; Chen and Qian, 2019; Tan et al., 2019; Tang et al., 2020).",
"We follow this paradigm and the overview of our approach is illustrated in Figure 2, with a contextual encoder (i.e., BERT), the proposed T-GCN and the attentive layer ensemble (ALE).",
"The overall conceptual formalism of our approach can be written as b y = arg max y 2 T p ( y | ALE ( T GCN ( X , A ))) (1) where T denotes the set of all sentiment labels for y (i.e., positive , neutral , and negative ) and p computes the probability of predicting y 2 T given X and A through T-GCN and ALE.",
"In the following texts, we firstly describe the construction of the graph with dependency types, then elaborate the details of our T-GCN model, and the ALE to incorporate contextual information from different T-GCN layers, and finally illustrate incorporating T-GCN to ABSA.",
"Contextual features such as n-grams and syntactic information have been demonstrated to be useful to enhance text representation and thus improve model performance for many NLP tasks (Sun and Xu, 2011; Song and Xia, 2012; Gong et al., 2012; Song et al., 2012; Xu et al., 2015; Chen et al., 2017; Zhang et al., 2019b; Tang et al., 2020).",
"In addition, it is demonstrated by many recent studies that GCN models are effective in capturing contextual features that are represented in graph-like signals, i.e., dependencies among words, of an input sentence (Sun et al., 2019; Huang and Carley, 2019; Zhang et al., 2019a; Tian et al., 2020c; Chen et al., 2020).",
"In the graph for conventional GCN models, each edge between any two words x i and x j in the input sentence is added to the graph if there is a Figure 3: An illustration of how we build the type-aware graph from dependency parsing results and the detail of a T-GCN layer that consumes the graph.",
"dependency relation on them.",
"Therefore, they fail to comprehensively use the dependency parsing results because dependency types are always omitted in the graph.",
"To leverage the such type information, we propose the type-aware graph for feeding our T-GCN via the following steps.",
"First, we use off-the-shelf toolkits to obtain the dependency results, which can be represented by a list of dependency tuples ( x i , x j , r i,j ) with r i,j denoting the dependency type between x i and x j .",
"Second, we use an adjacency matrix A = { a i,j } n n to present the graph by recording word relations in all tuples and a relation type matrix R = { r i,j } n n to represent the edges with their dependency types.",
"Therefore, A is a 0-1 matrix where a i,j = 1 if there is an edge between x i and x j , and a i,j = 0 otherwise.",
"For R , each element r i,j in it uses a mark to denote the dependency type between x i and x j .",
"Figure 3 illustrates the dependency parsing results of an example sentence as well as its type-aware graph represented by A and R , with the marks for r i,j listed in the Type Reference.",
"Finally, to leverage the relation types, Figure 4: The illustration of how we compute h ( l ) i for x 3 = menu through a T-GCN layer.",
"we use a transition matrix to map all r i,j to their embeddings e ri,j .",
"With the type-aware graph, we propose an L -layer T-GCN and for each layer we apply attention to the edges in the graph to weight them by their contributions to the ABSA task.",
"Figure 4 illustrates the processes of doing so for the aspect word menu in the sentence The drink menu is limited but all the wines are excellent. .",
"In detail, for a each edge between x i and x j , the l -th GCN layer takes the hidden vectors h ( l \u0000 1) i and h ( l \u0000 1) j of x i and x j from the ( l \u0000 1) -th GCN layer ( h (0) i and h (0) i are from the context encoder) and concatenate them with the embeddings of their dependency types e ri,j by s ( l ) i = h ( l \u0000 1) i \u0000 e r i,j (2) and s ( l ) j = h ( l \u0000 1) j \u0000 e ri,j (3) Then, we compute the weight p ( l ) i,j for this edge by p ( l ) i,j = a i,j exp s ( l ) i s ( l ) j P nj =1 a i,j exp s ( l ) i s ( l ) j (4) and align the dimension of e ri,j to h ( l \u0000 1) j by a trainable matrix W ( l ) R of the l -th GCN layer by h ( l \u0000 1) 0 j = h ( l \u0000 1) j + W ( l ) R e r i,j (5) Finally, we apply p ( l ) i,j to this edge and compute the output for x i at l -th layer following a similar process in the conventional GCN by h ( l ) i = \u0000 0 @ n X j =1 p ij W ( l ) h ( l \u0000 1) 0 j + b ( l ) 1 A (6) where W ( l ) and b ( l ) denote trainable parameters in the l -th GCN layer and \u0000 refers to the ReLU activation function.",
"The above process is conducted for every x i and throughout all GCN layers, thus the information of dependency types are incorporated into the GCN to enhance ABSA accordingly.",
"For each word x i , since every T-GCN layer incorporates information from the words that directly connect to it, so that multiple T-GCN layers could learn indirect word relations from long distance.",
"Thus it is assumed that different layers have their unique capabilities to encode contextual information.",
"To utilize such capabilities, we propose to comprehensively learn from all T-GCN layers with attentive layer ensemble.",
"In doing so, we firstly obtain the output o ( l ) from each T-GCN layer by averaging the output hidden vectors of all aspect terms x k 2 A : o ( l ) = 1 |A| X x k 2 A h ( l ) k (7) where |A| is the number of words in the aspect terms A .",
"Then we attentively ensemble the output of all T-GCN layers through a weighted average: o = LX l =1 \u0000 ( l ) o ( l ) (8) where o is the final vector output for ABSA and \u0000 ( l ) is a trainable weight assigned to o ( l ) to balance its contribution and satisfying P Ll =1 \u0000 ( l ) = 1 .",
"To support applying T-GCN for ABSA, there are necessary encoding and decoding processes.",
"For encoding, there are two ways in doing so.",
"The first is to take the sentence X as the input and obtain the hidden vectors h (0) i for all x i by HX = BERT ( X ) (9) where HX is the hidden vectors of all words in X , and we use BERT as the encoder (same below).",
"The second is to take the sentence-aspect pair as the input, which can be formalized by [ HX , HA ] = BERT ( X , A ) (10) Datasets Pos.",
"where HA is the hidden vectors of all aspect words.",
"Then, the hidden vectors from HX or HA are feed into the T-GCN model as that described in 2.2.",
"For decoding, after we obtain o from ALE, we firstly map o to the label space by a fully connected layer, u = W o + b , where W and b are the trainable matrix and the bias, respectively, and each dimension of u corresponds to a sentiment type.",
"Thus, we employ a softmax function to u and predict the output sentiment y for the aspect A in X by: y = arg max exp ( u t ) P |T | t =1 exp ( u t ) (11) where u t is the value at dimension t in u .",
"In the experiments, we employ five widely used English benchmark datasets: LAP 14 and REST 14 from Pontiki et al. (2014), REST 15 from Pontiki et al. (2015), REST 16 from Pontiki et al. (2016), and TWITTER from Dong et al. (2014), with their official train/test splits.",
"In addition, we try another recently released English dataset, named MAMS 2 (Jiang et al., 2019), with the official train/dev/test splits for ABSA, which is much larger than the aforementioned five datasets.",
"It is worth noting that, in addition to the positive , neutral , and negative sentiment labels, LAP 14, REST 14, and REST 16 2 We use the ATSA part of MAMS obtained from https: //github.com/siat-nlp/MAMS-for-ABSA .",
"contain another conflict label, which identifies the aspects that have conflict sentiment polarities.",
"For example, the aspect sushi is assigned by a conflict label in Certainly not the best sushi in New York, however, it is always fresh. from REST 14.",
"Therefore, we follow Tang et al. (2016b) to clean the datasets by removing all aspects with the aforementioned conflict label, as well as sentences without an aspect.",
"The statistics (number of aspects with positive , neutral , and negative labels) of the processed six datasets are reported in Table 1.",
"To build the graph for T-GCN, we firstly use the current best performing constituency parser, i.e., SAPar 3 (Tian et al., 2020d), to parse all input text into constituency trees, then convert the trees into dependency trees by Stanford Converter 4 , and finally build the graph over the dependency relations and types from the trees.",
"5 Since high quality text representations can improve the performance of NLP models (Mikolov et al., 2013; Song et al., 2017; Bojanowski et al., 2017; Song and Shi, 2018; Song et al., 2018), we employ BERT (Devlin et al., 2019) as the context encoder, which and whose variants (Diao et al., 2020; Dai et al., 2019; Joshi et al., 2020) have demonstrated their effectiveness 3 https://github.com/cuhksz-nlp/SAPar 4 We use the converter of version 3.3.0 from https:// stanfordnlp.github.io/CoreNLP/index.html .",
"5 We also try Stanford CoreNLP Toolkits ( https: //stanfordnlp.github.io/CoreNLP/ ) (Manning et al., 2014) and spaCy ( https://spacy.io/ ) dependency parsers with similar results obtained.",
"to encode context information and achieved state-of-the-art performance in many NLP tasks (Huang and Carley, 2019; Tian et al., 2020a,b; Tang et al., 2020; Nie et al., 2020; Wang et al., 2020).",
"Specifically, we use the uncased BERT-base and BERT-large 6 with their default settings, i.e., 12 layers of self-attention with 768 dimensional hidden vectors for BERT-base and 24 layers of self-attention with 1024 dimensional hidden vectors for BERT-large, and use three T-GCN layers.",
"We try two ways to encode the input, where the first encodes the single sentence and the second encodes the sentence-aspect pair.",
"For all models, we use the pre-trained parameters of BERT and initialize all other trainable parameters by Xavier (Glorot and Bengio, 2010).",
"Moreover, we use the cross-entropy loss function for our models and follow previous studies (Tang et al., 2016a; Chen et al., 2017; He et al., 2018a; Sun et al., 2019; Zhang et al., 2019a) to evaluate them via accuracy and macro-averaged F1 scores over all sentiment polarities.",
"For datasets without the official development set, we randomly sample 10% instances from the training set and regard them as the development set to find the best hyper-parameter setting which is then used to train different models on the entire training set.",
"7 6 We obtain the BERT models from https://github.",
"com/huggingface/pytorch-pretrained-BERT .",
"In the main experiments, for each encoder (i.e., BERT base and large), we run two baselines: 1, only using BERT and 2, BERT with normal GCN where all edges are equally treated and the ABSA result is predicted based on the output of the last GCN layer.",
"Table 2 reports the experimental results from all baselines and our models.",
"8 There are several observations.",
"First, for both BERT-base and BERT-large encoders, although the models with normal GCN are able to enhance the BERT baselines, our models can further improve the performance in both accuracy and F1 socres on all datasets.",
"This observation clearly illustrate the effectiveness of incorporating dependency type information into GCN and thus improves ABSA accordingly.",
"Second, in most cases, our models that encode the sentence-aspect pair achieve higher results than the ones encoding the single sentence, which is not surprising because the aspect is therefore emphasized in the input and provide more contextual information to be modeled for ABSA.",
"model (i.e., T-GCN using BERT-large encoder with sentence-aspect pair input), with previous studies on all datasets.",
"The results are reported in Table 3, where our model outperforms previous studies, including the ones (Huang and Carley, 2019; Wang et al., 2020; Tang et al., 2020) using BERT-large (marked by *) and dependency information (marked by ), on all datasets in terms of both accuracy and F1 scores.",
"In particular, compared with our approach, Huang and Carley (2019) use a variant of graph attention networks (GAT), while they do not use dependency types; Wang et al. (2020) also use a variant of GAT and they use the relation type as well, but they do not assign different weight to separate word-word relations; Tang et al. (2020) use a variant of GCN but they do not use the dependency type information.",
"Our model shows its superiority to the aforementioned studies since we not only assign different weights to dependencies, but also comprehensively leverage the dependency parsing results with both word relations and their dependency type information, as well as fined-grained encoding results from multiple T-GCN layers.",
"To explore the effectiveness of different components in our model, i.e., type-aware graph (TG),",
"attention (Att), and ALE, we conduct an ablation study based on our best model (i.e., T-GCN on BERT-large encoder with sentence-aspect pair in-put).",
"The experimental results on all datasets with respect to using different combinations of such components are reported in Table 4, with the results of the full model and the baseline with normal GCN illustrated on the first (ID:",
"1) and last row (ID: 8), respectively.",
"Herein, models without ALE (ID: 4-6) use the output of the last T-GCN layer (i.e., the third layer) to predict the sentiment polarity.",
"9 Here are some observations.",
"First, it is clearly indicated in results that, the model performance drops on all datasets if any component is excluded from the full model.",
"This observation indicates that all three components play important roles in our approach to enhance ABSA; each one has its unique contribution to the full model.",
"Second, for each single components, compared with the results from GCN baseline (ID: 8), the results from models with a particular module (ID: 5-7) demonstrate that the attention mechanism is the most important one to improve model performance, where on all datasets, the model (ID:",
"6) with attention outperforms the others.",
"This observation complies with our intuition because the attention directly guides the model to distinguish the contextual information to the aspect words, so that informative words are highlighted so as to improve ABSA accordingly.",
"Besides those components, we also investigate the effect of each layer when our model is trained on different datasets.",
"In doing so, we perform experiments on all datasets using our best performing model and use the weight ( \u0000 ( l ) in Eq.",
"(8)) assigned 9 We obtain similar results when using the output of intermediate layers.",
"to each T-GCN layer to identify the contribution of them.",
"The results are illustrated in Figure 5, with the weights for the 1st, 2nd, and 3rd T-GCN layers drawn in blue, green, and orange bars, respectively.",
"We have following observations.",
"First, all layers contribute to the final prediction for ABSA, which complies with our expectation and confirms the validity of leveraging the information from all layers of GCN.",
"Therefore, the model is able to provide comprehensive contextual information comparing to that only uses the output from the last layer.",
"Second, interestingly, as shown in the histograms, for most datasets (i.e. LAP 14, REST 14, REST 15, REST 16, and MAMS), the second layer of T-GCN contributes the most among all three layers.",
"A possible reason behind is that (1) the second layer is able to encode contextual information from a larger range (because the edges in the first layer only cover words with direct relations, while the second and third layer provide indirect relations, i.e., second and third order dependencies in practice); (2) comparing to the third layer, the second layer may introduce less irrelevant information from multi-word relations.",
"Third, we also notice that for TWITTER , the weight distribution among three layers is rather different from the other food was cop nsubj self conj poor OK nsubj cop self conj poor OK was food Dependency Information = he food was ok but the service was so poor that the food was cold by the time everyone in my party was served the det 2 nd layer ok self nsubj The OK food cop was 1 st layer = food det self nsubj The OK food cop was 2 nd layer det self nsubj OK food cop was 3 rd layer det The = food = food the det 1 st layer ok cop nsubj det The food was OK but the service poor ROOT conj cc det nsubj Figure 6: Visualization of the weights assigned to different edges and dependency types in each T-GCN layer for an example sentence with two aspects (in red) in conflict sentiment polarities.",
"datasets, where the first and last layer contributes more to ABSA.",
"This observation can be explained by that, TWITTER is social medial data, where, in general, sentences in such data are short and less organized, so that our model may require the information from either local context or the entire sentence for ABSA.",
"To further illustrate the effectiveness of T-GCN on leveraging the information of dependency types and weighting salient word relations for improving ABSA, we conduct a case study on using our model to process the sentence The food was OK but the service was so poor that the food was cold by the time everyone in my party was served from REST 16.",
"In this sentence, there are two aspects with contrast sentiment polarities, i.e., food and service have positive and negative sentiment suggested by OK and limited , respectively.",
"To demonstrate the effectiveness of our model to process such sentence with conflict sentiments, on the right part of Figure 6, we visualize weights (in green) assigned to the edges connected to food from the attention in all T-GCN layers, and the ALE weights (in yellow) for each layer, where deeper color refers to higher weight.",
"For those edges, except for its self-connection, the edge between food and OK receives the highest weight in every layer, and the second layer receives the highest weight in ALE.",
"Note that in this case, the reason why T-GCN works can be explained by that, when there are more than two layers are used in a GCN model, the edges connecting to OK also influence the ABSA results because indirect relations are introduced across layers.",
"As a result, the noisy connection between OK and poor may contribute to the prediction and the normal GCN could possibly fail on this case because of lacking a mechanism to distinguish it from other edges.",
"Therefore, as shown in the left part of Figure 6, we also visualize the weights for edges connecting to OK from the first and second T-GCN layers, 10 where the informative word relations and their dependency types receive much heavier weights than that for noisy ones.",
"Moreover, it is noticed that the dependency type for the edge between OK and poor is conj (conjunction), which suggests that poor is syntactically parallel with OK and is thus less likely to provide essential sentiment guidance for OK .",
"Overall, this case study illustrates that our model successfully identifies that OK is the most important contextual information to determine the sentiment for food , with the help of dependency type and attention used in T-GCN, and also shows that the final prediction relies on the contributions from different T-GCN layers.",
"categoriz-10 Note that we do not visualize the weights for OK in the third layer because its resulting hidden vector does not contribute to the final sentiment prediction.",
"ing sentiment polarities for a specific aspect (e.g., chicken ) or category (e.g., food ) in a sentence.",
"Conventionally, this task is formulated as to classify a sentence-aspect pair and most of studies try to explore the contextual information between aspect and the entire sentence to facilitate the analysis of sentiment (Dong et al., 2014; Wang et al., 2016; Tang et al., 2016a; Ma et al., 2017; Chen et al., 2017; Xue and Li, 2018; Li et al., 2018; Xu et al., 2019; Wang et al., 2020; Tang et al., 2020).",
"To further enhancing the modeling of contextual information, dependency parses were leveraged by many studies, where adaptive recursive neural networks (Dong et al., 2014), attention mechanism (He et al., 2018a), and key-value memory networks (Tian et al., 2021) are used.",
"Later, Huang and Carley (2019); Sun et al. (2019); Zhang et al. (2019a); Wang et al. (2020); Tang et al. (2020) leveraged graph neural models (e.g., GCN) for ABSA with their graph built upon the dependency tree obtained from off-the-self dependency parsers, and demonstrated promising results.",
"The models in their studies normally focus on building the graph with the dependency structure without considering dependency types, meanwhile treating the edges in the graph equally.",
"In addition, they usually use the output of the last layer to predict sentiment labels although their models consist multiple layers.",
"Thus, our approach differs from previous graph-based ones on several aspects, including the integration of depdendency type information, applying attention to edges, and ensemble of multiple layers to comprehensively learn from the graph model.",
"In this paper, we propose a neural approach for ABSA with T-GCN, where the input graph is built on the dependency tree of the input sentence.",
"Specifically, the edges in the graph are constructed on top of both dependency relations and types for the input sentence; for each word, we use attention to weight all such type-aware edges associated to it in the T-GCN; we also apply attentive layer ensemble to comprehensively learn contextual information from different T-GCN layers.",
"Experimental results on six widely used English benchmark datasets demonstrate the effectiveness of our approach, where state-of-the-art performance are achieved on all datasets.",
"Further analyses illustrate the validity of incorporating type information into our model as well as applying attentive ensemble to learning from its multiple layers.",
"This work is supported by Chinese Key-Area Research and Development Program of Guangdong Province (2020B0101350001) and also partially supported by NSFC under the project The Essential Algorithms and Technologies for Standardized Analytics of Clinical Texts (12026610)."
] | [
"abstain",
"abstain",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"other",
"objective",
"abstain",
"objective",
"other",
"method",
"method",
"method",
"other",
"abstain",
"result",
"abstain",
"abstain",
"objective",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"other",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"other",
"result",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"objective",
"method",
"objective",
"method",
"other"
] |
[
"Generating educational questions of fairytales or storybooks is vital for improving children's literacy ability.",
"However, it is challenging to generate questions that capture the interesting aspects of a fairytale story with educational meaningfulness.",
"In this paper, we propose a novel question generation method that first learns the question type distribution of an input story paragraph, and then summarizes salient events which can be used to generate high-cognitive-demand questions.",
"To train the event-centric summarizer, we fine-tune a pre-trained transformer-based sequence-to-sequence model using silver samples composed by educational question-answer pairs.",
"On a newly proposed educational question-answering dataset FairytaleQA , we show good performance of our method on both automatic and human evaluation metrics.",
"Our work indicates the necessity of decomposing question type distribution learning and event-centric summary generation for educational question generation.",
"Listening to and understanding fairy tales or storybooks are very crucial for children's early intellectual and literacy development (Sim and Berthelsen, 2014).",
"During the storybook reading process, prompting suitable questions with educational purposes can help children understand the content and inspire their interests (Zevenbergen and Whitehurst, 2003; Ganotice et al., 2017).",
"There is evidence that high-cognitive-demand (HCD) questions usually relate to good learning achievement (Winne, 1979).",
"HCD questions usually correspond to application, analysis, synthesis, and evaluation questions in Bloom's taxonomy of cognitive process (Winne, 1979; Anderson et al., 2000), which are salient events merged from different elements across a session (Greatorex This work was done while Mo was at IBM Research. and Dhawan, 2016).",
"However, it is challenging even for humans to ask educationally meaningful questions to engage children in storybook reading, which could be due to adults lacking the skills or time to integrate such interactive opportunities (Golinkoff et al., 2019).",
"Recent research shows that AI-powered conversational agents can play the role of language partners to read fairy tales to children and ask them educational questions (Xu et al., 2021).",
"This motivates us to investigate techniques to generate HCD educational questions for children's storybooks automatically.",
"Automating the generation of such questions can have great value in supporting children's language development through guided conversation.",
"During storybook reading, HCD questions require children to make inferences and predictions.",
"In contrast to low-cognitive-demand (LCD) questions describing facts in stories ( e.g. , Who is Snow White's mother? ), HCD questions are often related to events and their relations ( e.g. , Why did the queen want to kill Snow White? or What happened after the huntsman raised his dagger in the forest? ).",
"Most previous work on question generation (QG) focuses on generating questions based on predefined answer spans (Krishna and Iyyer, 2019; Pyatkin et al., 2021; Cho et al., 2021).",
"Such sys-tems that use keywords or specific events often generate LCD questions that are factual questions based on local context, but cannot work well on HCD cases, where we need to capture the salient events and understand the relations across multiple elements/events.",
"Recently, Yao et al. (2021) released a fairytale question answering dataset FairytaleQA containing around 10.5k question-answer pairs annotated by education experts.",
"Each question is assigned to a specific type, and some types, such as action , causal relationship , are high-cognitive-demanding.",
"This makes it possible to investigate generating educational questions to support children's interactive storybook reading.",
"In this paper, we propose a novel framework combining question type prediction and event-centric summarization to generate educational questions for storybooks.",
"In the first stage, we learn to predict the question type distribution for a given input and add pseudo-label so that after prediction, we can know both the types of questions and how many questions of each type.",
"In the second stage, conditioned on question types and the order of the question under the current question type, we extract salient events that are most likely for educators to design questions on and then generate an event-centric summarization of the original input.",
"Finally, in the third stage, we use the output of the second stage to generate questions.",
"Each summarization is used to generate one question.",
"Note that it is difficult to obtain gold annotations for event-centric summarization.",
"Instead, we rewrite annotated questions, and their corresponding hypothesized answers into question-answer statements (Demszky et al., 2018) as silver training samples.",
"We hypothesize that HCD questions are around main plots in narratives and can guide our summarization model to focus on salient events.",
"We evaluate our system on the FairytaleQA dataset and show the superiority of the proposed method on both automatic and human evaluation metrics compared to strong baselines.",
"Question answering based on context has achieved remarkable results (Rajpurkar et al., 2016; Zhang et al., 2020b).",
"The reverse problem, namely, question generation (Duan et al., 2017; Chan and Fan, 2019), usually relies on pre-selecting spans from an input text as answers and a single sentence as the context.",
"However, to generate questions across a long paragraph in which the key information may come from multiple different sentences in fairy tales (Yao et al., 2021), these existing models relying on one text segment usually do not work well.",
"A few studies are focusing on generating questions that are based on multi-sentence or multi-document information fusion (Pan et al., 2020; Xie et al., 2020; Tuan et al., 2020).",
"NarrativeQA (Ko cisk et al., 2018) is an effort that tries to integrate key information across multiple locations of a paragraph for question answering/generation.",
"Similarly, MS MARCO (Nguyen et al., 2016) is a dataset that integrates multiple locations of answers for users' queries in search engines.",
"In Cho et al. (2021), a contrastive method is proposed that first trains a supervised model to generate questions based on a single document and then uses a reinforcement learning agent to align multiple questions from multiple documents.",
"In Lyu et al. (2021), the authors use a rule-based method to generate questions with summaries and report to achieve good performance.",
"The methods mentioned above usually do not consider the educational dimension and may not work well on fairy tales.",
"Considering our research focus of fairytales, it is vital to generate questions that have educational purposes.",
"In FairytaleQA (Yao et al., 2021), experts usually write different types of questions for separate paragraphs.",
"We hypothesize that context plays a significant role in deciding the type of questions that should be asked during the interactive storybook reading with children.",
"Therefore it is necessary to investigate not only how to summarize salient events but also how to learn the question type distribution.",
"Summarization methods can be classified into extractive summarization and abstractive summarization.",
"Extractive methods select sentences from the source documents to compose a summary; abstractive methods applies neural generative models to generate the summary token-by-token.",
"Extractive summarization methods, such as TextRank (Mihalcea and Tarau, 2004), feature-based methods (Jagadeesh et al., 2005; Luhn, 1958; Nal-lapati et al., 2017), and topic-based methods (Oz-soy et al., 2010), do not work to generate HCD questions on the fairytale scenario because such questions often are based on multiple sentences.",
"Abstractive methods based on encoder-decoder architectures usually encode an input document token-by-token sequentially (Rush et al., 2015) and cannot capture the fine-grained hierarchical relations in a document, such as actions, causal relationships.",
"Graph neural network (GNN) models are recently used in summarization research (Wu et al., 2021; Wang et al., 2020; Xu et al., 2020; Li et al., 2021), thanks to their ability to model the complex relations in a document.",
"For example, in Xu et al. (2020), researchers used a discourse-level dependency graph to encode a document and then decoded discourse-level embeddings to select sentences extractively.",
"Similarly, in Wang et al. (2020), 5074 researchers have used a heterogeneous graph to encode both token-level and sentence-level relations in a document and then used it to extract sentences.",
"Still, in the education domain, summarizing salient events of one paragraph that can be used to generate educational questions is an open problem.",
"In this paper, we develop an event-centric summarization method based on BART (Lewis et al., 2020).",
"To obtain the training data, we compose educational question-answer pairs through a rule-based method and use them as silver ground-truth samples.",
"The overview of our educational question generation system for storybooks is shown in Figure 1, which contains three modules: question type distribution learning, event-centric summary generation, and educational question generation.",
"Given an input paragraph d , we first predict the type distribution of output questions p = ( p 1 , p 2 , . . . , p T ) , where p i denotes the probability of question type i , T is the total number of question types.",
"We then transform the distribution into the number of questions under each question type l = ( l 1 , l 2 , . . . , l T ) .",
"Afterwards, we first generate l i summaries of type i with the input paragraph d , and then generate l i questions of type i with the corresponding summaries.",
"We fine-tuned a BERT model (Devlin et al., 2019), and adapt the output m dimensional class token h c R m to learn the question type distribution.",
"Specifically, the predicted distribution is obtained by p i = e ( Wh c + b ) i (cid:80) Ti =1 e ( Wh c + b ) i , where W RT m , b RT are learnable parameters, ( ) i denotes the operator of selecting the i -th element of a vector.",
"Assuming there are N training samples, we minimize the K-L divergence loss LK L = (cid:80) Nj =1 1 N (cid:80) Ti =1 p ( j ) i log p ( j ) i p ( j ) i , where p ( j ) i denotes the probability of question type i for the j -th sample, and p ( j ) i is our predicted value.",
"To improve the prediction performance, similar to Zhang et al. (2018), we also conduct a multi-label classification task, where we use the question type with the maximal probability as the class of the output.",
"In particular, we add a cross entropy loss LCE = (cid:80) Nj =1 1 N (cid:80) Ti =1 1 ( y ( j ) i ) log y ( j ) i , where 1 ( y ( j ) i ) equals to 1 if i is the question type with the maximal probability for the sample j .",
"In summary, we conduct a multi-task learning for question type distribution prediction, and the final training loss is a weighted sum of the K-L loss and the cross entropy loss: L = LK L +(1 ) LCE , where is a weight factor.",
"To predict the number of questions for each question type during training, we add a pseudo label 1 to the original label l = ( l 1 , l 2 , . . . , l n ) , i.e. , l = ( l 1 , l 2 , . . . , l n , 1) .",
"We can then normalize it to get the ground-truth probability distribution l = ( l 1 (cid:80) nk =1 l k +1 , . . . , l n (cid:80) nk =1 l k +1 , 1 (cid:80) nk =1 l k +1 ) .",
"During testing, assuming we get the predicted distribution p = ( p 1 , p 2 , . . . , p n , p pseudo ) , we can obtain the number of each type of questions by diving the probability of this pseudo label p pseudo as: n i = p i p pseudo + 0 .",
"5 .",
"In FairytaleQA, one paragraph usually has multiple questions with different question types, and information in one educational question may scatter across multiple parts.",
"As mentioned before, we assume that context plays a big role to decide the type and the number of questions to be asked during the interactive storybook reading, and HCD questions are around salient events and the relations.",
"With the output from the previous component, we can use the predicted question type distribution as a control signal, and select corresponding events for one particular question type.",
"In particular, we add two control signals before an input paragraph: question type signal <t> and question order signal <c> , where <t> T , <c> C , T denotes the set of all question types, C denotes the set of order, i.e. , { <first>, <second>, <third>, ...}.",
"We train a BART summarization model (Lewis et al., 2020) to conduct the event-centric summary generation task.",
"The input of the BART model is: <t> <c> d , and the output of the BART model is a summary that collects related events for an educational question type, where d denotes the input paragraph.",
"Obtaining the golden summaries is difficult.",
"However, a QA dataset, like FairytaleQA, provides both questions and their corresponding answers.",
"We can therefore re-write the annotated questions and answers together to obtain question-answer statements, which are used as silver summaries to train our summarization model.",
"We used the rule-based method in Demszky et al. (2018) which inserts answers into the semantic parsed questions 5075 When he got to the forest, he too met the little grey old man, who greeted he and said [CLS] BERT [CLS] H 0 0.5 1 input paragraph Encoder Decoder BART summary question question type distribution learning event-centric summary generation educational question generation Token k [SEP] Token k [SEP] control signals question type distribution Encoder Decoder BART <OUTCOME_RESOLUTION> <FIRST> <ACTION> <FIRST> Dullhead brought out a cake and some sour beer.",
"With the summary generated in the second stage, generating an educational question is fairly straightforward.",
"Because the summary has already contained all key events for the target educational question type, we can train a question generation model directly on top of it using the annotated questions.",
"We fine-tune another BART model to generate questions, with the type and order control signals added before the input summary to control the generated results.",
"Note that our question generation model does not reply on pre-selected answer spans.",
"To demonstrate the effectiveness of our proposed method, we conducted a set of experiments on the FairytaleQA dataset.",
"The FairytaleQA dataset (Yao et al., 2021) contains annotations of 278 books, including 232 training books, 23 test books, and 23 validation books.",
"Each book has multiple paragraphs, and for each paragraph of one book, there are several educational question-answer pairs annotated by education experts.",
"The question type distribution is consistent among annotators.",
"In total, there are seven types: Character : questions that contain the character of the story as the subject and ask for additional information about that character; Setting : questions that start with Where/When; Feeling : questions that start with How did/do/does X feel?; Action : questions that start with \"What did/do/does X do?\" or \"How did/do/does X\" or questions that contain a focal action and ask for additional information about that action; Causal relationship : questions that start with Why or What made/make; Outcome resolution : questions ask about logic relations between two events, such as What hap-pened...after...; Prediction : questions that start with What will/would happen....",
"The first three are factual questions that are low-cognitive-demanding, and can be handled well by traditional span-based question generation methods (Yao et al., 2021).",
"The remaining four types usually require people to make inferences from multiple elements (Paris and Paris, 2003), which correspond to high-level cognitive skills in Bloom's taxonomy (Anderson et al., 2000), and can be viewed as HCD questions.",
"For the question type prediction , it usually asks for events that do not appear in storybooks, which is not our focus in this paper.",
"We only consider action, causal relationship , and outcome resolution .",
"There is a small portion (985 out of 10580) of questions that span multiple paragraphs.",
"To control the cognitive-demand level for children, we also removed those questions.",
"The statistics of the selected data is shown in section A of the appendix.",
"We compared our system with two baselines: 1) the method proposed in Yao et al. (2021) (denoted as QAG), which is the only method that considers generating educational questions; 2) using FairytaleQA, we trained an end-to-end BART model.",
"QAG.",
"The QAG model (Yao et al., 2021) use keywords (semantic role labeling) to identify entities and events and then generate questions, which contains four steps: 1) generate a set of answers based on semantic roles of verbs; 2) generate questions based on these answers; 3) generate answers based on the questions generated in the second step; 4) 5076 rank generated question-answer pairs and choose the top questions.",
"We trained the question generation model in the second step and the answer generation model in the third step using the selected questions.",
"We use the top 10/5/3/2/1 generated questions as baselines, denoted as QAG (top10), QAG (top5), QAG (top3), QAG (top2), and QAG (top1), respectively.",
"E2E.",
"Using FairytaleQA with question types action, causal relationship , and outcome resolution , we trained one BART-large model to generate questions based one paragraph end-to-end.",
"During testing, we used a maximal length 100 tokens (roughly 7 questions according to Table 11) and selected the first 2 questions as the output for evaluation.",
"We denote this method as E2E.",
"We adopt both automatic and human evaluation to measure the performance of our method.",
"For automatic evaluation, similar to Yao et al. (2021), we use the Rouge-L score (Lin, 2004), and report the average precision, recall, and F1 values.",
"Meanwhile, we also use BERTScore (Zhang et al., 2020a) to evaluate the semantic similarity of generated questions with the ground-truth questions, and report the average precision, recall, and F1 values.",
"In contrast to Yao et al. (2021), we mainly consider concatenating all generated questions into one sentence and comparing it with the concatenated ground-truth questions.",
"This is because for each paragraph, we need to evaluate the generated quality of not only each question but also the question type distribution for sub-skills required in education as a whole (Paris and Paris, 2003).",
"Since the question order does not have much effects on Rouge-L, concatenating questions also partially takes individual question quality into consideration.",
"Moreover, we also consider the same setup used in Yao et al. (2021) that takes the max score of each gold question against the generated questions, then averages the scores of all generated questions.",
"To evaluate the quality of our generated questions and their educational significance, we further conducted a human evaluation session.",
"After regular group meetings, we concluded the following four dimensions, where children appropriateness is the main metric for our educational application:",
"1. Question type : whether the generated questions belong to any of the three event types.",
"2. Validity : whether the generated questions are valid questions according to the original paragraph.",
"3. Readability : whether the generated questions are coherent and grammatically correct.",
"4. Children appropriateness : to what extent would you like to ask this question when you read the story to a five year's old child?",
"For re-writing silver summaries, there are 8 sentences that cannot be parsed successfully.",
"In this case, we wrote the silver statements manually.",
"We also corrected 5 low-quality statements manually.",
"The weight factor for question type distribution learning is set as 0 .",
"7 empirically.",
"For question type distribution learning, we used a BERT cased large model.",
"For summary generation, we used a BART cased base model.",
"For question generation, we used a BART cased large model.",
"The batch sizes of all training are set as 1 .",
"For the generation process, we only used a greedy decoding method.",
"Automatic evaluation results were calculated with open sourced packages 1 .",
"For all methods, we removed duplicated questions and questions that has less than 3 tokens.",
"All experiments were conducted on a Ubuntu server with Intel(R) Xeon(R) Silver 4216 CPU @ 2.10GHz, 32G Memory, Nvidia GPU 2080Ti, Ubuntu 16.04.",
"Training our model took about three hours.",
"The results of automatic evaluation on both validation and test datasets are shown in Table",
"1. For Rouge-L, compared to E2E and QAG, our method can achieve the best results except for the recall values.",
"In particular, our method outperforms E2E by about 20 points, and outperforms the best QAG model (top2) by about 10 points on the precision scores.",
"For F1, our method outperforms E2E by about 10 points, and outperforms the best QAG model (top2) by about 5 points.",
"These results show 1 We used the package from https://github.com/ google-research/google-research/tree/master/rouge to calculate Rouge-L, and the package from https://github.com/Tiiiger/bert_score to calculate BERTScore.",
"Method Pre(val/test) Rec(val/test) F1(val/test) E2E 31.29/30.80 36.21/36.53 31.77/31.65 QAG (top2) 35.17/33.51 35.33/33.83 34.21/32.64 Ours 48.30 / 44.05 39.55 / 36.68 41.78 / 38.29 Table 2: The comparison results with the setup used by Yao et al. (2021).",
"that our method can match the ground-truth questions lexically better than other methods.",
"However, the recall score of our method is not as good as E2E and QAG (top5 & 10).",
"This is because for E2E and QAG (top5 & 10), they generally generate more questions than our method 2 .",
"For BERTScore, our method achieves the best results on precision, recall, and F1.",
"Although our method outperforms QAG (top2) by a small margin, it still outperforms other QAG models by at least 1 point.",
"For the setup used by Yao et al. (2021), as shown in Table 2, our method also outperforms the best QAG model, i.e. , QAG (top 2), and E2E by a large margin in terms of Rouge-L.",
"We believe that decomposing question types explicitly and using event-centric summaries to generate questions can capture the internal structure of educational question annotations and fit the data distribution in a more accurate way.",
"Some examples of the generated questions can be seen in Table",
"3. Our method usually can predict the correct question types, and cross multiple elements to generate HCD questions, with a limitation of factuality errors.",
"More examples and comparison can be found in section C of the appendix.",
"Apart from the overall performance, we also investigated the performance of each module of our method.",
"Because the performance values on both the validation and test data are similar, to simplify our experiment, in the following sections, we only conducted experiments on the test data.",
"Question Type Distribution Learning.",
"On the test set, the K-L divergence between the prediction results of our BERT-based model and ground-truth is 0 .",
"0089 , which shows that the performance of our question type distribution learning module is relatively satisfactory.",
"We also use the ground-truth question type distribution as an input and calculate the final Rouge-L score with our system.",
"The results are shown in Table",
"4. Compared to the ground-truth question type distribution, our system still has lower precision and F1 scores.",
"Having a more accurate question type distribution prediction is beneficial for improving the overall performance.",
"Event-centric Summary Generation To investigate the quality of the generated summaries, we compare the generated results with the silver summary ground-truth.",
"Similar to the evaluation method of generated questions, we concatenated the generated summaries and calculated the Rouge-L score with the concatenated ground-truth summaries.",
"The results are 15 .",
"41 precision, 30 .",
"60 recall, and 18 .",
"85 F1, which shows that there is still a lot of room to improve the summarization module.",
"Upper-bound Results with Silver Summary To see how the upper-bound performance is if we have perfect summaries, we input the silver summaries to our educational question generation model.",
"The Rouge-L scores of generated questions are 92 .",
"71 precision, 85 .",
"65 recall, 87 .",
"67 F1, which shows the potential that once a good summary containing salient events is available, generating an educational question is relatively easy.",
"The core challenge is to obtain good summaries, which we believe will be a valuable next step in future work.",
"We conducted a human evaluation with consent of our method against the best-performed baseline QAG (top2).",
"We first randomly sampled 10 books from the test set.",
"For each book, we randomly sam-5078 Questions QAG (top2) P1 : Once upon a time there was a farmer who had carted pears to market",
"pled 5 paragraphs.",
"We then conducted experiments to evaluate the generated results on question type and quality.",
"Participants are researchers or PhD students based in Europe, U.S., and China working on natural language processing and human-computer interaction in the education domain with at least 3 years of experience, and were recruited through word-of-mouth and paid $30.",
"We had a training session to ensure the annotation among participants is consistent.",
"This study is approved by IRB.",
"Question type.",
"Three human participants annotated the types of all generated questions.",
"The inter-coder reliability score (Krippendoff's alpha (Krippendorff, 2011)) among three participants is 0.86, indicating a relatively high consistency.",
"The annotated results are shown in Table",
"5. Overall, our method demonstrates a much smaller K-L distance ( 0.28 ) to the ground-truth distribution, compared to QAG ( 0.60 ).",
"We can see that our method has a better estimation of the distribution of question types, which is closer to the distribution of the ground-truth.",
"QAG has a biased question type distribution and generates more outcome resolution questions.",
"Question quality.",
"We invited another five human participants and conducted a human evaluation to further evaluate the quality of the generated questions from our model against the ground-truth and QAG, including validity , readability , and children appropriateness .",
"Among the three dimensions, the children appropriateness is most closely related to the educational purpose; the former two dimensions mainly measure the factual correctness and fluency respectively.",
"For the total 10 5 paragraphs, each participant is assigned 20 different paragraphs randomly, and each paragraph has annotation results from two participants.",
"For each paragraph, participants need to read the paragraph and its corresponding questions and answers, and then rate the three dimensions on a five-point Likert-scale.",
"The Krippendoff's alpha scores along the four dimensions are between 0.60 and 0.80 (validity: 0.80, readability: 0.69, children appropriateness: 0.60), indicating an acceptable consistency (Gretz et al., 2020).",
"We conducted an independent-samples t-test to compare the performance of each model.",
"Our model is significantly better than QAG on the main evaluation dimension of children appropriateness : the mean score of our model and QAG are 2.56 and 2.22, with corresponding standard derivation 1.31 and 1.20 respectively.",
"This gives a significant score with p-value = 0.009, showing that the questions generated by our model can indeed better fit the education scenario.",
"For reference, the ground-truth has a mean score and standard derivation of 3.96 and 1.02, indicating a still large space to improve.",
"On validity and readability , our model is on par with QAG.",
"This is not surprising because both models are based on large pre-trained BART models that are good at generating natural and fluent sentences.",
"For validity, our model (avg: 3.19, std: 1.53) is a bit lower than QAG (avg: 3.27, std: 1.62); 5079 for readability, our model (avg: 4.19, std: 1.53) is a bit higher than QAG (avg: 4.12, std: 1.33).",
"A further breakdown in Table 6 shows that QAG wins mainly on action questions, because it directly generates questions conditioned on verbs.",
"For causal relationship and outcome resolution questions, our method generally outperforms QAG.",
"To further investigate the effectiveness of our method, we conducted a set of ablation studies.",
"To investigate the effects of our question type distribution learning, we conducted a comparison study.",
"In particular, we removed the question type distribution learning module (denoted as w/o tdl), and directly trained the summarization and question generation models.",
"In other words, during training, we concatenate all silver summaries as the output of the summarization model.",
"During testing, we extract the first 2 sentences as the predicted summaries.",
"The results are shown in Table 7.",
"From the comparison, we can see that without knowing question types, the Rouge-L scores drop about 3 points overall, which implies the importance of our question type distribution learning module.",
"To investigate the effects of our event-centric summary generation module, we conducted a comparison with different summarization methods.",
"The summarization methods include: 1) Lead3 .",
"We select the first three sentences of a paragraph as the summary, and use them as input to the question generation model; 2) Last3 .",
"We select the last three sentences of a paragraph as the summary, and use them as input to the question generation model.",
"3) Random3 .",
"We select the random three sentences of a paragraph as the summary, and use them as input to the question generation model.",
"4) Total .",
"We use each sentence of a paragraph as the Method Pre Rec F1 Ours (w/o tdl) 32.62 29.89 27.42 Ours 37.50 31.54 30.58 Table 7: The Rouge-L scores of our method with and without question type distribution learning.",
"summary, and use them as input to the question generation model.",
"5) TextRank .",
"TextRank is a typical extractive summarization method.",
"We use TextRank to extract a summary, and for each sentence in the summary, we input it to the question generation model.",
"For other summarization methods, they cannot get the question type distribution like our method.",
"For a fair comparison, we also remove the question type distribution learning module of our method, which is the same as the setting in section 6.1.",
"The results are shown in Table 8, from which we can see that extracting sentences from the paragraph is not enough for covering salient events for educational question generation.",
"Our event-centric summary generation method is an effective way for extracting educational events of fairy tales.",
"Using all sentences (total) can have the highest recall score at the expense of accuracy, but the overall F1 score is still relatively low.",
"Currently, we use control signals to constrain generating questions of different types, which can be viewed as a multi-task learning framework for multi-type question generation.",
"To investigate whether sharing parameters is a good way for our task, we trained individual summarization and question generation models using different question types.",
"The results in Rouge-L are shown in Table 9.",
"We can find that sharing parameters generally can achieve better performance because of the use of more training data.",
"For only using one type of training data, owing to the error of question type distribution learning, the performance drops a lot, showing the importance of combining question 5080 Method Pre Rec F1 Action 35.97 20.68 24.29 Causal 13.70 11.23 11.54 Outcome 6.15 4.97 5.30 Ours (individual) 25.71 33.08 26.27 Ours (overall) 37.50 31.54 30.58 Table 9: The comparison results of training separate summarization and question generation models on each question type.",
"type distribution learning and multi-task learning with different types of training data.",
"In this paper, we propose a novel method for educational question generation for fairy tales, which can potentially be used in early childhood education.",
"Our method contains three modules: question type distribution learning, event-centric summary generation, and educational question generation.",
"Through question type distribution learning, we can decompose the challenges of educational question generation by extracting related events of one question type and generating educational questions with a short event-centric summary, which improves the performance significantly.",
"On both automatic evaluation and human evaluation, we show the potential of our method.",
"In the future, we plan to further investigate the event-centric summary generation module by considering discourse-level information to improve the summarization performance and improve the factuality error problem.",
"We are also interested in deploying the system in real scenarios to benefit childcare-related domains.",
"The authors thank constructive suggestions from all anonymous reviewers, as well as all participants in our human evaluation session.",
"This work is supported by the Hong Kong General Research Fund (GRF) with grant No. 16203421.",
"Zhenjie Zhao is supported by the National Natural Science of China Foundation under Grant No. 62106109 and the Startup Foundation for Introducing Talent of NUIST."
] | [
"abstain",
"abstain",
"objective",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"method",
"method",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"other",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"result",
"result",
"result",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"objective",
"abstain",
"result",
"result",
"objective",
"abstain",
"other",
"other",
"other"
] |
[
"Code completion, which aims to predict the following code token(s) according to the code context, can improve the productivity of software development.",
"Recent work has proved that statistical language modeling with transformers can greatly improve the performance in the code completion task via learning from large-scale source code datasets.",
"However, current approaches focus only on code context within the file or project, i.e. internal context.",
"Our distinction is utilizing external context, inspired by human behaviors of copying from the related code snippets when writing code.",
"Specifically, we propose a retrieval-augmented code completion framework, leveraging both lexical copying and referring to code with similar semantics by retrieval.",
"We adopt a stagewise training approach that combines a source code retriever and an auto-regressive language model for programming language.",
"We evaluate our approach in the code completion task in Python and Java programming languages, achieving a state-of-the-art performance on CodeXGLUE benchmark.",
"With the growth of software engineering field, large-scale source code corpus gives a chance to train language models in code domain (Hindle et al., 2016; Tu et al., 2014).",
"And benefiting from the large transformer models (Vaswani et al., 2017) and pre-training techniques (Devlin et al., 2018; Radford et al., 2018), a rapid progress has been made in many code-related tasks like code search (Feng et al., 2020; Guo et al., 2020), code summarization (Clement et al., 2020; Ahmad et al., 2020), bug fixing (Mashhadi and Hemmati, 2021; Drain et al., 2021) and code completion (Svyatkovskiy et al., 2020; Liu et al., 2020; Kim et al., 2021; Clement et al., 2020).",
"Code completion is considered as an essential feature towards efficient software development in modern Integrated Development Environments (IDEs).",
"The task is formulated by predicting the following code token(s) based on the code context.",
"Traditionally, code completion requires real-time program analysis and recommends type-correct code tokens (Tu et al., 2014).",
"Recently, statistical language models trained on large-scale source code data have shown high accuracy in the code completion task.",
"Primitive approaches take the given context only (Liu et al., 2016; Karampatsis et al., 2020), some methods use richer information, e.g., adding code token types (Liu et al., 2020), abstract syntax tree (AST) structures (Li et al., 2018; Kim et al., 2021), or extended hierarchical context (Clement et al., 2021).",
"However, one key limitation of existing methods is the scope of information they utilize; all the information is bounded in the given input file.",
"This is unnatural from human perspective, as studies demonstrate that programmers tend to reuse an existing code snippet by copying part of code with or without minor modifications to accelerate software development (Roy and Cordy, 2008; Baker, 2007), leading a software repository usually containing 7-23% cloned codes (Svajlenko and Roy, 2015).",
"Motivated by this phenomenon, in this paper, we argue the utility of extending the information scope beyond the input file, i.e., into a large code-base.",
"We conjecture that using codes with similar semantics as auxiliary information are beneficial to predict the following code tokens.",
"Therefore, we propose ReACC a Re trievalA ugmented C ode C ompletion framework (See Figure 1).",
"The code completion task under our framework can be re-formulated by, given a source code corpus for search and an unfinished code snippet to complete, using the unfinished code as a query to retrieve similar code snippets from search corpus, and predicting the following code tokens by reusing the 6227 Retriever def read_as_jsonl(self, json_file): lines =",
"retrieved code.",
"ReACC consists of two core components: (1) a dual-encoder model served as the code-to-code search retriever (2) an auto-regressive language model served as the code completion generator .",
"ReACC adopts the stage-wise training strategy which is widely used in other tasks like open-domain question answering (Karpukhin et al., 2020; Izacard and Grave, 2021), natural language to code generation (Hashimoto et al., 2018; Parvez et al., 2021), etc.",
"The simplest technique for retrieving code is to build a sparse vector retriever like TF-IDF or BM25 (Robertson and Zaragoza, 2009) which are both based on keyword matching algorithms.",
"The sparse retriever can capture lexical information and is sensitive to the names of code identifiers.",
"The dense retriever, on the contrary, can capture syntactic and semantic information by mapping a code snippet to a dense vector.",
"In the code completion task, the code retriever is expected to comprehend the source code's intent in order to retrieve the semantically similar codes.",
"On the other hand, considering programmers are prone to copy-and-paste existing code, the retriever should evaluate lexical similarity as well.",
"To that end, we adopt the hybrid retriever (Karpukhin et al., 2020; Ma et al., 2021), which combines results of dense and sparse retriever.",
"We employ a dual-encoder model architecture as the dense retriever since the cross-encoder model has a high computational complexity.",
"To achieve a better understanding ability, we initialize our dense retriever with GraphCodeBERT (Guo et al., 2020), which is a pre-trained BERT-based programming language understanding model.",
"Then we continue pre-training the retriever by contrastive learning to enhance sentence embedding.",
"As the labeled data containing similar code pairs is rare, we utilize various transformations to generate programs with similar functionality for data augmentation.",
"We implement the generator with a decoder-only transformer model.",
"To incorporate the external information from retrieved similar code, we concatenate the obtained code and code context as input.",
"The generator is initialized by CodeGPT-adapted (Lu et al., 2021) which is a domain-adaptation model from GPT-2 (Radford et al., 2018) pre-trained on code corpus.",
"We evaluate our ReACC framework on two benchmark datasets CodeXGLUE (Lu et al., 2021) and CodeNet (Puri et al., 2021), in Python and Java programming languages.",
"ReACC achieves a state-of-the-art performance on both datasets.",
"The experimental results demonstrate that external source code retrieved by our retriever is useful for auto-completing the partial code.",
"To summarize, our main contributions are: We propose a retrieval-augmented method to assist the code auto-completion task.",
"1 To adapt to the code completion scenario, where the retrieval query is an unfinished code snippet, we propose the partial code-to-code search task and create datasets for evaluation.",
"1 Our codes are available at https://github.com/ celbree/ReACC 6228 2 Related Work 2.1 Code completion Code completion is an essential task for code intelligence.",
"Hindle et al. (2016) are the first to use language model for code completion by N-gram technique.",
"Deep neural networks (Liu et al., 2016; Alon et al., 2020; Karampatsis et al., 2020) and pretraining approaches (Liu et al., 2020; Svyatkovskiy et al., 2020) are later frequently utilized to accomplish this.",
"Besides considering source code as code token sequences, some research focuses on completing an abstract syntax tree (AST) by anticipating the next node in the flattened tree (Li et al., 2018; Kim et al., 2021).",
"Guo et al. (2021) complete codes by generating sketches, i.e. code snippets with holes.",
"Svyatkovskiy et al. (2021) and Clement et al. (2021), on the other hand, investigate ways to improve the efficiency and long-range modeling in the code completion task, respectively.",
"All of these works employ previously written code context as inputs, along with AST structural information or token types.",
"But none of them has attempted to leverage existing external code as auxiliary information.",
"Contrastive learning on code Inspired by the great success of contrastive learning in other domains (Wu et al., 2018; Reimers and Gurevych, 2019; Fang et al., 2020; Chen et al., 2020; He et al., 2020; Radford et al., 2021; Gao et al., 2021), researchers have deployed this technique to source code for better code fragment understanding.",
"Jain et al. (2020) and Bui et al. (2021) propose Contra-Code and Corder, respectively.",
"Both models use the self-supervised contrastive learning framework and generate code snippets as data augmentations via compiler-based semantic-preserving transformations.",
"Their models have shown the effectiveness of contrastive learning in code clone detection, code search and code summarization tasks.",
"SYNCOBERT (Wang et al., 2022) and UniXcoder (Guo et al., 2022) are both pre-training models that utilize multi-modal data, including code, comment, and AST, for better code fragment representation through contrastive learning.",
"Retrieval for code-related tasks Many code intelligence tasks benefit from information retrieval (Xia et al., 2017).",
"A common scenario for information retrieval in code domain is code search with natural language description as a query (Arwan et al., 2015; Gu et al., 2018; Cambronero et al., 2019).",
"As for other code intelligence tasks, Hayati et al. (2018) propose an action subtree retrieval method called ReCode for generating general-purpose code.",
"Hashimoto et al. (2018) propose a retrieve-and-edit framework for code autocom-pletion and code generation.",
"Luan et al. (2019) propose Aroma, which utilizes code-to-code structural search and intersecting candidate code snippets to recommend relevant code given another code snippet as a query.",
"Both Wei et al. (2020) and Li et al. (2021) leverage the retrieve-and-edit/refine framework to improve model's performance in code summarization.",
"Parvez et al. (2021) propose RED-CODER, using a dense retriever trained on paired NL-code pairs to retrieve relevant comments or codes as a supplement for code summarization or code generation tasks.",
"In most circumstances where a dense retriever is utilized, a natural language comment is treated as a query to retrieve code.",
"In the code completion scenario, however, we focus on using code as query, particularly partial code, which is a more difficult task since there are few labeled data with semantically similar code pairs and in partial code search, semantics in query is incomplete.",
"We first introduce the formulation of retrieval-augmented code completion task.",
"Then we give detailed descriptions on the retriever and generator in ReACC.",
"We show how we continue pretraining GraphCodeBERT (Guo et al., 2020) with contrastive learning on code and how we address the problem that there is no labeled data for positive instances of similar programs in section 3.2.",
"In section 3.3 we talk about the way to aggregate retrieved code and code context in the generator.",
"Assume that we have a source code database containing a large collection of software repositories, which consist of D source code files, f 1 , f 2 , ..., f D .",
"Following the Dense Passage Retriever (DPR) model (Karpukhin et al., 2020), we split each of the files into code fragments of equal lengths as the basic retrieval units.",
"Such splitting not only leads to a better retrieval results as stated by Karpukhin et al. (2020), but also supports extreme long code files where each part of a file represents differ-6229 def normalize(a):ma= np.mean(a) sa = np.std(a) return (a-ma)/sa def standardization(arr):mu np.mean np.std def normalize(a):ma= np.mean(a) sa np.mean def sort(a1, a2): tmp sorted Original code Transformation Positive example Query In-batch negatives E n c o d e r Minimize Maximize Truncate Partial code API seq Figure 2: Illustration on the training process of the retriever in our proposed framework ReACC.",
"ent semantics.",
"Thus we get M code fragments as the retrieval database C = { c 1 , c 2 , ..., c M } .",
"Let X = { x 1 , x 2 , ..., x k } be the unfinished code written previously, a retriever R : ( X, C ) C retrieves the most similar code fragment c s in C .",
"The generator G predicts the following code token(s) Y = { x k +1 , ..., x k + n } , where n = 1 in the token-level code completion task, based on context and retrieved code.",
"Formally, P ( Y ) = (cid:81) ni =1 P ( x k + i | c s , x 1: k + i 1 ) .",
"The retrieval module in ReACC is expected to retrieve semantically equivalent code given an incomplete code.",
"We adopt the hybrid retriever (Karpukhin et al., 2020; Ma et al., 2021) framework by combining scores of sparse and dense retriever.",
"The sparse retriever we use is BM25 (Robertson and Zaragoza, 2009) based on the implementation of ElasticSearch 2 .",
"As a term-based retrieval method, BM25 considers each code fragment as a code token sequence and employs bag-of-words representations.",
"The matching score computed by BM25 indicts lexical similarity between the query and document.",
"As for the dense retriever, it maps each code fragment to a d -dimension dense vector.",
"We construct it in this paper based on the DPR model (Karpukhin et al., 2020).",
"Figure 2 illustrates the training process of the dense retriever of ReACC.",
"In the following, we will walk through it in detail.",
"Dense Retriever Our dense retriever consists of two bidirectional transformer-based encoders EC and EQ .",
"EC encodes each code fragment in the retrieval database C and builds indexes for them.",
"The query is encoded by EQ .",
"We take the representation of [CLS] token as output and the similarity is computed by sim ( q, c ) = EC ( c ) TEQ ( q ) .",
"Since both EC and EQ take source code as inputs with the only difference being whether they are partial or not, the dual encoders share weights in ReACC.",
"At the training stage, following DPR (Karpukhin et al., 2020), we adopt in-batch negatives to calculate the contrastive loss by InfoNCE (Oord et al., 2018): L ( q, c + , c 1 , c 2 , ..., c m ) = log e sim ( q,c + ) e sim ( q,c + ) + (cid:80) mi =1 e sim ( q,c i ) (1) However, unlike DPR, we don't employ \"hard\" negatives which are retrieved from BM25.",
"Because programmers tend to copy tokens directly, a code with distinct semantics but substantial lexical similarity can help with code completion.",
"Data Augmentation The purpose of contrastive learning of the dense retriever in ReACC is to learn a representation of code fragments that keeps codes with similar or equivalent semantics close and dissimilar codes far apart.",
"It requires numerous positive and negative code pairs.",
"However, it is difficult to identify similar programs based on an unlabeled code corpus, e.g., certain widely used datasets (Al-lamanis and Sutton, 2013; Raychev et al., 2016; Husain et al., 2019) mined from GitHub repositories.",
"Searching semantically equivalent code requires extra code compilation and execution costs (Mas-salin, 1987; Churchill et al., 2019), which is unrealistic in a large database.",
"Instead of searching, an alternative way is to create code snippets with same functionalities for data augmentation.",
"To do so, we apply several semantic-preserving transformations to the original source code to construct a set of variants.",
"There exists several attempts to apply such transformation to code (Jain et al., 2020; Rabin et al., 2021; Bui et al., 2021).",
"In this paper, we mainly adopt identifier renaming and dead code (unreachable or unused code) insertion.",
"Figure 3 shows an example of performing such transformations to a Python code.",
"Identifier renaming is a method of renaming an identifier with another.",
"We only rename variable and method names as other identifiers cannot be changed arbitrarily like built-in types or API calls.",
"Different from previous works, we preserve 6230 import socket def echo_server(client, timeout, bufsize): try:if timeout > 0: client.settimeout(timeout) get_buf = client.recv(bufsize) client.send(get_buf) except socket.timeout: pass client.close() import socket def get_mean(c, doc, local): try:if doc > 0: c.settimeout(doc) _user_id = c.recv(local) c.send(_user_id) except socket.timeout: pass c.close() import socket def echo_server(client, timeout, bufsize): try:if timeout > 0: client.settimeout(timeout) get_buf = client.recv(bufsize) if True:tmp= [x**2 for x in range(10)] client.send(get_buf) except socket.timeout: pass client.close() original python code After renaming all variables After inserting dead code Figure 3: An example of applying semantic-preserving transformations to Python code.",
"part of the lexical information while modifying the names at the same time based on the consideration that identifier names typically convey the meanings for humans and lexical similarity contributes a lot for retrieving (It is verified in section 4.4).",
"To do so, we mask all the identifiers in a program and leverage GraphCodeBERT (Guo et al., 2020) to predict each identifier like in the masked language model task.",
"The top-10 predictions (excluding the original identifier) are selected as the candidate set for renaming.",
"Dead code insertion is to insert a dead code into a code fragment at a proper location.",
"Dead code is a code snippet which can never be reached (Xi, 1999) or is reachable but whose result can never be used in any other computation (Debray et al., 2000).",
"In software engineering, dead code insertion is one of the most common techniques for code obfuscation (You and Yim, 2010), whose goal is to modify a code to make it hard to understand but remain its functionality, which is similar to our goal.",
"We first randomly select variable names which don't appear in this program and then use them to form a statement from a predefined set of dead code (See Appendix A for details), such as assignment, method invocations, looping statement, conditional statement and so on.",
"We traverse the AST and identify all the statements.",
"Then we choose a statement at random and insert the dead code after it, leading a new subtree in the AST.",
"Input Format We integrate both the code token sequence and the API usage sequence as inputs.",
"API usage sequence is highly related to the functionality of a code snippet (Gu et al., 2016; Hu et al., 2018).",
"To improve the code representation, we extract the API sequence and append it to the source code token sequence.",
"Finally, we use a random truncation of the original code as the query and the entire created program as the positive example during training to address the problem on how to retrieve based on incomplete semantics.",
"The output of retriever is the retrieved code c s .",
"Considering c s is queried by code context x while our target is the following code of x , so we propose fragment alignment using the next fragment c (cid:48) s of c s in the same file (we have split each file into code fragments for retrieval as discussed in Section 3.1) for completing the next fragment of x .",
"Thus, the input sequence for the generator is the concatenation of c (cid:48) s and x : x (cid:48) = c (cid:48) s x .",
"The generator module in ReACC supports any model architecture that can perform code completion task.",
"In our experiments, we adopt CodeGPT-adapted (Lu et al., 2021), which is a decoder-only transformer model pre-trained on Python and Java datasets from CodeSearchNet (Husain et al., 2019) via casual language model.",
"CodeGPT-adapted has shown promising results in the code completion task in CodeXGLUE benchmark (Lu et al., 2021) on two widely used code completion datasets.",
"In order to evaluate the effectiveness of the code-to-code retrieval module in ReACC, we perform code clone detection task which aims to retrieve semantic equivalent programs.",
"In this section, we describe how we create the test dataset for this task and how we evaluate the performance of ReACC's retriever.",
"CodeNet (Puri et al., 2021) dataset consists of a large collection of programs which are derived from online judge websites.",
"We respectively create a code clone detection evaluation dataset from CodeNet in Python and Java with zero-shot setting.",
"We collect code solutions for thousands problems and solutions for the same problem are considered as semantically equivalence.",
"The data statistics are shown in Table 1.",
"Retrieval Training Set The dense retriever in ReACC is pre-trained on CodeSearchNet dataset (Husain et al., 2019), a large-scale source code corpus extracted from GitHub repositories.",
"We employ 1.6M Java methods and 1.2M Python functions from it.",
"CodeBERT (Feng et al., 2020) is a pre-trained model for programming language, which is trained on NL-PL pairs from CodeSearchNet dataset in six programming languages.",
"GraphCodeBERT (Guo et al., 2020) is also pre-trained on CodeSearchNet NL-PL pairs and considers the inherent structure of code i.e. data flow.",
"The retrieval encoder is initialized with GraphCodeBERT.",
"It is continual pre-trained with both masked language model objective and contrastive learning.",
"We use in-batch negatives with a batch size of 256.",
"With a learning rate of 5e-5, We train the retriever for Python and Java for 30 epochs each.",
"We implement the code clone detection experiment in the partial search way, which is ideally adapted to code completion scenarios as it accepts a partial program as a query while maintaining the same goal.",
"Table 2 shows the results in the zero-shot code clone detection task on CodeNet dataset, with the partial search setting.",
"Models are measured by MAP@K (Mean Average Precision at K), which is the evaluation metric in the CodeXGLUE clone detection task, and precision at 1, as we only care about the most similar code for code completion.",
"From the comparison with other transformer-based encoders, we can see CodeBERT and GraphCodeBERT can hardly retrieve equivalent code.",
"While our model significantly outperforms them, which indicts our model is capable of retrieving the semantically equivalent code even when the query's semantics is incomplete.",
"We also find that BM25 performs splendidly in this task, which is quite different from the performance on other tasks like open-domian QA (Karpukhin et al., 2020), code summarization (Parvez et al., 2021), etc.",
"The findings suggest that semantically related codes are likely to be lexically similar, which leads lexical similar to contribute more for retrieval, making code-to-code search easier than text-to-code or question-to-passage search using the term-based retrieval method.",
"In this section, we evaluate ReACC on end-to-end code completion.",
"CodeXGLUE (Lu et al., 2021) is a benchmark dataset containing 14 datasets for 10 diversified code intelligence tasks.",
"We use PY150 dataset (Raychev et al., 2016) in Python and GitHub Java Corpus dataset (Allamanis and Sutton, 2013) in Java from it for code completion task.",
"Table 1 shows the data statistics.",
"CodeGPT/CodeGPT-adapted (Lu et al., 2021) are both pre-trained on Python and Java datasets from CodeSearchNet.",
"CodeGPT is trained from scratch while CodeGPT-adapted is a domain adaptation model which is initialized by GPT-2 (Rad-ford et al., 2019).",
"PLBART (Ahmad et al., 2021) is based on BART (Lewis et al., 2020) architecture which employs denoising sequence-to-sequence (Seq2Seq) 6232 Model Python Java MAP@100 Precision MAP@100 Precision CodeBERT 1.47 4.75 1.15 4.58 GraphCodeBERT 5.31 15.68 4.54 16.05 BM25 10.32 23.17 8.67 25.85 ReACC-retriever 9.60 27.04 9.31 27.55 Table 2: Results on zero-shot code clone detection dataset created from CodeNet.",
"CodeT5 (Wang et al., 2021) is also an encoder-decoder pre-trained model which adapts T5 (Raf-fel et al., 2019) architecture and considers the identifier-aware token type information in code.",
"X-CodeGPT is a variant of CodeGPT which adapts eWASH (Clement et al., 2021) to CodeGPT.",
"Clement et al. (2021) propose eWASH, a method for leveraging the syntax hierarchy of source code to give the model wider field of vision in a file and achieving a new SOTA performance on the CodeXGLUE code completion task.",
"We reproduce their method and develop X-CodeGPT by adapting eWASH to CodeGPT-adapted.",
"Fine-tune We fine-tune CodeGPT-adapted on PY150 and GitHub Java Corpus datasets, respectively, and use it as the generator in ReACC.",
"The number of epochs for training PY150 is 30 and Java Corpus is 10, with a batch size of 96 and a learning rate of 2e-5.",
"Except for X-CodeGPT, all other baseline models are fine-tuned with the same settings.",
"As for X-CodeGPT, we pre-train it with a training set extracted from CodeSearchNet in eWASH format, where each example is a function body with its corresponding extended context, as described by Clement et al. (2021).",
"Since eWASH requires codes parsed into ASTs but codes in CodeXGLUE have been tokenized and cannot be parsed, we build a new dataset from PY150 to fine-tune X-CodeGPT on CodeXGLUE.",
"As a result, we download the origin files in PY150 and create a new dataset that retains the train/valid/test split, as seen in Table 1.",
"Evaluation Following Lu et al. (2021), we conduct two code completion scenarios, token-level and line-level completion, to measure models' ability of predicting one and more tokens.",
"Perplexity is the evaluation metric for token-level completion, whereas exact match accuracy (EM) and edit similarity are used for line-level completion.",
"For token-level completion, based on the consideration of efficiency, instead of applying retrieval at each step, we retrieve similar codes based on current context after predicting the first 100 tokens, and leverage it for further prediction.",
"Retrieval Database We use the training set of PY150 and Java Corpus as retrieval database for test.",
"We don't use the contrastive pre-training corpus (i.e., CodeSearchNet) in order to avoid the duplication between CodeXGLUE and CodeSearchNet as they are both extracted from GitHub.",
"Hybrid Retriever A linear combination of scores from BM25 and our dense retriever forms a hybrid retriever.",
"Specifically, we calculate the score by sim ( q, c ) + BM 25( q, c ) and let = 0 .",
"9 based on the results on dev set for both PY150 and Java Corpus datasets.",
"Table 3 and Table 4 compare different baseline models on code completion task in the CodeXGLUE Python and Java datasets.",
"ReACC framework with the hybrid retriever outperforms consistently than other baselines on all datasets, which proves our conjection that the external context is beneficial to the code completion task.",
"The comparison with X-CodeGPT in Table 4 demonstrates that utilizing external context could be more useful than making the most of the current code file.",
"Among three configurations of the retriever in ReACC, hybrid retriever performs best on almost all metrics except the exact match score in the new test set of PY150.",
"From Table 3, we can observe that comparing the two datasets, the improvement in the PY150 dataset is greater than that in the Java Corpus dataset.",
"The reason for this is that the retrieval database for Java (i.e., the training set) is much smaller.",
"The CodeXGLUE Java Corpus dataset contains only 12,934 files for training so that it's more difficult to retrieve similar code from them.",
"Another finding is that BM25 shows comparable results with dense retriever and even performs better in perplexity and exact match metrics.",
"The findings indict that the code completion task can benefit from both semantically and lexically similar codes.",
"ReACC in specific domain Both PY150 and Java Corpus datasets are extracted from GitHub repositories which are distributed in a wide domain.",
"As some people frequently write codes in a more specific domain, e.g., data mining/pattern recogni-EM Edit Sim ReACC-dense 45.32 73.95 Retriever-identifier renaming 44.91 73.14 dead code insertion 45.11 73.57 API sequence 44.77 73.01 query truncation 43.93 72.65 Generator-fragment alignment 45.08 73.56 Table 6: Ablation study for both retriever and generator module.",
"tion domain for Kaggle 3 users, algorithm domain for ACM community, etc.",
"To evaluate ReACC in a specific code domain, we construct a code completion Python dataset from CodeNet, which can be considered in algorithm domain.",
"Table 5 reveals that ReACC significantly outperforms CodeGPT-adapted in CodeNet by 10% and 18% absolute improvement in edit similarity and exact match, respectively.",
"According to the findings, ReACC is more effective in a specific domain.",
"We also notice that ReACC with dense retriever outperforms BM25 significantly in CodeNet.",
"It can be explained by the fact that in algorithm domain, semantically similar code may be more valuable than for code completion lexically similar code.",
"Ablation study To further understand how our training options affect model performance, we conduct ablation experiments.",
"As seen in Table 6, when data argumentation and training strategies in retriever or generator are eliminated, the metrics degrade.",
"The most essential factor among them is query truncation.",
"Comparing the two semantic-preserving transformations, identifier renaming contributes more than dead code insertion.When fragment alignment is removed from generator, i.e. using the retrieved code snippet itself for generator, performance suffers slightly.",
"ReACC vs GitHub Copilot GitHub Copilot 4 is a powerful technique for code completion which uses OpenAI Codex (Chen et al., 2021) as the model backend.",
"We run some qualitative examples with its extension in VSCode, which are shown in the Appendix B. It worth noting that Codex is more powerful than CodeGPT since it is a large-scale pre-trained model that is trained on all source codes in GitHub based on GPT-3 (Brown et al., 2020).",
"However, in some cases, ReACC with CodeGPT as 3 https://www.kaggle.com/ 4 https://copilot.github.com/ 6234 the generator outperforms Copilot.",
"And in 6 Copilot itself can benefit from ReACC when it takes advantage of ReACC's retriever, which indicates the effectiveness of retrieval-augmented method for strong generative models.",
"We propose ReACC, a retrieval-augmented code completion framework that utilizes external context for the code completion task by retrieving semantically and lexically similar codes from existing codebase.",
"We pre-train a dual-encoder as a retriever for partial code search, which retrieves code fragments given a partial code.",
"Our method can adopt any architecture that can perform code completion as the generator.",
"On the CodeXGLUE benchmark, ReACC achieves a state-of-the-art performance in the code completion task.",
"This work is supported by Microsoft and IITP grants (2021-0-01696, High Individuals Global Training Program)",
"Finding clones with dup: Analysis of an experiment.",
"IEEE Transactions on Software Engineering , 33(9):608621.",
"Tom B Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020.",
"Language models are few-shot learners.",
"arXiv preprint arXiv:2005.14165 .",
"Nghi DQ Bui, Yijun Yu, and Lingxiao Jiang.",
"2021.",
"Self-supervised contrastive learning for code retrieval and summarization via semantic-preserving transformations.",
"In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval , pages 511521.",
"Jose Cambronero, Hongyu Li, Seohyun Kim, Koushik Sen, and Satish Chandra.",
"2019.",
"When deep learning met code search.",
"In Proceedings of the 2019 27th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering , pages 964974.",
"Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton.",
"2020.",
"A simple framework for contrastive learning of visual representations.",
"In International conference on machine learning , pages 15971607.",
"PMLR.",
"Saumya K Debray, William Evans, Robert Muth, and Bjorn De Sutter.",
"2000.",
"Compiler techniques for code compaction.",
"ACM Transactions on Programming languages and Systems (TOPLAS) , 22(2):378 415.",
"Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova.",
"2018.",
"Bert: Pre-training of deep bidirectional transformers for language understanding.",
"arXiv preprint arXiv:1810.04805 .",
"6235 Dawn Drain, Chen Wu, Alexey Svyatkovskiy, and Neel Sundaresan."
] | [
"abstain",
"abstain",
"abstain",
"method",
"objective",
"method",
"result",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"objective",
"method",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"method",
"result",
"method",
"method",
"method",
"result",
"abstain",
"method",
"abstain",
"objective",
"objective",
"objective",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"other",
"abstain",
"objective",
"method",
"method",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other"
] |
[
"Measuring document similarity plays an important role in natural language processing tasks.",
"Most existing document similarity approaches suffer from the information gap caused by context and vocabulary mismatches when comparing varying-length texts.",
"In this paper, we propose an unsupervised concept representation learning approach to address the above issues.",
"Specifically, we propose a novel Concept Generation Network (CGNet) to learn concept representations from the perspective of the entire text corpus.",
"Moreover, a concept-based document matching method is proposed to leverage advances in the recognition of local phrase features and corpus-level concept features.",
"Extensive experiments on real-world data sets demonstrate that new method can achieve a considerable improvement in comparing length-varying texts.",
"In particular, our model achieved 6.5% better F1 Score compared to the best of the baseline models for a concept-project benchmark dataset.",
"Measuring the similarity between documents is a fundamental problem in several natural language tasks such as information retrieval (Manning et al., 2008), paraphrase identification (Yin and Schtze, 2015) and question routing (Zhang et al., 2020).",
"A wide range of document similarity approaches (Kusner et al., 2015; Huang et al., 2016) have been proposed to handle the fundamental problem; however, most of them are based on the assumption that the documents being compared have similar document length.",
"However, varying-length document matching tasks are ubiquitous in many real-world scenarios.",
"For instance, in the news categorization task, the news articles may include both short reports for breaking news or narrative reports with cumbersome details.",
"introduce the information gap between two documents in the following two aspects:",
"(i) context mismatch, which is caused by the long-length documents usually provide more detailed context to support the key information while the short-length documents contain limited context information.",
"The issue renders the existing pre-trained natural language representation models (Conneau et al., 2017a; Devlin et al., 2018) pay more attention to the long but less important contexts, which makes their document representations distinct from the short-length documents with little context information.",
"(ii) vocabulary mismatch, which is usually caused by the different terms usage between short and long texts, which leads them do not share majority terms.",
"Existing document distance such as word mover's distance (Kusner et al., 2015) focus on comparing the local features.",
"Still, the vocabulary mismatch issue makes the local features hard to be matched while the majority of vocabulary is not shared.",
"To address the above challenges, our approach proposes a concept-based document matching method that incorporates both local phrase features and corpus-level concepts in an unsupervised setting, where concepts can be interpreted as a group of representative features that are interpretable for humans.",
"The main contributions of this paper can be summarized as follows:",
"(i) A novel unsupervised concept generation network is proposed to learn corpus-level concepts in the perspective of entire text corpus.",
"Specifically, each concept and its phrase assignment is iteratively optimized by the reconstruction loss between local phrase features and global concept representations.",
"(ii) A new concept-based document comparison method is proposed to measure the similarity between two text documents based on augmented concept representations, which leverages the advances of local phrases and corpus-level concepts.",
"Moreover, an enhanced concept-weight constraint is proposed to improve the performance in optimizing the con-cept-based document similarity.",
"(iii) Extensive experiments on several length-varying text matching datasets demonstrate that the effectiveness of our proposed approach consistently outperforms existing state-of-the-art methods.",
"In particular, our method improved 7.1% Accuracy and 6.5% F1-score in concept-project dataset compared to the best baseline method.",
"The rest of this paper is organized as follows.",
"Section 2 reviews related work, and Section 3 provides a detailed description of our proposed model.",
"The experiments on multiple real-world data sets are presented in Section 4. The paper concludes with a summary of the research in Section 5. 2 Related Work In this section, we briefly describe recent advances in document similarity research.",
"We start our discussion with recent progress in supervised methods, and then we shift our focus to unsupervised settings.",
"A large group of previous studies (Parikh et al., 2016; Liu et al., 2018; Zhang et al., 2019; Gupta et al., 2020; Zhang et al., 2020) learns document matching model between two text sequences in supervised settings.",
"Tan et al. (2016) exploit attention mechanism to distil important words from sentences.",
"Yang et al. (2019a) propose an inter-sequence alignment approach considering both previous aligned features and original point-wise features.",
"Zhou et al. (2020) present a neural approach for general-purpose text matching with deep mutual information estimation.",
"However, these semantic alignment approaches require massive human annotations in their training process, which are expensive and infeasible to obtain in many real-world scenarios.",
"Some approaches can be used to match document in unsupervised manners, including traditional statistical approaches (Metzler et al., 2007; Pincombe, 2004; Hua et al., 2016; Zhang et al., 2017).",
"In past few years, neural-network-based methods have been used for document representation, which includes Doc2Vec (Conneau et al., 2017b), Skip-Thought vectors (Kiros et al., 2015).",
"More recently, the state-of-the-art representation methods focus on the contextual representations to encode words in their context such as BERT (Devlin et al., 2018) and XLNet (Yang et al., 2019b).",
"A comparably long text may lose its local information after being encoded as a fix-length representation due to the informative contexts.",
"Word Mover's Distance (WMD) approaches (Yokoi et al., 2020; Wang et al., 2019) can partially solve the problem since they focus on local feature matching.",
"However, these methods still suffer from the vocabulary mismatch issue from length-varying texts, which makes the local features hard to be matched since texts share different majority of vocabulary terms.",
"Few approaches consider the length-varying texts in unsupervised settings.",
"Hongyu Gong and Xiong (2018) proposed an unsupervised document matching approach by comparing documents in a common space of hidden topics (DSHT), which is optimized by Singular Value Decomposition (SVD).",
"Compared to this approach, our method leverages both local features and global corpus-level concepts while DSHT only compares corpus-level topics.",
"Moreover, the proposed CGNet can generate concepts in more scalable data set compared to the matrix decomposition solution in DSHT.",
"We now describe our approach to calculate the document similarity for length-varying texts.",
"We begin by introducing the overview of our model in Section 3.1.",
"Then we provide details of the concept generation and document matching components in Section 3.2 and 3.3.",
"Last, the implementation details are described in Section 3.4.",
"Given a corpus of documents D = { d 1 , d 2 , ..., d n } , we propose a concept-based document matching approach to compute the document distance dist ( d i , d j ) between any two documents d i and d j in the corpus.",
"The overall architecture is shown in Figure 1, which includes two main components: 1) Concept Generation , which is to generate the corpus-level concepts from the entire document corpus.",
"Each concept c i consists of a group of document phrases by minimizing the reconstruction loss between local phrase representation and global concept representation.",
"Moreover, both cluster divergence and evidence regularization terms are proposed to regularize the generated concepts.",
"2) Doc-Text Corpus Document d i Document d j PhraseExtraction ConceptAssignment Corpus Phrase Extraction & Encoding Concept Generation Concept-based Reconstruction Concept Divergence Concept Evidence Concept Representations Concept-based Document Similarity Phrase Representation PhraseEncoding Concept Feature Space Phrase Feature Space P1 P2 P3 P4 P4 P3 P2 P1 P2 P4 P1 P3 P3 P1 P2 C3 C2 P4 C1 C7 C5 C4 P4 Concept C1 C8 Document Matching P4 C1 C3 C2 Figure 1: Overall Architecture ument Matching .",
"After the corpus-level concepts are learned from previous step, document matching is to calculate the document similarity based on concept-based document comparison method.",
"Specifically, the concept-based similary adopt the Wasserstein distance (Fournier and Guillin, 2015) to compute similarity between two documents' concept representations in terms of enhanced concept-weight constraint.",
"To generate concepts from a document corpus, we propose an unsupervised Concept Generation Network (CGNet).",
"First, we extract a set of phrases S p from the text corpus D .",
"The extracted phrases can be in different formats such as word tokens, noun phrases or n-grams according to the data corpus and language.",
"Then, pre-trained language representation models such as Transformers (Devlin et al., 2018; Yang et al., 2019b) can be adopted to encode the extracted phrases into embeddings as their semantic representations.",
"Specifically, we denote the embedding of the i -th phrase in document d j as p ( j ) i R , where is the dimension of the phrase embedding.",
"Suppose ( d j ) is the number of phrases in document d j , we denote the phrase embedding set P ( j ) for document d j as P ( j ) = (cid:8) p ( j ) i | i ( d j ) , i Z + (cid:9) , where Z + represents the set of positive integers.",
"Specifically, we use P = (cid:83) n i =1 P ( i ) to represent the entire phrase set for all the documents.",
"We assume each document can not only be represented as a group of phrases but a set of corpus-level concepts, which are treated as good approximations of phrase representations.",
"Especially for short-length texts, the limited phrases makes phrase representation hard to represent both text semantics and phrase importance.",
"Instead, our concept representation can represent short-text semantics and weight document features in corpus perspective rather than individual document.",
"To learn the corpus-level concepts, we first randomly initialize concept centroid embeddings in the same feature space of phrases, where is the number of concepts.",
"Specifically, we denote c i R as the embedding of the i -th concept centroid, where the concept dimension shares the same dimension as phrase representation.",
"Noted that the concept centroid embeddings will be trained as model parameters in our CGNet model.",
"Then we assign each phrase to concepts based on its phrase embedding and concept centroids by student-t distribution as follows: s ( j ) ik = (cid:0) 1 + (cid:107) p ( j ) i c k (cid:107) 2 / (cid:1) +12 (cid:80) k (cid:48) =1 (cid:0) 1 + (cid:107) p ( j ) i c k (cid:48) (cid:107) 2 / (cid:1) +12 , (1) where s ( j ) ik can be interpreted as the probability of the i -th phrase in document d j assigned to the k -th concept.",
"Since Student-t distribution has heavier tails, which makes it more prone to producing values that fall far from its mean.",
"This characteristics can help to assign lower probability to phrases that do not belong to any concept.",
"The parameter can control the degrees of freedom of Student's t-distribution.",
"Since our unsupervised setting, we let = 1 for all experiments.",
"where C ( j ) is the set of concept centroid embeddings for document d j and is a threshold to assign concepts for each document.",
"When the probability s ( j ) ik is greater than , the concept c k is added into the concept embedding set C ( j ) ; otherwise, the concept c k is excluded.",
"To improve the concept assignment, we propose to optimize the concept centroids by minimizing the reconstruction loss between local phrases and corpus-level concepts for each document.",
"The reconstruction loss is defined as follows: L r = 1 n n (cid:88) i sinkhorn ( P ( i ) , C ( i ) ) , (3) where P ( i ) and C ( i ) represents the embedding sets of phrases and concepts for the i -th document, respectively.",
"Function sinkhorn ( ) represents sinkhorn divergence (Cuturi, 2013), a sensible approximation of the Wasserstein distance (Fournier and Guillin, 2015) at a low computational cost.",
"The experimental results in Section 4.2.4 show the sinkhorn divergence achieves empirical better performance than traditional mean squared error (MSE).",
"Only minimizing the reconstruction loss can easily get trivial local optima that assigns all the phrases to one concept.",
"Thus, we propose two regularization terms, concept divergence loss and concept evidence loss, to regularize the concept centroid and avoid trivial solutions.",
"Concept Divergence.",
"To prevent the similar or even duplicate concepts, we propose a divergence regularization term L d that penalizes on concepts that are close to each other.",
"The regularization term L d is defined as follows: L d = (cid:88) i =1 (cid:88) j = i +1 max (cid:0) 0 , (cid:107) c i c j (cid:107) 22 (cid:1) , (4) where is a threshold that justifies whether two concepts are similar or not.",
"We set to 1.0 in our experiments.",
"The divergence regularization exerts a large penalty when the L 2 norm distance between two concept embeddings are smaller than the threshold ; otherwise, no penalty is produced.",
"Concept Evidence.",
"To encourage each concept as close to encoded phrase instances, we propose a concept evidence regularization term L e , which penalizes the long distance between each concept embedding and its corresponding closest encoded phrases.",
"The evidence regularization term L e is defined as follows: L e = 1 (cid:88) k =1 (cid:88) j =1 min j (cid:18) (cid:91) p i P (cid:107) c k p i (cid:107) 22 (cid:19) , (5) where min j ( ) represents the j -th minimum value in the given set and p i P is one of the phrase embedding from the entire phrase embedding set P .",
"We denote as the union operator to combine all the L 2 norm distance between concept centroids and phrase embeddings.",
"We choose the sum of top minimum distances as the concept evidence loss for each concept.",
"The value of determines the minimum number of phrases we desired in each concept.",
"By default, we set to five, which indicates a large penalty is produced while the k -th ( k ) closest phrases has a long distance to the concept centroid.",
"Finally, our loss function is the combination of reconstruction loss L r , concept divergence loss L d and concept evidence loss L e with their corresponding weights r , d and e .",
"Our concept-based document matching method is based on the concept generated in Section 3.2.",
"According to the document concept assignment in Equation (2), some local phrases are excluded from any concept while its concept assignment probability s ( j ) ik < for k < .",
"However, these local phrases may contain distinguished semantics that cannot be grouped with enough phrases as a concept, but play an important role in distinguishing the difference between documents.",
"To involve the local phrases into our document matching task, we generate a local-feature augmented representation C ( j ) as follows: C ( j ) = C ( j ) (cid:40) p ( j ) i (cid:12)(cid:12)(cid:12)(cid:12)(cid:12) (cid:91) i ( d j ) j (cid:8) c k (cid:12)(cid:12) s ( j ) ik (cid:9) = (cid:41) , (6) where all the local phrases that do not belong to any concept in C ( j ) are added into the augmented concept embedding set C ( j ) .",
"The parameter is the same threshold as in Equation (2).",
"Based on the idea of the Wasserstein distance (Fournier and Guillin, 2015), we propose the concept-based document similarity between augmented concept representations of documents d i and d j as follows: ( C ( p ) , C ( q ) ) = max (cid:88) c i C ( p ) (cid:88) c j C ( q ) f i,j c i c j (cid:107) c i (cid:107)(cid:107) c j (cid:107) s.t. (cid:88) i Z + f i,j w p,i , i (cid:12)(cid:12) C ( p ) (cid:12)(cid:12) (cid:88) j Z + f i,j w q,j , j (cid:12)(cid:12) C ( q ) (cid:12)(cid:12) , (7) where the f i,j is a flow from concept representation c i in C ( p ) to c j in C ( q ) .",
"Parameters w p,i and w q,j represent the weight of concept i and j in document p and q , respectively.",
"We choose the concept weight as the averaged TF-IDF weight of phrases that are assigned to the concept, which is used as upper bound constraint of the flow parameters.",
"Overall, the concept-based document similarity is to find a flow between concept representations of two documents that maximize the similarity score.",
"The proposed CGNet model described in this section is implemented using the Pytorch 1 framework and trained on a single Nvidia Quadro RTX 6000 GPU with 24GB memory.",
"For phrase extraction, we set the minimum phrase frequency to 10 and maximum document frequency to 0.5.",
"The phrases embeddings are initialized with pre-trained fastText model (Bojanowski et al., 2016) using the default dimensionality of 300.",
"We set the number of training epochs of CGNet to 100 and batch size to 8.",
"For the sinkhorn divergence used in Equation (3), we apply an approximate Wasserstein distance implementation 2 .",
"For the settings of concepts, we set number of concept to 100 and concept threshold to 0.8.",
"It should be noted that while we train our CGNet with the text corpus, the model once trained can be applied to new document in the same domain that is not included in the text corpus.",
"In this section, we evaluate the performance of the model described in Section 3 on document matching task for length-varying texts.",
"We begin by introducing the evaluation settings, with details on the datasets, metrics and baselines",
"that we use in our experiments.",
"4.1.1 Datasets and Labels We conducted experiments on three publicly available datasets in different tasks:",
"(i) Concept-Pro-ject (Hongyu Gong and Xiong, 2018).",
"The dataset is to match science projects and concepts when people intend to search related projects that match a given concept.",
"It includes 537 pairs of projects and concepts involving 53 unique concepts from the Next Generation Science Standards 3 (NGSS) and 230 unique projects from Science Buddies 4 .",
"Each pair is labeled by human beings with the decision wther it is a good match or not.",
"(ii) CL-S-ciSumm 2017 (Prasad, 2017).",
"The dataset consists of 494 ACL Computational Linguistics research papers covering 30 categories in total.",
"Each category contains a reference paper and its corresponding human-annotated summary.",
"We compare the reference summary with its corresponding reference paper and use all the citing papers as negative cases.",
"The matching task is formulated as a ranking problem and use the reference paper as the top-1 ground-truth.",
"(iii) CL-SciSumm 2018 (Jaidka et al., 2019).",
"The dataset consists of 605 research papers with reference papers including summaries and citing papers, which covers 40 categories.",
"Different from dataset CL-SciSumm 2017, we randomly select 5 corresponding citing papers as the true candidate for each reference summary and choose the other 15 citing papers from all the citing papers as distractors.",
"For Concept-Project dataset, we use Accuracy , Precision , Recall and F1-score as evaluation metrics based on the binary classification predictions.",
"The metrics including Precision, Recall and F1-score are based on positive predictions.",
"For the CL-SciSumm 2017 dataset, we use popular ranking evaluation metrics from the literature, which includes:",
"(i) Precision@1 : The proportion of predicted instances where the true reference paper appears in the ranked top-1 result.",
"(ii) Mean Reciprocal Rank (MRR) : the average multiplicative inverse of the rank of the correct answer, represented mathematically as MRR = 1 N (cid:80) Ni =1 1 rank i , 3 https://www.nextgenscience.org/ 4 https://www.sciencebuddies.org/ where N is the number of samples and rank i is the rank assigned to the true comment by a model.",
"(iii) Normalized Discounted Cumulative Gain (NDCG) : the normalized gain of each reference paper based on its ranking position in the results.",
"We set the relevance score of the true comment to one and those of the distractors (citing papers) to zero.",
"For the CL-SciSumm 2018 dataset, the results are evaluated by Precision@K : The proportion of predicted instances where the true citing papers appear in the ranked top-K result.",
"For example, P@3 or Precision at 3\" corresponds to the percentage of cases where the true citing appears in the top 3 ranked results.",
"We vary the value of K from 1 to 5 in our experiments.",
"The following methods are included in the performance comparison:",
"(i) TF-IDF , which uses the cosine similarity between the TfIdf-weighted vectors of the document as a measure of document similarity.",
"(ii) Infersent , which finds the cosine similarity between document embeddings generated by the state-of-the-art sentence embedding method InferSent (Conneau et al., 2017a).",
"(iii) BERT , which uses inner product between the document representation generated by the pre-trained deep bidirectional transformer (Devlin et al., 2018).",
"(iv) WMD (Kusner et al., 2015), which uses word mover's distance metric based on the embeddings of document words generated by fastText 5 .",
"(v) WRD (Yokoi et al., 2020), which is a variant of traditional WMD method.",
"WRD separates word importance and word meaning by decomposing word vectors into their norm and direction.",
"The align-ment-based similarity is computed by earth mover's distance.",
"(vi) DSHT (Hongyu Gong and Xiong, 2018), which matches documents by comparing them in a common space of hidden topics.",
"We now present and discuss the empirical results of our evaluation for the three document matching tasks.",
"Table 1 summarizes results of the concept-project document matching task.",
"Our model significantly outperforms all the baselines in accuracy, precision and F1-score.",
"In particular, our model achieves 5 https://fasttext.cc/ 87.2% accuracy and 88.4% F1 score, which is 7.1% and 6.5% better than the best baseline method (DSHT).",
"The improvements over all the baselines are statistically significant at a p-value of 0.01.",
"The baseline methods including InferSent, BERT have high recalls, but low F1 scores and precision.",
"This is because these approach cannot distinguish unmatched documents but predict most of documents are matched.",
"Table 2 shows the result of Summary-Reference Matching task in CL-SciSumm 2017 dataset.",
"From the results, we conclude that our approach outperforms all the baselines on all metrics.",
"The results are statistically significant at p < 0 .",
"01 using the Wilcoxon signed rank test (Smucker et al., 2007).",
"Since the summary-reference task only has one true reference, the other citing papers being distractors, the P@1 result becomes especially important for this task.",
"Our approach achieves 90% precision, which is 3.3% better than the precision of the best baseline method (WMD).",
"We also find the global representation methods such as BERT and InferSent performs worse than approaches using local features such as WMD and TF-IDF, which is different from the results in concept-project dataset.",
"But our concept-based approach that utilizes both local and global features has consistently outperforms these baseline methods.",
"Figure 2 shows the precision at K result of summary-citance matching task in CL-SciSumm 2018 dataset when k is set from one to five.",
"Both mean and variance are presented by 10 experimental runs.",
"From the result, we conclude that our method can significantly outperform the other baselines for all the settings of K. Specifically, our model performs around 5% better than the best baseline method, WMD.",
"Moreover, the variance of our model is also much smaller than the other baselines, which indicates that our model is less impacted by random selected distractors.",
"The result of WRD is not available to compute due to its out-of-memory issue.",
"In addition, we also find the similar results that local feature-based approaches performs better than global representation methods.",
"following settings:",
"(i) w/o Sinkhorn : To demonstrate the effectiveness of the sinkhorn-based reconstruction loss, we remove the sinkhorn loss and instead simply use mean-square-error between the embeddings of phrases and concepts.",
"(ii) w/o Cluster Divergence Loss (CDL) : We remove the cluster divergence loss in Equation (4) in our training process.",
"(iii) w/o Cluster Evidence Loss (CEL) : To show the effectiveness of the concept evidence loss, we remove the CEL in the concept learning process in Equation (5).",
"(iv) w/o Enhanced Concept-weight Constraint (ECC) : We replace the concept-weight constraint to one in our concept mover's distance to demonstrate the performance of the module.",
"Table 4 shows the results of the ablation study, which demonstrates that each component improves the overall performance in concept-project matching task, across our evaluation metrics.",
"This indicates that our modeling choices are suited to tackle the inherent challenges involved in matching the length-varying documents.",
"In particular, the cluster divergence loss has great impact on the performance since the loss can avoid assigning all the cluster centroids to the same value.",
"We conduct several experiments to investigate the impact of the following two hyper-parameters: concept number and phrase length.",
"(i) Concept Num-1 2 3 4 5 k 0.2 0.4 0.6 0.8 P r e c i s i on @ k TF-IDF InferSent BERT WMD DSHT CGNet Figure 2: Result of Summary-Citance Matching 0 5 10 15 20 25 30 Num of Concepts 0.55 0.60 0.65 0.70 0.75 0.80 0.85 S c o r e F1 Accuracy Figure 3: Parameter Analysis of Concepts Number ber .",
"Figure 3 shows the results in concept-project dataset using different concept numbers from 1 to 30.",
"We conclude that both the F1 score and Accuracy can continuously be improved when the number of concepts are increased to 10.",
"After the concept number reaches to 10, the performance starts to degrade but still keep in a high level, which indicates that our model is not sensitive to the setting of concept number.",
"(ii) Phrase Length .",
"Figure 4 shows the performance results in concept-project dataset using different settings of phrase length.",
"From the results, we conclude that the token-level phrases have the best performance compared to other settings even including the combination of length 1 and 2.",
"The main reason is that the 2-gram features contain a large portion of noisy phrases that make the extracted concepts less effective for document matching.",
"The running time of training are shown in Table 5. We can see the training time is increased linearly when the data size is increased.",
"Since our model can be converged in a few epochs (usually Phrase assignments for each concept Concept-1 fat, fur, fats, blubber, adipose, whale, beluga, oils, mammal, blubber adipose, carbohydrates, warm blooded, colligative, or, fats, animal, calories, adipose, protein, mitochondria Concept-2 sea, isle, tide, seas, tides, compass, vessel, boat, islands, ocean, waters, pirate, currents, winds, oceans, atlantic, ship, coast, oceanic, waves Concept-3 bug, bugs, ants, bee, bees, insect, insects, katydid, spiders, grasshoppers, sowbugs, weevil, pillbugs, crickets, snails, peanuts, flies, do more, lions, peanut Concept-4 odor, smell, scent, smells, rancid, taste, rancidity, emit, fishy, gone, emitting, tastes, auditory, sounds, emits, emitted, stimuli, unpleasant, bloom, sensation Concept-5 cow, cows, age, milk, rex, ages, formula, rennet, lactase, horses, tablet, formulas, gestation, pasteurizing, calcium, matrix, ratio, breast, milkshake, cream Table 3: Case Study of Concepts Acc Prec Recall F1 w/o Sinkhorn 0.866 0.890 0.859 0.874 w/o CDL 0.562 0.555 0.976 0.707 w/o CEL 0.859 0.848 0.900 0.873 w/o ECC 0.805 0.780 0.890 0.832 CGNet 0.872 0.865 0.904 0.884 Table 4: Result of Ablation Study 1 2 3 4 5 1,2 1,3 1,2,3 Ngrams 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 S c o r e F1 Accuracy Figure 4: Parameter Analysis of Ngrams less than 100 epochs), our model can be trained in a reasonable duration.",
"Moreover, we find the evaluation time of each dataset has less difference compared to the training time since the evaluation time is related to the size of phrases and concepts.",
"Table 3 give some interpretation of concepts, showing top-20 phrases ranked by the phrase assignment probability in Equation (2) of five concepts generated by our CGNet model.",
"From the results, we conclude that:",
"(i) The generated concept is capable of representing high-level topics.",
"For instance, the Concept-1 relates to fat and energy of sea mammals when phrases such as fat, blubber adipose and carbohydrates appears; the Concept-2 relates to the sea sailing when phrases such as tide, isle, compass Concept-SummarySummary-Project Reference Citance Training Time 20.76 6.35 10.95 (sec/epoch) Eval Time 0.412 0.143 0.281 (sec/pair) Table 5: Efficiency Result for Training (second/epoch) and Testing (second/pair).",
"are assigned to the concept.",
"(ii) phrases in concepts are not only grouped by the similar semantics but the inherent co-occurrence in the text corpus.",
"For example, the calories and mammal shares very few semantic similarity but these two terms can be connected by documents that introduce the energy storage system of sea mammals.",
"(iii) The 2-gram phrases can introduce useful phrases such as blub-ber adipose\" in Concept-1 . However, sometimes it produces some noises such as do more\" in Con-cept-3 .",
"In this paper, an unsupervised concept representation learning method is proposed to address the length-varying text comparison problem.",
"To achieve this, we propose a deep neural network based model to generate corpus-level concept representation and design a concept-based document matching method based on augmented concept representation that leverages the advances of both local phrase features and global concept features.",
"Extensive experiments on real-world datasets demonstrated that our proposed method dramatically outperforms competing methods, exhibiting a significant improvement in all the metrics in different length-vary text comparison tasks."
] | [
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"method",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"result",
"result",
"result",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective"
] |
[
"Hyounghun Kim",
"Mohit Bansal",
"Abstract Videos convey rich information.",
"Dynamic spatio-temporal relationships between peo-ple/objects, and diverse multimodal events are present in a video clip.",
"Hence, it is important to develop automated models that can accurately extract such information from videos.",
"Answering questions on videos is one of the tasks which can evaluate such AI abilities.",
"In this paper, we propose a video question answering model which effectively integrates multi-modal input sources and finds the temporally relevant information to answer questions.",
"Specifically, we first employ dense image captions to help identify objects and their detailed salient regions and actions, and hence give the model useful extra information (in explicit textual format to allow easier matching) for answering questions.",
"Moreover, our model is also comprised of dual-level attention (word/object and frame level), multi-head self/cross-integration for different sources (video and dense captions), and gates which pass more relevant information to the classifier.",
"Finally, we also cast the frame selection problem as a multi-label classification task and introduce two loss functions, In-and-Out Frame Score Margin (IOFSM) and Balanced Binary Cross-Entropy (BBCE), to better supervise the model with human importance annotations.",
"We evaluate our model on the challenging TVQA dataset, where each of our model components provides significant gains, and our overall model outperforms the state-of-the-art by a large margin (74.09% versus 70.52%).",
"We also present several word, object, and frame level visualization studies.",
"1 1 Introduction Recent years have witnessed a paradigm shift in the way we get our information, and a lot of it 1 Our code is publicly available at: https://github.com/hyounghk/VideoQADenseCapFrameGate-ACL2020 is related to watching and listening to videos that are shared in huge amounts via the internet and new high-speed networks.",
"Videos convey a diverse breadth of rich information, such as dynamic spatiotemporal relationships between people/objects, as well as events.",
"Hence, it has become important to develop automated models that can accurately extract such precise multimodal information from videos (Tapaswi et al., 2016; Maharaj et al., 2017; Kim et al., 2017; Jang et al., 2017; Gao et al., 2017; Anne Hendricks et al., 2017; Lei et al., 2018, 2020).",
"Video question answering is a representative AI task through which we can evaluate such abilities of an AI agent to understand, retrieve, and return desired information from given video clips.",
"In this paper, we propose a model that effectively integrates multimodal information and locates the relevant frames from diverse, complex video clips such as those from the video+dialogue TVQA dataset (Lei et al., 2018), which contains questions that need both the video and the subtitles to answer.",
"When given a video clip and a natural language question based on the video, naturally, the first step is to compare the question with the content (objects and keywords) of the video frames and subtitles, then combine information from different video frames and subtitles to answer the question.",
"Analogous to this process, we apply dual-level attention in which a question and video/subtitle are aligned in word/object level, and then the aligned features from video and subtitle respectively are aligned the second time at the frame-level to integrate information for answering the question.",
"Among the aligned frames (which contain aggregated video and subtitle information now), only those which contain relevant information for answering the question are needed.",
"Hence, we also apply gating mechanisms to each frame feature to select the most informative frames before feeding them to the classifier.",
"Next, in order to make the frame selection more effective, we cast the frame selection sub-task as a multi-label classification task.",
"To convert the time span annotation to the label for each frame, we assign a positive label (1') to frames between the start and end points, and negative (0') label to the others, then train them with the binary cross-entropy loss.",
"Moreover, for enhanced supervision from the human importance annotation, we also introduce a new loss function, In-and-Out Frame Score Margin (IOFSM), which is the difference in average scores between in-frames (which are inside the time span) and out-frames (which are outside the time span).",
"We empirically show that these two losses are complementary when they are used together.",
"Also, we introduce a way of applying binary cross-entropy to the unbalanced dataset.",
"As we see each frame as a training example (positive or negative), we have a more significant number of negative examples than positive ones.",
"To balance the bias, we calculate normalized scores by averaging the loss separately for each label.",
"This modification, which we call balanced binary cross-entropy (BBCE), helps adjust the imbalance and further improve the performance of our model.",
"Finally, we also employ dense captions to help further improve the temporal localization of our video-QA model.",
"Captions have proven to be helpful for vision-language tasks (Wu et al., 2019; Li et al., 2019; Kim and Bansal, 2019) by providing additional, complementary information to the primary task in descriptive textual format.",
"We employ dense captions as an extra input to our model since dense captions describe the diverse salient regions of an image in object-level detail, and hence they would give more useful clues for question answering than single, non-dense image captions.",
"Empirically, our first basic model (with dual-level attention and frame-selection gates) outperforms the state-of-the-art models on TVQA validation dataset (72.53% as compared to 71.13% previous state-of-the-art) and with the additional supervision via the two new loss functions and the employment of dense captions, our model gives further improved results (73.34% and 74.20% re-spectively).",
"These improvements from each of our model components (i.e., new loss functions, dense captions) are statistically significant.",
"Overall, our full model's test-public score substantially outperforms the state-of-the-art score by a large margin of 3.57% (74.09% as compared to 70.52%).",
"2 Also, our model's scores across all the 6 TV shows are more balanced than other models in the TVQA leaderboard 3 , implying that our model should be more consistent and robust over different gen-res/domains that might have different characteristics from each other.",
"Our contributions are four-fold: (1) we present an effective model architecture for the video question answering task using dual-level attention and gates which fuse and select useful spatial-temporal information, (2) we employ dense captions as salient-region information and integrate it into a joint model to enhance the videoQA performance by locating proper information both spatially and temporally in rich textual semi-symbolic format, (3) we cast the frame selection sub-task as a multilevel classification task and introduce two new loss functions (IOFSM and BBCE) for enhanced supervision from human importance annotations (which could be also useful in other multi-label classification settings), and (4) our model's score on the test-public dataset is 74.09%, which is around 3.6% higher than the state-of-the-art result on the TVQA leaderboard (and our model's scores are more bal-anced/consistent across the diverse TV show gen-res).",
"We also present several ablation and visualization analyses of our model components (e.g., the word/object-level and the frame-level attention).",
"Visual/Video Question Answering Understanding visual information conditioned on language is an important ability for an agent who is supposed to have integrated intelligence.",
"Many tasks have been proposed to evaluate such ability, and visual question answering is one of those tasks (Antol et al., 2015; Lu et al., 2016; Fukui et al., 2016; Xu and Saenko, 2016; Yang et al., 2016; Zhu et al., 2016; Goyal et al., 2017; Anderson et al., 2018).",
"Recently, beyond question answering on a single image, attention to understanding and extracting information from a sequence of images, i.e., a video, is rising (Tapaswi et al., 2016; Maharaj et al., 2017; Kim et al., 2017; Jang et al., 2017; Lei et al., 2018; Zadeh et al., 2019; Lei et al., 2020; Garcia et al., 2020).",
"Answering questions on videos requires an 2 At the time of the ACL2020 submission deadline, the publicly visible rank-1 entry was 70.52%.",
"3 https://competitions.codalab.org/competitions/20415#results Frame-LevelAtt.",
"Softmax S o ft m a x qa 0 qa 1 qa i qa Tqa ... ... s t0 s t1 s tj s tTst ... ... sv 0 sv 1 sv k sv T ... ... sd 0 sd 1 sd l sd T ... ...",
"Softmax S o ft m a x qa 0 qa 1 qa i qa Tqa ... ... s t0 s t1 s tj s tTst ... ...",
"Before After before after -Q-ASUB Softmax S o ft m a x ... ...",
"Softmax S o ft m a x ... ... ... ...",
"Softmax S o ft m a x ... ... ... ... sv 0 sv 1 sv k sv T ... ... sd 0 sd 1 sd l sd T ... ...",
"... ...",
"Temporal Localization Temporal localization is a task that is widely explored in event/object detection in video context.",
"There has been work that solely processes visual information to detect ob-jects/actions/activity (Gaidon et al., 2013; Wein-zaepfel et al., 2015; Shou et al., 2016; Dai et al., 2017; Shou et al., 2017).",
"At the same time, work on natural language-related temporal localization task is less explored with recent work that focuses on the retrieval of a certain moment in a video by natural language (Anne Hendricks et al., 2017; Gao et al., 2017).",
"With deliberately designed gating and attention mechanisms, our work, in general, will greatly contribute to the task of temporal localization, especially under natural language context and multimodal data.",
"Multi-Head Self Attention ... ... ... ...",
"3 Model Our model consists of 2 parts: feature fusion and frame selection.",
"For feature fusion, we introduce dual-level (word/object and frame level) attention, and we design the frame selection problem as a multi-label classification task and introduce 2 new loss functions for enhanced supervision (Figure 1).",
"A B C D",
".... E F G",
"..",
"We sample frames at 0.5 fps and extract object features from each frame via Faster R-CNN (Gir-shick, 2015).",
"Then we use PCA to get features of 300 dimension from top-20 object proposals.",
"We also create five hypotheses by concatenating a question feature with each of five answer features, and we pair each visual frame feature with temporally neighboring subtitles.",
"We encode all the features using convolutional encoder.",
"what is cathy doing with her hand after she introduces her fiance to ted ?",
"she is doing sign language .",
"Dense Image Captioning Image captioning is another direction of understanding visual and language information jointly.",
"Single-sentence captions (Karpathy and Fei-Fei, 2015; Anderson et al., 2018) capture the main concept of an image to describe it in a single sentence.",
"However, an image could contain multiple aspects that are impor-tant/useful in different ways.",
"Dense captions (John-son et al., 2016; Yang et al., 2017) and paragraph captions (Krause et al., 2017; Liang et al., 2017; Melas-Kyriazi et al., 2018) have been introduced to densely and broadly capture the diverse aspects and salient regions of an image.",
"Especially, dense caption describes an image in object level and gives useful salient regional information about objects such as attributes and actions.",
"In this paper, we take advantage of this dense caption's ability to help our video QA model understand an image better for answering questions.",
"where E pos denotes positional encoding, f i,t convolution preceded by Layer Normalization and followed by ReLU activation, and g n the layer normalization.",
"The encoder is composed of N blocks iterations.",
"In each iteration, the encoded inputs are transformed L times of convolutions.",
"The L is set to 2, and N to 1 in our experiment (Figure 2).",
"In dual-level attention, features are sequentially aligned in word/object-level and frame-level (Fig-ure 3).",
"Word/Object-Level Attention The QA feature, qa = { qa 0 , qa 1 ,",
".., qa T qa } , are combined with subtitle feature, s t = { s t 0 , s t 1 ,",
".., s tT st } , and visual feature, v t = { v t 0 , v t 1 ,",
".., v tT vt } , of t -th frame respectively via word/object-level attention.",
"To be specific, we calculate similarity matrices following Seo et al. (2017)'s approach, S vt RT qa T s t and S st RT qa T vt , from QA/subtitle and QA/visual features respectively.",
"From the similarity matrices, attended subtitle features are obtained and combined with the QA features by concatenating and applying a transforming function.",
"Then, max-pooling operation is applied word-wise to reduce the dimension.",
"( S st ) ij = qa (cid:62) i s tj (2) s attt = softmax ( S st ) s t (3) qa ms = maxpool ( f 1 ([ qa ; s attt ; qa (cid:12) s attt ])) (4) where f 1 is a fully-connected layer followed by ReLU non-linearity.",
"The same process is applied to the QA features.",
"qa att = softmax ( S s (cid:62) t ) qa (5) s m t = maxpool ( f 1 ([ s t ; qa att ; s t (cid:12) qa att ])) (6) The fused features from different directions are integrated by concatenating and being fed to a function as follows: s wt = f 2 ([ qa ms ; s mt ; qa ms (cid:12) s mt ; qa ms + s mt ]) (7) where f 2 is the same function as f 1 with non-shared parameters.",
"All this process is also applied to visual features to get word/object-level attended features.",
"what is cathy doing with her hand after she introduces her fiance to ted ?",
"she is doing sign language .",
"Frame-Level Attention The fused features from word/object-level attention are integrated frame-wise via frame-level attention.",
"Similar to the word/object-level attention, a similarity matrix, S RTF TF , is calculated, where TF is the number of frames.",
"Also, from the similarity matrix, attended frame-level features are calculated.",
"Q: what is cathy doing with her hand after she introduces her fiance to ted ?",
"A: she is doing sign language .",
"u sv = s + v (14) 3.3 Video and Dense Caption Integration We also employ dense captions to help further improve the temporal localization of our video-QA model.",
"They provide more diverse salient regional information (than the usual single non-dense image captions) about object-level details of image frames in a video clip, and also allow the model to explicitly (in textual/semi-symbolic form) match key-words/patterns between dense captions and questions to find relevant locations/frames.",
"( S ) kl = s w (cid:62) k v wl (9) s att = softmax ( S ) s w + s w (10) v = f 3 ([ v w ; s att ; v w (cid:12) s att ; v w + s att ]) (11) v att = softmax ( S (cid:62) ) v w + v w (12) s = f 3 ([ s w ; v att ; s w (cid:12) v att ; s w + v att ]) (13) where f 3 is the same function as f 1 and f 2 with non-shared parameters.",
"before after Video Q-A Subtitle Dense Capt Q-A Subtitle Word/ObjectLevel Att.",
"We apply the same procedure to the dense caption feature by substituting video features with dense caption features to obtain u sd .",
"To integrate u sv and u sd , we employ multi-head self attention (Figure 4).",
"To be specific, we concatenate u sv and u sd frame-wise then feed them to the self attention function.",
"where g a denotes self-attention.",
"In this way, u sv and u sd attend to themselves while attending to each other simultaneously.",
"We split the output, u svd into the same shape as the input, then add the two.",
"To select appropriate information from the frame-length features, we employ max-pooling and gates.",
"Features from the video-dense caption integration are fed to the CNN encoder.",
"A fully-connected layer and sigmoid function are applied sequentially to the output feature to get frame scores that indicate how relevant each frame is for answering a given question.",
"We get weighted features by multiplying the output feature from the CNN encoder with the scores.",
"z = en2 ( z ) (18) g L = sigmoid ( f L ( z )) (19) z gl = z (cid:12) g L (20) We calculate another frame scores with a different function f G to get another weighted feature.",
"Finally, following Lei et al. (2020)'s work, we also apply frame-wise max-pooling.",
"The three features (from local gate, global gate, and max-pooling, respectively), are then concatenated and fed to the classifier to give scores for each candidate answer.",
"We get the logits for the five candidate answers and choose the highest value as the predicted answer.",
"loss cls = log ( e s g (cid:80) k e s k ) (25) where s g is the logit of ground-truth answer.",
"3.5 Novel Frame-Selection Supervision Loss Functions We cast frame selection as a multi-label classification task.",
"The frame scores from the local gate, g L , are supervised by human importance annotations, which are time spans (start-end points pair) annotators think needed for selecting correct answers.",
"To this end, we transform the time span into ground-truth frame scores, i.e., if a frame is within the time span, the frame has 1' as its label and a frame outside the span gets 0'.",
"In this way, we can assign a label to each frame, and frames should get as close scores as their ground-truth labels.",
"We train the local gate network with binary cross-entropy (BCE) loss.",
"where s fi is a frame score of i -th frame, and y is a corresponding ground-truth label.",
"In-and-Out Frame Score Margin For additional supervision other than the binary cross-entropy loss, we create a novel loss function, In-and-Out Frame Score Margin (IOFSM).",
"where OFS (Out Frame Score) is scores of frames whose labels are 0' and IFS (In Frame Score) is scores of frames whose labels are 1'.",
"Balanced Binary Cross-Entropy In our multi-label classification setting, each frame can be considered as one training example.",
"Thus, the total number of examples and the proportion between positive and negative examples vary for every instance.",
"This variation can cause unbalanced training since negative examples usually dominate.",
"To balance the unbalanced training, we apply a simple but effective modification to the original BCE, and we call it Balanced Binary Cross-Entropy (BBCE).",
"To be specific, instead of summing or averaging through the entire frame examples, we divide the positive and negative examples and calculate the average cross-entropy scores separately, then sum them together.",
"loss bbce = (cid:16) T Fin (cid:88) i log ( s f in i ) /T F in + T Fout (cid:88) j log (1 s f out j ) /T F out (cid:17) (28) where s f in i and s f out j are i -th in-frame score and j -th out-frame score respectively, and TF in and TF out are the number of in-frames and out-frames respectively.",
"Thus, the total loss is: loss = loss cls + loss ( b ) bce + loss io (29) 4 Experimental Setup TVQA Dataset TVQA dataset (Lei et al., 2018) consists of video frames, subtitles, and question-answer pairs from 6 TV shows.",
"The number of examples for train/validation/test-public dataset are 122,039/15,253/7,623.",
"Each example has five candidate answers with one of them the ground-truth.",
"4 At the time of the ACL2020 submission deadline, the publicly visible rank-1 entry was 70.52%.",
"Since then, two more entries have appeared in the leaderboard; however, our method still outperforms their scores by a large margin (71.48% and 71.13% versus 74.09%).",
"So, TVQA is a classification task, in which models select one from the five candidate answers, and models can be evaluated on the accuracy metric.",
"Dense Captions We use Yang et al. (2017)'s pre-trained model to extract dense captions from each video frame.",
"We extract the dense captions in advance and use them as extra input data to the model.",
"5 Training Details We use GloVe (Pennington et al., 2014) word vectors with dimension size of 300 and RoBERTa (Liu et al., 2019) with 768 dimension.",
"The dimension of the visual feature is 300, and the base hidden size of the whole model is 128.",
"We use Adam (Kingma and Ba, 2015) as the optimizer.",
"We set the initial learning rate to 0.001 and drop it to 0.0002 after running 10 epochs.",
"For dropout, we use the probability of 0.1.",
"As seen from Table 1, our model outperforms the state-of-the-art models in the TVQA leaderboard.",
"Especially our model gets balanced scores for all the TV shows while some other models have high variances across the shows.",
"As seen from Table 2, the standard deviation and max-min' value over our model's scores for each TV show are 0.65 and 1.83, respectively, which are the lowest values among all models in the list.",
"This low variance could mean that our model is more consistent and robust across all the TV shows.",
"Model Ablations As shown in Table 3, our basic dual-attention and frame selection gates model shows substantial improvement over the strong single attention and frame span baseline (row 4 vs 1: p < 0 . 0001 ), which is from the best published model (Lei et al., 2020).",
"Each of our dual-attention and frame selection gates alone shows a small improvement in performance than the baseline (row 3 vs 1 and 2 vs 1, respectively).",
"6 However, when they are applied together, the model works much better.",
"The reason why they are more effective when put together is that frame selection gates basically select frames based on useful information 5 This is less computationally expensive and dense captions from the separately trained model will be less biased towards the questions of TVQA dataset, and hence provide more diverse aspects of image frames of a video clip.",
"6 Although the improvements are not much, but performing word/object level attention and then frame level attention is more intuitive and interpretable than a non-dual-attention method, allowing us to show how the model works: see visualization in Sec. 6.",
"from each frame feature and our dual-attention can help this selection by getting more relevant information to each frame through the frame-level attention.",
"Next, our new loss functions significantly help over the dual-attention and frame selection gates model by providing enhanced supervision (row 5 vs 4: p < 0 . 0001 , row 7 vs 6: p < 0 . 005 ).",
"Our RoBERTa version is also significantly better than the GloVe model (row 6 vs 4: p < 0 . 0005 , row 7 vs 5: p < 0 . 01 ).",
"Finally, employing dense captions further improves the performance via useful textual clue/keyword matching (row 8 vs 7: p < 0 . 005 ).",
"7 7 Statistical significance is computed using the bootstrap test (Efron and Tibshirani, 1994).",
"8 Two more entries have appeared in the leaderboard since the ACL2020 submission deadline.",
"However, our scores are still more balanced than their scores across all TV shows (std.: 2.11 and 2.40 versus our 0.65, max-min: 5.50 and 7.38 versus our 1.83).",
"IOFSM and BCE Loss Functions Ablation and Analysis To see how In-and-Out Frame Score Margin (IOFSM) and Binary Cross-Entropy (BCE) loss affect the frame selection task, we compare the model's performance/behaviors according to the combination of IOFSM and BCE.",
"As shown in Table 4, applying IOFSM on top of BCE gives a better result.",
"When we compare row 1 and 3 in Table 4, the average in-frame score of BCE+IOFSM is higher than BCE's while the average out-frame scores of both are almost the same.",
"This can mean two things: (1) IOFSM helps increase the scores of in-frames, and (2) increased in-frame scores help improve the model's performance.",
"On the other hand, when we compare row 1 and 2, the average in-frame score of IOFSM is higher than BCE's.",
"But, the average out-frame score of IOFSM is also much higher than BCE's.",
"This can mean that out-frame scores have a large impact on the performance as well as in-frame scores.",
"This is intuitively reasonable.",
"Because information from out-frames also flows to the next layer (i.e., classifier) after being multiplied by the frame scores, the score for the negative' label also has a direct impact on the performance.",
"So, making the scores as small as possible is also important.",
"Also, when we compare the row 2 and others (2 vs. 1 and 3), the gap between in-frame scores is much larger than the gap between out-frame scores.",
"But, considering the scores are average values, and the number of out-frames is usually much larger than in-frames, the difference between out-frame scores would affect more than the gap itself.",
"Balanced BCE Analysis We can see from row 1 and 4 of the Table 4 that BBCE shift the average scores of both in-frames and out-frames to higher values.",
"This can show that scores from the BCE loss are biased to the negative examples, and BBCE can adjust the bias with the separate averaging.",
"The score shift can help improve the model's performance.",
"But, when comparing row 2 and 4, the out-frame scores of BBCE are higher than IOFSM, and this may imply that the result from BBCE should be worse than IOFSM since out-frame scores have a large impact on the performance.",
"However, as we can see from row 2, the standard deviation of IOFSM's out-frame scores is larger than BBCE.",
"This could mean that a model with IOFSM has an unstable scoring behavior, and it could affect the performance.",
"As seen from row 5, applying BBCE and IOFSM together gives further improvement, possibly due to the increased in-frame scores and decreased out-frame scores while staying around at a similar standard deviation value.",
"In this section, we visualize the dual-level attention (word/object and frame level) and the frame score change by new losses application (for all these attention examples, our model predicts the correct answers).",
"Word/Object-Level Attention We visualize word-level attention in Figure 5.",
"In the top example, the question and answer pair is Where sat Rachel when holding a cup? Rachel sat on a couch.",
"Our word/object-level attention between QA pair and dense caption attend to a relevant description like holding a glass' to help answer the question.",
"In the middle example, the question and answer pair is, How did Lance react after Mandy insulted his character? Lance said he would be insulted if Mandy actually knew anything about acting.",
"Our word/object-level attention between QA pair and subtitle properly attend to the most relevant words such as insulted', knew', and act-ing' to answer the question.",
"In the bottom example, the question and answer pair is, What is Cathy doing with her hand after she introduces her fiance to Ted? She is doing sign language.",
"From the score of our word/object-level attention, the model aligns the word sign' to the woman's hand Frame-LevelAtt.",
"Frame-Level Attention As shown in Figure 6, our frame-level attention can align relevant frames from different features.",
"In the example, the question and answer pair is Where did Esposito search after he searched Carol's house downstairs? Up-stairs.",
"To answer this question, the model needs to find a frame in which he (Esposito) searched Carol's house downstairs', then find a frame which has a clue for where did Esposito search'.",
"Our frame-level attention can properly align the information fragments from different features (Frame 20 and 25) to help answer questions.",
"Frame Score Enhancement by New Losses As seen in Figure 7, applying our new losses (IOFSM+BBCE) changes the score distribution Frame-LevelAtt.",
"over all frames.",
"Before applying our losses (left fig-ure), overall scores are relatively low.",
"After using the losses, overall scores increased, and especially, scores around in-frames get much higher.",
"We presented our dual-level attention and frame-selection gates model and novel losses for more effective frame-selection.",
"Furthermore, we employed dense captions to help the model better find clues from salient regions for answering questions.",
"Each component added to our base model architecture (proposed loss functions and the adoption of dense captions) significantly improves the model's performance.",
"Overall, our model outperforms the state-of-the-art models on the TVQA leaderboard, while showing more balanced scores on the diverse TV show genres.",
"We thank the reviewers for their helpful comments.",
"This work was supported by NSF Award 1840131, ARO-YIP Award W911NF-18-1-0336, DARPA KAIROS Grant FA8750-19-2-1004, and awards from Google and Facebook.",
"The views, opinions, and/or findings contained in this article are those of the authors and should not be interpreted as representing the official views or policies, either expressed or implied, of the funding agency."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"result",
"method",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"objective",
"result",
"method",
"result",
"abstain",
"result",
"result",
"abstain",
"method",
"other",
"objective",
"result",
"result",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"abstain",
"objective",
"other",
"other",
"other",
"method",
"method",
"method",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"result",
"result",
"result",
"result",
"result",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"result",
"objective",
"result",
"other",
"other",
"other"
] |
[
"Grammatical Error Correction (GEC) should focus not only on correction accuracy but also on the interpretability of the results for language learners.",
"However, existing neural-based GEC models mostly focus on improving accuracy, while their interpretability has not been explored.",
"Example-based methods are promising for improving interpretability, which use similar retrieved examples to generate corrections.",
"Furthermore, examples are beneficial in language learning, helping learners to understand the basis for grammatically incorrect/correct texts and improve their confi-dence in writing.",
"Therefore, we hypothesized that incorporating an example-based method into GEC could improve interpretability and support language learners.",
"In this study, we introduce an Example-Based GEC ( EB-GEC ) that presents examples to language learners as a basis for correction result.",
"The examples consist of pairs of correct and incorrect sentences similar to a given input and its predicted correction.",
"Experiments demonstrate that the examples presented by EB-GEC help language learners decide whether to accept or refuse suggestions from the GEC output.",
"Furthermore, the experiments show that retrieved examples also improve the accuracy of corrections.",
"Grammatical Error Correction (GEC) models, which generate grammatically correct texts from grammatically incorrect texts, are useful for language learners.",
"In GEC, various neural-based models have been proposed to improve the correction accuracy (Yuan and Briscoe, 2016; Chollampatt and Ng, 2018; Junczys-Dowmunt et al., 2018; Zhao et al., 2019; Kaneko et al., 2020; Omelianchuk et al., 2020).",
"However, the basis on which a neural GEC model makes corrections is generally uninterpretable to learners.",
"Neural GEC models rarely address correction interpretability, leaving language Figure 1: EB-GEC presents not only a correction but also an example of why the GEC model suggested this correction.",
"Interpretability plays a key role in educational scenarios (Webb et al., 2020).",
"In particular, presenting examples is shown to be effective in improving understanding.",
"Language learners acquire grammatical rules and vocabulary from examples (Johns, 1994; Mizumoto and Chujo, 2015).",
"Presenting examples of incorrect sentences together with correct ones improves the understanding of grammatical correctness as well as essay quality (Arai et al., 2019, 2020).",
"Recently, example-based methods have been applied to a wide range of natural language processing tasks to improve the interpretability of neural models, including machine translation (Khan-delwal et al., 2021), part-of-speech tagging (Wise-man and Stratos, 2019), and named entity recognition (Ouchi et al., 2020).",
"These methods predict labels or tokens by considering the nearest neighbor examples retrieved by the representations of the model at the inference time.",
"Khandelwal et al. (2021) showed that in machine translation, examples close to a target sentence in the representation space of a decoder are useful for translating the source sentence.",
"Inspired by this, we hypothesized that examples corrected for similar reasons are dis-7176 Figure 2: An illustration of how EB-GEC chooses examples and predicts a correction.",
"tributed closely in the representation space.",
"Thus, we assume that neighbor examples can enhance the interpretability of the GEC model, allowing language learners to understand the reason for a correction and access its validity.",
"In this paper, we introduce an example-based GEC ( EB-GEC ) 1 that corrects grammatical errors in an input text and provides examples for language learners explaining the reason for correction (Fig-ure 1).",
"As shown in Figure 2, the core idea of EB-GEC is to unify the token prediction model for correction and the related example retrieval model from the supervision data into a single encoder-decoder model.",
"EB-GEC can present the reason for the correction, which we hope will help learners decide whether to accept or to refuse a given correction.",
"Experimental results show that EB-GEC predicts corrections more accurately than the vanilla GEC without examples on the three datasets and comparably on one dataset.",
"Experiments with human participants demonstrate that EB-GEC presents significantly more useful examples than the baseline methods of example retrieval (Matsubara et al., 2008; Yen et al., 2015; Arai et al., 2020).",
"These results indicate that examples are useful not only to the GEC models but also to language learners.",
"This is the first study to demonstrate the benefits of examples themselves for real users, as existing studies (Wiseman and Stratos, 2019; Ouchi et al., 2020; Khandelwal et al., 2021) only showed example utility for improving the task accuracy.",
"EB-GECEB-GEC presents language learners with a correction and the related examples it used for generating the correction of the input sentence.",
"k -Nearest-Neighbor Machine Translation ( k NN-MT ; Khandelwal et al., 2021) was used as a base method to consider example in predicting corrections.",
"k NN-MT predicts tokens by considering the nearest neighbor examples based on representations from the decoder at the time of inference.",
"EB-GEC could use any method (Gu et al., 2018; Zhang et al., 2018; Lewis et al., 2020) to consider examples, but k NN-MT was used in this study because it does not require additional training for example retrieval.",
"Figure 2 shows how the EB-GEC retrieves examples using k NN-MT.",
"EB-GEC performs inference using the softmax distribution of target tokens, referred to as vanilla distribution, hereafter, obtained from the encoder-decoder model and the distribution generated by the nearest neighbor examples.",
"Nearest neighbor search is performed for a cache of examples indexed by the decoder hidden states on supervision data ( k NN distribution).",
"EB-GEC can be adapted to any trained autoregressive encoder-decoder GEC model.",
"A detailed explanation of retrieving examples using k NN-MT is provided in Section 2.1, and of presenting examples in Section 2.2.",
"Let x = ( x 1 , ..., x N ) be an input sequence and y = ( y 1 , ..., y M ) be an output sequence of the autoregressive encoder-decoder model.",
"Here, N and M are the lengths of the input and output se-7177 quences, respectively.",
"Vanilla Distribution.",
"In a vanilla autoregressive encoder-decoder model, the distribution for i -th token y i of the output sequence is conditioned from the entire input sequence x and previous output tokens y 1: i 1 , where y represents a sequence of generated tokens.",
"The probability distribution of the i -th token p ( y i | x, y 1: i 1 ) is calculated by a linear translation to the decoder's hidden state h ( x, y 1: i 1 ) followed by the softmax function.",
"Output Distribution.",
"Let p EB ( y i | x, y 1: i 1 ) denote the final probability distribution of tokens from EB-GEC.",
"We define p EB ( y i | x, y 1: i 1 ) as a linear interpolation of the vanilla distribution p ( y i | x, y 1: i 1 ) and p kNN ( y i | x, y 1: i 1 ) (explained later), which is the distribution computed using the examples in the datastore, p EB ( y i | x, y 1: i 1 ) = p kNN ( y i | x, y 1: i 1 ) + (1 ) p ( y i | x, y 1: i 1 ) .",
"Here, 0 1 is an interpolation coefficient between the two distributions.",
"This interpolation also improves the output robustness when relevant examples are not found in the datastore.",
"Datastore.",
"In the work of Khandelwal et al. (2021), the i -th hidden state h ( x, y 1: i 1 ) of the decoder in the trained model was stored as a key, and the corresponding next token y i was stored as a value.",
"In order to present examples of in-correct/correct sentences, we stored a tuple of the token y i , the incorrect input sentence x , and the correct output sentence y as a value of the datastore.",
"Thus, we built key-value pairs ( K , V ) from all decoder timesteps for the entire training data ( X , Y ) , ( K , V ) = { ( h ( x, y 1: i 1 ) , ( y i , x, y )) | y i y, ( x, y ) ( X , Y ) } .",
"k NN Distribution.",
"During inference, given a source x as input, the model uses the i -th hidden state h ( x, y 1: i 1 ) of the decoder as the query to search for k -nearest neighbors, N = { ( u ( j ) , ( v ( j ) , x ( j ) , y ( j ) )) ( K , V ) } k j =1 , (3) where u ( j ) ( j = 1 , . . . , k ) are the k -nearest neighbors of the query h ( x, y 1: i 1 ) measured by squared L 2 distance.",
"The tuple ( v ( j ) , x ( j ) , y ( j ) ) is the value associated with the key u ( j ) in the datastore ( K , V ) .",
"Then, the k NN-MT aggregates the retrieved tokens to form a probability distribution p kNN ( y i | x, y 1: i 1 ) with a softmax with temperature T to the negative L 2 distances 2 , p kNN ( y i | x, y 1: i 1 ) (cid:88) ( u , ( v, _ , _ )) N I v = y i exp (cid:18) (cid:107) u h ( x, y 1: i 1 ) (cid:107) T (cid:19) .",
"We used a pair of incorrect and correct sentences stored in the value retrieved for the predicted token y i as an example from the correction.",
"Figure 1 depicts an example where the retrieved value consists of the predicted token v ( j ) = a and the incorrect/correct sentences x ( j ) , y ( j ) corresponding to This has /a tremendous problem . .",
"In this study, we presented examples for each edited token in an output.",
"For example, when an input or output is They have /a tremendous problem . , we presented examples for the edit /a .",
"To extract edit operations from an input/output pair, we aligned the tokens in input and output sentences by using the Gestalt pattern matching (Ratcliff and Metzener, 1988).",
"There are several ways to decide which examples should be presented to a language learner.",
"For instance, we could use all the examples in k -nearest neighbors N and possibly filter them with a threshold based on L 2 distance.",
"In this paper, we present an example incorrect/correct sentence pair that is the nearest to the query in N , which is the most confident example estimated by the model.",
"This section investigates the effectiveness of the examples via manual evaluation and accuracy on the GEC benchmark to show that the EB-GEC does, in fact, improve the interpretability without sacri-ficing accuracy.",
"We first describe the experimental setup and then report the results of the experiments.",
"NUCLE (Dahlmeier et al., 2013), FCE-train (Yan-nakoudakis et al., 2011) and Lang-8 (Mizumoto et al., 2011) as training data and W&I-dev as development data.",
"We followed Chollampatt and Ng (2018) to exclude sentence pairs in which the source and target sentences are identical from the training data.",
"The final number of sentence pairs in the training data was 0.6M.",
"We used this training data to create the EB-GEC datastore.",
"Note that the same amount of data is used by EB-GEC and the vanilla GEC model.",
"We used W&I-test, CoNLL2014 (Ng et al., 2014), FCE-test, and JFLEG-test (Napoles et al., 2017) as test data.",
"To measure the accuracy of the GEC models, we used the evaluation metrics ERRANT (Felice et al., 2016; Bryant et al., 2017) for the W&I-test and FCE-test, M 2 (Dahlmeier and Ng, 2012) for CoNLL2014, and GLEU (Napoles et al., 2015) for the JFLEG-test.",
"M 2 and ERRANT report F 0 .",
"5 values.",
"We used Transformer-big (Vaswani et al., 2017) as the GEC model.",
"Note that EB-GEC does not assume a specific autoregressive encoder-decoder model.",
"The beam search was performed with a beam width of 5.",
"We tokenized the data into subwords with a vocabulary size of 8,000 using BPE (Sennrich et al., 2016).",
"The hyperparameters reported in Vaswani et al. (2017) were used, aside from the max epoch, which was set to 20.",
"In our experiments, we reported the average results of five GEC models trained using different random seeds.",
"We used four Tesla V100 GPUs for training.",
"We considered the k NN and vanilla distributions equally, with in Eq.",
"(1) set to 0.5, to achieve both accuracy and interpretability.",
"Based on the development data results, the number of nearest neighbors k was set to 16 and the softmax temperature T to 1,000.",
"We used the final layer of the decoder feedforward network as the datastore key.",
"We used Faiss (Johnson et al., 2021) with the same settings as Khandelwal et al. (2021) for fast nearest neighbor search in high-dimensional space.",
"We assessed the interpretability by human evaluation based on Doshi-Velez and Kim (2017).",
"The human evaluation was performed to determine whether the examples improved user understanding and helped users to accept or refuse the GEC corrections.",
"To investigate the utility of the examples presented by EB-GEC, we examined the relative effectiveness of presenting examples in GEC as compared to providing none.",
"Moreover, we used two baseline methods for example selection, token-based retrieval and BERT-based retrieval.",
"Note that, unlike EB-GEC, token-based and BERT-based retrievals do not directly use the representations in the GEC model; in other words, these baselines perform the task of choosing examples independently of the GEC model.",
"In contrast, EB-GEC uses examples directly for generating an output.",
"EB-GEC was expected to provide examples more related to GEC input/output sentences than the baseline methods.",
"Token-based Retrieval.",
"This baseline method retrieves examples from the training data where the corrections of the EB-GEC output match the corrections in the target sentence of the training data.",
"This is a similar method to the example search performed using surface matching (Matsubara et al., 2008; Yen et al., 2015).",
"If multiple sentences are found with matching tokens, an example is selected at random.",
"If the tokens do not match, this method cannot present any examples.",
"BERT-based Retrieval.",
"This baseline method uses BERT 3 (Devlin et al., 2019) to retrieve examples, considering the context of both the corrected sentence and example from the datastore.",
"This method corresponds to one based on context-aware example retrieval (Arai et al., 2020).",
"In order to retrieve examples using BERT, we create a datastore, ( KBERT , VBERT ) = { ( e ( y i ) , ( y i , x, y )) | y i y, ( x, y ) ( X , Y ) } .",
"Here e ( y i ) is the hidden state of the last layer of BERT for the token y i when the sentence y is given without masking.",
"This method uses e ( y i ) as a query for the model output sentence to then search the datastore for k nearest neighbors.",
"The input and output sentences of the GEC model and the examples from the baselines and EB-GEC were presented to the annotators with anonymized system names.",
"Annotators then decided whether the examples helped to interpret the GEC output or not, or whether they aided understanding of grammar and vocabulary.",
"The example 3 https://huggingface.co/ bert-base-cased 7179 Method Human evaluation score Token-based retrieval 28.8 BERT-based retrieval 52.4 EB-GEC 68.8 , Table 1: Results of the human evaluation of the usefulness of Token-based retrieval, BERT-based retrieval and EB-GEC examples.",
"sentence pair was labeled as 1 if it was useful for decision-making or understanding the correc-tion and 0 otherwise.",
"We then computed scores for Token-based retrieval, BERT-based retrieval, and EB-GEC models by counting the number of sentences labeled with 1.",
"We confirm whether corrections with examples were more beneficial for learners than those without, and whether EB-GEC could present more valuable examples than those from the baselines.",
"Since it is not always the case that only corrected parts are helpful for learners (Matsubara et al., 2008; Yen et al., 2015), the uncorrected parts were also considered during annotation.",
"We manually evaluated 990 examples provided by the three methods for 330 ungrammatical and grammatical sentence pairs randomly sampled from the W&I-test, CoNLL2014, FCE-test, and JFLEG-test.",
"The human evaluation was performed by two annotators with CEFR 4 proficiency level B and one annotator with level C 5 .",
"All three annotators evaluated different examples.",
"Human Evaluation of Examples.",
"Table 1 shows the results of human evaluation of Token-based retrieval, BERT-based retrieval, and EB-GEC models.",
"The percentage of useful examples has increased significantly for EB-GEC compared to token-based and BERT-based retrieval baselines.",
"The percentage of useful examples from EB-GEC is greater than 50, which indicates that presenting examples is more useful than providing none.",
"The annotators are not authors of this paper; annotators with middle and high proficiency levels were selected so that they could understand the errors/corrections and judge whether a presented example is necessary or unnecessary.",
"Therefore, this study does not focus on whether annotators with lower proficiency levels find it helpful to see examples without explanation.",
"This result is non-trivial: the percentage for token-based retrieval is only 28.8, indicating that most of its presented examples were not useful.",
"Therefore, the examples for interpretability in EB-GEC support language learners' understanding and acceptance of the model output.",
"GEC Accuracy.",
"We examined the impact that using examples in prediction has on GEC accuracy.",
"Table 2 shows the scores of the vanilla GEC and EB-GEC for the W&I, CoNLL2014, FCE, and JFLEG test data.",
"The accuracy of EB-GEC is slightly lower for JFLEG but outperforms the vanilla GEC for W&I, CoNLL2014, and FCE.",
"This indicates that the use of examples contributes to improving GEC model accuracy.",
"We analyzed the relationship between the interpolation coefficient $\lambda$ (in Equation (1)) and GEC accuracy.",
"A smaller value may reduce the interpretability as examples are not considered in prediction.",
"In contrast, a larger value may reduce robustness: the model must then generate corrections relying more heavily on $k$NN examples, and relevant examples may not be present in the datastore for some inputs.",
"[Figure 4: Matching percentage of edits and error types in model outputs and examples, for W&I, CoNLL2014, FCE, and JFLEG.]",
"Figure 3 shows the GEC accuracy on each development set when $\lambda$ is changed from 0 to 1 in increments of 0.25.",
"We found that when $\lambda$ was set to 1, the accuracy on all development datasets was lower than when $\lambda$ was set to 0.50 or less.",
"The highest accuracy was obtained for $\lambda = 0.5$, which weights the vanilla output distribution and the example-based output distribution equally.",
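For concreteness, a minimal sketch of this interpolation; our reading of Equation (1), with the argument order and function name being our assumptions:

```python
import torch

def interpolate(p_vanilla: torch.Tensor, p_knn: torch.Tensor, lam: float = 0.5):
    """Final output distribution mixing the vanilla GEC distribution with the
    kNN example distribution; lam = 0 ignores examples, lam = 1 uses only them."""
    return (1.0 - lam) * p_vanilla + lam * p_knn

# At lam = 0.5 both distributions are weighted equally, matching the best
# development-set setting reported above.
p_final = interpolate(torch.tensor([0.7, 0.3]), torch.tensor([0.2, 0.8]), 0.5)
```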
"In Section 1, we hypothesized that similar error-correcting examples are closely clustered in the representation space.",
"Therefore, we investigated the agreement between the GEC output and the examples for edits and error types.",
"We extracted edits and their error types, which were automatically assigned by ERRANT (Felice et al., 2016; Bryant et al., 2017) for incorrect/correct sentence pairs.",
"For example, for the GEC input/output pair 'They have ∅/a tremendous problem .', the example pair is 'This has ∅/a tremendous problem .', its edit is ∅/a, and the error type is determiner error (DET).",
"We calculated the matching percentage of the edits and error types for EB-GEC outputs and for the examples retrieved using EB-GEC to show their similarity.",
"In addition, we used token-based and BERT-based retrieval as comparison methods for obtaining examples relevant to EB-GEC outputs.",
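A minimal sketch of this agreement computation with ERRANT; errant.load/parse/annotate follow the ERRANT toolkit's documented interface, while the helper names and toy sentence pairs are ours:

```python
# Compute whether a GEC output and a retrieved example share an edit and/or
# an error type, as used for the matching percentages in Figure 4.
import errant

annotator = errant.load("en")

def edits_of(src: str, tgt: str):
    """(original span, corrected span, error type) triples for a sentence pair."""
    orig, cor = annotator.parse(src), annotator.parse(tgt)
    return {(e.o_str, e.c_str, e.type) for e in annotator.annotate(orig, cor)}

def agreement(output_pair, example_pair):
    out, ex = edits_of(*output_pair), edits_of(*example_pair)
    edit_match = bool(out & ex)                                    # same edit
    type_match = bool({t for *_, t in out} & {t for *_, t in ex})  # same type
    return edit_match, type_match

print(agreement(("They have tremendous problem .", "They have a tremendous problem ."),
                ("This has tremendous problem .", "This has a tremendous problem .")))
```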
"Figure 4 shows the matching percentage of edits and error types between the GEC outputs and the $k$-nearest-neighbor examples.",
"First, we see that EB-GEC has the highest percentage for all test data.",
"This indicates that of the methods tested, EB-GEC retrieves the most relevant examples.",
"This trend is consistent with the human evaluation results.",
"Furthermore, we see that EB-GEC has a lower percentage on JFLEG compared to those on W&I, CoNLL2014, and FCE.",
"This corroborates the results of Table 2, which suggests that the accuracy of GEC improved further when examples more relevant to the corrections could be retrieved.",
"We analyzed the accuracy of EB-GEC for different error types to investigate the effect of error type on EB-GEC performance.",
"We used ERRANT to evaluate the accuracy of EB-GEC for each error type on the FCE-test.",
"Table 3 shows three error types selected as having the most significant increase and decrease in accuracy for EB-GEC compared to the vanilla GEC.",
"The three error types with the largest increases were preposition error (PREP; e.g., 'I think we should book at/∅ the Palace Hotel .'), punctuation error (PUNCT; e.g., 'Yours ./∅ sincerely ,'), and article error (DET; e.g., 'That should complete that/an amazing day .').",
"The three error types with the largest decreases were adjective conjugation error (ADJ:FORM; e.g., 'I was very please/pleased to receive your letter .'), adjective error (ADJ; e.g., 'The adjoining restaurant is very enjoyable/good as well .'), and spelling error (SPELL; e.g., 'Pusan Castle is locted/located in the South of Pusan .').",
"We draw the following findings from these results.",
"Error types with the largest increases in accuracy involve a limited set of tokens in their edits, compared with error types with the largest decreases in accuracy (namely, error types involving adjectives and nouns).",
"Furthermore, these error types are the most frequent errors in the datastore (excluding the unclassified error type annotated as OTHER), so the datastore sufficiently covers such edits.",
"[Table 4: Error-correction pairs presented by each method with human labels; e.g., PREP: 'You will be able to buy them in/at ∅/a reasonable price .']",
"Contrary to the error types with improved accuracy, ADJ and SPELL involve a considerable number of tokens in their edits, and they are not easy to cover sufficiently in a datastore.",
"Moreover, ADJ:FORM is the second least frequently occurring error type in the datastore, and we believe such examples cannot be covered sufficiently.",
"These results show that EB-GEC improves accuracy on error types that examples can easily cover: few word types are used in their edits, and they are well represented in the datastore.",
"Conversely, accuracy deteriorates on error types that are difficult to cover, i.e., those with many word types in their edits or with few occurrences in the datastore.",
"We investigated the characteristics of the EB-GEC examples by comparing specific examples for each error type with those from token-based and BERT-based retrieval.",
"Table 4 shows examples of Token-based retrieval, BERT-based retrieval and EB-GEC for the top three error types ( PREP , PUNCT and DET ) with accuracy improvement in EB-GEC.",
"Token-based retrieval presented examples in which the tokens in the edits are consistent, including in/at, ∅/, (comma insertion), and ∅/a.",
"However, it uses only surface information and does not consider context, so these unrelated examples are not useful for language learners.",
"BERT-based retrieval presented the same examples as EB-GEC for PREP and PUNCT error types, and the label for human evaluation was also 1.",
"However, the last example is influenced by the context rather than the correction and so presents an irrelevant example, labeled 0 by human evaluation.",
"This indicates that BERT-based retrieval overly focuses on context, resulting in examples related to the overall output but unrelated to the edits.",
"Conversely, EB-GEC is able to present examples in which the editing pair tokens are consistent for all corrections.",
"Furthermore, the contexts were similar to those of the input/output, for example 'purchase them in/at reasonable price/prices', 'for/For example ∅/,', and 'it takes ∅/a long time to', and all the examples were labeled 1 during human evaluation.",
"This demonstrates that EB-GEC retrieves the most related examples that are helpful for users.",
"Example search systems support language learners by retrieving relevant examples.",
"Before neural-based models, examples were retrieved and presented by surface matching (Matsubara et al., 2008; Yen et al., 2015).",
"Arai et al. (2019, 2020) proposed to combine Grammatical Error Detection (GED) and example retrieval to present both grammatically incorrect and correct examples of essays written by Japanese language learners.",
"This study showed that essay quality was improved by providing examples.",
"Their method is similar to EB-GEC in that it presents both correct and incorrect examples, but it incorporates example search into GED rather than GEC.",
"Furthermore, the example search systems search for examples independently of the model.",
"Contrastingly, EB-GEC presents more related examples as shown in Section 3.4.",
"Cheng and Nagase (2012) developed a Japanese example-based system that retrieves examples using dependency structures and proofread texts.",
"Proofreading is a task similar to GEC because it also involves correcting grammatical errors.",
"However, this method also does not focus on using examples to improve interpretability.",
"There is a feedback comment generation task (Nagata, 2019) that generates useful hints and explanations for grammatical errors and unnatural expressions in writing education.",
"Nagata et al. (2020) used a grammatical error detection model (Kaneko et al., 2017; Kaneko and Komachi, 2019) and neural retrieval-based method for prepositional errors.",
"The motivation of this study was similar to ours, that is, to help language learners understand grammatical errors and unnatural expressions in an interpretable way.",
"On the other hand, EB-GEC supports language learners using examples from the GEC model rather than using feedback.",
"Various previous studies have used neural network models to retrieve words, phrases, and sentences for use in prediction.",
"Nagao (1984) proposed an example-based MT to translate sequences by analogy.",
"This method has been extended to a variety of other methods for MT (Sumita and Iida, 1991; Doi et al., 2005; Van Den Bosch, 2007; Stroppa et al., 2007; Van Gompel et al., 2009; Haque et al., 2009).",
"In addition, the example-based method has been used for summarization (Makino and Yamamoto, 2008) and paraphrasing (Ohtake and Yamamoto, 2003).",
"These studies predate the general use of neural networks, and their examples were not used to address the black-box nature of neural models, as is done in this study.",
"In neural network models, methods using examples have been proposed to improve accuracy and interpretability during inference.",
"Gu et al. (2018) proposed a model that, during inference, retrieves parallel sentences similar to the input sentence and generates translations informed by the retrieved parallel sentences.",
"Zhang et al. (2018) proposed a method that, during inference, retrieves parallel sentences where the source sentences are similar to the input sentences and weights the output containing n -grams of the retrieved sentence pairs based on the similarity between the input sentence and the retrieved source sentence.",
"These methods differ from EB-GEC, which uses $k$NN-MT, in that they retrieve examples via surface matching, as done in the token-based retrieval baseline; moreover, they do not focus on the interpretability of the model.",
"Several methods have been proposed to retrieve examples using neural model representations and consider them for prediction.",
"Khandelwal et al. (2020, 2021) proposed retrieving nearest-neighbor examples of pre-trained hidden states during inference and interpolating the output distributions of language models and machine translation models with the distributions induced by these examples.",
"Lewis et al. (2020) combined a pre-trained retriever with a pre-trained encoder-decoder model and fine-tuned it end-to-end.",
"For the input query, they found the top-$k$ documents and used them as a latent variable for the final prediction.",
"Guu et al. (2020) first conducted an unsupervised joint pre-training of the knowledge retriever and knowledge-augmented encoder for the language modeling task, then fine-tuned it using a task of primary interest, with supervised examples.",
"The main purpose of these methods was to improve the accuracy using examples, and whether the examples were helpful for the users was not verified.",
"Conversely, our study showed that examples for the interpretability in GEC could be helpful for real users.",
"We introduced EB-GEC to improve the interpretability of corrections by presenting examples to language learners.",
"The human evaluation showed that the examples presented by EB-GEC supported language learners' decision to accept corrections and improved their understanding of the correction results.",
"Although existing interpretability methods using examples have not verified whether examples are helpful for humans, this study demonstrated that examples were helpful for learners using GEC.",
"In addition, the results of the GEC benchmark showed that EB-GEC could predict corrections more accurately or comparably to its vanilla counterpart.",
"Future work would include investigations of whether example presentation is beneficial for learners with low language proficiency.",
"In addition, we plan to improve the datastore coverage by using pseudo-data (Xie et al., 2018) and weight low frequency error types to present diverse examples.",
"We will also explore whether methods for improving accuracy and diversity (Chollampatt and Ng, 2018; Kaneko et al., 2019; Hotate et al., 2019, 2020) are effective for EB-GEC.",
"This paper is based on results obtained from a project, JPNP18002, commissioned by the New Energy and Industrial Technology Development Organization (NEDO).",
"We thank Yukiko Konishi, Yuko Unzai, and Naoko Furuya for their help with our experiments."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"other",
"abstain",
"other",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"result",
"abstain",
"objective",
"abstain",
"abstain",
"result",
"abstain",
"other",
"other"
] |
[
"Multi-hop question generation focuses on generating complex questions that require reasoning over multiple pieces of information of the input passage.",
"Current models with state-of-the-art performance have been able to generate the correct questions corresponding to the answers.",
"However, most models cannot ensure the complexity of generated questions, so they may generate shallow questions that can be answered without multi-hop reasoning.",
"To address this challenge, we propose the CQG, which is a simple and effective controlled framework.",
"CQG employs a simple method to generate the multi-hop questions that contain key entities in multi-hop reasoning chains, which ensure the complexity and quality of the questions.",
"In addition, we introduce a novel controlled Transformer-based decoder to guarantee that key entities appear in the questions.",
"Experiment results show that our model greatly improves performance and outperforms the state-of-the-art model by about 25% (5 BLEU points) on HotpotQA.",
"Question generation (QG) aims to endow machines with the ability to ask relevant and to-the-point questions about a document.",
"QG plays a vital role in question answering (QA), dialogue systems, and automated tutoring applications: by enriching the training QA corpora (Tang et al., 2017; Yuan et al., 2017), helping chatbots start conversations with intriguing questions (Mostafazadeh et al., 2016), and automatically generating assessment questions (Heilman and Smith, 2010), respectively.",
"Most prior research on QG has focused on shallow factoid-based questions, where the question can be answered simply by extracting a span of text from a single input document (Zhou et al., 2018; Zhao et al., 2018; Kim et al., 2019; Fei et al., 2021).",
"Our code and models are publicly available at https://github.com/sion-zcfei/CQG.",
"[Figure 1: An example where an uncontrolled question generation model generates a correct but shallow question; the model ignores the important entity 'Western European' in paragraph A and generates a shallow question without a multi-hop reasoning chain.]",
"Recently, motivated by building the NLP systems that are capable of understanding and reasoning (Kaushik and Lipton, 2018; Sinha et al., 2019), there is an increasing interest in developing systems that are capable of more complex multi-hop question generation, where answering the questions requires reasoning over multiple documents (Pan et al., 2020; Sachan et al., 2020; Xie et al., 2020; Yu et al., 2020; Su et al., 2020).",
"Compared with shallow QG, there are two challenges for multi-hop QG (MQG).",
"At first, generating multi-hop questions requires the model to understand the relationship between disjointed pieces of information in multiple context documents (Sachan et al., 2020).",
"Secondly, multi-hop questions must have complex chains connecting the mentioned entities, which ensure the complexity of multi-hop questions; as such, multi-hop questions are also called deep questions (Pan et al., 2020).",
"To address the first challenge, existing research on MQG relies on the Graph-to-Sequence (G2S) architecture (Pan et al., 2020; Su et al., 2020; Yu et al., 2020).",
"These methods construct a semantic-level graph or entity graph to capture the information among multiple context documents using a graph neural network (GNN) and then feed it to the decoder.",
"However, these models cannot handle the second challenge because they cannot ensure the complexity of generated questions; thus, they may generate shallow questions that can be answered without multi-hop reasoning chains.",
"We show an example in Figure 1, where the uncontrolled model generates a shallow question that can be answered by a single sentence but ignores the other sentences and entities.",
"To solve this issue, we propose the CQG, a simple and effective controlled framework.",
"De Cao et al. (2018) and Qiu et al. (2019) claim that reasoning chains can be captured by propagating information along the edges of an entity graph using a GNN.",
"Motivated by this, we construct the entity graph from the input documents first and then employ the Graph Attention Network (GAT) to extract the key entities that appear in multihop reasoning chains.",
"Intuitively, all these key entities should appear in the generated questions to ensure the generated questions have complex and complete reasoning chains.",
"We introduce the flag tag (Wang et al., 2021), a lexical constraint for generation at each decoding step, which will assist the controlled generation.",
"In detail, in decoding progressing, each input token is provided a flag tag that indicates whether the constraint of this token has been satisfied.",
"Three possible flag tags exist for each token: 'is not a constraint', 'does not appear in the question', and 'appears in the question'.",
"As shown in Figure 2, the flag tag of 'six' updates to 'appears in the question' at the fourth step because 'six' is generated at this step.",
"We represent the three flag tags by training the embedding and injecting them into the Transformer generator.",
"The flag tag can explicitly inform the generator to satisfy as many constraints as possible.",
"In the training stage, when generation stops, the flag tag of each token is either 'not a constraint' or 'satisfied'.",
"This is a strong signal for the model to try to satisfy all constraints.",
"We conduct experiments on HotpotQA (Yang et al., 2018): a challenging dataset in which the questions are generated by reasoning over text from separate Wikipedia pages.",
"Results show that our model greatly improves performance; it outperforms the state of the art by about 25% (5 BLEU points; Papineni et al., 2002).",
"We propose a simple and effective controlled generation framework for MQG; to the best of our knowledge, we are the first to provide a method that ensures the complexity of generated questions and the first to introduce controlled generation methods to MQG.",
"Experiment results show that our model greatly improves performance and outperforms the state of the art by about 25% (5 BLEU points).",
"Early works on QG (Mostow and Chen, 2009; Heilman and Smith, 2010) focus on the rule-based approaches that rely on heuristic rules or handcrafted templates, with low generalizability and scalability.",
"Recent works adopt the attention-based sequence-to-sequence neural model for QG tasks, taking sentences with the answer as input and outputting the question (Du et al., 2017), which proved to work better than the rule-based methods.",
"Zhou et al. (2018) propose a feature-enriched encoder to encode the input sentence.",
"To generate a question for a given answer, Sun et al. (2018), Kim et al. (2019), and Song et al. (2018) apply various techniques to encode answer location information into an annotation vector corresponding to the word positions, thus allowing for better-quality answer-focused questions.",
"Chen et al. (2020) present a syntactic feature-based method to represent words in a document and to decide which words to focus on while generating the question.",
"[Figure 2: Overview architecture of the CQG model.]",
"Furthermore, recent concurrent works apply the large-scale language model pre-training strategy for QG to achieve new state-of-the-art performance (Chan and Fan, 2020).",
"Most prior research on QG has focused on shallow factoid-based questions, where answering the question simply by extracting the span of the text from a single input document.",
"Recently, there has been increasing interest in MQG: to capture the complex information among different input documents, Pan et al. (2020) and Su et al. (2020) employ GNN-based encoders over a semantic graph and an entity graph respectively, and Sachan et al. (2020) use strong Transformer-based graph models.",
"However, none of these methods can ensure the complexity of generated questions, so the generated questions may degenerate into shallow questions.",
"Two different types of control can be applied over generation models: soft control and hard control.",
"Soft control aims at directing the option or the general topic of the generated text.",
"In contrast, hard control aims at ensuring that some explicit constraints are met, e.g., specific words are contained in the text.",
"The soft control can also be achieved via hard control, i.e., text that contains a set of words related to a certain topic should arguably revolve around that topic.",
"Some recent works employ soft control on unconstrained language generation by training or fine-tuning language models (Ziegler et al., 2019; Keskar et al., 2019).",
"Hard control of constrained generation, such as machine translation, can be attained with grid beam search methods (Hu et al., 2019; Post and Vilar, 2018), but it is impractical to use the same approach for hard control of unconstrained generation.",
"Methods such as grid beam search rely on the assumption that there exists a core set of plausible candidates fulfilling the desired criteria, which is often not the case for open-ended generation tasks.",
"Recent work on stochastic search (Sha, 2020) has approached this problem by performing bidirectional search during generation and editing the text until the constraints are fulfilled.",
"Although stochastic search is suitable for bidirectional RNN models, it is not yet clear if it can be applied to forward generation models, e.g., transformer-based models.",
"In this section, we formalize the multi-hop question generation (MQG) task and introduce our CQG.",
"In particular, we first describe our Graph Attention Network (GAT) based key entities extractor.",
"Following this, we describe the flag tag, and finally we introduce our novel controlled Transformer-based generator with the flag tag.",
"The input to the MQG task is a set of context documents $C = \{d_1, \ldots, d_k\}$, where $k$ is the number of documents, and an answer $A = [a_1, \ldots, a_m]$, where $m$ is the length of the answer.",
"These documents can be long, containing multiple sentences: $d_i = [s_1, \ldots, s_n]$, where each $s_j = [w_{j1}, \ldots, w_{jt}]$ is a sequence of tokens, and $n$ and $t$ are the number of sentences and the length of a sentence, respectively.",
"The goal of MQG is to generate a question $y = [y_1, \ldots, y_t]$ conditioned on the context and the answer, where answering this question requires reasoning about content in more than one of the context documents.",
"According to existing research in multi-hop QA (De Cao et al., 2018), the reasoning chains can be captured by propagating local contextual information along edges in entity graph using a GNN.",
"Motivated by this, we construct the entity graph from the input documents first and then employ the Graph Attention Network (GAT) (Velickovic et al., 2017).",
"We follow Qiu et al. (2019) to construct the entity graph, and we use the Stanford CoreNLP toolkit (Manning et al., 2014) to recognize named entities in the context $C$.",
"The entity graph is constructed with the entities as nodes and edges built as follows.",
"The edges are added:",
"1. for every pair of entities that appear in the same sentence in C (sentence-level links);",
"2. for every pair of entities with the same mentioned text in C (context-level links);",
"3. between a central entity node and other entities within the same paragraph (paragraph-level links).",
"The central entities are extracted from the title sentence for each paragraph.",
"We do not apply co-reference resolution for pronouns because it introduces both additional useful and erroneous links.",
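A minimal sketch of this construction using networkx; entities are assumed pre-extracted (e.g., with Stanford CoreNLP), and the per-paragraph data layout and helper names are our assumptions:

```python
# Build the entity graph with the three edge types described above.
import itertools
import networkx as nx

def build_entity_graph(paragraphs):
    """paragraphs: list of dicts with a 'central' entity (from the title
    sentence) and 'sentences', each a list of entity mention strings."""
    g = nx.Graph()
    mentions = []  # (node_id, text) pairs, used for context-level links
    for p_id, para in enumerate(paragraphs):
        para_nodes = []
        for s_id, sent_entities in enumerate(para["sentences"]):
            ids = [f"{p_id}.{s_id}.{text}" for text in sent_entities]
            for node, text in zip(ids, sent_entities):
                g.add_node(node, text=text)
                mentions.append((node, text))
            para_nodes.extend(ids)
            # 1. sentence-level links: every entity pair in the same sentence
            g.add_edges_from(itertools.combinations(ids, 2))
        # 3. paragraph-level links: central entity <-> entities in paragraph
        central = f"{p_id}.central.{para['central']}"
        g.add_node(central, text=para["central"])
        g.add_edges_from((central, n) for n in para_nodes)
    # 2. context-level links: same mention text anywhere in the context
    for (n1, t1), (n2, t2) in itertools.combinations(mentions, 2):
        if t1 == t2:
            g.add_edge(n1, n2)
    return g
```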
"We concatenate the answer $A$ with the context $C$ and pass the resulting sequence to a pre-trained BERT model to obtain representations $H = [h_1, h_2, \ldots, h_M]$, where $M$ is the combined length of the context and answer.",
"For each entity $e_i$, we obtain its representation by max pooling over the corresponding hidden states and use it as the node embedding $E_i$ in the entity graph: $e_i = [w_l, w_{l+1}, \ldots, w_r]$ (1), $E_i^0 = \mathrm{MaxPooling}(h_l, h_{l+1}, \ldots, h_r)$ (2). The next step is to aggregate the information in the entity graph; here, we use a GAT to compute the multi-head attention score between two entity nodes: $\alpha_{ij} = \frac{\exp(\sigma(W[h_i : h_j]))}{\sum_{k \in N_i} \exp(\sigma(W[h_i : h_k]))}$ (3), $\sigma(x) = \mathrm{LeakyReLU}(x)$ (4), where $W$ is a trainable matrix and $N_i$ is the set of neighbors of entity $i$.",
"We aggregate the information by multi-head attention at each step: $h_i^{t+1} = \big\Vert_{k=1}^{K} \sigma\big(\sum_{j \in N_i} \alpha_{ij}^{k} W^{k} h_j\big)$ (5), where $\Vert$ is the concatenation operation, $W^k$ is the trainable weight matrix of the $k$-th head, and all nodes share the same parameters $W^k$.",
"[Figure 3: An example of a flag tag update.]",
"Then we obtain the updated node embeddings $E^{t+1} = [h_1^{t+1}, h_2^{t+1}, \ldots, h_n^{t+1}]$.",
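A minimal single-layer PyTorch sketch of Equations (3)-(5); adjacency-matrix masking stands in for the explicit neighbor sets $N_i$, and the hyper-parameters are illustrative, not the authors' exact implementation:

```python
import torch
import torch.nn as nn

class GATLayer(nn.Module):
    """One GAT hop over the entity graph (Equations (3)-(5))."""
    def __init__(self, dim: int, heads: int = 8):
        super().__init__()
        assert dim % heads == 0
        self.w_attn = nn.ModuleList(nn.Linear(2 * dim, 1) for _ in range(heads))
        self.w_val = nn.ModuleList(nn.Linear(dim, dim // heads, bias=False)
                                   for _ in range(heads))
        self.act = nn.LeakyReLU(0.2)

    def forward(self, h: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        """h: (n, dim) node embeddings; adj: (n, n) 0/1 adjacency with self-loops."""
        n = h.size(0)
        pairs = torch.cat([h.unsqueeze(1).expand(n, n, -1),
                           h.unsqueeze(0).expand(n, n, -1)], dim=-1)  # [h_i : h_j]
        heads_out = []
        for w_a, w_v in zip(self.w_attn, self.w_val):
            scores = self.act(w_a(pairs)).squeeze(-1)            # Eqs. (3)-(4)
            scores = scores.masked_fill(adj == 0, float("-inf"))
            alpha = torch.softmax(scores, dim=-1)                # normalize over N_i
            heads_out.append(self.act(alpha @ w_v(h)))           # Eq. (5), one head
        return torch.cat(heads_out, dim=-1)                      # concat K heads
```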
"To generate a multi-hop question, we need to select the key entities in complex multi-hop reasoning chains.",
"We formulate this as a node classification task, i.e., deciding whether each node should be involved in the process of asking, i.e., appearing in the reasoning chain for raising a multi-hop question, as exemplified by Figure 2.",
"To this end, we add one feed-forward layer on top of the final layer of the graph encoder, taking the output node representations $E^T$ for classification.",
"We deem a node a positive ground-truth instance for training the key-entity extraction task if its content appears in the ground-truth question, and we optimize the task with a cross-entropy loss.",
"In order to ensure the complexity of the generated question, the generated question must contain the key entities extracted from the entity graph.",
"To this end, we need a controlled generator $G(Y \mid X)$, where $X$ is the sequence of input passage tokens and some tokens $x_i$ correspond to lexical constraints that must be satisfied in the generated output.",
"We first describe the flag tag: at decoding step $t$, the flag tag indicates whether each lexical constraint has been satisfied up to this step.",
"Notably, the flag tag for each token at step $t$ is: $\mathrm{flag}_i^t = 0$ if $x_i$ is not a constraint; $1$ if $x_i$ does not appear in $y_{1:t}$; $2$ if $x_i$ appears in $y_{1:t}$; where $\mathrm{flag}_i^t$ is the flag tag of the $i$-th input token at decoding step $t$, and $y_{1:t}$ are the tokens generated so far.",
"Tokens with flag values 1 or 2 are lexical constraints, and tokens with value 0 are not constrained to appear in the question.",
"Obviously, the flag tag of any token can only remain unchanged or be updated to value 2.",
"As shown in Figure 3, for input tokens X = [The, six, Celtic, nations, Western, Europe], the flag tags at the beginning are flag$^0$ = [0,1,0,0,1,1], because no tokens are constrained except 'six', 'Western', and 'Europe'.",
"At step 4, the flags update to [0,2,0,0,1,1] because the token 'six' has been generated but 'Western' and 'Europe' have not.",
"During training, all constraints have been satisfied before generation stops.",
"This is a strong signal for the model to satisfy all the constraints.",
"In addition, the flag tag is simple: it only adds an embedding with three entries.",
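A minimal sketch of this update rule (the helper names are ours); it reproduces the corrected example above:

```python
# Flag tags: 0 = not a constraint, 1 = constraint not yet generated,
# 2 = constraint already generated.
from typing import List, Set

def init_flags(tokens: List[str], constraints: Set[str]) -> List[int]:
    return [1 if tok in constraints else 0 for tok in tokens]

def update_flags(flags: List[int], tokens: List[str], generated: str) -> List[int]:
    """After one decoding step, mark newly satisfied constraints as 2."""
    return [2 if f == 1 and tok == generated else f
            for f, tok in zip(flags, tokens)]

tokens = ["The", "six", "Celtic", "nations", "Western", "Europe"]
flags = init_flags(tokens, {"six", "Western", "Europe"})  # [0,1,0,0,1,1]
flags = update_flags(flags, tokens, "six")                 # [0,2,0,0,1,1]
```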
"To utilize the rich information in flag tag, we employ a Transformer-based decoder as a generator to incorporate it and construct a simple controlled generation framework.",
"We inject the flag tag into the embedding vector and use this embedding as the relative position embedding to bridge the decoder and the encoder.",
"In particular, at decoding step t , we incorporate the flag tag embedding by cross-attention in decoder.",
"The conventional cross-attention module is computed as: $\mathrm{Cross}(Q, K, V) = \mathrm{softmax}(\frac{QK^{\top}}{\sqrt{d_k}})V$ (6), where $Q$ is the decoder states, $K$ and $V$ are encoder states, and $d_k$ is the dimension of the $K$ vectors.",
"We introduce the flag tag at step $t$, $F^t \in \mathbb{R}^{3 \times len_P}$ where $len_P$ is the length of the input passage, to the Transformer decoder as a relative position embedding, and compute the cross attention at step $t$ as follows: $\alpha_{cross}^t = \mathrm{softmax}(E^t)$ (7), $E^t = \frac{Q^t(K + R^t)^{\top}}{\sqrt{d}}$ (8), $R^t = \mathrm{Embedding}(F^t)$ (9), where $Q^t$ is the decoder states at step $t$ and $K$ is the encoder outputs.",
"The output of the cross-attention module is then: $\mathrm{Cross}(Q^t, K, V, F^t) = \alpha_{cross}^t V$ (10), where $V$ is the encoder outputs.",
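A single-head PyTorch sketch of Equations (6)-(10), adding the flag-tag embedding $R^t$ to the keys; batching and multi-head details are omitted, and the module layout is our assumption:

```python
import torch
import torch.nn as nn

class FlagCrossAttention(nn.Module):
    """Cross-attention with the flag-tag embedding added to the keys."""
    def __init__(self, dim: int):
        super().__init__()
        self.flag_emb = nn.Embedding(3, dim)  # one vector per flag value 0/1/2
        self.scale = dim ** 0.5

    def forward(self, q, k, v, flags):
        """q: (tgt_len, dim) decoder states Q^t; k, v: (src_len, dim) encoder
        outputs; flags: (src_len,) long tensor of current flag tags F^t."""
        r = self.flag_emb(flags)                      # R^t = Embedding(F^t), Eq. (9)
        e = q @ (k + r).transpose(0, 1) / self.scale  # E^t, Eq. (8)
        alpha = torch.softmax(e, dim=-1)              # Eq. (7)
        return alpha @ v                              # Eq. (10)
```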
"To evaluate the model's ability to generate multihop questions, we conduct experiments on HotpotQA (Yang et al., 2018), which contains about 100,000 crowd-sourced questions that require reasoning over separate Wikipedia articles.",
"Each question has two supporting documents that contain the necessary evidence to infer the answer.",
"In this paper, we take the fact supporting sentences with the answer as inputs to generate the multihop questions.",
"We follow the split of the original dataset, with 90,447 and 7,405 examples for training and development, respectively.",
"Because the test set is not publicly available, we use the original development set as the test set and extract 500 samples from the training set as the development set.",
"Overall, we use 89,947/500/7,405 samples as the training, development, and test sets, respectively.",
"Following the previous work, we employ BLEU (Papineni et al., 2002), ROUGE-L (Lin, 2004) and METEOR (Lavie and Agarwal, 2007) as automated evaluation metrics.",
"BLEU measures the average n-gram overlap on a set of reference sentences.",
"METEOR and ROUGE-L adapt BLEU's n-gram overlap idea for machine translation and text summarization evaluation, respectively.",
"We compare our proposed model against several strong baselines on question generation.",
"Seq2Seq + attn : (Bahdanau et al., 2014) the basic sequence-to-sequence (Seq2Seq) model with attention, which takes the document as input to decode the question.",
"NQG++ (Zhou et al., 2018): a Seq2Seq model with feature-enrich encoder.",
"s2sa-at-mp-gsa (Zhao et al., 2018): employs a gated attention encoder and a maxout pointer decoder to deal with long text inputs.",
"Semantic Graph (Pan et al., 2020): a graph-to-seq model for MQG, which constructs a semantic graph to capture the global information.",
"MuLQG (Su et al., 2020): a graph-to-seq model employs an encoder reasoning gate to capture the entity graph information.",
"IGND (Fei et al., 2021): a graph-to-seq model that introduces the copy tag and iterative graph-based decoder, it is the state-of-the-art model for shallow QG.",
"We construct the graph following (Pan et al., 2020) to match the HotpotQA dataset.",
"BART (Lewis et al., 2020): a strong pre-trained generation model that obtains state-of-the-art performance on shallow QG.",
"UniLM (Dong et al., 2019): Another strong pretraining generation model.",
"Strong Transformers (Sachan et al., 2020): the state-of-the-art model for MQG, which proposes a series of strong Transformer models for MQG.",
"We use the BERT base model loaded from the Hugging Face transformers library (https://huggingface.co/transformers).",
"The embedding size and head hidden size of the flag tag are 64.",
"The number of heads in BERT, transformer-based decoder and GAT attention is 8.",
"The number of GAT hops in the entity graph is 3.",
"As for entity extraction, if the number of key entities is more than 5, we use the top-5 entities with the highest probability.",
"We use the AdamW (Loshchilov and Hutter, 2017) as the optimizer and the learning rate is set to 2e-5.",
"We stop the training if the validation BLEU-4 score stops improving for 10 epochs.",
"We clip the gradient at length 10.",
"The batch size is 128 and the beam search width is 5.",
"All hyperparameters are tuned on the development set.",
"We implement all models in MindSpore.",
"Table 1 shows the experimental results of the HotpotQA dataset.",
"In terms of BLEU-4, regarded as the main evaluation metric for text generation, our model greatly improves performance; it outperforms the Strong Transformers by about 25% (5 BLEU points).",
"We achieve state-of-the-art results on HotpotQA.",
"Beyond BLEU-4, our CQG achieves the best performance and shows significant improvements on all metrics.",
"Metrics for automatic evaluation based on n-grams may not truly reflect the quality of generated questions.",
"Hence, we further randomly sample 300 examples in the test set for human evaluation.",
"Following Pan et al. (2020), we conduct human evaluations on 300 random test samples consisting of 100 short (<50 tokens), 100 medium (50-200 tokens), and 100 long (>200 tokens) documents.",
"[Table 2: Human evaluation results (fluency, relevance, complexity) on short, medium, and long contexts.]",
"We ask three workers to rate the 300 generated questions as well as the ground-truth questions between 1 (poor) and 5 (good) on three criteria: (1) fluency, which indicates whether the question follows the grammar and accords with the correct logic; (2) relevance, which indicates whether the question is answerable and relevant to the passage; (3) complexity, which indicates whether the question involves reasoning over multiple sentences from the document.",
"We average the scores from raters on each question and report the performance of UniLM, MuLQG, Semantic Graph, BART, and our CQG.",
"Workers were unaware of the identity of the models in advance.",
"We show the results in Table 2.",
"We can see that the pre-trained generation models perform much better than MulQG and Semantic Graph.",
"Our CQG model shows the best performance for all three criteria and all lengths of context.",
"Furthermore, CQG is outstanding in complexity, where the other models are weak, and this result shows that our model effectively addresses the complexity control issue of the MQG task.",
"To further evaluate and investigate the contributions of different components and strategies in our model, we perform an ablation study on the HotpotQA test set and show the results in Table 3.",
"CQG w/o entity graph: this model removes the entity graph and uses the BERT context embeddings to extract the entities, without changing the setting of the controlled generator.",
"CQG w/o controlled decoder: this model removes the controlled decoder and employs the standard Transformer model, where the BERT encoder encodes both the input passage and the key entities and feeds them into the decoder.",
"CQG w/o inference dynamical flag tag: this model does not update the flag tag in the inference stage, which means all the values of the flag tag at the last step are the same as those at the first step.",
"CQG w/o key entities + controlled decoder: this model removes the key-entity extractor and the controlled generator; it can be seen as a baseline model consisting of a BERT encoder and a Transformer decoder.",
"First of all, there is a huge gap between CQG and CQG w/o key entities + controlled decoder, which demonstrates that our controlled generation framework plays an important role.",
"Comparing between CQG and CQG w/o controlled decoder, we find that the controlled generator with the flag tag is the critical module in CQG.",
"Secondly, CQG is 0.97 BLEU points higher than CQG w/o entity graph.",
"This shows that the entity graph constructed from the input passage contains rich structural information among entities, which the GAT captures and which improves the performance of CQG.",
"Thirdly, although CQG w/o inference dynamical flag tag is worse than CQG, it is much higher than CQG w/o controlled decoder.",
"This phenomenon shows that the flag tag is a strong signal that prompts the model to satisfy as many constraints as possible in the training stage.",
"Although CQG w/o inference dynamical flag tag does not update the flag tag in the inference stage, the model still tries to generate the key entities, which improves performance.",
"CQG w/o controlled decoder removes the hard controlled generator and employs the soft controlled method, which encodes the key tokens and feeds them to the decoder.",
"CQG w/o controlled decoder is 0.98 points higher than CQG w/o key entities + controlled decoder, which shows that the soft-controlled method is effective but falls far short of the hard-controlled method in CQG.",
"We conduct some experiments to analyze the controlled generator in this section.",
"At first, we compare the key entity coverage percentage for different models.",
"In particular, we compute the coverage percentage of key entities appearing in the questions generated by different models, where we regard all entities that appear in the gold question as key entities.",
"This metric reflects the complexity of generated questions because the multi-hop reasoning chains are composed of these key entities.",
"As shown in Figure 5, the coverage of CQG is much higher than that of the other models, and this improvement comes from the controlled generator, as shown by the comparison between CQG and CQG w/o controlled decoder.",
"This result shows that our CQG improves the control of the model generation process.",
"We present a case study to show the control ability of our model and compare the strong baseline BART model, CQG and the gold.",
"The cases are presented in Table 4.",
"It clearly shows that the BART model generates a question involving only paragraph A, which is not a multi-hop question.",
"Table 4 (case study) — Paragraph A: Letters to Cleo. Letters to Cleo are an alternative rock band from Boston, Massachusetts, best known for the 1994 single \"Here & Now\", from their full-length debut album \"Aurora Gory Alice\".",
"The band's members are Kay Hanley, Greg McKenna, Michael Eisenstein, Stacy Jones, Scott Riebling, and later, Tom Polce.",
"Paragraph B: Screaming Trees Screaming Trees was an American rock band formed in Ellensburg, Washington in 1985 by vocalist Mark Lanegan, guitarist Gary Lee Conner, bass player Van Conner and drummer Mark Pickerel.",
"Pickerel had been replaced by Barrett Martin by the time the band reached its most successful period.",
"Answer: Letters to Cleo Gold Question: Which band, Letters to Cleo or Screaming Trees, had more members?",
"BART: Which band's members are Kay Hanley, Greg Mckenna, Michael Eisenstein, Stacy Jones, Scott Riebling, and Tom Polce ?",
"Key Entity: Letters to Cleo, Screaming Trees CQG: Which band has more members, Letters to Cleo or Screaming Trees?",
"Key Entity: Letters to Cleo, Kay Hanley CQG: Is Kay Hanley the member of Letters to Cleo's member ?",
"Key Entity: Boston CQG: Which rock band are from Boston ?",
"As for CQG, we provide three examples with different key entities (Table 4), and the questions generated by CQG contain the given key entities.",
"The given key entity can control the semantics of the generated question; we can see that the question in the first example, where the given key entities are the same as in the gold, has the same semantics as the gold question.",
"The examples demonstrate that our CQG can be controlled to generate the high-quality multi-hop question with the given key entity.",
"The MQG task is more challenging and worthy of exploration compared with conventional shallow QG.",
"To address the complexity control problem of MQG, we propose a simple controlled framework, CQG, which consists of a GAT-based key-entity extractor and a controlled generator.",
"CQG greatly improves performance, and we hope our model will help researchers study the MQG task.",
"The authors wish to thank the anonymous reviewers for their helpful comments.",
"This work was partially funded by National Natural Science Foundation of China (No. 61976056, 62076069), Shanghai Municipal Science and Technology Major Project (No.2021SHZDZX0103).",
"This research was supported by Meituan, Beijing Academy of Artificial Intelligence(BAAI), and CAAI-Huawei MindSpore Open Fund."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"result",
"abstain",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"result",
"objective",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"other",
"other",
"other"
] |
[
"Due to the great potential in facilitating software development, code generation has attracted increasing attention recently.",
"Generally, dominant models are Seq2Tree models, which convert the input natural language description into a sequence of tree-construction actions corresponding to the pre-order traversal of an Abstract Syntax Tree (AST).",
"However, such a traversal order may not be suitable for handling all multi-branch nodes.",
"In this paper, we propose to equip the Seq2Tree model with a context-based Branch Selector , which is able to dynamically determine optimal expansion orders of branches for multi-branch nodes.",
"Particularly, since the selection of expansion orders is a non-differentiable multi-step operation, we optimize the selector through reinforcement learning, and formulate the reward function as the difference of model losses obtained through different expansion orders.",
"Experimental results and in-depth analysis on several commonly-used datasets demonstrate the effectiveness and generality of our approach.",
"We have released our code at https://github.com/DeepLearnXMU/CG-RL .",
"Code generation aims at automatically generating a source code snippet given a natural language (NL) description, which has attracted increasing attention recently due to its potential value in simplifying programming.",
"Instead of modeling the abstract syntax tree (AST) of code snippets directly, most methods for code generation convert the AST into a sequence of tree-construction actions.",
"This allows for using natural language generation (NLG) models, such as the widely-used encoder-decoder models, and obtains great success (Ling et al., 2016; Dong and Lapata, 2016, 2018; Rabinovich et al., 2017; Yin and Neubig, 2017, 2018, 2019; Hayati et al., 2018; Sun et al., 2019, 2020; Wei et al., 2019; Shin et al., 2019; Xu et al., 2020; Xie et al., 2021).",
"Specifically, an encoder is first used to learn word-level semantic representations of the input NL description.",
"Then, a decoder outputs a sequence of tree-construction actions, with which the corresponding AST is generated through pre-order traversal.",
"Finally, the generated AST is mapped into surface codes via certain deterministic functions.",
"Generally, during generation, dominant Seq2Tree models based on pre-order traversal expand the branches of each multi-branch node in a left-to-right order.",
"Figure 1 gives an example of the NL-to-Code conversion conducted by a Seq2Tree model.",
"At timestep $t_1$, the model generates a multi-branch node using the action $a_1$, whose grammar contains three fields: type, name, and body.",
"Thus, during the subsequent generation process, the model expands the node of $t_1$ to sequentially generate several branches in a left-to-right order, corresponding to the three fields of $a_1$.",
"The left-to-right order is a conventional bias for most human-beings to handle multi-branch nodes, which, however, may not be optimal for expanding branches.",
"Alternatively, if we first expand the field name to generate a branch, which informs us of the name 'e', it will be easier to expand the field type with an 'Exception' branch, due to the high co-occurrence of 'e' and 'Exception'.",
"To verify this conjecture, we choose TRANX (Yin and Neubig, 2018) to construct a variant: TRANX-R2L, which conducts depth-first generation in a right-to-left manner, and then compare their performance on the DJANGO dataset.",
"We find that about 93.4% of ASTs contain multi-branch nodes, and 17.38% of AST nodes are multi-branch ones.",
"Table 1: Percentages of multi-branch nodes that can only be correctly handled by each model — Only TRANX: 8.47, Only TRANX-R2L: 7.66.",
"Table 1 reports the experimental results.",
"We can observe that 8.47% and 7.66% of multi-branch nodes can only be correctly handled by TRANX and TRANX-R2L, respectively.",
"Therefore, we conclude that different multi-branch nodes have different optimal branch expansion orders, which can be dynamically selected based on context to improve the performance of conventional Seq2Tree models.",
"In this paper, we explore dynamic selection of branch expansion orders for code generation.",
"Specifically, we propose to equip the conventional Seq2Tree model with a context-based Branch Selector , which dynamically quantifies the priorities of expanding different branches for multi-branch nodes during AST generations.",
"However, such a non-differentiable multi-step operation poses a challenge to the model training.",
"To deal with this issue, we apply reinforcement learning to train the extended Seq2Tree model.",
"Particularly, we augment the conventional training objective with a reward function, which is based on the model training loss between different expansion orders of branches.",
"In this way, the model is trained to determine optimal expansion orders of branches for multi-branch nodes, which will contribute to AST generations.",
"Through in-depth analysis, we point out that different orders of branch expansion are suitable for handling different multi-branch AST nodes, and thus dynamic selection of branch expansion orders has the potential to improve conventional Seq2Tree models.",
"We propose to incorporate a context-based Branch Selector into the conventional Seq2Tree model and then employ reinforcement learning to train the extended model.",
"To the best of our knowledge, our work is the first attempt to explore dynamic selection of branch expansion orders for code generation.",
"As shown in Figure 1, the procedure of code generation can be decomposed into three stages.",
"Based on the learned semantic representations of the input NL utterance, the dominant Seq2Tree model (Yin and Neubig, 2018) first outputs a sequence of abstract syntax description language (ASDL) grammar-based actions.",
"These actions can then be used to construct an AST following the preorder traversal.",
"Finally, the generated AST is mapped into surface code via a user-specified function AST_to_MR(·).",
"In the following subsections, we first describe the basic ASDL grammars of Seq2Tree models.",
"Then, we introduce the details of TRANX (Yin and Neubig, 2018), which is selected as our basic model due to its extensive applications and competitive performance (Yin and Neubig, 2019; Shin et al., 2019; Xu et al., 2020).",
"Please note that our approach is also applicable to other Seq2Tree models.",
"Formally, an ASDL grammar contains two components: type and constructors .",
"The value of type can be composite or primitive.",
"As shown in the 'Action Sequence' and 'AST' parts of Figure 1, a constructor specifies a language component of a particular type using its fields, e.g., ExceptHandler(expr? type, expr? name, stmt* body).",
"Each field specifies the type of its child node and contains a cardinality (single, optional '?', and sequential '*') indicating the number of child nodes it holds.",
"For instance, 'expr? name' denotes that the field name has an optional child node.",
"The field with composite type (e.g. expr ) can be instantiated by constructors of corresponding type, while the field with primitive type (e.g. identifier ) directly stores token.",
"There are three kinds of ASDL grammar-based actions that can be used to generate the action sequence: 1) APPLYCONSTR [ c ] .",
"Using this action, a constructor c is applied to the composite field of the parent node with the same type as c , expanding the field to generate a branch ending with an AST node.",
"Here we denote the field of the parent node as frontier field .",
"2) REDUCE .",
"It indicates the completion of generating branches for a field with optional or multiple cardinalities.",
"3) GENTOKEN [ v ] .",
"It expands a primitive frontier field to generate a token v .",
"Obviously, a constructor with multiple fields can produce multiple AST branches, whose generation order has an important effect on model performance, as previously mentioned. (We also note that a field with sequential cardinality will be expanded to multiple branches; however, in this work, we do not consider this scenario, which is left as future work.)",
"Similar to other NLG models, TRANX is trained to minimize the following objective function: $L_{mle}(a \mid x) = -\sum_{t} \log p(a_t \mid a_{<t}, x)$, (1)",
"where $a_t$ is the $t$-th action, and $p(a_t \mid a_{<t}, x)$ is modeled by an attentional encoder-decoder network (Yin and Neubig, 2018).",
"For an NL description x = x 1 , x 2 , ..., x N , we use a BiLSTM encoder to learn its word-level hidden states.",
"Likewise, the decoder is also an LSTM network.",
"Formally, at timestep $t$, the temporary hidden state $h_t$ is updated as $h_t = f_{LSTM}([E(a_{t-1}) : s_{t-1} : p_t], h_{t-1})$, (2)",
"where $E(a_{t-1})$ is the embedding of the previous action $a_{t-1}$, $s_{t-1}$ is the previous decoder hidden state, and $p_t$ is a concatenated vector involving the embedding of the frontier field and the decoder hidden state for the parent node.",
"Furthermore, the decoder hidden state $s_t$ is defined as $s_t = \tanh(W[h_t : c_t])$, (3) where $c_t$ is the context vector produced from the encoder hidden states and $W$ is a parameter matrix.",
"Composite .",
"We adopt an APPLYCONSTR action to expand the field or a REDUCE action to complete the field (the REDUCE action can be considered a special APPLYCONSTR action).",
"The probability of using APPLYCONSTR$[c]$ is defined as follows: $p(a_t = \mathrm{APPLYCONSTR}[c] \mid a_{<t}, x) = \mathrm{softmax}(E(c)^{\top} W s_t)$, (4) where $E(c)$ denotes the embedding of the constructor $c$.",
"Primitive .",
"We apply a GENTOKEN action to produce a token v , which is either generated from the vocabulary or copied from the input NL description.",
"Formally, the probability of using GENTOKEN$[v]$ can be decomposed into two parts: $p(a_t = \mathrm{GENTOKEN}[v] \mid a_{<t}, x) = p(\mathrm{gen} \mid a_{<t}, x)\, p_{gen}(v \mid a_{<t}, x) + (1 - p(\mathrm{gen} \mid a_{<t}, x))\, p_{copy}(v \mid a_{<t}, x)$, (5) where $p(\mathrm{gen} \mid a_{<t}, x)$ is modeled as $\mathrm{sigmoid}(W s_t)$.",
"Please note that our proposed dynamic selection of branch expansion orders does not affect other aspects of the model.",
"In this section, we extend the conventional Seq2Tree model with a context-based branch selector , which dynamically determines optimal expansion orders of branches for multi-branch AST nodes.",
"In the following subsections, we first illustrate the elaborately-designed branch selector module and then introduce how to train the extended Seq2Tree model via reinforcement learning in detail.",
"[Figure 2: Overview of the extended model, in which TRANX is equipped with a branch selector trained with a reward signal.]",
"As described in Section 2.2, the action prediction at each timestep is mainly affected by its previous action, frontier field and the action of its parent node.",
"Thus, it is reasonable to construct the branch selector determining optimal expansion orders of branches according to these three kinds of information.",
"Specifically, given a multi-branch node $n_t$ at timestep $t$, where the ASDL grammar of action $a_t$ contains $m$ fields $[f_1, f_2, \ldots, f_m]$, we feed the branch selector with three vectors: 1) $E(f_i)$: the embedding of field $f_i$; 2) $E(a_t)$: the embedding of action $a_t$; 3) $s_t$: the decoder hidden state; and then calculate the priority score of expanding each field as follows: $\mathrm{Score}(f_i) = W_2(\tanh(W_1[s_t : E(a_t) : E(f_i)]))$, (6) where $W_1 \in \mathbb{R}^{d_1 \times d_2}$ and $W_2 \in \mathbb{R}^{d_2 \times 1}$ are learnable parameters.",
"Afterwards, we normalize the priority scores of all fields into a probability distribution: $p_{n_t} = \mathrm{softmax}([\mathrm{Score}(f_1) : \cdots : \mathrm{Score}(f_m)])$. (7)",
"Based on the above probability distribution, we can sample $m$ times to form a branch expansion order $o = [f_{o_1}, \ldots, f_{o_m}]$, of which the policy probability is computed as $p(o) = \prod_{i=1}^{m} p(f_{o_i} \mid f_{o_{<i}})$. (8)",
"It is notable that during the sampling of $f_{o_i}$, we mask previously sampled fields $f_{o_{<i}}$ to ensure that duplicate fields are not sampled.",
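A minimal PyTorch sketch of the selector in Equations (6)-(8), including sampling without replacement via masking; the dimensions and class layout are illustrative assumptions:

```python
import torch
import torch.nn as nn

class BranchSelector(nn.Module):
    """Scores the m fields of a multi-branch node and samples an expansion order."""
    def __init__(self, in_dim: int, hid_dim: int):
        super().__init__()
        self.w1 = nn.Linear(in_dim, hid_dim)   # W1 in Eq. (6)
        self.w2 = nn.Linear(hid_dim, 1)        # W2 in Eq. (6)

    def forward(self, s_t, act_emb, field_embs):
        """s_t, act_emb: 1-D vectors; field_embs: (m, f). Returns a sampled
        order and its log policy probability (Eqs. (7)-(8))."""
        m = field_embs.size(0)
        ctx = torch.cat([s_t, act_emb], dim=-1).unsqueeze(0).expand(m, -1)
        scores = self.w2(torch.tanh(self.w1(
            torch.cat([ctx, field_embs], dim=-1)))).squeeze(-1)   # Eq. (6)
        mask = torch.zeros(m, dtype=torch.bool)
        order, log_prob = [], torch.tensor(0.0)
        for _ in range(m):  # sample without replacement by masking used fields
            probs = torch.softmax(scores.masked_fill(mask, float("-inf")), dim=-1)
            idx = int(torch.multinomial(probs, 1))
            log_prob = log_prob + torch.log(probs[idx])
            order.append(idx)
            mask[idx] = True
        return order, log_prob
```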
"During the generation of ASTs, with the above context-based branch selector, we deal with multi-branch nodes according to the dynamically determined order instead of the standard left-to-right order.",
"However, the non-differentiability of multistep expansion order selection and how to determine the optimal expansion order lead to challenges for the model training.",
"To deal with these issues, we introduce reinforcement learning to train the extended Seq2Tree model in an end-to-end way.",
"Concretely, we first pre-train a conventional Seq2Tree model.",
"Then, we employ self-critical training with a reward function that measures loss difference between different branch expansion orders to train the extended Seq2Tree model.",
"It is known that a well-initialized network is very important for applying reinforcement learning (Kang et al., 2020).",
"In this work, we require the model to automatically quantify effects of different branch expansion orders on the quality of the generated action sequences.",
"Therefore, we expect that the model has the basic ability to generate action sequences in random order at the beginning.",
"To do this, instead of using the pre-order traversal based action sequences, we use the randomly-organized action sequences to pre-train the Seq2Tree model.",
"uniform distribution, and then reorganize the corresponding actions according to the sampled order.",
"We conduct the same operations to all multi-branch nodes of the AST, forming a new training instance.",
"Finally, we use the regenerated training instances to pre-train our model.",
"With the above initialized parameters, we then perform self-critical training (Rennie et al., 2017; Kang et al., 2020) to update the Seq2Tree model with branch selector.",
"Specifically, we train the extended Seq2Tree model by combining the MLE objective and RL objective together.",
"Formally, given the training instance ( x , a ) , we first apply the sampling method described in section 3.1 to all multi-branch nodes, reorganizing the initial action sequence a to form a new action sequence a o , and then define the model training objective as L = L mle ( a o | x ; ) + | N mb | (cid:88) n N mb L rl ( o ; ) , (9) where L mle ( ) denotes the conventional training objective defined in Equation 1, L rl ( ) is the negative expected reward of branch expansion order o for the multi-branch node n , is a balancing hyper-parameter, N mb denotes the set of multi-branch nodes in the training instance, and denotes the parameter set of our enhanced model.",
"More specifically, L rl ( ) is defined as L rl ( o ; ) = E o [ r ( o )] r ( o ) , o , (10) where we approximate the expected reward with the loss of an order o sampled from the policy .",
"Inspired by successful applications of self-critical training in previous studies (Rennie et al., 2017; Kang et al., 2020), we propose the reward r ( ) to accurately measure the effect of any order on the model performance.",
"As shown in Figure 2, we calculate the reward using two expansion orders of branches: one is o sampled from the policy , and the other is o inferred from the policy with the maximal generation probability: r ( o ) = ( L mle ( o ) L mle ( o )) (max( p ( o ) , 0)) .",
"Please note that we extend the standard reward function by setting a threshold to clip the reward, which can prevent the network from being over-confident in current expansion order of branches.",
"Finally, we apply the REINFORCE algorithm (Williams, 1992) to compute the gradient: L rl r ( o ) log p ( o ) .",
"To investigate the effectiveness and generalizability of our model, we carry out experiments on several commonly-used datasets.",
"Following previous studies (Yin and Neubig, 2018, 2019; Xu et al., 2020), we use the following four datasets:",
"DJANGO (Oda et al., 2015).",
"This dataset totally contains 18,805 lines of Python source code, which are extracted from the Django Web framework, and each line is paired with an NL description.",
"ATIS .",
"This dataset is a set of 5,410 inquiries of flight information, where the input of each example is an NL description and its corresponding output is a short piece of code in lambda calculus.",
"GEO .",
"It is a collection of 880 U.S. geographical questions, with meaning representations defined in lambda logical forms like ATIS.",
"CONALA (Yin et al., 2018).",
"It totally consists of 2,879 examples of manually annotated NL questions and their Python solutions on STACK OVERFLOW.",
"Compared with DJANGO, the examples of CONALA cover real-world NL queries issued by programmers with diverse intents, and are signifi-cantly more difficult due to its broad coverage and high compositionality of target meaning representations.",
"To facilitate the descriptions of experimental results, we refer to the enhanced TRANX model as TRANX-RL .",
"In addition to TRANX, we compare our enhanced model with several competitive models: TRANX (w/ pre-train) .",
"compare with it because our model involves a pre-training stage.",
"COARSE2FINE (Dong and Lapata, 2018).",
"This model adopts a two-stage decoding strategy to produce the action sequence.",
"It first generates a rough sketch of its meaning, and then fills in missing detail.",
"TREEGEN (Sun et al., 2020).",
"It introduces the attention mechanism of Transformer (Vaswani et al., 2017), and a novel AST reader to incorporate grammar and AST structures into the network.",
"TRANX-R2L .",
"It is a variant of the conventional TRANX model, which deals with multi-branch AST nodes in a right-to-left manner.",
"TRANX-RAND .",
"It is also a variant of the conventional TRANX model dealing with multi-branch AST nodes in a random order.",
"TRANX-RL (w/o pre-train) .",
"In this variant of TRANX-RL, we train our model from scratch.",
"By doing so, we can discuss the effect of pre-training on our model training.",
"To ensure fair comparisons, we use the same experimental setup as TRANX (Yin and Neubig, 2018).",
"Concretely, the sizes of action embedding, field embedding and hidden states are set to 128, 128 and 256, respectively.",
"For decoding, the beam sizes for GEO, ATIS, DJANGO and CONALA are 5, 5, 15 and 15, respectively.",
"We pre-train models in 10 epochs for all datasets.",
"we determine the s as 1.0 according to the model performance on validation sets.",
"As in previous studies (Alvarez-Melis and Jaakkola, 2017; Yin and Neubig, 2018, 2019), we use the exact matching accuracy (Acc) as the evaluation metric for all datasets.",
"For CONALA, we use the corpus-level BLEU (Yin et al., 2018) as a complementary metric.",
"Table 2 reports the main experimental results.",
"Overall, our enhanced model outperforms baselines across all datasets.",
"Moreover, we can draw the following conclusions: First, our reimplemented TRANX model achieves comparable performance to previously reported results (Yin and Neubig, 2019) (TRANX).",
"Therefore, we confirm that our reimplemented TRANX model are convincing.",
"Second, compared with TRANX, TRANX-R2L and TRANX-RAND, our TRANX-RL exhibits better performance.",
"This result demonstrates the advantage of dynamically determining branch expansion orders on dealing with multi-branch AST nodes.",
"Third, the TRANX model with pre-training does not gain a better performance.",
"In contrast, removing the model pre-training leads to the performance degradation of our TRANX-RL model.",
"This result is consistent with the conclusion of previous studies (Wang et al., 2018; Kang et al., 2020) that the pre-training is very important for the applying reinforcement learning.",
"As implemented in related studies on other NLG tasks, such as machine translation (Bahdanau et al., 2015), we individually split two relatively large",
"datasets (DJANGO and ATIS) into different groups according to the number of multi-branch AST nodes, and report the performance of various models on these groups of datasets.",
"Tables 4 and 5 show the experimental results.",
"On most groups, TRANX-RL achieves better or equal performance than other models.",
"Therefore, we confirm that our model is general to datasets with different numbers of multi-branch nodes.",
"Given a multi-branch node, its child nodes have an important influence in the subtree.",
"Therefore, we focus on the accuracy of action predictions for the child nodes.",
"For fair comparison, we predict actions with previous ground-truth history actions as inputs.",
"Table 3 reports the experimental results.",
"We observe that TRANX-RL still achieves higher prediction accuracy than other baselines on most groups, which proves the effectiveness of our model again.",
"Figure 3 shows two examples from DJANGO.",
"In the first example, TRANX first generates the leftmost child node at the timestep t 2 , incorrectly predicting GENTOKEN [gzip'] as REDUCE action.",
"By contrast, TRANX-RL puts this child node in the last position and successfully predict its action, since our model benefits from the previously generated token GzipFile' of the sibling node, which frequently occurs with gzip'.",
"In the second example, TRANX incorrectly predicts the second child node at the t 10 -th timestep, while TRANX-RL firstly predicts it at the timestep t 6 .",
"We think this error results from the sequentially generated nodes and the errors in early timesteps would accumulatively harm the predictions of later sibling nodes.",
"By comparison, our model can flexibly generate subtrees with shorter lengths, alleviating error accumulation.",
"With the prosperity of deep learning, researchers introduce neural networks into code generation.",
"In this aspect, Ling et al. (2016) first explore a Seq2Seq model for code generation.",
"Then, due to the advantage of tree structure, many attempts resort to Seq2Tree models, which represent codes as trees of meaning representations (Dong and Lapata, 2016; Alvarez-Melis and Jaakkola, 2017; Rabinovich et al., 2017; Yin and Neubig, 2017, 2018; Sun et al., 2019, 2020).",
"Typically, Yin and Neubig (2018) propose TRANX, which introduces ASTs as intermediate representations of codes and has become the most influential Seq2Tree model.",
"Then, Sun et al. (2019, 2020) respectively explore CNN and Transformer",
"architectures to model code generation.",
"Unlike these work, Shin et al. (2019) present a Seq2Tree model to generate program fragments or tokens interchangeably at each generation step.",
"From another perspective, Xu et al. (2020) exploit external knowledge to enhance neural code generation model.",
"Generally, all these Seq2Tree models generate ASTs in pre-order traversal, which, however, is not suitable to handle all multi-branch AST nodes.",
"Different from the above studies that deal with multi-branch nodes in left-to-right order, our model determines the optimal expansion orders of branches for multi-branch nodes.",
"Some researchers have also noticed that the selection of decoding order has an important impact on the performance of neural code generation models.",
"For example, Alvarez-Melis and Jaakkola (2017) introduce a doubly RNN model that combines width and depth recurrences to traverse each node.",
"Dong and Lapata (2018) firstly generate a rough code sketch, and then fill in missing details by considering the input NL description and the sketch.",
"Gu et al. (2019a) present an insertion-based Seq2Seq model that can flexibly generate a sequence in an arbitrary order.",
"In general, these researches still deal with multi-branch AST nodes in a left-to-right manner.",
"Thus, these models are theoretically compatible with our proposed branch selector.",
"Finally, it should be noted that have been many NLP studies on exploring other decoding methods to improve other NLG tasks (Zhang et al., 2018; Su et al., 2019; Zhang et al., 2019; Welleck et al., 2019; Stern et al., 2019; Gu et al., 2019a,b).",
"However, to the best of our knowledge, our work is the first attempt to explore dynamic selection of branch expansion orders for tree-structured decoding.",
"In this work, we first point out that the generation of domainant Seq2Tree models based on pre-order traversal is not optimal for handling all multi-branch nodes.",
"Then we propose an extended Seq2Tree model equipped with a context-based branch selector, which is capable of dynamically determining optimal branch expansion orders for multi-branch nodes.",
"Particularly, we adopt reinforcement learning to train the whole model with an elaborate reward that measures the model loss difference between different branch expansion orders.",
"Extensive experiment results and in-depth analyses demonstrate the effectiveness and generality of our proposed model on several commonly-used datasets.",
"In the future, we will study how to extend our branch selector to deal with indefinite branches caused by sequential field.",
"The project was supported by National Key Research and Development Program of China (Grant No. 2020AAA0108004), National Natural Science Foundation of China (Grant No. 61672440), Natural Science Foundation of Fujian Province of China (Grant No. 2020J06001), Youth Innovation Fund of Xiamen (Grant No. 3502Z20206059), and the Fundamental Research Funds for the Central Universities (Grant No. ZK20720200077).",
"We also thank the reviewers for their insightful comments."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"result",
"objective",
"other",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"result",
"abstain",
"abstain",
"result",
"result",
"objective",
"objective",
"abstain",
"objective",
"objective",
"abstain",
"result",
"objective",
"objective",
"other",
"other",
"other",
"other",
"objective",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"method",
"method",
"other",
"other",
"other",
"method",
"other",
"other",
"method",
"other",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"objective",
"objective",
"objective",
"method",
"objective",
"objective",
"other",
"other"
] |
[
"English Natural Language Understanding (NLU) systems have achieved great performances and even outperformed humans on benchmarks like GLUE and SuperGLUE.",
"However, these benchmarks contain only textbook Standard American English (SAE).",
"Other dialects have been largely overlooked in the NLP community.",
"This leads to biased and inequitable NLU systems that serve only a sub-population of speakers.",
"To understand disparities in current models and to facilitate more dialect-competent NLU systems, we introduce the VernAcular Language Understanding Evaluation (VALUE) benchmark, a challenging variant of GLUE that we created with a set of lexical and morphosyntactic transformation rules.",
"In this initial release (V.1), we construct rules for 11 features of African American Vernacular English (AAVE), and we recruit fluent AAVE speakers to validate each feature transformation via linguistic acceptability judgments in a participatory design manner.",
"Experiments show that these new dialectal features can lead to a drop in model performance.",
"Most of today's research in NLP mainly focuses on 10 to 20 high-resource languages with a special focus on English, though there are thousands of languages and dialects with billions of speakers in the world.",
"NLU systems that are trained on polished or textbook Standard American English (SAE) are not as robust to linguistic variation (Belinkov and Bisk, 2018; Ebrahimi et al., 2018).",
"While some recent works have challenged leading systems with adversarial examples like typos (Jones et al., 2020), syntactic rearrangements (Iyyer et al., 2018), and sentence/word substitutions (Alzantot et al., 2018; Jia and Liang, 2017; Ribeiro et al., 2018), fewer have considered the effects of dialectal differences on performance.",
"When language technologies are not built to handle dialectal differences, the benefits of these technologies may not be equitably distributed among different demographic groups (Hovy and Spruit, 2016).",
"Specifically, models tested on African American Vernacular English (AAVE) have been found to struggle with language identification (Jurgens et al., 2017), sentiment analysis (Kiritchenko and Mohammad, 2018), POS tagging (Jrgensen et al., 2016) and dependency parsing (Blodgett et al., 2018), and led to severe racial disparities in the resulting language technologies such as the automated speech recognition used by virtual assistants (Koenecke et al., 2020) the hate speech detection used by online media platforms (Rios, 2020; Halevy et al., 2021).",
"However, no prior work has systematically investigated these dialect-specific shortcomings across a broad set of NLU tasks, and the effectiveness of low-resource NLP methods for dialectal Natural Language Understanding (NLU) remains largely unexplored.",
"The first barrier to progress is that a standard benchmark for dialectal NLU has not yet been constructed.",
"The second is that no systematic error analyses have yet revealed causal insights about the specific challenges that models face with domain adaptation to different language varieties.",
"To both understand dialect disparity and facilitate ongoing work on dialect-competent NLU, we introduce a new dialect-specific challenge dataset the V ern A cular L anguage U nderstanding E valuation benchmark ( VALUE ).",
"We specifically focus on African American Vernagular English (AAVE), a dialect spoken by nearly 33 million people, and approximately 80% of African Americans in the United States (Lippi-Green, 1997).",
"To facilitate direct comparison with prior work, we build VALUE by directly transforming GLUE (Wang et al., 2019) into synthetic AAVE.",
"Our AAVE transformation pipeline comes with two key advantages: it is flexible enough to facilitate an interpretable perturbation error analysis, and 3701 the transformation rules are meaning-preserving, which ensures the validity of the transformed NLU tasks.",
"Our pipeline includes a set of linguistically-attested rules for syntax (sentence structure; e.g. negation rules), morphology (word structure; e.g., suffixes), orthography (writing and spelling conven-tions), and the lexicon (the list of available words and phrases).",
"Because our system is rule-based, we can isolate and systematically test which features most significantly challenge models.",
"While it is also possible to generate pseudo-dialects via end-to-end style transfer (Krishna et al., 2020), these systems often fail to disentangle style from content, and thus also fail to preserve meaning (Lam-ple et al., 2019).",
"We confirm these shortcomings in this work, and affirm the validity of our own meaning-preserving transformation rules via the acceptability judgments of fluent AAVE speakers in a participatory design manner.",
"To sum up, our work contributes the following: 1. Dialect Transformations: A set of 11 new linguistic rules for reliably transforming Standard American English (SAE) into African American Vernacular English (AAVE).",
"2. VALUE : An AAVE benchmark dataset with seven NLU tasks.",
"3. Synthetic + Gold Standard Data: Robust validation of synthetic transformations as well as gold standard dialectal data from native AAVE speakers via an iterative participatory design process.",
"4. Benchmark Evaluation: Experiments with RoBERTA baselines plus fine-tuning methods to improve model robustness on dialectal variants.",
"5. Dialect-Specific Analysis: Perturbation analysis that reveals the task-specific challenges of AAVE-specific grammatical features.",
"Computational Sociolinguistics of Dialect Prior work on developing NLU models has often used dominant English varieties, Standard American English (SAE), owing to the availability of text datasets for training and testing (Blodgett et al., 2016).",
"Models can marginalize certain groups when trained on datasets that lack linguistic diversity or contain biases against minority language speakers (Blodgett and O'Connor, 2017).",
"Despite these shortcomings, there still has been relatively little attention paid to dialects in the language technologies research communities.",
"Prior studies have mainly focused on distinguishing between English language varieties (Demszky et al., 2021a; Zampieri et al., 2014).",
"Failure to account for dialects like AAVE can lead to performance degradation of the NLU tools such as Automatic Speech Recognition (ASR) (Dorn, 2019), Language Identification (LID) and dependency parsing tools (Blodgett et al., 2016).",
"Hwang et al. (2020a) also demonstrated the inadequacy of WordNet and ConceptNet in reflecting AAVE and other varieties.",
"Thus there have been several works highlighting the need for AAVE-inclusivity in NLU (Groenwold et al., 2020).",
"Despite its large community of speakers, AAVE is under-represented in current technologies.",
"Model Robustness and Challenge Datasets Language technologies are not inherently robust to linguistic variation.",
"The performance of neural models is expected to degrade due to sparsity in the presence of non-canonical text (Zalmout et al., 2018; Belinkov and Bisk, 2018; Ebrahimi et al., 2018), as shown empirically for random character, word, and sentence-level permutations (Jones et al., 2020; Alzantot et al., 2018; Jia and Liang, 2017; Ribeiro et al., 2018; Iyyer et al., 2018).",
"This has motivated growing interest in challenging datasets based on adversarial perturbations (Nie et al., 2020; Tan et al., 2020), spurious patterns or correlations (Zhang et al., 2019; McCoy et al., 2019), and counterfactual examples (Gardner et al., 2020; Kaushik et al., 2020).",
"However, the same attention has not been shown to dialects, which vary systematically in their syntax, morphology, phonology, orthography, and lexicon (Jurgens et al., 2017).",
"To this end, we introduce the evaluation set by adapting from the in-distribution examples (SAE) to out-of-distribution examples (AAVE) on GLUE benchmarks.",
"Our goal is to develop robust models that have a good performance on test sets in different linguistic variations.",
"We constructed VALUE from the widely-used GLUE benchmark (Wang et al., 2019), which contains NLU tasks such as natural language inference (e.g., MNLI; Bowman et al.), question answering (QNLI; Rajpurkar et al.), and linguistic acceptability (CoLA; Warstadt et al.).",
"For each of the main 3702 tasks, we translated the Standard American English (SAE) into a synthetic form of AAVE a form containing many of AAVE's distinguishing features with extremely high concentration.",
"We implemented these transformations using a set of lexical and morphosyntactic rules derived from a broad survey of the linguistics literature (Collins et al., 2008; Green, 2002; Labov, 1972; Labov et al., 1998; Sidnell, 2002; Stewart, 2014; Thompson, 2016; Wolfram and Schilling, 2015).",
"These features were specifically chosen for their high empirical attestation across regional and generational variants of AAVE.",
"This work represents the first attempt to systematically catalogue and operationalize a set of computational rules for inserting AAVE-specific language structures into text.",
"We distill field linguists's observations into procedural code, which operates on specific grammatical conditions from the SAE source.",
"Each grammatical condition is specified by the part of speech tags and syntactic dependency relationships present in the text.",
"Appendix A.1 lists all implementation details for each transformation rule, and we will now enumerate them briefly.",
"Auxiliaries.",
"AAVE allows copula deletion and other auxiliary dropping (Stewart, 2014; Green, 2002; Labov, 1972; Wolfram and Schilling, 2015).",
"This means the SAE sentence We are better than before could be rendered in AAVE without the copula as We better than before .",
"We look for the present tense is and are as well as any tokens with AUX part of speech tag to drop (under special conditions listed in more detail in Appendix A.1).",
"Completive done and remote time been .",
"The phrase I had written it. can be rendered in AAVE as I done wrote it using the completive verbal marker done .",
"The phrase He ate a long time ago can be rendered as He been ate using the remote time been (Green, 2002).",
"Constructions involving the word ass .",
"These constructions may be misclassified as obscenity, but they serve a distinct and consistent role in AAVE grammar (Spears et al., 1998).",
"One common form is called the ass camouflage construction (Collins et al., 2008), and it can be seen in the phrase I divorced his ass .",
"Here, the word behaves as a metonymic pseudo-pronoun (Spears et al., 1998).",
"Similarly, it can appear reflexively, as in Get yo'ass inside .",
"Ass constructions can also serve as discourse-level expressive markers or intensifiers, as in the compound We was at some random-ass bar .",
"Existential dey / it .",
"AAVE speakers can indicate something exists by using what is known as an it or dey existential construction (Green, 2002).",
"The existential construction in It's some milk in the fridge is used to indicate There is some milk in the fridge .",
"We identify existential dependencies for this transformation.",
"Future gonna and immediate future finna .",
"AAVE speakers can mark future tense with gon or gonna , as in You gon understand (Green, 2002; Sidnell, 2002).",
"In the first person, this becomes I'ma .",
"In the immediate future, speakers can use finna (or variants fixina, fixna and fitna ), as in I'm finna leave. Have / got.",
"In the casual speech of AAVE and other dialects, both the modal and the verb form of have can be replaced by got (Trotta and Blyah-her, 2011).",
"Have to can become got to or gotta , and similar for the verb of possession.",
"We simply convert the present-tense have and has to got and ensure that the verb has an object.",
"Inflection.",
"In AAVE, speakers do not necessarily inflect simple present or past tense verbs differently for number or person (Green, 2002).",
"This means the SAE sentence She studies linguistics could be rendered in AAVE as She study linguistics .",
"We use the pyinflect library to convert all present and simple past verbs into the first person.",
"Negative concord.",
"This widely-known feature of AAVE (and numerous other dialects) involves two negative morphemes to convey a single negation.",
"(Martin et al., 1998).",
"For example, the SAE sentence He doesn't have a camera could look more like He don't have no camera in AAVE.",
"This transformation rule is sensitive to the verb-object dependency structure, and requires that the object is an indefinite noun (Green, 2002).",
"Negative inversion.",
"This feature is superficially similar to negative concord.",
"Both an auxiliary and an indefinite noun phrase are negated at the beginning of a sentence or clause (Green, 2002; Martin et al., 1998).",
"For example, the SAE assertion that no suffering lasts forever could be rendered in AAVE as don't no suffering last forever. 3703 Null genitives.",
"AAVE allows a null genitive marking (Stewart, 2014; Wolfram and Schilling, 2015), like the removal of the possessive 's in Rolanda bed (Green, 2002).",
"We simply drop any possessive endings ( POS ) from the text.",
"Relative clause structures.",
"There is a grammatical option to drop the Wh-pronoun when it is serving as the complementizer to a relative clause, as in It's a whole lot of people don't wanna go to hell (Green, 2002).",
"In our transformation, we simply drop all lemmas who and that where the head is a relative clause modifier.",
"Some of the most recognizable differences between SAE and AAVE are found in the lexicon and orthographic conventions.",
"Because we are not aware of any comprehensive AAVE lexicons, we automatically learn our own SAE to AAVE dictionary from public data, and we will provide this resource in our public repository.",
"This dictionary serves as a mapping between plausible synonyms (e.g., mash / press ; homie / friend ; paper / money ) and orthographic variants (e.g., da / the ; wit / with ; sista / sister ) In a method inspired by Shoemark et al. (2018), we trained a skip-gram word embedding model 1 (Mikolov et al., 2013) on the public TwitterAAE dataset of Blodgett et al. (2016).",
"This dataset contained attested code-switching behavior, which allowed us to extract a linguistic code axis c in the embedding space, defined by the average c = (cid:88) ( x i , y i ) S x i y i | S | where S was our seed list of known priors from Shoemark et al. (2018), given in Appendix A.2.",
"Next, we ranked the candidate word pairs w i , w j by cos( c , w i w j ) following Bolukbasi et al. (2016).",
"In this ranking, we consider only the pairs whose cosine similarity passed a threshold , where was defined by the bottom quartile of the cosine similarities in our seed set S .",
"After automatic filtering, we were left with 2,460 pairs.",
"We hand-filtered this list to remove any semantically dissimilar words, like fishin/kayakin or mom/gramps .",
"This left us with 1,988 pairs.",
"AAVE variants.",
"We provide a sample of this mapping in Table 1. In the final step of the translation, we chose uniformly at random between the AAVE variants to make our substitution.",
"We simply scanned the GLUE dataset and swapped any known tokens from SAE to AAVE.",
"Our transformed tasks are all derived from GLUE.",
"We skip Diagnostics because it is not a benchmark, and we do not transform the Microsoft Research Paraphrase Corpus (Dolan and Brockett, 2005) because it is proprietary.",
"However, we do transform the remaining seven benchmarks, which include the single-sentence tasks",
"(i) Stanford Sentiment Treebank (SST-2) which involves classifying the sentiment of movie reviews as positive or negative, and",
"(ii) Corpus of Linguistic Acceptability (CoLA) which involves deciding whether a sentence is linguistically acceptable or not; the similarity and paraphrase task called Semantic Textual Similarity Benchmark (STS-B), which involves predicting the similarity ratings between two sentences; and the inference tasks",
"(i) Multi-Genre Natural Language Inference (MNLI) which involves classifying the relationships between two sentences as entailment, contradiction, or neutral,",
"(ii) Question Natural Language Inference (QNLI) which involves predicting whether a given sentence is the correct answer to a given question; and finally",
"(iii) Recognizing Textual Entailment (RTE) which involves predicting an entailment relation between two sentences.",
"Ta-3704 Dataset # data ass aux been dey/it got lexical neg cncrd null gen null relcl uninflect CoLA 1,063 9% 15% 6% 2% 2% 51% 4% 3% 3% 17% MNLI 9,682 30% 20% 9% 4% 5% 69% 4% 11% 10% 23% QNLI 5,725 16% 42% 2% 1% 3% 50% 1% 10% 4% 17% QQP 390,690 16% 2% 3% 63% 3% 59% 1% 3% 3% 13% RTE 3,029 48% 40% 36% 3% 5% 81% 4% 28% 25 40% SST-2 1,821 31% 25% 5% 3% 4% 64% 4% 14% 15% 39% STS-B 1,894 1% 0 32% 2% 3% 2% 9% 4% 2% 5% WNLI 146 48% 36% 38% 3% 16% 90% 1% 37% 12% 33% Table 2: Dataset statistics reveal important differences between VALUE datasets, which come in markedly different sizes.",
"ble 2 provides a set of summary statistics for these datasets.",
"It is clear that they come in different sizes, and that the some tasks have been more heavily modified than others.",
"However, most of the sentences in this benchmark have undergone at least one transformation.",
"Since our morphosyntactic transformations are rule-based rather than data-driven, it is especially important to validate that these rules are aligned with real AAVE speakers' grammaticality judgments.",
"User-Centered Validation Protocol.",
"We opt for a participatory design process (Schuler and Namioka, 1993) to help ensure that these transformations are usable and meet the language practices of real speakers.",
"We partnered with DataWorks, 2 an initiative started in Georgia Tech's College of Computing that seeks to involve members of underrepresented and economically disadvantaged groups in research and data annotation.",
"All annotators were AAVE speakers and members of the Black community in Atlanta, and they were compensated for their time.",
"Four volunteers from DataWorks partnered in the design of this rule-validation process.",
"Specifically, we co-designed appropriate questions to measure the linguistic and social plausibility of our transformation system.",
"The HIT questions were based on a pair of utterances: (1) the original SAE sentence from the GLUE benchmark, and (2) the transformed AAVE sentence using only the morphosyntactic rules.",
"We highlighted and indexed the portions of utterance (1) that were transformed in utterance (2), and 2 https://dataworkforce.gatech.edu/ we asked annotators for a binary grammaticality judgment.",
"Separately, we asked for the social acceptability using a scale that was co-designed by DataWorkers.",
"Then, for text marked as ungrammatical, annotators provided us with the indices at which transformation errors occurred.",
"The task was hosted on the Amazon Mechanical Turk sandbox platform, but we interfaced with the annotators throughout the entire annotation process to answer any questions.",
"In early iterations of the task, DataWorkers discussed confusions and disagreements with the authors, and we discovered that the greatest variation in their judgments came not from differences in the speakers' underlying grammars, but rather from their different intuitions about what is socially acceptable (alternatively awkward and unnatural) to say in certain social settings.",
"To disentangle these factors, DataWorkers helped us design a 10-point social acceptability Likert scale with the following vernacular: If someone said this in your community, would it be (1) not very cool, (5) a bit sensitive, (7) passing, or (10) cool?",
"Separately, we discussed certain orthographic conventions that we had adopted from the linguistics literature.",
"DataWorkers indicated that some of these conventions were disagreeable especially the spelling for there are as dey from Green (2002).",
"Some DataWorkers suggested we use the spelling dey're instead.",
"Relatedly, the DataWorkers found the ass constructions sensitive, given its long history of mischaracterization as an expletive, as well as the broader relationship between such dialect misunderstandings and racial injustice (Rickford and King, 2016; Rickford, 2016).",
"We simply excluded ass constructions from the validation.",
"DataWorkers also reported sentences from the original GLUE task were highly offensive (e.g. mentions 3705 Accuracy Accuracy Size Transformation (Maj. Vote) (Unanimous) n Ass constructions -Auxiliaries 96.6 77.4 638 Been / done 95.4 72.7 670 Existential dey/it 91.4 57.9 304 Gonna / finna 95.4 78.7 197 Have / got 96.2 84.8 290 Inflection 97.1 82.3 761 Negative concord 95.9 73.6 584 Negative inversion 95.0 69.3 101 Null genitives 97.9 85.3 573 Relative clause structures 94.1 58.3 489 Table 3: Accuracy of SAE AAVE transformations and n the number of instances present.",
"of sexual violence).",
"We used the Perspective API 3 and the offensive language classifier Davidson et al. (2017) to filter out such instances.",
"Finally, we discussed the visual and interactive elements of the task itself.",
"Workers preferred to see the synthetic AAVE text appear with visual priority above the SAE sentence.",
"We also adjusted the color scheme to maximally distinguish concepts of social and grammatical acceptability.",
"The word acceptability itself was triggering for the DataWorkers because it evoked the history of linguistic discrimination against AAVE speakers based on ignorant and prescriptive claims regarding correct or proper English.",
"For this reason, we modified the prompt to read: Do the words and the order of the words make sense?",
"With extensive follow-up meetings, we clarified that to make sense means a sentence follows expected and consistent language rules (i.e. a speaker's internal grammar).",
"Results.",
"In the end, we collected acceptability judgments from three DataWorks workers for each of 2,556 randomly sampled sentence transformation pairs.",
"We observed fair inter-annotator agreement with Krippendorf's = 0 .",
"26 .",
"Table 3 presents the aggregate judgments for local transformations in each morphological category.",
"Here, we report the transformation accuracy as the proportion of local transformations marked as acceptable by majority vote or unanimous consensus, and we find our transformation rules are strongly validated.",
"Majority vote gives nearly 100% accuracy for all transformation types.",
"Even under strict unanimous consensus, the accuracy exceeds 70% for seven of the 11 transformation types.",
"Overall, this shows the 3 https://www.perspectiveapi.com/ quality of our linguistic transformations.",
"Error Analysis.",
"Although our transformation rules are generally valid, errors can stem from an overapplication of the rule in restricted contexts.",
"For example, most rules do not apply to idioms or named entities, so if we see a brand name like Reese's Pieces , we should not remove the possessive s .",
"Other observed challenge cases include the subjunctive mood and subject inversions in questions, the non-standard morphology of certain contractions, as well as co-reference and scoping issues in relative clauses, ellipsis, and long-range dependencies (See Appendix D for more details).",
"These each may introduce their own special cases that could be coded in future iterations.",
"For a more reliable test set, we next construct a gold standard in Section 4.2 4.2 Building a Gold Test Set Despite the advantages of controllable feature transformations for benchmarking with explainable error analysis, we cannot rely on the synthetic benchmark alone.",
"Synthetic data may not fully capture the social and structural nuances of AAVE, nor speakers' dynamic and contextual use of dialect feature density.",
"This motivates us to build a small test set of Gold Standard AAVE utterances.",
"Here, annotators considered GLUE sentence transformations as before.",
"The DataWorkers could either (1) confirm that synthetic transformation was natural, or alternatively (2) provide us with their own translation of the SAE text.",
"Together, datapoints from (1) and (2) construct our Gold Test Set.",
"4 We provide the distribution of Gold Standard datapoints for each task in Table 4. In future iterations, we will expand the total size of the Gold Test sets for reliable benchmarking.",
"In this section, we stress-test current systems on NLU tasks and reveal performance drops on dialect-variants.",
"We investigate the effectiveness of standard training on VALUE and we ablate the dialect test set to understand which dialect features most significantly challenge models.",
"We have two variants of synthetic AAVE data.",
"In AAVE (VALUE) , we apply the full suite of Sec-4 Note: we did not build a Gold CoLA Test set.",
"The nature of the annotation task would be ambiguous since CoLA itself contains intentionally ungrammatical utterances.",
"It is not clear how annotators should translate ungrammatical SAE into ungrammatical AAVE.",
"tion 3 transformations to the standard GLUE tasks.",
"In AAVE Morph , we have an ablated variant of VALUE where only the morphosyntactic transformations (Section 3.1) are executed.",
"By testing base SAE models on this data, we can disentangle the challenges associated with vocabulary shift from those associated with structural differences.",
"If the challenges of VALUE were entirely lexical, we would anticipate that any performance disparity could be recovered with domain-specific word embeddings, since prior work has found such embeddings adequately represent the meanings of new words in AAVE corpora (Hwang et al., 2020b).",
"The most direct way to prepare models for a particular language variety is to directly train them on a dialect-variant of the task.",
"Using our transformation rules (Section 3), we first augment the GLUE training set with AAVE features and then re-train the models (125M-parameter RoBERTA-base) on the augmented data.",
"Following Liu et al. (2019), the batch size was 16.",
"The maximum learning rate was selected as 5 e 4 and the maximum number of training epochs was set to be either 5 or 10 .",
"Table 5 compares the performance of RoBERTa models trained and tested on SAE or AAVE-variants of seven natural language understanding tasks in GLUE.",
"Results are given as Matthew's Correlation for CoLA, Pearson-Spearman Correlation for STS-B, and Accuracy for all other tasks, averaged over three random seeds.",
"In most cases, training jointly on GLUE and VALUE (SAE + AAVE) leads to best performance .",
"With a single training set, there is an expected pattern: training with the corresponding train set typically leads to best performance on the corresponding test set.",
"With the exception of RTE, 5 base models all suffer a drop in performance when tested on the full 5 RTE may be an outlier because of variance due to its small size: only 2.5k data points vs. QNLI with 100k AAVE (VALUE) test set compared with the models trained on AAVE or jointly on SAE + AAVE (e.g., a 1.5% drop on SST-2; a 0.9% drop on QNLI compared to SAE + AAVE).",
"Performance gaps of a similar magnitude are observed when we test on the Gold Test set (e.g., a 1.2% drop on SST-2; a 0.8% drop on QNLI).",
"Further effort is needed to make the current NLU models more robust to dialect variations.",
"We also see that AAVE Morph challenges current models, which suggests that strategies for resolving any performance gap should take dialect morphology and syntax into consideration.",
"Compared to the AAVE column, there is a less severe but still visible drop in AAVE Morph testing: from 94.3 to 93.2 in SST-2, and from 92.6 to 92.0 in QNLI, for instance.",
"Thus we conclude that the challenge with dialects extends beyond a mere difference in the lexicon.",
"Finally, we run a perturbation analysis (Alvarez-Melis and Jaakkola, 2017) to better understand the impact of each dialectal feature on model performance.",
"For the sake of simplicity, we focus only on MNLI.",
"Specifically, we are interested in cases where the introduction of a particular feature results in a model error.",
"Therefore, we count, for each feature transformation function T , the number of sentence pairs ( x 0 i , x 1 i ) for which a GLUE-trained RoBERTA model f changes its prediction from a correct inference y i to an incorrect inference under the transformation.",
"Not all sentence structures allow for new features, so we consider only the subset of pairs for which the transformation is effective in the hypothesis sentence, and where the original GLUE pair had been predicted correctly.",
"Then the ratio r T is be defined as: r T = (cid:12)(cid:12)(cid:8) ( x 0 i , x 1 i ) XT : f ( x 0 i , T ( x 1 i )) (cid:54) = y i (cid:12)(cid:12)(cid:9) |X T | Here XT is: XT = { ( x 0 i , x 1 i ) : T ( x 1 i ) (cid:54) = x 1 i f ( x 0 i , x 1 i ) = y i } 3707 Test Synth.",
"and r T indicates the proportion of inferences that were flipped to an incorrect label in the presence of T .",
"We report this ratio for each feature in Table 6. The first column in table shows that, when we introduce a negative inversion into a Hypothesis sentence for which the GLUE-trained RoBERTa model was originally correct, then in 9.09% of cases, that correct label would be flipped to an incorrect one.",
"6 The inflection rule and been / done constructions appear less challenging, but still result in 2.88% and 3.06% of new errors respectively.",
"The remaining table columns indicate the contributions of different model mistakes to the overall r T ratio.",
"For example, the single error due to negative inversion occurs here when the model mistakes a neutral relationship for entailment (n e) in the following pair: PREMISE : Still, commercial calculation isn't sufficient to explain his stand and HYPOTHESIS : Won't nothing be enough to explain his strong opinion .",
"In negative concord environments, we most often see neutral pairs mistakenly labeled as contradictory (n c), as with the PREMISE : Each state is different... and HYPOTHESIS : You can go from one area of a state to another and not see no resemblance . For more examples, see Tables 8 and 9 in Appendix C. 6 Why Not Use Style Transfer? We qualitatively investigated the differences between our rule-based approach and a very well-performing unsupervised dialect style transfer model, STRAP Krishna et al. (2020). To train STRAP , we created a pseudo-parallel corpus using a diverse paraphrase model to paraphrase different styles of text, including SAE and the AAVE text from the TwitterAAE corpus Blodgett et al. (2018). Then we fine-tuned a GPT-2 model as the inverse paraphrase function, which learned to reconstruct the various styles. We used the SAE paraphrase model and the AAVE inverse paraphrase model to transfer from SAE to AAVE. In general, we found that STRAP is capable of much greater output diversity. However, in a systematic analysis of dialectal NLU, the first goal is to ensure that the underlying relationships like entailment are not distorted. STRAP can distort the meaning of the text with hallucinations and deletion of key details. Our transformation approach preserves the meaning of the text and thus better captures AAVE morphosyntax. See Appendix E for more details. 7 Conclusion This work introduces the English VernAcular Language Understanding Evaluation (VALUE) benchmark, a challenging variant of GLUE that we created with a set of lexical and morphosyntactic transformation rules. We constructed rules for 11 fea-6 This is the highest error rate for any transformation rule. Note that |X T | = 11 datapoints is a much smaller sample size so the r T estimate is more variable. 3708 Feature r T c n c e n c n e e c e n |X T | Auxiliaries 4.20 0.20 0.07 1.62 0.88 0.68 0.74 1,477 Been / done 3.06 0.22 0.00 1.31 0.44 0.22 0.88 457 Inflection 2.88 0.33 0.20 0.59 0.46 0.39 0.92 1,526 Lexical 5.92 0.67 0.27 1.35 0.57 0.88 2.18 4,902 Negative concord 6.88 0.64 0.16 2.56 0.16 2.08 1.28 625 Negative inversion 9.09 0.00 0.00 0.00 9.09 0.00 0.00 11 Relative clause structures 5.86 0.31 0.62 1.23 0.62 0.31 2.78 324 Table 6: Perturbation analysis. The first column r T gives the proportion of testing instances where the introduction of a particular dialect feature results in a new model error. This column indicates that negative inversions are the most challenging for MNLI. The final column gives the size of the set XT , which is the denominator in the ratio r T . The remaining columns indicate the contributions of different error types to the cumulative r T : the model flips the correct label on the left into the incorrect label on the right side. c : contradiction; n : neutral; e : entailment. tures of AAVE, and recruit fluent AAVE speakers to validate each feature transformation via linguistic acceptability judgments in a participatory design manner. Experiments show that the introduction of new dialectal features can lead to a drop in performance. We also test methods for efficiently adapting models to different language varieties, and discuss dialect specific challenges that our current NLP models are struggling with. Our work sheds light on the disparities of language technologies and has key implications for facilitating more dialect-competent NLU systems. 
Our longer term goals are to expand VALUE to more NLP tasks such as CoQA (Reddy et al., 2019), and to include other dialects such as Indian English (Demszky et al., 2021b; Lange, 2012; Bhatt, 2008) and Singapore English (Wee, 2008). Limitations and Considerations. Researchers and practitioners should keep the following limitations and considerations in mind when using VALUE. Firstly, dialects are not the deterministic speech patterns that our transformation rules might suggest. While speakers of a dialect have linguistic competence over systematic and internalized grammar rules, speakers still posses an individual degree of control over which features they will employ (Coupland, 2007). The density of these features can vary, not only along demographic axes of geography, age, and gender (Nguyen et al., 2016), but also with different identity presentations in different social contexts (Bucholtz and Hall, 2005). We use VALUE to stress-test current systems by maximally modifying current resources with feature transformations. The high density of dialectal features may appear exaggerated here. Secondly, linguists have historically studied dialects through oral speech via live interviews (Rickford, 2002). The descriptions of academic references will not always map perfectly to the written domain (see Section 4.1 on the spelling of dey ). The orthographic conventions of language communities may vary as significantly as do speech patterns. A third and critical concern is the limitation of synthetic data. Synthetic transformations have the advantage of allowing carefully controlled perturbation analysis and scaling up this analysis without the expensive creation of new datasets. However, synthetic data will not fully capture the social and structural nuances of AAVE, nor speakers' dynamic and contextual use of dialect feature density. For this reason, it is important to ultimately test user-facing models on domain-specific and gold-standard dialectal data. We are continuing to expand our gold-standard test set for GLUE tasks. A fourth consideration is the history of linguistic discrimination and the broader relationship between such dialect misunderstandings and racial injustice (Rickford and King, 2016; Rickford, 2016). AAVE has been frequently appropriated and misused by non-Black individuals, especially in online contexts (Reyes, 2005; Ilbury, 2020). To mitigate deployment risks, we ask users to sign a Data Use Agreement (See Ethics Section). Acknowledgements The authors would like to thank the DataWorks team, as well as Elizabeth DiSalvo, Rahual Gupta, and Jwala Dhamala for their helpful feedback. CZ is supported by the NSF Graduate Research Fellowship under Grant No. DGE-2039655. DY is supported by the Microsoft Research Faculty Fellowship. This work is funded in part by Amazon Research Award under the Alexa Fairness in AI. 3709 Ethics Our task comes from the public version of GLUE (Wang et al., 2019). Our annotation efforts revealed non-normative and offensive language in these original datasets, and we caution practitioners to be aware of this. The rules for converting SAE to AAVE are linguistically informed, and are not designed to change the original meaning of the sentence. Due to the participatory design nature of this work, we involved AAVE speakers and volunteers in the task creation and rule validation process. We asked annotators to skip a specific task and take a break if they are overwhelmed with the task. 
Our annotators were compensated by DataWorks for their time, and volunteered to help build this linguistic resource for their dialects. Note that AAVE is spoken, and our work only involves speakers from Atlanta. We ask that all users sign the following online agreement before using this resource: I will not use VALUE for malicious purposes including (but not limited to): deception, impersonation, mockery, discrimination, hate speech, targeted harassment and cultural appropriation. In my use of this resource, I will respect the dignity and privacy of all people. References David Alvarez-Melis and Tommi Jaakkola."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"objective",
"objective",
"objective",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"objective",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other"
] |
[
"The choice of token vocabulary affects the performance of machine translation.",
"This paper aims to figure out what is a good vocabulary and whether one can find the optimal vocabulary without trial training.",
"To answer these questions, we first provide an alternative understanding of the role of vocabulary from the perspective of information theory.",
"Motivated by this, we formulate the quest of vocabularization finding the best token dictionary with a proper size as an optimal transport (OT) problem.",
"We propose VOLT , a simple and efficient solution without trial training.",
"Empirical results show that VOLT outperforms widely-used vocabularies in diverse scenarios, including WMT-14 English-German and TED's 52 translation directions.",
"For example, VOLT achieves 70% vocabulary size reduction and 0.5 BLEU gain on English-German translation.",
"Also, compared to BPE-search, VOLT reduces the search time from 384 GPU hours to 30 GPU hours on English-German translation.",
"Codes are available at https: //github.com/Jingjing-NLP/VOLT .",
"Due to the discreteness of text, vocabulary construction ( vocabularization for short) is a prerequisite for neural machine translation (NMT) and many other natural language processing (NLP) tasks using neural networks (Mikolov et al., 2013; Vaswani et al., 2017; Gehrmann et al., 2018; Zhang et al., 2018; Devlin et al., 2019).",
"Currently, sub-word approaches like Byte-Pair Encoding (BPE) are widely used in the community (Ott et al., 2018; Ding et al., 2019; Liu et al., 2020), and achieve quite promising results in practice (Sennrich et al., 2016; Costa-juss`a and Fonol-losa, 2016; Lee et al., 2017; Kudo and Richardson, This work is done during the internship at ByteDance AI Lab. 2018; Al-Rfou et al., 2019; Wang et al., 2020).",
"The key idea of these approaches is selecting the most frequent sub-words (or word pieces with higher probabilities) as the vocabulary tokens.",
"In information theory, these frequency-based approaches are simple forms of data compression to reduce entropy (Gage, 1994), which makes the resulting corpus easy to learn and predict (Martin and England, 2011; Bentz and Alikaniotis, 2016).",
"However, the effects of vocabulary size are not sufficiently taken into account since current approaches only consider frequency (or entropy) as the main criteria.",
"Many previous studies (Sennrich and Zhang, 2019; Ding et al., 2019; Provilkov et al., 2020; Salesky et al., 2020) show that vocabulary size also affects downstream performances, especially on low-resource tasks.",
"Due to the lack of appropriate inductive bias about size, trial training (namely traversing all possible sizes) is usually required to search for the optimal size, which takes high computation costs.",
"For convenience, most existing studies only adopt the widely-used settings in implementation.",
"For example, 30K-40K is the most popular size setting in all 42 papers of Conference of Machine Translation (WMT) through 2017 and 2018 (Ding et al., 2019).",
"In this paper, we propose to explore automatic vocabularization by simultaneously considering entropy and vocabulary size without expensive trial training.",
"Designing such a vocabularization approach is non-trivial for two main reasons.",
"First, it is challenging to find an appropriate objective function to optimize them at the same time.",
"Roughly speaking, the corpus entropy decreases with the increase of vocabulary size, which benefits model learning (Martin and England, 2011).",
"On the other side, too many tokens cause token sparsity, which hurts model learning (Allison et al., 2006).",
"Second, supposing that an appropriate measurement is given, it is still challenging to 3000 4000 5000 6000 7000 8000 9000 Size 3.60 3.65 3.70 3.75 3.80 3.85 3.90 3.95 4.00 E n t r o p y 27.0 27.5 28.0 28.5 29.0 29.5 30.0 BLEU Eo Entropy BLEU Figure 1: An illustration of marginal utility.",
"solve such a discrete optimization problem due to the exponential search space.",
"To address the above problems, we propose a VO cabulary L earning approach via optimal T ransport, VOLT for short.",
"It can give an appropriate vocabulary in polynomial time by considering corpus entropy and vocabulary size.",
"Specifi-cally, given the above insight of contradiction between entropy and size, we first borrow the concept of Marginal Utility in economics (Samuelson, 1937) and propose to use Marginal Utility of Vocabularization (MUV) as the measurement.",
"The insight is quite simple: in economics, marginal utility is used to balance the benefit and the cost and we use MUV to balance the entropy (bene-fit) and vocabulary size (cost).",
"Higher MUV is expected for Pareto optimality.",
"Formally, MUV is defined as the negative derivative of entropy to vocabulary size.",
"Figure 1 gives an example about marginal utility.",
"Preliminary results verify that MUV correlates with the downstream performances on two-thirds of tasks (See Figure 2).",
"Then our goal turns to maximize MUV in tractable time complexity.",
"We reformulate our discrete optimization objective into an optimal transport problem (Cuturi, 2013) that can be solved in polynomial time by linear programming.",
"Intuitively, the vocabularization process can be regarded as finding the optimal transport matrix from the character distribution to the vocabulary token distribution .",
"Finally, our proposed VOLT will yield a vocabulary from the optimal transport matrix.",
"We evaluate our approach on multiple machine translation tasks, including WMT-14 English-German translation, TED bilingual translation, and TED multilingual translation.",
"Empirical results show that VOLT beats widely-used vocabularies in diverse scenarios.",
"Furthermore, VOLT is a lightweight solution and does not require expensive computation resources.",
"On English-German translation, VOLT only takes 30 GPU hours to find vocabularies, while the traditional BPE-Search solution takes 384 GPU hours.",
"Initially, most neural models were built upon word-level vocabularies (Costa-juss`a and Fonol-losa, 2016; Vaswani et al., 2017; Zhao et al., 2019).",
"While achieving promising results, it is a common constraint that word-level vocabularies fail on handling rare words under limited vocabulary sizes.",
"Researchers recently have proposed several advanced vocabularization approaches, like byte-level approaches (Wang et al., 2020), character-level approaches (Costa-juss`a and Fonollosa, 2016; Lee et al., 2017; Al-Rfou et al., 2019), and sub-word approaches (Sennrich et al., 2016; Kudo and Richardson, 2018).",
"Byte-Pair Encoding (BPE) (Sennrich et al., 2016) is proposed to get subword-level vocabularies.",
"The general idea is to merge pairs of frequent character sequences to create sub-word units.",
"Sub-word vocabularies can be regarded as a trade-off between character-level vocabularies and word-level vocabularies.",
"Compared to word-level vocabularies, it can decrease the sparsity of tokens and increase the shared features between similar words, which probably have similar semantic meanings, like happy and happier.",
"Compared to character-level vocabularies, it has shorter sentence lengths without rare words.",
"Following BPE, some variants recently have been proposed, like BPE-dropout (Provilkov et al., 2020), SentencePiece (Kudo and Richardson, 2018), and so on.",
"Despite promising results, most existing subword approaches only consider frequency while the effects of vocabulary size is neglected.",
"Thus, trial training is required to find the optimal size, which brings high computation costs.",
"More recently, some studies notice this problem and propose some practical solutions (Kreutzer and Sokolov, 2018; Cherry et al., 2018; Chen et al., 2019; Salesky et al., 2020).",
"In this section, we propose to find a good vocabulary measurement by considering entropy and size.",
"As introduced in Section 1, it is non-trivial to find an appropriate objective function to optimize them simultaneously.",
"On one side, with the increase of vocabulary size, the corpus entropy is decreased, which benefits model learning (Bentz and Alikan-iotis, 2016).",
"On the other side, a large vocabulary causes parameter explosion and token sparsity problems, which hurts model learning (Alli-son et al., 2006).",
"To address this problem, we borrow the concept of Marginal Utility in economics (Samuel-son, 1937) and propose to use Marginal Utility of Vocabularization (MUV) as the optimization objective.",
"MUV evaluates the benefits (entropy) a corpus can get from an increase of cost (size).",
"Higher MUV is expected for higher benefit-cost ratio.",
"Preliminary results verify that MUV correlates with downstream performances on two-thirds of translation tasks (See Figure 2).",
"According to this feature, our goal turns to maximize MUV in tractable time complexity.",
"Definition of MUV Formally, MUV represents the negative derivation of entropy to size.",
"For sim-plification, we leverage a smaller vocabulary to estimate MUV in implementation.",
"Specially, MUV is calculated as: M v ( k + m ) = ( H v ( k + m ) H v ( k ) ) m , (1) where v ( k ) , v ( k + m ) are two vocabularies with k and k + m tokens, respectively.",
"H v represents the corpus entropy with the vocabulary v , which is defined by the sum of token entropy.",
"To avoid the effects of token length, here we normalize entropy with the average length of tokens and the final entropy is defined as: H v = 1 l v (cid:88) i v P ( i ) log P ( i ) , (2) where P ( i ) is the relative frequency of token i from the training corpus and l v is the average length of tokens in vocabulary v .",
"Preliminary Results To verify the effectiveness of MUV as the vocabulary measurement, we conduct experiments on 45 language pairs from TED and calculate the Spearman correlation score between MUV and BLEU scores.",
"We adopt the same and widely-used settings to avoid the effects of other attributes on BLEU scores, such as model hyper-parameters and training hyper-parameters.",
"We generate a sequence of vocabularies with incremental sizes via BPE.",
"All experiments use the same hyper-parameters.",
"Two-thirds of pairs show positive correlations as shown in Figure 2. The middle Spearman score is 0.4.",
"We believe that it is a good signal to show MUV matters.",
"Please refer to Section 5 for more dataset details and Appendix A for more implementation details.",
"Given MUV, we have two natural choices to get the final vocabulary: search and learning.",
"In the search-based direction, we can combine MUV with widely-used vocabularization solutions.",
"For example, the optimal vocabularies can be obtained by enumerating all candidate vocabularies generated by BPE.",
"While being simple and effective, it is not a self-sufficient approach.",
"Furthermore, it still requires a lot of time to generate vocabularies and calculate MUV.",
"To address these problems, we further explore a learning-based solution VOLT for more vocabulary possibilities.",
"We empirically compare MUV-Search and VOLT in Section 5.",
"This section describes the details of the proposed approach.",
"We first show the general idea of VOLT in Section 4.1, then describe the optimal transport solution in Section 4.2, followed by the implementation details in Section 4.3.",
"https://www.statstutor.ac.uk/ resources/uploaded/spearmans.pdf 4.1 Overview We formulate vocabulary construction as a discrete optimization problem whose target is to find the vocabulary with the highest MUV according to Eq.",
"1. However, the vocabulary is discrete and such discrete search space is too large to traverse, which makes the discrete optimization intractable.",
"In this paper, we simplify the original discrete optimization problem by searching for the optimal vocabulary from vocabularies with fixed sizes.",
"Intuitively, MUV is the first derivative of entropy according to the vocabulary size (Eq. 1), and we introduce an auxiliary variable S ( S is an incremental integer sequence) to approximate the computation by only computing MUV between vocabulary sizes as adjacent integers in S .",
"Formally, S = { i, 2 i, ..., ( t 1) i, } where each timestep t represents a set of vocabularies with the number up to S [ t ] .",
"For any vocabulary, its MUV score can be calculated based on a vocabulary from its previous timestep.",
"With sequence S , the target to find the optimal vocabulary v ( t ) with the highest MUV can be formulated as: arg max v ( t 1) VS [ t 1] ,v ( t ) VS [ t ] M v ( t ) = arg max v ( t 1) VS [ t 1] ,v ( t ) VS [ t ] 1 i (cid:2) H v ( t ) H v ( t 1) (cid:3) , where VS [ t 1] and VS [ t ] are two sets containing all vocabularies with upper bound of size S [ t 1] and S [ t ] .",
"Due to exponential search space, we propose to optimize its lower bound: arg max t 1 i (cid:2) max v ( t ) VS [ t ] H v ( t ) max v ( t 1) VS [ t 1] H v ( t 1) (cid:3) .",
"where i means the size difference between t 1 vocabulary and t vocabulary.",
"MUV requires the size difference as a denominator.",
"Based on this equation, the whole solution is split into two steps: 1) searching for the optimal vocabulary with the highest entropy at each timestep t ; 2) enumerating all timesteps and outputing the vocabulary corresponding to the time step satisfying Eq.",
"3. The first step of our approach is to search for the vocabulary with the highest entropy from VS [ t ] .",
"Formally, the goal is to find a vocabulary v ( t ) such that entropy is maximized, arg max v ( t ) VS [ t ] 1 l v ( t ) (cid:88) i v ( t ) P ( i ) log P ( i ) , (4) a b c ab bc ac abc a 200 160 0 0 0 0 40 0 0 b 100 0 100 0 0 0 0 0 0 c 100 0 0 60 0 0 40 0 0 # chr a 200 b 100 c 100 Transport Matrices Char Vocab # tok a 160 b 100 c 60 ac 40 Enumerating possible compositions Finding the optimal composition Token Vocab Corpus Figure 3: An illustration of vocabulary construction from a transport view.",
"where l v is the average length for tokens in v ( t ) , P ( i ) is the probability of token i .",
"However, notice that this problem is in general intractable due to the extensive vocabulary size.",
"Therefore, we instead propose a relaxation in the formulation of discrete optimal transport, which can then be solved efficiently via the Sinkhorn algorithm (Cu-turi, 2013).",
"Intuitively, we can imagine vocabulary construction as a transport process that transports chars into token candidates with the number up to S [ t ] .",
"As shown in Figure 3, the number of chars is fixed, and not all token candidates can get enough chars.",
"Each transport matrix can build a vocabulary by collecting tokens with chars.",
"Different transport matrices bring different transport costs.",
"The target of optimal transport is to find a transport matrix to minimize the transfer cost, i.e., negative entropy in our setting.",
"Given a set of vocabularies VS [ t ] , we want to find the vocabulary with the highest entropy.",
"Consequently, the objective function in Eq.",
"4 becomes min v VS [ t ] 1 l v (cid:88) i v P ( i ) log P ( i ) , s.t. P ( i ) = Token ( i ) (cid:80) i v Token ( i ) , l v = (cid:80) i v len ( i ) | v | .",
"Token ( i ) is the frequency of token i in the vocabulary v .",
"len ( i ) represents the length of token i .",
"Notice that both the distribution P ( i ) and the average length l v depend on the choice of v .",
"Objective Approximation To obtain a tractable lower bound of entropy, it suffices to give a tractable upper bound of the above objective function.",
"We adopt the merging rules to segment raw text similar with BPE where two consecutive tokens will be merged into one if the merged one is in the vocabulary.",
"To this end, let T VS [ t ] be the vocabulary containing top S [ t ] most frequent tokens, C be the set of chars and | T | , | C | be their sizes respectively.",
"Since T is an element of VS [ t ] , clearly, we have min v VS [ t ] 1 l v (cid:88) i v P ( i ) log P ( i ) 1 l T (cid:88) i TP ( i ) log P ( i ) .",
"Here we start from the upper bound of the above objective function, that is 1 l T (cid:80) i TP ( i ) log P ( i ) and then search for a refined token set from T .",
"In this way, we reduce the search space into the subsets of T .",
"Let P ( i, j ) be the joint probability distribution of the tokens and chars that we want to learn.",
"Then we have (cid:88) i TP ( i ) log P ( i ) = (cid:88) i T (cid:88) j CP ( i, j ) log P ( i ) = (cid:88) i T (cid:88) j CP ( i, j ) log P ( i, j ) (cid:124) (cid:123)(cid:122) (cid:125) L 1 + (cid:88) i T (cid:88) j CP ( i, j )( log P ( j | i )) (cid:124) (cid:123)(cid:122) (cid:125) L 2 .",
"(6) The details of proof can be found at Appendix C. Since L 1 is nothing but the negative entropy of the joint probability distribution P ( i, j ) , we shall denote it as H ( P ) .",
"In this way, Eq.",
"6 can be reformulated as the following objective function which has the same form as the objective function in optimal transport: min P R m n (cid:104) P , D (cid:105) H ( P ) .",
"Setup of OT From the perspective of optimal transport, P can be regarded as the transport matrix, and D can be regarded as the distance matrix.",
"Intuitively, optimal transport is about finding the best transporting mass from the char distribution to the target token distribution with the minimum work defined by (cid:104) P , D (cid:105) .",
"To verify the validness of transport solutions, we add the following constraints.",
"First, to avoid invalid transport between char j and token i , we set the distance to + if the target token i does not contain the char j .",
"Otherwise, we use 1 len ( i ) to estimate P ( j | i ) where len ( i ) is the length of token i .",
"Formally, the distance matrix is defined as D ( i, j ) = (cid:26) log P ( j | i ) = + , if j / i log P ( j | i ) = log 1 len ( i ) , otherwise Furthermore, the number of chars is fixed and we set the sum of each row in the transport matrix to the probability of char j .",
"The upper bound of the char requirements for each token is fixed and we set the sum of each column in the transport matrix to the probablity of token j .",
"Formally, the constraints are defined as: | (cid:88) j P ( i, j ) P ( i ) | (cid:15), (9) and (cid:88) i P ( i, j ) = P ( j ) .",
"Given transport matrix P and distance matrix D , the final objective can be formulated as: arg min P R | C || T | H ( P ) + (cid:104) P , D (cid:105) , s.t. (cid:88) i P ( i, j ) = P ( j ) , | (cid:88) j P ( i, j ) P ( i ) | (cid:15), with small (cid:15) > 0 .",
"Figure 4 shows the details of optimal transport solution.",
"Strictly speaking, this is an unbalanced entropy regularized optimal transport problem.",
"Nonetheless, we can still use the generalized Sinkhorn algorithm to efficiently find the target vocabulary as detailed in Section 4.6 of Peyre and Cuturi (2019).",
"The algorithm details are shown in Algorithm 1. At each timestep t , we can generate a new vocabulary associated with entropy scores based on the transport matrix P .",
"Finally, we collect these vocabularies associated with entropy scores, and output the vocabulary satisfying Eq.",
"3. 4.3 Implementation Algorithm 1 lists the process of VOLT.",
"First, we rank all token candidates according to their frequencies.",
"For simplification, we adopt BPE-generated tokens (e.g. BPE-100K) as the token candidates.",
"It is important to note that any segmentation algorithms can be used to initialize token candidates.",
"Experiments show that different initialization approaches result in similar results.",
"We simply adopt BPE-100K for bilingual translation and BPE-300K for multilingual translation in this work.",
"All token candidates with their probabilities are then used to initialize L in Algorithm 1. Figure 4: The details of optimal transport.",
"The size of the incremental integer sequence S is a hyper-parameter and set to (1 K, ..., 10 K ) for bilingual translation, (40 K, ..., 160 K ) for multilingual settings.",
"At each timestep, we can get the vocabulary with the maximum entropy based on the transport matrix.",
"It is inevitable to handle illegal transport case due to relaxed constraints.",
"We remove tokens with distributed chars less than 0 .",
"001 token frequencies.",
"Finally, we enumerate all timesteps and select the vocabulary satisfying Eq.",
"3 as the final vocabulary.",
"After generating the vocabulary, VOLT uses a greedy strategy to encode text similar to BPE.",
"To encode text, it first splits sentences into character-level tokens.",
"Then, we merge two consecutive tokens into one token if the merged one is in the vocabulary.",
"This process keeps running until no tokens can be merged.",
"Out-of-vocabulary tokens will be split into smaller tokens.",
"To evaluate the performance of VOLT, we conduct experiments on three datasets, including WMT-14 English-German translation, TED bilingual translation, and TED multilingual translation.",
"We run experiments on the following machine translation datasets.",
"See Appendix B for more model and training details.",
"dataset is processed following Ott et al. (2018).",
"We choose newstest14 as the test set.",
"2. TED bilingual dataset: We include two settings: X-to-English translation and English-to-X translation.",
"We choose 12 language-pairs with the most training data.",
"We use the language code according to ISO-639-1 standard .",
"TED data is provided by Qi et al. (2018).",
"3. TED multilingual dataset: We conduct experiments with 52 language pairs on a many-to-English setting.",
"The network is trained on all language pairs.",
"We adopt the same preprocessing pipeline in the WMT-14 En-De dataset.",
"Vocabularies Searched by VOLT are Better than Widely-used Vocabularies on Bilingual MT Settings.",
"Ding et al. (2019) gather 42 papers that have been accepted by the research track of Conference of Machine Translation (WMT) through 2017 and 2018.",
"Among these papers, the authors find that 30K-40K is the most popular range for the number of BPE merge actions.",
"Following this work, we first compare our methods with dominant BPE-30K.",
"The results are listed in Table 1. As we can see, the vocabularies searched by VOLT achieve higher BLEU scores with large http://www.lingoes.net/en/translator/langcode.htm Table 1: Comparison between vocabularies search by VOLT and widely-used BPE vocabularies.",
"size reduction.",
"The promising results demonstrate that VOLT is a practical approach that can find a well-performing vocabulary with higher BLEU and smaller size.",
"Vocabularies Searched by VOLT are on Par with Heuristically-searched Vocabularies on Low-resource Datasets.",
"Ding et al. (2019) study how the size of BPE affects the model performance in low-resource settings.",
"They conduct experiments on four language pairs and find that smaller vocabularies are more suitable for low-resource datasets.",
"For Transformer architectures, the optimal vocabulary size is less than 4K, around up to 2K merge actions.",
"We compare VOLT and BPE-1K on an X-to-English bilingual setting.",
"The results are shown in Table 2. We can see that VOLT can find a good vocabulary on par with heuristically searched vocabularies in terms of BLEU scores.",
"Note that BPE-1K is selected based on plenty of experiments.",
"In contrast, VOLT only requires one trials for evaluation and only takes 0.5 CPU hours plus 30 GPU hours to find the optimal vocabulary.",
"VOLT Works Well on Multilingual MT Settings.",
"We conduct a multilingual experiment.",
"These languages come from multiple language families and have diverse characters.",
"We compare VOLT with BPE-60K, the most popular setting in multilingual translation tasks.",
"Table 3 lists the full results.",
"The size of the searched vocabulary is around 110K.",
"As we can see, VOLT achieves better BLEU scores on most pairs.",
"VOLT is a Green Vocabularization Solution.",
"One advantage of VOLT lies in its low resource consumption.",
"We compare VOLT with BPE-Search, a method to select the best one from a BPE-generated vocabulary set based on their BLEU scores.",
"The results are shown in Table 4. In BPE-Search, we first define a vocabulary set including BPE-1K, BPE-2K, BPE-3K, BPE-4K, BPE-5K, BPE-6K, BPE-7K, BPE-8K, BPE-9K, BPE-10K, BPE-20K, BPE-30K.",
"Then, we run full experiments to select the best vocabulary.",
"Table 4 demonstrates that VOLT is a lightweight solution that can find a competitive vocabulary within 0.5 hours on a single CPU, compared to BPE-Search that takes hundreds of GPU hours.",
"The cost of BPE-Search is the sum of the training time on all vocabularies.",
"Furthermore, we also compare VOLT with MUV-Search as introduced in Section 3. MUV-Search is a method that combines MUV and popular approaches by selecting the vocabulary with the highest MUV as the final vocabulary.",
"We generate a sequence of BPE vocabularies with incremental size 1K, 2K, 3K, 4K, 5K, 6K, 7K, 8K, 9K, 10K, 20K.",
"For t -th vocabulary v ( t ) , its MUV score is calculated according to v ( t ) and v ( t 1) .",
"We enumerate all vocabularies and select the vocabulary with the highest MUV as the final vocabulary.",
"The comparison between VOLT and MUV-Search is shown in Table 4. Although MUV-Search does not require downstream full-training, it still takes a lot of time to generate vocabularies and calculate MUV.",
"Among them, VOLT is the most efficient approach.",
"We conduct more experiments to answer the following questions: 1) can a baseline beat strong approaches with a better vocabulary; 2) can VOLT beat recent vocabulary solutions, like SentencePiece; 3) can VOLT work on diverse architectures?",
"dataset.",
"Table 5 shows surprisingly good results.",
"Compared to the approaches in the top block, VOLT achieves almost the best performance with a much smaller vocabulary.",
"These results demonstrate that a simple baseline can achieve good results with a well-defined vocabulary.",
"VOLT Beats SentencePiece and WordPiece.",
"SentencePiece and WordPiece are two variants of sub-word vocabularies.",
"We also compare our approach with them on WMT-14 En-De translation to evaluate the effectiveness of VOLT.",
"The middle block of Table 5 lists the results of Senten-Piece and WordPiece.",
"We implement these two approaches with the default settings.",
"We can observe that VOLT outperforms SentencePiece and WordPiece by a large margin, with over 1 BLEU improvements.",
"VOLT Works on Various Architectures.",
"This work mainly uses Transformer-big in experiments.",
"We are curious about whether VOLT works on other architectures.",
"We take WMT-14 En-De translation as an example and implement a Convolutional Seq2Seq model.",
"The network uses the default settings from Fairseq .",
"We set the maximum epochs to 100 and average the last five models as the final network for evaluation.",
"Table 6 demonstrates that vocabularies searched by VOLT also works on Convolutional Seq2Seq with competitive BLEU but much smaller size.",
"In this work, we verify the effectiveness of VOLT on architectures with standard sizes.",
"Since model capacity is also an important factor on BLEU scores, we recommend larger vocabularies associated with more embedding parameters for small architectures.",
"VOLT can Bring Slight Speedup During Training.",
"We evaluate the running time for VOLT vocabulary and BPE-30K on WMT En-De translation.",
"The model with VOLT-searched vocabulary (11.6k tokens) can process 133 sentences per second, while the model with BPE-30K (33.6k tokens) only executes 101 sentences per second.",
"All experiments run on the same environment (2 Tesla-V100-GPUs + 1 Gold-6130-CPU), with the same beam size for decoding.",
"The speedup mainly comes from larger batch size with reduced embedding parameters.",
"We also find that although VOLT reduces the Softmax computations, it does not sig-nificantly boost the Softmax running time due to optimized parallel computation in GPUs.",
"VOLT Vocabularies and BPE Vocabularies are Highly Overlapped.",
"For simplification, VOLT starts from BPE-segmented tokens.",
"We take WMT En-De as an example to see the difference between VOLT vocabulary and BPE vocabulary.",
"The size of VOLT vocabulary is around 9K and we adopt BPE-9K vocabulary for comparison.",
"We find that these two vocabularies are highly overlapped, especially for those high-frequency words.",
"In this work, we propose a new vocabulary search approach without trail training.",
"The whole framework starts from an informtaion-therotic understanding.",
"According to this understanding, we formulate vocabularization as a two-step discrete optimization objective and propose a principled optimal transport solution VOLT.",
"Experiments show that VOLT can effectively find a well-performing vocabulary in diverse settings.",
"We thank the anonymous reviewers, Demi Guo, for their helpful feedback.",
"Lei Li and Hao Zhou are corresponding authors."
] | [
"abstain",
"objective",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"result",
"objective",
"abstain",
"objective",
"abstain",
"other",
"other"
] |
[
"Mohammad Sadegh Rasooli Facebook AI Menlo Park, CA, USA [email protected]",
"Abstract",
"We describe a cross-lingual transfer method for dependency parsing that takes into account the problem of word order differences between source and target languages.",
"Our model only relies on the Bible, a considerably smaller parallel data than the commonly used parallel data in transfer methods.",
"We use the concatenation of projected trees from the Bible corpus, and the gold-standard treebanks in multiple source languages along with cross-lingual word representations.",
"We demonstrate that reordering the source treebanks before training on them for a target language improves the accuracy of languages outside the European language family.",
"Our experiments on 68 treebanks (38 languages) in the Universal Dependencies corpus achieve a high accuracy for all languages.",
"Among them, our experiments on 16 treebanks of 12 non-European languages achieve an average UAS absolute improvement of 3 .",
"3% over a state-of-the-art method.",
"There has recently been a great deal of interest in cross-lingual transfer of dependency parsers, for which a parser is trained for a target language of interest using treebanks in other languages.",
"Cross-lingual transfer can eliminate the need for the expensive and time-consuming task of treebank annotation for low-resource languages.",
"Approaches include annotation projection using parallel data sets (Hwa et al., 2005; Ganchev et al., 2009), direct model transfer through learning of a delexicalized model from other treebanks (Zeman and Resnik, 2008; Tackstrom et al., 2013), treebank translation (Tiedemann et al., 2014), using synthetic treebanks (Tiedemann and Agic, 2016; Wang and Eisner, 2016), using cross-lingual word representations (Tackstrom et al., 2012; Guo et al., 2016; Rasooli and Collins, 2017) and using cross-lingual dictionaries (Durrett et al., 2012).",
"Recent results from Rasooli and Collins (2017) have shown accuracies exceeding 80% on unlabeled attachment accuracy (UAS) for several European languages.",
"1 However non-European languages remain a significant challenge for cross-lingual transfer.",
"One hypothesis, which we investigate in this paper, is that word-order differences between languages are a significant challenge for cross-lingual transfer methods.",
"The main goal of our work is therefore to reorder gold-standard source treebanks to make those treebanks syntactically more similar to the target language of interest.",
"We use two different approaches for source treebank reordering: 1) reordering based on dominant dependency directions according to the projected dependencies, 2) learning a classifier on the alignment data.",
"We show that an ensemble of these methods with the baseline method leads to higher performance for the majority of datasets in our experiments.",
"We show particularly significant improvements for non-European languages.",
"2 The main contributions of this work are as follows: We propose two different syntactic reordering methods based on the dependencies projected using translation alignments.",
"The first model is based on the dominant dependency direction in the target language according to the projected dependencies.",
"The second model learns a reordering classifier from the small set of aligned sentences in the Bible parallel data.",
"1 Specifically, Table 9 of Rasooli and Collins (2017) shows 13 datasets, and 11 languages, with UAS scores of over 80%; all of these datasets are in European languages.",
"2 Specifically, performance of our method gives an improvement of at least 2.3% absolute scores in UAS on 11 datasets in 9 languagesCoptic, Basque, Chinese, Vietnamese, Turkish, Persian, Arabic, Indonesian Hebrewwith an average improvement of over 4.5% UAS.",
"We run an extensive set of experiments on 68 treebanks for 38 languages.",
"We show that by just using the Bible data, we are able to achieve significant improvements in non-European languages.",
"Our ensemble method is able to maintain a high accuracy in European languages.",
"We show that syntactic transfer methods can outperform a supervised model for cases in which the gold-standard treebank is very small.",
"This indicates the strength of these models when the language is truly low-resource.",
"Unlike most previous work for which a simple delexicalized model with gold part-of-speech tags are used, we use lexical features and automatic part-of-speech tags.",
"Our final model improves over two strong baselines, one with annotation projection and the other one inspired by the non-neural state-of-the-art model of Rasooli and Collins (2017).",
"Our final results improve the performance on non-European languages by an average UAS absolute improvement of 3 .",
"3% and LAS absolute improvement of 2 .",
"4% .",
"There has recently been a great deal of research on dependency parser transfer.",
"Early work on direct model transfer (Zeman and Resnik, 2008; McDonald et al., 2011; Cohen et al., 2011; Rosa and Zabokrtsky, 2015; Wang and Eisner, 2018a) considered learning a delexicalized parser from one or many source treebanks.",
"A number of papers (Naseem et al., 2012; Tackstrom et al., 2013; Zhang and Barzilay, 2015; Ammar et al., 2016; Wang and Eisner, 2017) have considered making use of topological features to overcome the problem of syntactic differences across languages.",
"Our work instead reorders the source treebanks to make them similar to the target language before training on the source treebanks.",
"Agic (2017) use part-of-speech sequence similarity between the source and target language for selecting the source sentences in a direct transfer approach.",
"Ponti et al. (2018) preprocess source trees to increase the isomorphy between the source and the target language dependency trees.",
"They apply their method on a simple delexicalized model and their accuracy on the small set of languages that they have tried is significantly worse than ours in all languages.",
"The recent work by Wang and Eisner (2018b) reorders delexicalized treebanks of part-of-speech sequences in order to make it more similar to the target language of interest.",
"The latter work is similar to our work in terms of using reordering.",
"Our work is more sophisticated by using a full-fledged parsing model with automatic part-of-speech tags and every accessible dataset such as projected trees and multiple source treebanks as well as cross-lingual word embeddings for all languages.",
"Previous work (Tackstrom et al., 2012; Duong et al., 2015; Guo et al., 2015, 2016; Ammar et al., 2016) has considered using cross-lingual word representations.",
"A number of authors (Durrett et al., 2012; Rasooli and Collins, 2017) have used cross-lingual dictionaries.",
"We also make use of cross-lingual word representations and dictionaries in this paper.",
"We use the automatically extracted dictionaries from the Bible to translate words in the source treebanks to the target language.",
"One other line of research in the delexicalized transfer approach is creating a synthetic treebank (Tiedemann and Agic, 2016; Wang and Eisner, 2016, 2018b).",
"Annotation projection (Hwa et al., 2005; Ganchev et al., 2009; McDonald et al., 2011; Ma and Xia, 2014; Rasooli and Collins, 2015; Lacroix et al., 2016; Agic et al., 2016) is another approach in parser transfer.",
"In this approach, supervised dependencies are projected through word alignments and then used as training data.",
"Similar to previous work (Rasooli and Collins, 2017), we make use of a combination of projected dependencies from annotation projection in addition to partially translated source treebanks.",
"One other approach is treebank translation (Tiedemann et al., 2014) for which a statistical machine translation system is used to translate source treebanks to the target language.",
"These models need a large amount of parallel data for having an accurate translation system.",
"Using the Bible data goes back to the work of Diab and Finch (2000) and Yarowsky et al. (2001).",
"Recently there has been more interest in using the Bible data for different tasks, due to its availability for many languages (Christodouloupoulos and Steedman, 2014; Agic et al., 2015, 2016; Rasooli and Collins, 2017).",
"Previous work ( Ostling and Tiedemann, 2017) has shown that the size of the Bible dataset does not provide a reliable machine translation model.",
"Previous work in the context of machine translation (Bisazza and Federico, 2016; Daiber et al., 2016) presumes the availability of a parallel data that is often much larger than the Bible data.",
"Our model trains on the concatenation of projected dependencies P and all of the source treebanks T 1 . . . T k .",
"The projected data is from the set of projected dependencies for which at least 80% of words have projected dependencies or there is a span of length l 5 such that all words in that span achieve a projected dependency.",
"This is the same as the definition of dense structures P 80 P 5 by Rasooli and Collins (2015).",
"We use our reimplementation of the state-of-the-art neural biaffine graph-based parser of Dozat and Manning (2016) 3 .",
"Because many words in the projected dependencies do not have a head assignment, the parser ignores words without heads during training.",
"Inspired by Rasooli and Collins (2017), we replace every word in the source treebanks with its most frequent aligned translation word from the Bible data in the target language.",
"If that word does not appear in the Bible, we use the original word.",
"That way, we have a code-switched data for which some of the words are being translated.",
"In addition to fine-tuning the word embeddings, we use the fixed pre-trained cross-lingual word embeddings using the training approach of Rasooli and Collins (2017) using the Wikipedia data and the Bible dictionaries.",
"Before making use of the source treebanks T 1 . . . T k in the training data, we reorder each tree in the source treebanks to be syntactically more similar to the word order of the target language.",
"In general, for a head h that has c modifiers m 1 . . . m c , we decide to put each of the dependents m i on the left or right of the head h .",
"After placing them in the correct side of the head, the order in the original source sentence is preserved.",
"Figure 1 shows a real example of an English tree that is reordered for the sake of Persian as the target language.",
"Here we see that we have a verb-final sentence, with nominal modifiers following 3 https://github.com/rasoolims/ universal-parser I had a routine surgery for an ingrown toenail .",
"the head noun.",
"If one aims to translate this English sentence word by word, the reordered sentence gives a very good translation without any change in the sentence.",
"As mentioned earlier, we use two different approaches for source treebank reordering: 1) reordering based on dominant dependency directions according to the projected dependencies, 2) learning a classifier on the alignment data.",
"We next describe these two methods.",
"The main goal of this model is to reorder source dependencies based on dominant dependency directions in the target language.",
"We extract dominant dependency directions according to the projected dependencies P from the alignment data, and use the information for reordering source treebanks.",
"Let the tuple (cid:104) i, m, h, r (cid:105) show the dependency of the m 'th word in the i 'th projected sentence for which the h 'th word is the parent with the dependency label r .",
"(cid:104) i, m, NULL , NULL (cid:105) shows an unknown dependency for the m 'th word: this occurs when some of the words in the target sentence do not achieve a projected dependency.",
"We use the notations h ( i, m ) and r ( i, m ) to show the head index and dependency label of the m 'th word in the i 'th sentence.",
"Definition 2 Dependency direction proportion: Dependency direction proportion of each dependency label l with direction d { 1 , 1 } is defined as:",
"Definition 3 Dominant dependency direction: For each dependency label l , we define the dominant dependency direction ( P ) ( l ) = d if ( P ) ( l, d ) > 0 .",
"75 .",
"In cases where there is no dominant dependency direction, ( P ) ( l ) = 0 .",
"We consider the following dependency labels for extracting dominant dependency direction information: nsubj, obj, iobj, csubj, ccomp, xcomp, obl, vocative, expl, dislocated, advcl, advmod, aux, cop, nmod, appos, nummod, acl, amod.",
"We find the direction of other dependency relations, such as most of the function word dependencies and other non-core dependencies such as conjunction, not following a fixed pattern in the Universal Dependencies corpus.",
"Reordering condition Given a set of projections P , we calculate the dominant dependency direction information for the projections ( P ) .",
"Similar to the projected dependencies, we extract supervised dominant dependency directions from the gold-standard source treebank D : ( D ) .",
"When we encounter a gold-standard dependency relation (cid:104) i, m, h, r (cid:105) in a source treebank D , we change the direction if the following condition holds: ( D ) ( r ) (cid:54) = ( P ) ( r ) and ( P ) ( r ) = d ( i, m ) In other words, if the source and target languages do not have the same dominant dependency direction for r and the dominant direction of the target language is the reverse of the current direction, we change the direction of that dependency.",
"Reordering multiple dependencies in a gold standard tree then results in a reordering of the full tree, as for example in the transformation from Figure 1a to Figure 1b.",
"We now describe our approach for learning a reordering classifier for a target language using the alignment data.",
"Unlike the first model for which we learn concrete rules, this model learns a reordering classifier from automatically aligned data.",
"This model has two steps; the first step prepares the training data from the automatically aligned parallel data, and the second step learns a classifier from the training data.",
"The goal of this step is to create training data for the reordering classifier.",
"This data is extracted from the concatenation of parallel data from all source languages translated to the target language.",
"Given a parallel dataset ( e ( i ) , f ( i ) ) for i = 1 . . . n that contains pairs of source and target sentences e ( i ) and f ( i ) , the following steps are applied to create training data:",
"1. Extracting reordering mappings from alignments: We first extract intersected word alignments for each source-target sentence pair.",
"This is done by running the Giza++ alignments (Och and Ney, 2003) in both directions.",
"We ignore sentence pairs that more than half of the source words do not get alignment.",
"We create a new mapping ( i ) = ( i ) 1 . . . ( i ) s i that maps each index 1 j s i in the original source sentence to a unique index 1 ( i ) j s i in the reordered sentence.",
"2. Parsing source sentences: We parse each source sentence using the supervised parser of the source language.",
"We use the mapping ( i ) to come up with a reordered tree for each sentence.",
"In cases for which the number of non-projective arcs in the projected tree increase compared to the original tree, we do not use the sentence in the final training data.",
"3. Extracting classifier instances: We create a training instance for every modifier word (cid:104) i, m, h, r (cid:105) .",
"The decision about the direction of each dependency can be made based on the following condition: d ( i, m ) = (cid:40) 1 if ( i ) h > ( i ) m 1 otherwise The LORD is a man of war : the LORD is his name .",
"Figure 2 shows an example for the data preparation step.",
"As shown in the figure, the new directions for the English words are decided according to the Persian alignments.",
"The reordering classifier decides about the new direction of each dependency according to the recurrent representation of the head and dependent words.",
"For a source sentence e ( i ) = e ( i ) 1 . . . e ( i ) s i that belongs to a source language L , we first obtain its recurrent representation ( i ) = ( i ) 1 . . . ( i ) s i by running a deep (3 layers) bi-directional LSTM (Hochreiter and Schmidhu-ber, 1997), where ( i ) j R d h .",
"For every dependency tuple (cid:104) i, m, h, r (cid:105) , we use a multi-layer Perceptron (MLP) to decide about the new order dir { 1 , 1 } of the m 'th word with respect to its head h : p ( dir | i, m, h, r ) = softmax dir ( W ( i, m, h, r )) where W R 2 d and ( i, m, h, r ) R d is as follows: ( i, m, h, r ) = relu ( Hq ( i, m, h, r ) + B ) where relu is the rectified linear unit activation (Nair and Hinton, 2010), H R d d q , B R d , and q ( i, m, h, r ) R d q is as follows: q ( i, m, h, r ) = [ ( i ) m ; ( i ) h ; R [ r ]; [ I ( h > m )]; L [ L ]] I had a routine surgery for an ingrown nail .",
"where ( i ) m and ( i ) h are the recurrent representations for the modifier and head words respectively, R is the dependency relation embedding dictionary that embeds every dependency relation to a R d r vector, is the direction embedding for the original position of the head with respect to its head and embeds each direction to a 2-dimensional vector, and L is the language embedding dictionary that embeds the source language id L to a R d L vector.",
"The input to the recurrent layer is the concatenation of two input vectors.",
"The first vector is the sum of the fixed pre-trained cross-lingual embeddings, and randomly initialized word vector.",
"The second vector is the part-of-speech tag embeddings.",
"Figure 3 shows a graphical depiction of the two reordering models that we use in this work.",
"Datasets and Tools We use 68 datasets from 38 languages in the Universal Dependencies corpus version 2.0 (Nivre et al., 2017).",
"The languages are Arabic (ar), Bulgarian (bg), Coptic (cop), Czech (cs), Danish (da), German (de), Greek (el), English (en), Spanish (es), Estonian (et), Basque (eu), Persian (fa), Finnish (fi), French (fr), Hebrew (he), Hindi (hi), Croatian (hr), Hungarian (hu), Indonesian (id), Italian (it), Japanese (ja), Korean (ko), Latin (la), Lithuanian (lt), Latvian (lv), Dutch (nl), Norwegian (no), Polish (pl), Portuguese (pt), Romanian (ro), Russian (ru), Slovak (sk), Slovene (sl), Swedish (sv), Turkish (tr), Ukrainian (uk), Vietnamese (vi), and Chinese (zh).",
"We use the Bible data from Christodouloupoulos and Steedman (2014) for the 38 languages.",
"We extract word alignments using Giza++ default model (Och and Ney, 2003).",
"Following Rasooli and Collins (2015), we obtain intersected alignments and apply soft POS consistency to filter potentially incorrect alignments.",
"We use the Wikipedia dump data to extract monolingual data for the languages in order to train monolingual embeddings.",
"We follow the method of Rasooli and Collins (2017) to use the extracted dictionaries from the Bible and monolingual text from Wikipedia to create cross-lingual word embeddings.",
"We use the UDPipe pretrained models (Straka and Strakova, 2017) to tokenize Wikipedia, and a reimplementation of the Perceptron tagger of Collins (2002) 4 to achieve automatic POS tags trained on the training data of the Universal Dependencies corpus (Nivre et al., 2017).",
"We use word2vec (Mikolov et al., 2013) 5 to achieve embedding vectors both in monolingual and cross-lingual settings.",
"Supervised Parsing Models We trained our supervised models on the union of all datasets in a language to obtain a supervised model for each language.",
"It is worth noting that there are two major changes that we make to the neural parser of Dozat and Manning (2016) in our implementation 6 using the Dynet library (Neubig et al., 2017): first, we add a one-layer character BiLSTM to represent the character information for each word.",
"The final character representation is obtained by concatenating the forward representation of the last character and the backward representation of the first character.",
"The concatenated vector is summed with the randomly initialized as well as fixed pre-trained cross-lingual word embedding vectors.",
"Second, inspired by Weiss et al. (2015), we maintain the moving average parameters to obtain more robust parameters at decoding time.",
"We excluded the following languages from the set of source languages for annotation projection due to their low supervised accuracy: Estonian, Hungarian, Korean, Latin, Lithuanian, Latvian, Turkish, Ukrainian, Vietnamese, and Chinese.",
"Baseline Transfer Models We use two baseline models: 1) Annotation projection: This model only trains on the projected dependencies.",
"2) Annotation projection + direct transfer: To speed up training, we sample at most thousand sentences from each treebank, comprising a training data of about 37K sentences.",
"We noticed that our reordering models perform better in non-European languages, and perform slightly worse in European languages.",
"We use the following ensemble model to make use of all of the three models (annotation projection + direct transfer, and the two reordering models), to make sure that we always obtain an accurate parser.",
"The ensemble model is as follows: given three output trees for the i 'th sentence (cid:104) i j , m, h j , r j (cid:105) for j = 1 , 2 , 3 in the target language L , where the first tuple ( j = 1 ) belongs to the baseline model, the second ( j = 2 ) and third ( j = 3 ) belong to the two reordering models, we weight each dependency edge with respect to the following conditions: ( m, h, r ) = z ( m, h, r ) 3 (cid:88) j =1 c ( j, L ) I ( (cid:104) i j , m, h, r (cid:105) ) where c ( j, L ) is a coefficient that puts more weight on the first or the other two outputs depending on the target language family: c ( j, L ) = 2 if j = 1 & L is European 2 if j > 1 & L is not European 1 otherwise and z ( m, h, r ) is a simple weighting depending on the dominant order information: z ( m, h, r ) = 1 if dir ( (cid:104) m, h (cid:105) ) = ( P ) ( r ) 3 if dir ( (cid:104) m, h (cid:105) ) = ( P ) ( r ) 2 otherwise ( ( P ) ( r ) = 0) Variable Notation Size Word embedding d w 100 POS embedding d p 100 Bi-LSTM d h 400 Dep.",
"The above coefficients are modestly tuned on the Persian language as our development language.",
"We have not seen any significant change in modifying the numbers: instead, the fact that an arc with a dominant dependency direction is regarded as a more valuable arc, and the baseline should have more effect in the European languages suffices for the ensemble model.",
"We run the Eisner first-order graph-based algorithm (Eisner, 1996) on top of the edge weights to extract the best possible tree.",
"We run all of the transfer models with 4000 mini-batches, in which each mini-batch contains approximately 5000 tokens.",
"We follow the same parameters as in Dozat and Manning (2016) and use a dimension of 100 for character embeddings.",
"For the reordering classifier, we use the Adam algorithm (Kingma and Ba, 2014) with default parameters to optimize the log-likelihood objective.",
"We filter the alignment data to keep only those sentences for which at least half of the source words have an alignment.",
"We randomly choose 1% of the reordering data as our heldout data for deciding when to stop training the reordering models.",
"Table 1 shows the parameter values that we use in the reordering classifier.",
"Table 2 shows the results on the Universal Dependencies corpus (Nivre et al., 2017).",
"As shown in the table, the algorithm based on dominant dependency directions improves the accuracy on most of the non-European languages and performs slightly worse than the baseline model in the European languages.",
"The ensemble model, in spite of its simplicity, improves over the baseline in most of the languages, leading to an average UAS improvement of 0 .",
"9 for all languages and 3 .",
"3 for non-European languages.",
"This improvement is very significant in many of the non-European languages; for example, from an LAS of 37 .",
"6 to 52 .",
"7 in Coptic, from a UAS of 44 .",
"9 to 53 .",
"7 in Basque, from a UAS of 40 .",
"6 to 47 .",
"0 in Chinese.",
"Our model also outperforms the supervised models in Ukrainian and Latvian.",
"That is an interesting indicator that for cases that the training data is very small for a language (37 sentences for Ukrainian, and 153 sentences for Latvian), our transfer approach outperforms the supervised model.",
"In this section, we briefly describe our analysis based on the results in the ensemble model and the baseline.",
"For some languages such as Coptic, the number of dense projected dependencies is too small (two trees) such that the parser gives a worse learned model than a random baseline.",
"For some other languages, such as Norwegian and Spanish, this number is too high (more than twenty thousand trees), such that the baseline model performs very well.",
"The dominant dependency direction model generally performs better than the classifier.",
"Our manual investigation shows that the classifier kept many of the dependency directions unchanged, while the dominant dependency direction model changed more directions.",
"Therefore, the dominant direction model gives a higher recall with the expense of losing some precision.",
"The training data for the reordering classifier is very noisy due to wrong alignments.",
"We believe that the dominant direction model, besides its simplicity, is a more robust classifier for reordering, though the classifier is helpful in an ensemble setting.",
"Our detailed analysis show that we are able to improve the head dependency relation for the three most important head POS tags in the dependency grammar.",
"We see that this improvement is more consistent for all non-European languages.",
"Table 3 shows the differences in parsing f-score of dependency relations for adjectives, nouns and verbs as the head.",
"As we see in the Table, we are able to improve the head dependency relation for the three most important head POS tags in the dependency grammar.",
"We see that this improvement is more consistent for all non-European languages.",
"We skip the details of those analysis due to space Dataset Baselines Reordering Supervised Projection Direct+Proj Dominant Classifier Ensemble Difference UAS LAS UAS LAS UAS LAS UAS LAS UAS LAS UAS LAS UAS LAS Coptic 2.0 0.4 58.5 37.6 69.1 52.7 65.5 50.9 69.6 52.7 11.1 15.1 86.9 80.1 Basque 39.5 22.0 44.9 29.0 53.7 34.0 48.6 32.2 53.7 34.4 8.8 5.4 81.9 75.9 Chinese 23.6 10.8 40.6 17.8 47.3 25.4 45.4 23.5 47.0 25.6 6.4 7.8 81.1 74.8 Vietnamese 44.6 26.8 51.2 33.6 55.3 34.5 50.4 34.2 55.1 34.5 4.0 0.9 66.2 56.7 Turkish pud 44.7 19.9 46.6 24.5 50.3 26.7 42.6 22.0 49.9 26.3 3.4 1.8 56.7 31.7 Persian 54.4 46.2 61.8 53.0 64.3 54.7 63.0 53.4 65.1 55.4 3.3 2.4 87.8 83.6 Arabic pud 60.3 44.2 65.2 50.5 68.2 52.0 66.5 51.4 68.3 52.3 3.2 1.8 71.9 58.8 Indonesian 59.9 42.8 72.1 56.0 73.6 56.5 72.9 56.8 74.6 56.7 2.5 0.6 84.8 77.4 Turkish 44.6 23.9 46.6 29.3 48.9 30.6 44.9 26.6 49.0 30.0 2.4 0.7 64.2 52.5 Hebrew 63.1 46.9 70.4 55.4 72.4 54.9 71.6 55.7 72.7 55.4 2.3 0.0 88.2 82.4 Arabic 49.5 36.8 58.9 46.8 60.8 48.3 59.2 46.9 61.2 48.8 2.3 2.0 85.6 78.9 Japanese 54.8 38.9 65.2 46.5 65.9 46.8 64.1 44.8 66.6 46.8 1.4 0.3 94.5 92.7 Japanese pud 58.6 44.1 66.8 51.5 67.4 51.5 64.7 48.4 67.9 51.9 1.1 0.4 94.7 93.5 Korean 34.3 17.3 43.0 24.8 43.5 23.8 43.6 26.4 44.1 24.7 1.1 -0.2 76.2 69.9 Hindi pud 53.4 43.3 58.2 47.6 58.3 47.5 58.8 48.5 58.9 48.2 0.6 0.6 70.2 55.6 Lithuanian 60.6 42.5 66.6 49.5 63.7 46.8 64.6 46.0 67.2 49.9 0.6 0.4 54.8 40.0 Czech cac 33.9 14.8 76.2 66.9 76.3 66.7 75.2 65.8 76.7 67.4 0.5 0.6 92.1 88.3 Czech cltt 13.7 5.1 69.4 59.7 69.7 59.5 66.6 57.8 70.0 60.3 0.5 0.6 88.9 84.9 French partut 81.6 75.2 84.3 77.8 84.9 78.4 84.4 78.1 84.8 78.4 0.5 0.5 90.0 85.1 Croatian 70.6 59.9 79.4 69.9 79.3 69.5 77.9 67.7 79.9 70.1 0.5 0.2 86.8 80.4 Greek 62.3 47.2 75.9 63.9 75.4 63.1 74.7 62.5 76.4 64.1 0.4 0.2 88.0 84.4 Russian pud 75.7 65.8 81.1 72.2 80.9 72.2 79.9 70.7 81.5 72.7 0.4 0.5 86.5 74.1 German 71.4 62.3 75.4 67.1 75.6 67.1 75.5 66.4 75.8 67.3 0.4 0.2 85.9 81.2 French 80.2 72.9 83.0 75.9 82.9 75.9 83.3 75.9 83.4 76.2 0.4 0.3 90.4 86.9 Czech 33.9 14.5 74.6 65.3 74.1 64.4 73.0 63.7 75.0 65.8 0.4 0.5 92.5 89.1 Finnish pud 64.1 52.5 67.2 55.0 66.8 55.0 67.3 55.1 67.5 55.5 0.4 0.5 81.6 74.5 Dutch 59.2 48.2 68.5 55.2 69.6 55.9 68.3 54.4 68.8 55.4 0.4 0.1 83.5 76.6 Russian 68.9 59.4 75.1 63.9 75.4 64.1 74.5 63.4 75.5 64.3 0.4 0.4 85.7 77.9 Latin ittb 56.4 42.5 63.0 49.2 63.2 49.5 62.4 48.7 63.3 49.7 0.4 0.4 89.5 86.5 Norwegian nynorsk 72.5 62.9 76.4 68.1 76.5 68.0 76.1 67.3 76.8 68.4 0.3 0.3 91.3 88.8 Ukrainian 55.1 36.9 64.3 46.1 64.5 45.7 61.7 42.2 64.6 45.9 0.3 -0.2 43.3 22.1 Bulgarian 80.4 69.4 83.8 73.8 84.0 73.8 83.1 73.0 84.1 73.9 0.3 0.1 90.9 86.0 English lines 75.6 66.5 77.8 69.0 78.9 69.9 77.0 68.2 78.1 69.2 0.3 0.3 85.8 80.5 Finnish ftb 63.9 46.5 66.0 48.3 65.8 47.6 65.7 48.1 66.3 48.4 0.3 0.1 81.1 74.4 Russian syntagrus 69.4 57.5 73.9 62.2 73.8 61.8 73.2 61.2 74.2 62.3 0.3 0.1 91.3 88.3 Finnish 60.6 48.7 64.6 51.9 63.5 51.2 63.7 51.1 64.8 52.0 0.2 0.1 80.9 73.5 Hungarian 58.3 41.1 67.8 49.0 67.8 48.9 65.8 47.4 68.0 49.1 0.2 0.1 78.2 69.8 Czech pud 35.7 16.6 77.5 69.3 76.7 67.6 76.2 67.7 77.7 69.4 0.2 0.2 89.9 84.4 Dutch lassysmall 61.8 52.1 73.9 63.4 73.8 62.8 73.0 61.9 74.0 63.3 0.2 0.0 91.3 87.3 Slovenian sst 58.4 44.1 61.7 47.7 61.6 47.7 61.6 47.4 61.9 48.0 0.2 0.3 70.6 63.6 English pud 73.5 65.5 75.9 69.3 77.1 69.9 74.5 67.7 76.0 69.4 0.2 0.2 88.3 84.2 German pud 74.1 65.3 77.8 68.9 77.7 68.5 76.9 67.4 78.0 68.8 0.1 0.0 85.9 79.0 Polish 77.6 64.7 79.9 67.9 79.7 67.5 79.5 67.2 80.1 68.0 
0.1 0.1 89.4 83.3 Swedish lines 77.2 67.7 81.1 71.6 80.7 71.1 80.1 70.4 81.3 71.7 0.1 0.1 86.9 81.5 English 70.1 61.6 72.8 64.6 73.5 65.2 71.6 63.5 72.9 64.8 0.1 0.3 88.2 84.8 Spanish 78.5 68.0 83.1 73.8 83.2 73.8 82.3 72.8 83.2 73.9 0.1 0.1 89.3 83.9 Swedish 75.3 67.0 79.0 70.9 78.8 70.9 78.2 70.0 79.1 71.0 0.1 0.1 86.7 82.3 English partut 72.0 65.3 77.4 71.1 78.0 71.1 76.3 69.9 77.5 71.2 0.1 0.1 88.4 83.0 Swedish pud 75.9 67.4 80.5 72.1 80.2 72.0 79.2 71.0 80.6 72.1 0.1 0.0 84.0 77.6 Italian 81.3 74.4 85.0 79.0 85.4 79.5 84.4 78.1 85.1 79.1 0.1 0.0 92.1 89.5 Romanian 72.8 59.0 76.8 64.2 76.2 63.7 75.3 63.2 76.8 64.3 0.1 0.1 89.6 83.5 Estonian 63.1 40.8 66.7 46.0 65.6 45.8 65.5 45.2 66.7 46.1 0.1 0.2 71.6 60.7 Portuguese 62.6 50.7 84.1 76.9 83.7 76.6 83.4 76.2 84.2 77.1 0.0 0.2 90.6 85.6 Portuguese br 60.6 47.7 81.3 71.2 80.8 70.8 80.8 70.4 81.4 71.3 0.0 0.2 91.6 89.0 Norwegian bokmaal 78.0 70.5 80.5 73.2 80.6 73.4 79.7 72.1 80.5 73.2 0.0 0.0 92.1 89.7 French pud 81.0 72.8 83.7 75.7 84.2 76.2 83.3 75.2 83.7 75.7 0.0 0.0 89.1 83.8 Spanish pud 81.3 70.9 84.3 75.6 84.6 76.0 83.6 74.6 84.3 75.7 0.0 0.1 89.1 80.8 Latvian 59.0 43.6 63.3 47.2 62.1 45.6 60.7 44.7 63.3 47.0 0.0 -0.2 71.3 61.2 Italian pud 83.8 76.0 87.3 81.3 87.5 81.3 86.5 79.9 87.3 81.2 0.0 -0.1 91.9 88.4 French sequoia 79.1 73.0 82.2 76.4 81.6 75.8 81.9 76.0 82.2 76.4 0.0 0.0 90.4 86.7 Latin 49.2 33.6 53.9 36.2 51.3 33.3 54.0 35.5 53.9 35.4 0.0 -0.8 67.2 54.5 Slovene 76.4 67.6 82.1 74.2 81.3 73.0 81.3 73.3 82.0 74.2 -0.1 0.0 88.9 85.4 Spanish ancora 77.7 66.2 82.4 72.7 82.0 72.2 81.4 71.3 82.3 72.5 -0.1 -0.3 91.1 87.0 Danish 70.7 61.7 75.7 67.4 75.3 66.7 74.6 66.2 75.6 67.2 -0.1 -0.2 83.1 79.3 Portuguese pud 63.5 51.8 82.7 75.8 82.5 75.8 82.0 74.8 82.6 75.7 -0.2 -0.1 86.4 78.5 Latin proiel 59.2 46.2 61.5 47.4 60.9 47.1 60.2 46.0 61.3 47.2 -0.2 -0.2 80.9 75.4 Slovak 73.6 63.8 78.7 71.0 78.0 69.8 77.1 68.7 78.5 70.7 -0.2 -0.3 83.5 77.9 Hindi 58.7 47.2 63.7 50.0 62.3 49.0 62.6 49.3 62.7 49.4 -1.0 -0.6 94.2 90.4 Avg.",
"For a few number of languages such as Vietnamese, the best model, even though improves over a strong baseline, still lacks enough accuracy to be considered as a reliable parser in place of a supervised model.",
"We believe that more research on those language will address the mentioned problem.",
"Our current model relies on supervised part-of-speech tags.",
"Future work should study using transferred part-of-speech tags instead of supervised tags, leading to a much more realistic scenario for low-resource languages.",
"We have also calculated the POS trigram cosine similarity between the target language gold standard treeebanks, and the three source training datasets (original, and the two reordered datasets).",
"In all of the non-European languages, the cosine similarity of the reordered datasets improved with different values in the range of (0 . 002 , 0 . 02) .",
"For Czech, Portuguese, German, Greek, English, Romanian, Russian, and Slovak, both of the reordered datasets slightly decreased the trigram cosine similarity.",
"For other languages, the cosine similarity was roughly the same.",
"We have described a cross-lingual dependency transfer method that takes into account the problem of word order differences between the source and target languages.",
"We have shown that applying projection-driven reordering improves the accuracy of non-European languages while maintaining the high accuracies in European languages.",
"The focus of this paper is primarily of dependency parsing.",
"Future work should investigate the effect of our proposed reordering methods on truly low-resource machine translation.",
"We deeply thank the anonymous reviewers for their useful feedback and comments."
] | [
"abstain",
"abstain",
"method",
"method",
"method",
"objective",
"result",
"result",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"objective",
"objective",
"method",
"result",
"result",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"objective",
"method",
"result",
"result",
"abstain",
"abstain",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"method",
"method",
"other",
"other",
"abstain",
"method",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"other",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"abstain",
"objective",
"other"
] |
[
"Building curious machines that can answer as well as ask questions is an important challenge for AI.",
"The two tasks of question answering and question generation are usually tackled separately in the NLP literature.",
"At the same time, both require significant amounts of supervised data which is hard to obtain in many domains.",
"To alleviate these issues, we propose a self-training method for jointly learning to ask as well as answer questions, leveraging unlabeled text along with labeled question answer pairs for learning.",
"We evaluate our approach on four benchmark datasets: SQUAD , MS MARCO , WikiQA and TrecQA , and show significant improvements over a number of established baselines on both question answering and question generation tasks.",
"We also achieved new state-of-the-art results on two competitive answer sentence selection tasks: WikiQA and TrecQA .",
"Question Answering (QA) is a well-studied problem in NLP which focuses on answering questions using some structured or unstructured sources of knowledge.",
"Alongside question answering, there has also been some work on generating questions (QG) (Heilman, 2011; Du et al., 2017; Tang et al., 2017) which focuses on generating questions based on given sources of knowledge.",
"QA and QG are closely related 1 tasks.",
"However, NLP literature views the two as entirely separate tasks.",
"In this paper, we explore this relationship between the two tasks by jointly learning to generate as well as answer questions.",
"An improved ability to generate as well as answer questions will help us build curious machines that can interact with humans in a better manner.",
"Joint modeling of 1 We can think of QA and QG as inverse of each other.",
"QA and QG is useful as the two can be used in conjunction to generate novel questions from free text and then answers for the generated questions.",
"We use this idea to perform self-training (Nigam and Ghani, 2000) and leverage free text to augment the training of QA and QG models.",
"QA and QG models are typically trained on question answer pairs which are expensive to obtain in many domains.",
"However, it is cheaper to obtain large quantities of free text.",
"Our self-training procedure leverages unlabeled text to boost the quality of our QA and QG models.",
"This is achieved by a careful data augmentation procedure which uses pre-trained QA and QG models to generate additional labeled question answer pairs.",
"This additional data is then used to retrain our QA and QG models and the procedure is repeated.",
"This addition of synthetic labeled data needs to be performed carefully.",
"During self-training, typically the most confident samples are added to the training set (Zhu, 2005) in each iteration.",
"We use the performance of our QA and QG models as a proxy for estimating the confidence value of the questions.",
"We describe a suite of heuristics inspired from curriculum learning (Bengio et al., 2009) to select the questions to be generated and added to the training set at each epoch.",
"Curriculum learning is inspired from the incremental na-ture of human learning and orders training samples on the easiness scale so that easy samples can be introduced to the learning algorithm first and harder samples can be introduced successively.",
"We show that introducing questions in increasing order of hardness leads to improvements over a baseline that introduces questions randomly.",
"We use a seq2seq model with soft attention (Sutskever et al., 2014; Bahdanau et al., 2014) for QG and a neural model inspired from Attentive Reader (Hermann et al., 2015; Chen et al., 2016) for QA.",
"However, these can be any QA 629 and QG models.",
"We evaluate our approach on four datasets: SQUAD , MS MARCO , WikiQA and TrecQA .",
"We use a corpus of English Wikipedia as unlabeled text.",
"Our experiments show that the self-training approach leads to significant improvements over a number of established approaches in QA and QG on these benchmarks.",
"On the two answer sentence selection QA tasks: ( WikiQA and TrecQA ), we obtain state-of-the-art.",
"In this work, we focus on the task of machine comprehension where the goal is to answer a question q based on a passage p .",
"We model this as an answer sentence selection task i.e., given the set of sentences in the passage p , the task is to select the sentence s p that contains the answer a .",
"Treating QA as an answer sentence selection task is quite common in literature (e.g. see Yu et al., 2014).",
"We model QG as the task of transforming a sentence in the passage into a question.",
"Previous work in QG (Heilman and Smith, 2009) transforms text sentences into questions via some set of manually engineered rules.",
"However, we take an end-to-end neural approach.",
"Let D 0 be a labeled dataset of (passage, question, answer) triples where the answer is given by selecting a sentence in the passage.",
"We also assume access to unlabeled text T which will be used to augment the training of the two models.",
"Since we model QA as the task of selecting an answer sentence from the passage, we treat each sentence in the corresponding passage as a candidate answer for every input question.",
"We employ a neural network model inspired from the Attentive Reader framework proposed in Hermann et al. (2015); Chen et al. (2016).",
"We map all words in the vocabulary to corresponding d dimensional vector representations via an embedding matrix E R d V .",
"Thus, the input passage p can be denoted by the word sequence { p 1 , p 2 , . . . p | p | } and the question q can similarly be denoted by the word sequence { q 1 , q 2 , . . . q | q | } where each token p i R d and q i R d .",
"We use a bi-directional LSTM (Graves et al., 2005) with dropout regularization as in Zaremba et al. (2014) to encode contextual embeddings of each word in the passage: ~ h t = LSTM 1 (cid:16) p t , ~ h t 1 (cid:17) , ~ h t = LSTM 2 (cid:16) p t , ~ h t +1 (cid:17) The final contextual embeddings h t are given by concatenation of the forward and backward pass embeddings: h t = [ ~ h t ; ~ h t ] .",
"Similarly, we use another bi-directional LSTM and encode contextual embeddings of each word in the question.",
"Then, we use attention mechanism (Bahdanau et al., 2014) to compute the alignment distribution a based on the relevance among passage words and the question: a i = softmax (cid:0) q T Wh i (cid:1) .",
"The output vector o is a weighted combination of all contextual embeddings: o = P i a i h i .",
"Finally, the correct answer a among the set of candidate answers A is given by: a = arg max a A w T o .",
"NX Here, represents all the model parameters to be estimated.",
"We use a seq2seq model (Sutskever et al., 2014) with soft attention (Bahdanau et al., 2014) as our QG model.",
"The model transduces an input sequence x to an input sequence y .",
"Here, the input sequence is a sentence in the passage and the output sequence is a generated question.",
"Let x = { x 1 , x 2 , . . . , x | x | } , y = { y 1 , y 2 , . . . , y | y | } and Y be the space of all possible output questions.",
"Thus, we can represent the QG task as find-ing y Y such that: y = arg max y P ( y | x ) .",
"Here, P ( y | x ) is the conditional probability of a question sequence y given input sequence x .",
"Decoder: Following Sutskever et al. (2014), the conditional factorizes over token level predictions: P ( y | x ) = | y | Y t =1 P ( y t | y <t , x ) Here, y <t represents the subsequence of words generated prior to the time step t .",
"For the decoder, we again follow Sutskever et al. (2014): P ( y t | y <t , x ) = softmax (cid:16) W tanh (cid:16) W t [ h ( d ) t ; c t ] (cid:17)(cid:17) 630 Here, h ( d ) t is the decoder RNN state at time step t , and c t is the attention based encoding of the input sequence x at decoding time step t (described later).",
"Also W and W t are model parameters to be learned.",
"We use an LSTM with dropout (Zaremba et al., 2014) as the decoder RNN.",
"The LSTM generates the new decoder state h ( d ) t given the representation of previously generated word y t 1 obtained using a look-up dictionary, and the previous decoder state h ( d ) t 1 .",
"Encoder: We use a bi-directional LSTM (Graves et al., 2005) with attention mechanism as our sentence encoder.",
"We use two LSTM's: one that makes a forward pass in the sequence and another that makes a backward pass as in the QA model described earlier.",
"We use dropout regularization for LSTMs as in Zaremba et al. (2014) in our implementation.",
"The final context dependent token representation h ( e ) t is the concatenation of the forward and backward pass token representations: h ( e ) t = [ ~ h ( e ) t ; ~ h ( e ) t ] .",
"To obtain the final context dependent token representation c j at the decoding time step j , we take a weighted average over token representations: c ( d ) j = | x | P i =1 a ij h ( e ) i .",
"Following Bahdanau et al. (2014), the attention weights a ij are calculated by bilinear scoring followed by softmax normalization: a ij = exp (cid:18) h ( e ) j TW h ( d ) i (cid:19) P i 0 exp (cid:18) h ( e ) j TW h ( d ) i 0 (cid:19) Learning and Inference: We train the encoder decoder framework by maximizing data log-likelihood on a large training set with respect to all the model parameters .",
"Let { x ( i ) , y ( i ) } Ni =1 be the training set.",
"The log-likelihood can be written as: LQG = NX i =1 log P (cid:16) y ( i ) | x ( i ) ; (cid:17) = NX i =1 | y ( i ) | X j =1 log P (cid:16) y ( i ) j | x ( i ) , y ( i ) <j ; (cid:17) We use beam search for inference.",
"As in previous works, we introduce a < UNK > token to model rare words during decoding.",
"These < UNK > tokens are finally replaced by the token in the input sentence with the highest attention score.",
"In our self-training framework, we are given unlabeled text in addition to the labeled passages, question and answer pairs.",
"Self-training (Yarowsky, 1995; Riloff et al., 2003), also known as self-teaching, is one of the earliest techniques for using unlabeled data along with labeled data to improve learning.",
"During self-training, the learner keeps on labeling unlabeled examples and retraining itself on an enlarged labeled training set.",
"We extend self-training to jointly learn two models (namely, QA and QG) iteratively.",
"The QA and QG models are first trained on the labeled corpus.",
"Then, the QG model is used to create more questions from the unlabeled text corpus and the QA model is used to answer these newly created questions.",
"These new questions (carefully selected by an oracle details later) and the original labelled data is then used to (stochastically) update these two models.",
"This procedure can be repeated as long as both the two models continue to improve.",
"Algorithm 1 describes the procedure in detail.",
"In each succesive iteration, we allow the addition of more questions than that introduced in the previous iteration by a multiplicative factor.",
"This sheme adds fewer questions initially when the QA and QG models are weak and more questions thereafter when the two models have (hope-fully) improved.",
"We found that this scheme works better in practice than addiing a fixed number of questions in each iteration.",
"The two models are 631 updated on a subsample of the newly generated datapoints and original unlabelled data.",
"Self-training has been seldom used in NLP.",
"Most prominently, it has been used for WSD (Yarowsky, 1995), noun learning (Riloff et al., 2003) and AMR parsing and generation (Konstas et al., 2017).",
"However, it has not been explored in this way for QA and QG.",
"A key challenge in self-training is selecting which unlabeled data sample to label (iwhich generated questions to add to the training set).",
"The self-training process may erroneously generate some bad or incorrect questions which can sidetrack the learning process.",
"Thus, we implement a question selection oracle which determines which questions to add among the potentially very large set of questions generated by the QG model in each iteration.",
"Traditional wisdom in self-training (Yarowsky, 1995; Riloff et al., 2003) advises selecting a subset of questions on which the models have the highest confidence .",
"We experiment with this idea, proposing multiple self-training oracles which introduce questions in the order of how confident the QA and QG models are on the new potential question: QG: The QG oracle introduces the question in the order of how confident the QG model is on generating the question.",
"This is calculated by a number of heuristics (described later).",
"QA: The QA oracle introduces the question in the order of how confident the QA model is on answering the question.",
"This too is calculated by some heuristics (described later).",
"QA+QG: The QA+QG oracle introduces a question when both QA and QG models are confident about the question.",
"The oracle computes the minimum confidence of the QA and QG models for a question and introduces questions which have the the highest minimum confidence score.",
"Our question selection heurisitcs are based on the ideas of curriculum learning and diversity",
": 1. Curriculum learning (Bengio et al., 2009; Sachan and Xing, 2016a) requires ordering questions on the easiness scale, so that easy questions can be introduced to the learning algorithm first and harder questions can be introduced successively.",
"The main challenge in learning the curriculum is that it requires the identification of easy and hard questions.",
"In our setting, such a ranking of easy and hard questions is difficult to obtain.",
"A human judgement of easiness' of a question might not correlate with what is easy for our algorithms in its feature and hypothesis space.",
"We explore various heuristics that define a measure of easiness and learn the ordering by selecting questions using this measure.",
"2. A number of cognitive scientists (Cantor, 1946) argue that alongside curriculum learning, it is important to introduce diverse (even if sometimes hard) samples.",
"Inspired by this, we introduce a measure of diversity and show that we can achieve further improvements by coupling the curriculum learning heuristics with a measure for diversity.",
"Curriculum Learning: Studies in cognitive science (Skinner, 1958; Peterson, 2004; Krueger and Dayan, 2009) have shown that humans learn much better when the training examples are not randomly presented but organized in increasing order of difficulty.",
"In the machine learning community, this idea was introduced with the nomenclature of curriculum learning (Bengio et al., 2009), where a curriculum is designed by ranking samples based on manually curated difficulty measures.",
"A manifestation of this idea is self-paced learning (SPL) (Kumar et al., 2010; Jiang et al., 2014, 2015) which selects samples based on the local loss term of the sample.",
"We extend this idea and explore the following heuristics for our various oracles: 1) Greedy Optimal (GO): The simplest greedy heuristic is to pick a question q which has the minimum expected effect on the QA and QG models.",
"The expected effect on adding q can be written as: X a A p ( a = a ) E [ L QA/QG ] Here, L QA/QG is LQA , LQG or min ( LQA , LQG ) depending on which oracle we are using.",
"p ( a = a ) can be estimated by computing the scores of each of the answer candidates for q and normalizing them.",
"E [ L QA/QG ] can be estimated by retraining the model(s) after adding this question.",
"2) Change in Objective (CiO): Choose question q that causes the smallest increase in L QA/QG .",
"there are multiple questions with the smallest increase in objective, pick one of them randomly.",
"3) Mini-max (M 2 ): Choose question q that minimizes the expected risk when including the question with the answer candidate a that yields the maximum error.",
"4) Expected Change in Objective (ECiO): In this greedy heuristic, we pick a question q which has the minimum expected effect on the model.",
"The expected effect can be written as: X a p ( a = a ) E (cid:2) L QA/QG (cid:3) Here, p ( a = a ) can again be achieved by computing the scores of each of the answer candidates for q and normalizing them and E (cid:2) L QA/QG (cid:3) can be estimated by evaluating the model.",
"5) Change in Objective-Expected Change in Objective (CiO ECiO): We pick a question q which has the minimum value of the difference between the change in objective and the expected change in objective described above.",
"Intuitively, the difference represents how much the model is surprised to see this new question.",
"Time Complexity: GO and CiO require updating the model, M 2 and ECiO require performing inference on candiate questions, and CiO ECiO requires retraining as well as inference.",
"Thus, M 2 and ECiO are computationally most efficient.",
"Ensembling: We introduce an ensembling strategy that combines the heuristics into an ensemble.",
"We tried two ensembling strategies.",
"The first strategy computes the average score over all the heuristics for all potential (top-K in beam) questions and picks questions with the highest average.",
"The second strategy uses minimum instead of the average.",
"Minimum works better than average in practice and we use it in our experiments.",
"The use of minimum is inspired by agreement-based learning (Liang et al., 2008), a well-known extension of self-training which uses multiple views of the data (described using different feature sets or models) and adds new unlabeled samples to the training set when multiple models agree on the label.",
"Diversity: The strategy of introducing easy questions first and then gradually introducing harder questions is intuitive as it allows the learner to improve gradually.",
"Yet, it has one key deficiency.",
"With curriculum learning, by focusing on easy questions first, our learning algorithm is usually not exposed to a diverse set of questions.",
"This is particularly a problem for deep-learning approaches that learn representations during the process of learning.",
"Hence, when a harder question arrives, it can be difficult for the learner to adjust to the new question as the current representation may not be appropriate for the new level of question difficulty.",
"We tackle this by introducing an explore and exploit ( E&E ) strategy.",
"E&E ensures that while we still select easy questions first, we also want to make our selection as diverse as possible.",
"We define a measure for diversity as the angle between the question vectors: q i , q j = Cosine 1 (cid:16) | q i q j | || q i |||| q j || (cid:17) .",
"E&E picks the question which optimizes a convex combination (tuned on the dev set) of the curriculum learning objective and sum of angles between the candidate questions and the questions in the training set.",
"Implementation Details: We perform the same preprocessing on all the text.",
"We lower-case all the text.",
"We use NLTK for word tokenization.",
"For training our neural networks, we only keep the most frequent 50k words (including entity and placeholder markers), and map all other words to a special < UNK > token.",
"We choose word embedding size d = 100, and use the 100-dimensional pretrained GloVe word embeddings (Pennington et al., 2014) for initialization.",
"We set k , m , k 1 and k 2 (hyperparameters for self-training) by grid search on a held-out development set.",
"Datasets: We report our results on four datasets: SQUAD (Rajpurkar et al., 2016), MS MARCO (Nguyen et al., 2016), WikiQA (Yang et al., 2015) and TrecQA (Wang et al., 2007).",
"SQUAD is a cloze-style reading comprehension dataset with questions posed by crowd workers on a set of Wikipedia articles, where the answer to each question is a segment of text from the corresponding reading passage.",
"MS MARCO contains questions which are real anonymized queries issued through Bing or Cortana and the documents are related web pages which may or help answer the question.",
"WikiQA is also a datset of queries taken from Bing query logs.",
"Based on user clicks, each query is associated with a Wikipedia page.",
"The summary paragraph of the page is taken as candidate answer sentences, with labels on whether the sentence is a correct answer to the question provided by crowd 633 SQUAD MS MARCO WikiQA TrecQA Train Dev Test Train Dev Test Train Dev Test Train Dev Test #Questions 82,326 4,806 5,241 87,341 5,273 5,279 1,040 140 293 1,229 82 100 #Question-Answer Pairs 676,193 39,510 42,850 440,573 26,442 26,604 20,360 2,733 6,165 53,417 1,148 1,517 Table 1: Statistics of the four datasets used in evaluating our QA and QG models.",
"selection dataset from the TREC QA track.",
"While WikiQA and TrecQA are directly answer sentence selection tasks, the other two are not.",
"Hence, we treat the SQUAD and MS MARCO tasks as the answer sentence selection task assuming a one to one correspondence between answer sentences and annotated correct answer spans.",
"Note that only a very small proportion of answers ( < 0 . 2 % in training set) span two or more sentences.",
"Since SQUAD and MS MARCO have a hidden test set, we only use the training and development sets for our evaluation purposes and we further split the provided development set into a dev and test set.",
"This is also the data analysis setting used in previous works (Du et al., 2017; Tang et al., 2017).",
"In fact, we use the same setting as in Tang et al. (2017) for comparison.",
"The statistics of the four datasets and the respective train, dev and test splits are given in Table",
"1. For WikiQA and TrecQA datasets, we use the standard data splits.",
"We use a large randomly subsampled corpus of English Wikipedia and use the first paragraph of each document as unlabeled text for self-training.",
"Evaluation Metrics: Following Tang et al. (2017), we evaluate our QA system with three standard evaluation metrics: Mean Average Precision (MAP), Mean Reciprocal Rank (MRR) and Precision@1 (P@1).",
"For QG, we follow Du et al. (2017) and use automatic evaluation metrics from MT and summarization: BLEU-4 (Papineni et al., 2002), METEOR (Denkowski and Lavie, 2014) and Rouge L (Lin, 2004) to measure the overlap between generated and ground truth questions.",
"Baselines: For SQUAD and MS MARCO datasets, we use four QA baselines that have been used in previous works (Tang et al., 2017).",
"The first two baselines, WordCnt and NormWordCnt , have been taken from Yang et al. (2015) and Yin et al. (2015), and are based on simple word overlap which have been shown to be strong baselines.",
"These compute word co-occurrence between a question sentence and the candidate answer sentence.",
"While WordCnt uses unnormalized word co-occurrence, NormWordCnt uses normalized word co-occurrence.",
"The third and fourth 0.25 0.3 0.35 0.4 0.45 0.5 0 20 40 60 80 100 MAP Epochs Figure 1: MAP for our best self-trained QA model (with 10,000 Wikipedia paragraphs) without any curriculum learning (i.e. candidate questions are added randomly) vs epochs.",
"baselines are CDSSM (Shen et al., 2014) and ABCNN (Yin et al., 2015) which use a neural network approach to model semantic relatedness of sentence pairs.",
"For the WikiQA and TrecQA dataset, we report results of various existing state-of-the-art approaches on the two datasets 2 .",
"For QG, we compare our model against the following four baselines used in previous work (Du et al., 2017).",
"The first baseline is a simple IR baselines taken from Rush et al. (2015) which generates questions by memorizing them from the training set and uses edit distance (Levenshtein, 1966) to calculate distance between a question and the input sentence.",
"The second baseline is a MT system MOSES (Koehn et al., 2007) which models question generation as a translation task where raw sentences are treated as source texts and questions are treated as target texts.",
"The third baseline, DirectIn , uses the longest sub-sentence of the input sentence (using a set of simple sentence splitters) as the question.",
"The fourth baseline, H&S is a rule-based overgenerate-and-rank system proposed by Heilman and Smith (2010).",
"The Question Selection Oracle: The first question we wish to answer is: Is careful question selection even necessary ?",
"To answer this, we plot MAP scores for our best QA model (QA+QG, En-semble+E&E) when we do not have a curriculum learning based oracle (i.e. an oracle which picks questions to be added to the dataset randomly) in Figure 1 as a function of epochs.",
"We observe that 2 https://aclweb.org/aclwiki/Question_ Answering_(State_of_the_art) 634 0.45 0.47 0.49 0.51 0.53 0.55 1 10 100 1000 10000 MAPS c o r e Number of unlabeled documents No CL Oracle:QA Oracle:QG Oracle:QA+QG Figure 2: MAP for the best models for the three oracles: QA, QG and QA+QG.",
"the MAP score degrades instead of improving with time.",
"This supports our claim that we need to augment the training set by a more careful procedure.",
"We also plot MAP scores for our best QA model (Ensemble+E&E) when we use various question selection oracles as a function of the amount of unlabeled data in Figure",
"2. We can observe that when we do not have a curriculum learning based oracle, the MAP score degrades by having more and more unlabeled data.",
"We also observe that the QA+QG oracle performs better than QA and QG which confirms that the best oracle is one that selects questions in increasing degree of hardness in terms of both question answering and question generation.",
"This holds for all the experimental settings.",
"Thus we only show results for the QA+QG strategies in our future experiments.",
"Evaluating Question Answering: First, we evaluate our models on the question answering task.",
"Ensemble+E&E(K) is the variant where we perform self-training using K Wikipedia paragraphs.",
"Hence, Ensemble+E&E(0) is the variant of our MAP MRR CNN (Yang et al., 2015) 0.665 0.652 APCNN (Santos et al., 2016) 0.696 0.689 NASM (Miao et al., 2016) 0.707 0.689 ABCNN (Yin et al., 2015) 0.702 0.692 KVMN (Miller et al., 2016) 0.707 0.727 Wang et al. (2016b) 0.706 0.723 Wang et al. (2016a) 0.734 0.742 Wang and Jiang (2016) 0.743 0.755 Tang et al. (2017) 0.700 0.684 Ensemble+E&E(0) 0.691 0.675 Ensemble+E&E(100) 0.718 0.719 Ensemble+E&E(1000) 0.734 0.733 M 2 0.719 0.704 ECiO 0.721 0.708 GO 0.725 0.710 CiO 0.727 0.719 CiO-ECiO 0.734 0.724 Ensemble 0.743 0.743 Ensemble+E&E(10000) 0.754 0.753 Table 3: Performance of our models and the QA baselines on the WikiQA dataset.",
"model without any self-training.",
"We vary K to see the impact of the size of unlabeled Wikipedia paragraphs on the self-training model.",
"Table 2 shows the results of the QA evaluations on the SQUAD and MS MARCO datasets.",
"We can observe that our QA model has competetive or better performance over all the baselines on both datasets in terms of all the three evaluation metrics.",
"When we incorporate ensembling or diversity, we see a further improvement in the result.",
"Tables 3 and 4 show results of QA evaluations on the WikiQA and TrecQA datasets, respectively.",
"We can again observe that our QA model is competitive to all the baselines.",
"When we introduce ensembling and diversity while jointly learning the QA and QG models, we see incremental improvements.",
"In both these answer sentence selection tasks, our approach achieves new state-of-the-art.",
"Evaluating Question Generation: Table 5 shows the results for QG on the four datasets on each of the three evaluation metrics on all the four datasets.",
"We can observe that the QG model described in our paper performs much better than all the baselines.",
"We again observe that self-training while jointly training the QA and QG models leads to even better performance.",
"These results show that self-training and leveraging the relationship between QA and QG is very useful for boosting the performance of the QA and QG models, while additionally only using cheap unlabeled data.",
"Human Evaluations: We asked two people not involved with this research to evaluate 1000 (ran-domly selected) questions generated by our best QG model and our best performing baseline (Du et al., 2017) on SQUAD for fluency and correctness on a scale of 1 to 5.",
"The raters were also shown the passage sentence used to generate the question.",
"The raters were blind to which system produced which question.",
"The Pearson correlation between the raters' judgments was r = 0 .",
"89 for fluency and r = 0 .",
"78 for correctness.",
"In our analyses, we used the averages of the two raters' judgments.",
"The evaluation showed that our system generates questions that are more fluent and correct than those by the baseline.",
"The mean fluency rating of our best system was 4.15 compared to 3.35 for the baseline, a difference which is statistically significant (t-test, p < 0 . 001 ).",
"Evaluating the Question Selection Oracle: As discussed earlier, the choice of which subset of questions to add to our labeled dataset while self-training is important.",
"To evaluate the various heuristics proposed in our paper, we show the effect of the question selection oracle on the final QA and QG performance in Tables 2, 3, 4 and 5.",
"These comparisions are shown in the shaded grey portions of the tables for self-training with 10,000 Wikipedia paragraphs as unlabeled data.",
"We can observe that all the proposed heuristics (and ensembling and diversity strategies) lead to improvements in the final performance of both QA and QG.",
"The heuristics arranged in increasing order of performance are: M 2 , ECiO , GO , CiO and CiO-ECiO .",
"While the choice of which heuristic to pick seems to make a lesser impact on the final performance, we do see a much more significant performance gain by ensembling to combine the various heuristics and using E&E to incorporate diversity.",
"The incorporatation of diversity is important because the neural network models which learnt latent representions of data usually find it hard to adjust to new level of difficulty of questions as the current representation may not be appropriate for the new level of difficulty.",
"Low data scenarios: A key advantage of our self-training approach is that it can leverage unlabeled text, and thus requires less labeled data.",
"To test this, we plot MAP for our best self-training model and various QA baselines as we vary the proportion of labeled training set in Figure 3.",
"However, we keep the unlabeled text fixed (10K Wikipedia paragraphs).",
"We observe that all the baselines sig-nificantly drop in performance as we reduce the proportion of labeled training set.",
"However, the drop happens at a much slower rate for our self-trained model.",
"Thus, we can conclude that our approach requires less labeled data as compared to the baselines.",
"text always improve our models?",
"Will the performance improve if we add more and more unsupervised data during self-training.",
"According to our results in Tables 2, 3, 4 and 5, the answer is prob-ably yes.",
"As we can observe from these tables, the performance of the QA and QG models improves as we increase K , the size of the unsupervised data during training of the various Ensem-ble+E&E(K) models.",
"Having said that, we do see a tapering effect on the performance results, so it is clear that the performance will be capped by some upper-bound and we will need better ways of modeling language and meaning to make progress.",
"Our work proposes an approach for joint modeling QA and QG.",
"While QA has recieved a lot of attention from the research community with large scale community evaluations such as NTCIR , TREC , CLEF spurring progress, the focus on QG is much more recent.",
"Recently, there has been a renewed interest in reading comprehensions (also known as machine comprehension a nomenclature popularized by Richardson et al. (2013)).",
"Various approaches (Sachan et al., 2015; Wang et al., 2015; Sachan and Xing, 2016b; Sachan et al., 2016; Narasimhan and Barzilay, 2015) have been proposed for solving this task.",
"After the release of large benchmarks such as SQUAD , MS MARCO and WikiQA , there has been a surge in interest on using neural network or deep-learning models for QA (Yin et al., 2015; Seo et al., 2016; Shen et al., 2016; Chen et al., 2017; Liu et al., 2017; Hu et al., 2017).",
"In our work, we deal with the answer sentence selection task and adapt the Attentive Reader framework proposed in Hermann et al. (2015); Chen et al. (2016) as our base model.",
"While, all these models were trained on question answer pairs, we propose a self-training solution to additionally leverage unsupervised text.",
"Similarly, there have been works on QG.",
"Traditionally, rule based approaches with postprocessing (Woo et al., 2016; Heilman and Smith, 2009, 2010) were the norm in QG.",
"However, recent papers build on neural network approaches such as seq2seq (Du et al., 2017; Tang et al., 2017; Zhou et al., 2017), CNNs and RNNs (Duan et al., 2017) for QG.",
"We also choose the seq2seq paradigm in our work.",
"However, we leverage unsupervised text in contrast to these models.",
"Finally, some very recent works have concurrently recognized the relationship between QA and QG and have proposed joint training (Tang et al., 2017; Wang et al., 2017) for the two.",
"Our work differs from these as we additionally propose self-training to leverage unlabeled data to improve the two models.",
"Self-training has seldom been used in NLP.",
"Most prominently, they have been used for word sense disambiguation (Yarowsky, 1995), noun learning (Riloff et al., 2003) and recently, AMR parsing and generation (Konstas et al., 2017).",
"However, it has not been explored in this way for QA and QG.",
"An important decision in the workings of our self-training algorithm was the question selection using curriclum learning.",
"While curriculum learning has seldom been used in NLP, we draw some ideas for curriculum learning from Sachan and Xing (2016a) who conduct a case study of curriculum learning for question answering.",
"However, their work focuses only on QA and not QG.",
"We described self-training algorithms for jointly learning to answer and ask questions while leveraging unlabeled data.",
"We experimented with neural models for question answering and question generation and various careful strategies for question filtering based on curriculum learning and diversity promotion.",
"This led to improved performance for both question answering and question generation on multiple datasets and new state-of-the-art results on WikiQA and TrecQA datasets.",
"We acknowledge the CMLH fellowship to MS and ONR grant N000141712463 for funding support."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"result",
"method",
"abstain",
"method",
"method",
"result",
"result",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"abstain",
"abstain",
"other",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"result",
"result",
"result",
"result",
"objective",
"other",
"other",
"other",
"other",
"abstain",
"objective",
"other",
"other",
"other",
"method",
"objective",
"other",
"objective",
"other",
"other",
"other",
"method",
"method",
"other",
"abstain",
"method",
"abstain",
"other"
] |
[
"In this paper, we propose a novel bipartite flat-graph network (BiFlaG) for nested named entity recognition (NER), which contains two subgraph modules: a flat NER module for outermost entities and a graph module for all the entities located in inner layers.",
"Bidirectional LSTM (BiLSTM) and graph convolutional network (GCN) are adopted to jointly learn flat entities and their inner dependencies.",
"Different from previous models, which only consider the unidirectional delivery of information from innermost layers to outer ones (or outside-to-inside), our model effectively captures the bidirectional interaction between them.",
"We first use the entities recognized by the flat NER module to construct an entity graph, which is fed to the next graph module.",
"The richer representation learned from graph module carries the dependencies of inner entities and can be exploited to improve outermost entity predictions.",
"Experimental results on three standard nested NER datasets demonstrate that our BiFlaG outperforms previous state-of-the-art models.",
"Named entity recognition (NER) aims to identify words or phrases that contain the names of pre-defined categories like location, organization or medical codes.",
"Nested NER further deals with entities that can be nested with each other, such as the United States and third president of the United States shown in Figure 1, such phenomenon is quite common in natural language processing (NLP).",
"NER is commonly regarded as a sequence labeling task (Lample et al., 2016; Ma and Hovy, 2016; Corresponding author.",
"This paper was partially supported by National Key Research and Development Program of China (No. 2017YFB0304100), Key Projects of National Natural Science Foundation of China (U1836222 and 61733011), Huawei-SJTU long term AI project, Cutting-edge Machine reading comprehension and language model.",
"Peters et al., 2017).",
"These approaches only work for non-nested entities (or flat entities), but neglect nested entities.",
"There have been efforts to deal with the nested structure.",
"Ju et al. 2018 introduced a layered sequence labeling model to first recognize innermost entities, and then feed them into the next layer to extract outer entities.",
"However, this model suffers from obvious error propagation.",
"The wrong entities extracted by the previous layer will affect the performance of the next layer.",
"Also, such layered model suffers from the sparsity of entities at high levels.",
"For instance, in the well-known ACE2005 training dataset, there are only two entities in the sixth level.",
"Sohrab and Miwa 2018 proposed a region-based method that enumerates all possible regions and classifies their entity types.",
"However, this model may ignore explicit boundary information.",
"Zheng et al. 2019 combined the layered sequence labeling model and region-based method to locate the entity boundary first, and then utilized the region classification model to predict entities.",
"This model, however, cares less interaction among entities located in outer and inner layers.",
"In this paper, we propose a bipartite flat-graph network (BiFlaG) for nested NER, which models a nested structure containing arbitrary many layers into two parts: outermost entities and inner entities in all remaining layers.",
"For example, as shown in Figure 1, the outermost entity Thomas Jefferson, third president of the United States is considered as a flat (non-nested) entity, while third president of the United States (in the second layer) and the United States (in the third layer) are taken as inner entities.",
"The outermost entities with the maximum coverage are usually identified in the flat NER module, which commonly adopts a sequence labeling model.",
"All the inner entities are extracted through the graph module, which iteratively propagates information between the start and end nodes of a span using graph convolutional network (GCN) (Kipf and Welling, 2017).",
"The benefits of our model are twofold: (1) Different from layered models such as (Ju et al., 2018), which suffers from the constraints of one-way propagation of information from lower to higher layers, our model fully captures the interaction between outermost and inner layers in a bidirectional way.",
"Entities extracted from the flat module are used to construct entity graph for the graph module.",
"Then, new representations learned from graph module are fed back to the flat module to improve outermost entity predictions.",
"Also, merging all the entities located in inner layers into a graph module can effectively alleviate the sparsity of entities in high levels.",
"(2) Compared with region-based models (Sohrab and Miwa, 2018; Zheng et al., 2019), our model makes full use of the sequence information of outermost entities, which take a large proportion in the corpus.",
"We introduce a novel bipartite flat-graph network named BiFlaG for nested NER, which incorporates a flat module for outermost entities and a graph module for inner entities.",
"Our BiFlaG fully utilizes the sequence information of outermost entities and meanwhile bidirectionally considers the interaction between outermost and inner layers, other than unidirectional delivery of information.",
"With extensive experiments on three benchmark datasets (ACE2005, GENIA, and KBP2017), our model outperforms previous state-of-the-art models under the same settings.",
"Our BiFlaG includes two subgraph modules, a flat NER module and a graph module to learn outermost and inner entities, respectively.",
"Figure 2 illustrates the overview of our model.",
"For the flat module, we adopt BiLSTM-CRF to extract flat (out-ermost) entities, and use them to construct the entity graph G 1 as in Figure 2.",
"For the graph module, we use GCN which iteratively propagates information between the start and end nodes of potential entities to learn inner entities.",
"Finally, the learned representation from the graph module is further fed back to the flat module for better outermost predictions.",
"Given a sequence consisting of N tokens { t 1 , t 2 , ..., t N } , for each token t i , we first concatenate the word-level and character-level embedding t i = [ w i ; c i ], w i is the pre-trained word embedding, character embedding c i is learned following the work of (Xin et al., 2018).",
"Then we use a BiLSTM to capture sequential information for each token x i = BILSTM ( t i ) .",
"We take x i as the word representation and feed it to subsequent modules.",
"We adopt BiLSTM-CRF architecture (Lample et al., 2016; Ma and Hovy, 2016; Yang and Zhang, 2018; Luo et al., 2020) in our flat module to recognize flat entities, which consists of a bidirectional LSTM (BiLSTM) encoder and a conditional random field (CRF) decoder.",
"BiLSTM captures bidirectional contextual information of sequences and can effectively represent the hidden states of words in context.",
"BiLSTM represents the sequential information at each step, the hidden state h of BiLSTM can be expressed as follows.",
"h i = LST M ( x i , h i 1 ; ) h i = LST M ( x i , h i 1 ; ) h i = [ h i ; h i ] (1) where and are trainable parameters.",
"h i and h i respectively denote the forward and backward context representations of token t i .",
"The output of BiLSTM H = { h 1 , h 2 , ..., h N } is further fed into the CRF layer.",
"CRF (Lafferty et al., 2001) has been widely used in state-of-the-art NER models (Lample et al., V 0 V 2 V 3 V 5 V 4 V 0 V 1 V 2 V 3 V 4 V 5 V 6 Thomas Jefferson third ... United States ... V 0 V 1 V 6 V 5 ... + G 1 Token Rep . Flat Graph G 2 V 0 V 1 V 2 V 3 V 5 V 4 BiLSTM BiLSTM BiLSTM BiLSTM BiLSTM BiLSTM BiLSTM BiLSTM BiLSTM BiLSTM BiLSTM BiLSTM BiLSTM BiLSTM x h V 1 V 0 V 1 V 2 V 3 V 4 V 5 V 6 Figure 2: The framework of our BiFlaG model. G 1 and G 2 are entity graph and adjacent graph created for GCN, each dashed line connects the start and end nodes for a potential entity. Solid red lines indicate inner entities recognized by the graph module. 2016; Ma and Hovy, 2016; Yang and Zhang, 2018) to help make better decisions, which considers strong label dependencies by adding transition scores between neighboring labels.",
"Viterbi algorithm is applied to search for the label sequence with highest probability during the decoding process.",
"For y = { y 1 , ..., y N } being a sequence of predictions with length N .",
"Its score is defined as follows.",
"where T y i ,y i +1 represents the transmission score from y i to y i +1 , P i,y i is the score of the j th tag of the i th word from BiLSTM encoder.",
"CRF model defines a family of conditional probability p ( y | x ) over all possible tag sequences y : p ( y | x ) = exp s ( x,y ) (cid:80) y y exp s ( x, y ) (3) during training phase, we consider the maximum log probability of the correct predictions.",
"While decoding, we search the tag sequences with maximum score: y = arg max y y score ( x, y ) (4) 2.3 Graph Module Since the original input sentences are plain texts without inherent graphical structure, we first construct graphs based on the sequential information of texts and the entity information from the flat module.",
"Then, we apply GCN (Kipf and Welling, 2017; Qian et al., 2019) which propagates information between neighboring nodes in the graphs, to extract the inner entities.",
"Graph Construction.",
"We create two types of graphs for each sentence as in Figure 2.",
"Each graph is defined as G = ( V, E ) , where V is the set of nodes (words), E is the set of edges.",
"Entity graph G 1 : for all the nodes in an extracted entity extracted from the flat module, edges are added between any two nodes e ij = ( v i , v j ) , where start i < j end , as shown in Figure 2, allowing the outermost entity information to be utilized.",
"Adjacent graph G 2 : for each pair of adjacent words in the sentence, we add one directed edge from the left word to the right one, allowing local contextual information to be utilized.",
"Bi-GCN.",
"In order to consider both incoming and outgoing features for each node, we follow the work of (Marcheggiani and Titov, 2017; Fu et al., 2019), which uses Bi-GCN to extract graph features.",
"Given a graph G = ( V, E ) , and the word representation X = { x 1 , x 2 , ..., x N } , the graph feature f RN d f learned from Bi-GCN is expressed as follows.",
"f i = ReLU ( (cid:88) e ij E ( W f x j + b f )) f i = ReLU ( (cid:88) e ji E ( W f x j + b f )) f i = [ f i ; f i ] (5) where W f R d x d f and b f R d f are trainable parameters, d x represents the dimension of word representation, d f is the hidden size of GCN, ReLU is the non-linear activation function.",
"e ij represents the edge outgoing from token t i , and e ji represents the edge incoming to token t i .",
"b c R d f is a bias parameter.",
"f 1 and f 2 are graph features of G 1 and G 2 , respectively.",
"After getting the graph representation F = { f 1 , f 2 , ..., f N } from Bi-GCN, we learn the entity score M RN N L for inner layers as M ij = softmax ( W 3 ReLU ( W 1 f i W 2 f j )) (7) where W 1 , W 2 R d f d f / 2 , W 3 R d f L , L is the number of entity types.",
"M ij RL represents the type probability for a span starts from token t i and ends at token t j .",
"For inner entities, we define the ground truth entity of word pair ( t i , t j ) as M ij , where t i and t j are start and end nodes of a span.",
"Cross Entropy (CE) is used to calculate the loss L inner = ( (cid:88) ( M ij log ( M ij )) I (O)+ 1 (cid:88) ( M ij log ( M ij )) (1 I (O))) (8) Algorithm 1 Bipartite Flat-Graph Algorithm Input: word representations X = { x 1 ,",
"where M ij RL denotes the entity score in the graph module.",
"I (O) is a switching function to distinguish the loss of non-entity 'O' and other entity types.",
"It is defined as follows.",
"The entity score M in",
"Eq.(7) carries the type probability of each word pair in the sentence.",
"To further consider the information propagation from inner entities to outer ones, we use Bi-GCN to generate new representations from entity score M for the flat module.",
"The largest type score r ij of the word pair ( t i , t j ) indicates whether this span is an entity or non-entity and the confidence score of being such type, which is obtained by a max-pooling operation: r ij = (cid:40) max ( m ij ) , if type (cid:54) = 'O' 0 , if type = 'O' (10) where type represents the entity type or non-entity 'O' corresponding to the maximum type score.",
"When the corresponding type is O , there exits no dependencies between t i and t j , thus we set r ij to",
"0. A new graph that carries the boundary information ACE2005 GENIA Train (%) Dev (%) Test (%) Train (%) Dev (%) Test (%) # sentences 7,285 968 1,058 15,022 1,669 1,854 with o.l. 2,820 (39) 356 (37) 344 (33) 3,432 (23) 384 (23) 467 (25) # mentions 24,827 3,234 3,028 47,027 4,469 5,596 outermost entity 18,656 (75) 2,501 (77) 2,313 (76) 42,558 (90) 4,030 (90) 4,958 (89) inner entity 6,171 (25) 733 (23) 715 (24) 4,469 (10) 439 (10) 642 (11) Table 1: Statistics of the datasets used in our experiments: ACE2005 and KBP2017.",
"of inner entities is defined as G 3 = ( V, E ) , where r ij E .",
"The new representation used to update flat module consists of two parts.",
"The first part carries the previous representation of each token 1 i = W r x i + b r (11) where W r R d x d f , b r R d f .",
"The second part aggregates inner entity dependencies of the new graph G 3 2 i = BI-GCN ( x i , G 3 ) (12) Finally, 1 i and 2 i are added to obtain the new representation x newi = 1 i + 2 i (13) x newi is fed into the flat module to update the parameters and extract better outermost entities.",
"For outermost entities, we use the BIOES sequence labeling scheme and adopt CRF to calculate the loss.",
"The losses corresponding to the two representations ( X and X new ) are added together as the outermost loss L outer = CRFX + CRFX new (14) Entities in the sequence are divided into two disjoint sets of outermost and inner entities, which are modeled by flat module and graph module, respectively.",
"Entities in each module share the same neural network structure.",
"Between two modules, each entity in the flat module is either an independent node, or interacting with one or more entities in the graph module.",
"Therefore, Our BiFlaG is indeed a bipartite graph.",
"Our complete training procedure for BiFlaG is shown in Algorithm",
"1. 2.5 Loss Function Our BiFlaG model predicts both outermost and inner entities.",
"where 2 is a weight between loss of flat module and graph module.",
"We minimize this total loss during training phase.",
"We evaluate our BiFlaG on three standard nested NER datasets: GENIA, ACE2005, and TACKBP2017 (KBP2017) datasets, which contain 22%, 10% and 19% nested mentions, respectively.",
"Table 1 lists the concerned data statistics.",
"GENIA dataset (Kim et al., 2003) is based on the GENIAcorpus3.02p 1 .",
"We use the same setup as previous works (Finkel and Manning, 2009; Lu and Roth, 2015; Lin et al., 2019a).",
"This dataset contains 5 entity categories and is split into 8.1:0.9:1 for training, development and test.",
"ACE2005 2 (Walker et al., 2006) contains 7 fine-grained entity categories.",
"We preprocess the dataset following the same settings of (Lu and Roth, 2015; Wang and Lu, 2018; Katiyar and Cardie, 2018; Lin et al., 2019a) by keeping files from bn, nw and wl, and splitting these files into training, development and test sets by 8:1:1, respectively.",
"KBP2017 Following (Lin et al., 2019a), we evaluate our model on the 2017 English evaluation dataset (LDC2017E55).",
"The training and development sets contain previous RichERE annotated datasets (LDC2015E29, LDC2015E68, LDC2016E31 and LDC2017E02).",
"The datasets are split into 866/20/167 documents for training, development and test, respectively.",
"Metric Precision ( P ), recall ( R ) and F-score ( F 1 ) are used to evaluate the predicted entities.",
"An entity is confirmed correct if it exists in the target labels, regardless of the layer at which the model makes this prediction.",
"Our model 3 is based on the framework of (Yang and Zhang, 2018).",
"We conduct optimization with the stochastic gradient descent (SGD) and Adam for flat and GCN modules, respectively.",
"For GENIA dataset, we use the same 200-dimension pre-trained word embedding as (Ju et al., 2018; Sohrab and Miwa, 2018; Zheng et al., 2019).",
"For ACE2005 and KBP2017 datasets, we use the publicly available pre-trained 100-dimension GloVe (Pennington et al., 2014) embedding.",
"We train the character embedding as in (Xin et al., 2018).",
"The learning rate is set to 0.015 and 0.001 for flat and GCN modules, respectively.",
"We apply dropout to embeddings and the hidden states with a rate of 0.5.",
"The hidden sizes of BiLSTM and GCN are both set to 256.",
"The bias weights 1 and 2 are both set to 1.5.",
"Table 2 compares our model to some existing state-of-the-art approaches on the three benchmark datasets.",
"Given only standard training data and publicly available word embeddings, the results in Table 2 show that our model outperforms all these models.",
"Current state-of-the-art results on these datasets are tagged with in Table 2, we make improvements of 0.5/1.3/2.8 F 1 on ACE2005, GENIA, and KBP2017 respectively.",
"KBP2017 contains much more entities than ACE2005 and GE-3 Code is available at: https://github.com/cslydia/BiFlaG.",
"4 This result is reported by (Zheng et al., 2019), consistent with our own re-implemented results.",
"NIA.",
"The number of entities on test set is four times that of ACE2005.",
"Our model has the most significant improvement on such dataset, proving the effectiveness of our BiFlaG model.",
"More notably, our model without POS tags surpasses the previous models (Wang and Lu, 2018; Lin et al., 2019a), which use POS tags as additional representations on all three datasets.",
"Besides, (Lin et al., 2019b) incorporate gazetteer information on ACE2005 dataset, our model also makes comparable results with theirs.",
"Other works like (Strakova et al., 2019) 4 , which train their model on both training and development sets, are thus not comparable to our model directly.",
"Table 3 makes a detailed comparison on the five categories of GENIA test dataset with a layered model (Ju et al., 2018) and a region-based model (Zheng et al., 2019).",
"Compared with region-based model, layered model seems to have higher precision and lower recall, for they are subject to error propagate, the outer entities will not be identified if the inner ones are missed.",
"Meanwhile, region-based model suffers from low precision, as they may generate a lot of candidate spans.",
"By contrast, our BiFlaG model well coordinates precision and recall.",
"The entity types Protein and DNA have the most nested entities on GENIA dataset, the improvement of our BiFlaG on these two entity types is remarkable, which can be attributed to the in-4 Their reported results are 75.36 and 76.44 trained on concatenated train+dev sets on ACE2005 and GENIA, respectively.",
"They also use lemmas and POS tags as additional features.",
"Table 4 evaluates the performance of each module on ACE2005 and GENIA datasets.",
"Our flat module performs well on both datasets for outermost entity recognition.",
"However, the recall of the inner entities is low on GENIA dataset.",
"According to the statistics in Table 1, only 11% of the entities on GENIA are located in inner layers, while on ACE2005 dataset, the proportion is 24%.",
"It can be inferred that the sparsity of the entity distribution in inner layers has a great impact on the results.",
"If these inner entities are identified at each layer, the sparsity may be even worse.",
"We can enhance the impact of sparse entities by increasing the weight 1 in",
"Eq.(14), but this may hurt precision, we set 1 = 1 .",
"5 to have a better tradeoff between precision and recall.",
"We conduct additional experiments on ACE2005 dataset to detect the effect of the lengths of the outermost entities on the extraction of their inner entities as shown in Table 6.",
"Our flat module can well predict outermost entities which account for a large proportion among all types of entities.",
"In general, the performance of inner entities is affected by the extracting performance and length of their outermost entities.",
"A shorter outermost entity is more likely to have its inner entities shared either the ACE2005 GENIA KBP2017 Flat Grpah no graph 73.4 74.4 74.0 adjacent graph 73.8 74.9 74.7 entity graph 74.8 75.5 75.2 both graphs 75.1 76.0 75.6 Graph Flat without 74.3 74.5 75.1 with 75.1 76.0 75.6 Table 5: Ablation study on the three benchmark datasets.",
"first token or the last token, making the constructed graph more instructive, thus its inner entities are easier to extract.",
"In this paper, we use the interactions of flat module and graph module to respectively help better predict outermost and inner entities.",
"We conduct ablation study to verify the effectiveness of the interactions.",
"The first part is the information delivery from the flat module to the graph module.",
"We conduct four experiments: (1) no graph: we skip Eq.",
"(5)-(6) and let graph feature f = LINEAR ( x ) .",
"In this case, inner entities are independent of the outermost entities and only rely on the word representation (section 2.1) which carries contextualized information.",
"(2) adjacent graph: we further utilize the sequential information of the text to help inner entity prediction.",
"(3) entity graph: the boundary information of outer entities can be indicative for inner entities, we construct an entity graph based on the entities extracted by the flat module.",
"(4) both graphs: when outer entities are not recognized by the flat module, their inner entities will fail to receive the boundary information, we use the sequential information of the text to make up for the deficiency of using only entity graph.",
"Experimental length outermost entities inner entities P R F Num.",
"results show that entity graph carries more useful information than adjacent graph, which enhances the baseline by 1.4/1.1/1.2 F 1 score, respectively.",
"By combing these two graphs together, we get a larger gain of 1.7/1.6/1.6 F 1 score.",
"The second part is the information delivery from the graph module to the flat module, the new representation X new learned from graph module is propagated back to the flat module.",
"X new is equipped with the dependencies of inner entities and shows useful, yielding an improvement of 0.8/1.5/0.5 F 1 for the three benchmarks, respectively.",
"We examine the inference speed of our BiFlaG with (Zheng et al., 2019), (Sohrab and Miwa, 2018) and (Ju et al., 2018) in terms of the number of words decoded per second.",
"For all the compared models, we use the re-implemented code released by (Zheng et al., 2019) and set the same batch size 10.",
"Compared with (Zheng et al., 2019) and (Sohrab and Miwa, 2018), our BiFlaG does not need to compute region representation for each potential entity, thus we can take full advantage of GPU parallelism.",
"Compared with (Ju et al., 2018), which requires CRF decoding for each layer, our model only needs to calculate two modules, by contrast, the cascaded CRF layers limit their inference speed.",
"Table 7 shows a case study of each module in our model.",
"In this example, entities my , my town , that and Krispy Kreme are nested in the entity the location in my town that was recently abandoned by Krispy Kreme .",
"Our BiFlaG model successfully Inference Speed (t/s) 0 1000 2000 3000 4000 5000 6000 7000 6708 4751 2851 3563 BiFlaG (Ours) Zheng et al., 2019 Sohrab and Miwa, 2018 Ju et al., 2018 Figure 3: The inference speed of our BiFlaG and compared models on GENIA test set.",
"extracts all these entities with exact boundaries and entity categorical labels.",
"Without graph construction, nested entities my town , that and Krispy Kreme are not identified.",
"Without interaction between the two modules, the outermost entity the location in my town that was recently abandoned by Krispy Kreme is mislabeled as LOC (location), which is actually a FAC (Facility) type, inner nested entities my , my town and Krispy Kreme are not propagated back to the flat module, which maybe helpful to correct the extracting of the outermost entity.",
"Recently, with the development of deep neural network in a wide range of NLP tasks (Bai and Zhao, 2018; Huang et al., 2018; Huang and Zhao, 2018; He et al., 2018, 2019; Li et al., 2018a,b, 2019; Zhou and Zhao, 2019; Xiao et al., 2019; Zhang and Zhao,",
"2018; Zhang et al., 2019, 2020a,b,c), it is possible to build reliable NER systems without hand-crafted features.",
"Nested named entity recognition requires to identity all the entities in texts that may be nested with each other.",
"Though NER is a traditional NLP task, it is not until the very recent years that researches have been paid to this nested structure for named entities.",
"(Lu and Roth, 2015) introduce a novel hypergraph representation to handle overlapping mentions.",
"(Muis and Lu, 2017) further develop a gap-based tagging schema that assigns tags to gaps between words to address the spurious structures issue, which can be modeled using conventional linear-chain CRFs.",
"However, it suffers from the structural ambiguity issue during inference.",
"(Wang and Lu, 2018) propose a novel segmental hypergraph representation to eliminate structural ambiguity.",
"(Katiyar and Cardie, 2018) also propose a hypergraph-based approach based on the BILOU tag scheme that utilizes an LSTM network to learn the hypergraph representation in a greedy manner.",
"Stacking sequence labeling models to extract entities from inner to outer (or outside-to-inside) can also handle such nested structures.",
"(Alex et al., 2007) propose several different modeling techniques (layering and cascading) to combine multiple CRFs for nested NER.",
"However, their approach cannot handle nested entities of the same entity type.",
"(Ju et al., 2018) dynamically stack flat NER layers, and recognize entities from innermost layer to outer ones.",
"Their approach can deal with nested entities of the same type, but suffers from error propagation among layers.",
"Region-based approaches are also commonly used for nested NER by extracting the subsequences in sentences and classifying their types.",
"(Sohrab and Miwa, 2018) introduce a neural exhaustive model that considers all possible spans and classify their types.",
"This work is further improved by (Zheng et al., 2019), which first apply a single-layer sequence labeling model to identify the boundaries of potential entities using context information, and then classify these boundary-aware regions into their entity type or non-entity.",
"(Lin et al., 2019a) propose a sequence-to-nuggets approach named as Anchor-Region Networks (ARNs) to detect nested entity mentions.",
"They first use an anchor detector to detect the anchor words of entity mentions and then apply a region recognizer to identity the mention boundaries centering at each anchor word.",
"(Fisher and Vlachos, 2019) decompose nested NER into two stages.",
"Tokens are merged into entities through real-valued decisions, and then the entity embeddings are used to label the entities identified.",
"This paper proposes a new bipartite flat-graph (Bi-FlaG) model for nested NER which consists of two interacting subgraph modules.",
"Applying the divide-and-conquer policy, the flat module is in charge of outermost entities, while the graph module focuses on inner entities.",
"Our BiFlaG model also facilitates a full bidirectional interaction between the two modules, which let the nested NE structures jointly learned at most degree.",
"As a general model, our BiFlaG model can also handle non-nested structures by simply removing the graph module.",
"In terms of the same strict setting, empirical results show that our model generally outperforms previous state-of-the-art models."
] | [
"objective",
"abstain",
"method",
"objective",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"method",
"result",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"other",
"result",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"objective",
"abstain",
"abstain",
"method",
"result"
] |
[
"Despite their impressive performance in NLP, self-attention networks were recently proved to be limited for processing formal languages with hierarchical structure, such as Dyck k , the language consisting of well-nested parentheses of k types.",
"This suggested that natural language can be approximated well with models that are too weak for formal languages, or that the role of hierarchy and recursion in natural language might be limited.",
"We qualify this implication by proving that self-attention networks can process Dyck k,D , the subset of Dyck k with depth bounded by D , which arguably better captures the bounded hierarchical structure of natural language.",
"Specifically, we construct a hard-attention network with D + 1 layers and O (log k ) memory size (per token per layer) that recognizes Dyck k,D , and a soft-attention network with two layers and O (log k ) memory size that generates Dyck k,D .",
"Experiments show that self-attention networks trained on Dyck k,D generalize to longer inputs with near-perfect accuracy, and also verify the theoretical memory advantage of self-attention networks over recurrent networks.",
"1 1 Introduction Transformers (Vaswani et al., 2017) are now the undisputed champions across several benchmark leaderboards in NLP.",
"The major innovation of this architecture, self-attention , processes input tokens in a distributed way, enabling efficient parallel computation as well as long-range dependency modelling.",
"The empirical success of self-attention in NLP has led to a growing interest in studying its properties, with an eye towards a better understanding of the nature and characteristics of natural language (Tran et al., 2018; Papadimitriou and Juraf-sky, 2020).",
"In particular, it was recently shown that self-attention networks cannot process various kinds of formal languages (Hahn, 2020; Bhattamishra et al., 2020a), among which particularly notable is Dyck k , the language of well-balanced brackets of k types.",
"By the Chomsky-Schtzenberger Theorem (Chom-sky and Schtzenberger, 1959), any context-free language can be obtained from a Dyck k language through intersections with regular languages and homomorphisms.",
"In other words, this simple language contains the essence of all context-free languages, i.e. hierarchical structure, center embedding, and recursion features which have been long claimed to be at the foundation of human language syntax (Chomsky, 1956).",
"Consider for example the long-range and nested dependencies in English subject-verb agreement: ( Laws ( the lawmaker ) [ writes ] [ and revises ]) [ pass ] . . . .",
"The sentence structure is captured by Dyck 2 string (()[][])[].",
"Given the state-of-the-art performance of Transformers in parsing natural language (Zhang et al., 2020; He and Choi, 2019), the Dyck k blind spot seems very suggestive.",
"If the world's best NLP models cannot deal with this simple language generated by a grammar with k + 2 rules and recognized by a single-state pushdown automaton does this not mean that the role of hierarchy and recursion in natural language must be limited?",
"This question has of course, been extensively debated by linguists on the basis of both theoretical and psycholinguistic evidence (Hauser et al., 2002; Frank et al., 2012; Nelson et al., 2017; Brennan and Hale, 2019; Frank and Christiansen, 2018).",
"So, what can self-attention networks tell us about natural language and recursion?",
"Here we provide a new twist to this question by considering Dyck k,D , the subset of Dyck k with nesting depth at most D , and show that Transformers can process Input ( [ ] { [ ] ( ) } ) Layer 1 ( [ ] { [ ] ( ) } ) Layer 2 ( [ ] { [ ] ( ) } ) Layer 3 ( [ ] { [ ] ( ) } ) 3,3 Input [ ] [ ( ( [ ] ) ) Layer 1 1 0 1 2 3 4 3 2 1 Layer 2 1 0 1 2 3 4 3 2 1 2,4",
"it.",
"Dyck k,D models bounded (or finite) recursion, thus captures the hierarchical structure of human language much more realistically.",
"For example, center-embedding depth of natural language sentences is known to rarely exceed three (Karlsson, 2007; Jin et al., 2018), and while pragmatics, discourse, and narrative can result in deeper recursion in language (Levinson, 2014), there is arguably a relatively small limit to the depth as well.",
"In particular, we prove that self-attention networks can both recognize and generate Dyck k,D , with two conceptually simple yet different constructions (Figure 1).",
"The first network requires D + 1 layers and a memory size of O (log k ) (per layer per token) to recognize Dyck k,D , using a distributed mechanism of parenthesis matching.",
"The second network has two layers and memory size O (log k ) .",
"It works by attending to all previous tokens to count the depth for each token in the first layer, and then uses this depth information to attend to the most recent unclosed open bracket in the second layer.",
"Our constructions help reconcile the result in Hahn (2020) with the success of Transformers in handling natural languages.",
"Our proof requires certain assumptions about the positional encodings, an issue that is often considered in empirical papers (Ke et al., 2021; Shaw et al., 2018; Wang et al., 2020; Shiv and Quirk, 2019) but not in the more theoretical literature.",
"First, positional encodings must have log n bits when the input length is n , as otherwise different positions would share the same representation.",
"More importantly, positional encodings should support easy position comparisons, since token order is vital in formal language processing.",
"Our experiments show that two standard practices, namely learnable or fixed sine/cosine positional encodings, cannot generalize well on Dyck k,D beyond the training input lengths.",
"In contrast, using a single fixed scalar monotonic positional encoding such as pos / n achieves near-perfect accuracy even on inputs significantly longer than the training ones.",
"Our findings provide a novel perspective on the function of positional encodings, and implies that different applications of self-attention networks (in this case, natural vs. formal language) may require different model choices.",
"Our theoretical results also bring about interesting comparisons to recurrent networks (e.g. RNNs, LSTMs) in terms of the resource need to process hierarchical structure.",
"While recurrent networks with finite precision need at least ( D log k ) memory to process Dyck k,D (Hewitt et al., 2020), our second construction requires only O (log k ) memory but a O (log n ) precision.",
"In experiments where precision is not an issue for practical input lengths ( < 10 4 ), we confirm that a Transformer requires less memory than a LSTM to reach high test accuracies.",
"This may help explain why Transformers outperform RNNs/LSTMs in syntactical tasks in NLP, and shed light into fundamental differences between recurrent and non-recurrent sequence processing.",
"Our work primarily relates to the ongoing effort of characterizing theoretical abilities (Prez et al., 2019; Bhattamishra et al., 2020b; Yun et al., 2020)",
"and limitations of self-attention networks, particularly through formal hierarchical structures like Dyck k .",
"Hahn (2020) proves that (even with positional encodings) hard-attention Transformers cannot model Dyck k , and soft-attention Transformers with bounded Lipschitz continuity cannot model Dyck k with perfect cross entropy.",
"Bhattamishra et al. (2020a) prove a soft-attention network with positional masking (but no positional encodings) can solve Dyck 1 but not Dyck 2 .",
"Despite the expressivity issues theoretically posed by the above work, empirical findings have shown Transformers can learn Dyck k from finite samples and outperform LSTM (Ebrahimi et al., 2020).",
"Our work addresses the theory-practice discrepancy by using positional encodings and modeling Dyck k,D .",
"A parallel line of work with much lengthier tradition (Elman, 1990; Das et al., 1992; Steijvers and Grnwald, 1996) investigates the abilities and limitations of recurrent networks to process hierarchical structures.",
"In particular, RNNs or LSTMs are proved capable of solving context-free languages like Dyck k given infinite precision (Korsky and Berwick, 2019) or external memory (Suzgun et al., 2019; Merrill et al., 2020).",
"However, Merrill et al. (2020) also prove RNNs/LSTMs cannot process Dyck k without such assumptions, which aligns with experimental findings that recurrent networks perform or generalize poorly on Dyck k (Bernardy, 2018; Sennhauser and Berwick, 2018; Yu et al., 2019).",
"Hewitt et al. (2020) propose to consider Dyck k,D as it better captures natural language, and show finite-precision RNNs can solve Dyck k,D with ( D log k ) memory.",
"For the broader NLP community, our results also contribute to settling whether self-attention networks are restricted to model hierarchical structures due to non-recurrence, a concern (Tran et al., 2018) often turned into proposals to equip Transformers with recurrence (Dehghani et al., 2019; Shen et al., 2018; Chen et al., 2018; Hao et al., 2019).",
"On one hand, Transformers are shown to encode syntactic (Lin et al., 2019; Tenney et al., 2019; Manning et al., 2020) and word order (Yang et al., 2019) information, and dominate syntactical tasks in NLP such as constituency (Zhang et al., 2020) and dependency (He and Choi, 2019) parsing.",
"On the other hand, on several linguistically-motivated tasks like English subject-verb agreement (Tran et al., 2018), recurrent models are reported to outperform Transformers.",
"Our results help address the issue by confirming that distributed and recurrent sequence processing can both model hierarchical structure, albeit with different mechanisms and tradeoffs.",
"Consider the vocabulary of k types of open and close brackets = i [ k ] {(cid:104) i , (cid:105) i } , and define Dyck k ( , being special start and end tokens) to be the formal language of well-nested brackets of k types.",
"It is generated starting from X through the following context-free grammar: X (cid:15) | (cid:104) i X (cid:105) i X ( i [ k ]) (1) where (cid:15) denotes the empty string.",
"Intuitively, Dyck k can be recognized by sequential scanning with a stack (i.e., a pushdown au-tomaton).",
"Open brackets are pushed into the stack, while a close bracket causes the stack to pop, and the popped open bracket is compared with the current close bracket (they should be of the same type).",
"The depth of a string w 1: n at position i is the stack size after scanning w 1: i , that is, the number of open brackets left in the stack: d ( w 1: i ) = count ( w 1: i , (cid:104) ) count ( w 1: i , (cid:105) ) (2) Finally, we define Dyck k,D to be the subset of Dyck k strings with depth bounded by D : Dyck k,D = (cid:26) w 1: n Dyck k (cid:12)(cid:12)(cid:12)(cid:12) max i [ n ] d ( w 1: i ) D (cid:27) That is, a string in Dyck k,D only requires a stack with bounded size D to process.",
"We consider the encoder part of the original Transformer (Vaswani et al., 2017), which has multiple layers of two blocks each:",
"(i) a self-attention block and",
"(ii) a feed-forward network (FFN).",
"For an input string w 1: n , each input token w i is converted into a token embedding via f e : R d model , then added with a position encoding p i R d model .",
"Let x i,(cid:96) R d model be the i -th representation of the (cid:96) -th layer ( i [ n ] , (cid:96) [ L ] ).",
"Then x i, 0 = f e ( w i ) + p i (3) a i,(cid:96) = Att (cid:96) ( Q (cid:96) ( x i ) , K (cid:96) ( x ) , V (cid:96) ( x )) (4) x i,(cid:96) +1 = F (cid:96) ( a i,(cid:96) ) (5) Attention In each head of a self-attention block, the input vectors x 1: n undergo linear transforms Q, K, V yielding query, key, and value vectors.",
"They are taken as input to a self-attention module, whose t -th output, Att( Q x i , K x , V x ) , is a vector a i = (cid:80) j [ T ] j V x j , where 1: n = softmax ( (cid:104) Q x i , K x 1 (cid:105) , , (cid:104) Q x i , K x n (cid:105) ) .",
"The final attention output is the concatenation of multihead attention outputs.",
"We also consider variants of the basic model along these directions:",
"(i) Hard attention , as opposed to soft attention described above, where hardmax is used in place for softmax (i.e. Att( Q x i , K x , V x ) = V x j (cid:48) where j (cid:48) = arg max j (cid:104) Q x i , K x j (cid:105) ).",
"Though impractical for NLP, it has been used to model formal languages (Hahn, 2020).",
"(ii) Positional masking , where 1: i (past) or i : n (future) is masked for position i .",
"Future-positional masking is usually used to train auto-regressive models like GPT-2 (Radford et al., 2019).",
"Feed-forward network A feed-forward network F transforms each self-attention output vector a i F ( a i ) individually.",
"It is usually implemented as a multi-layer perceptron (MLP) with ReLU activations.",
"Residual connections (He et al., 2016) and layer normalization (Ba et al., 2016) are two optional components to aid learning.",
"Positional encodings Vaswani et al. (2017) proposes two kinds of positional encoding:",
"(i) Fourier features (Rahimi and Recht, 2007), i.e. sine/cosine values of different frequencies;",
"(ii) learnable features for each position.",
"In this work we propose to use a single scalar i/n to encode position i [ n ] , and show that it helps process formal languages like Dyck k,D , both theoretically and empirically.",
"Precision and memory size We define precision to be the number of binary bits used to represent each scalar, and memory size per layer ( d model ) to be the number of scalars used to represent each token at each layer.",
"The memory size ( L d model ) is the total memory used for each token.",
"For a Transformer with L layers and input w 1: i , we can use a decoder (MLP + softmax) on the final token output x i,L to predict w i +1 .",
"This defines a language model f ( w i +1 | w i ) where denotes Transformer and decoder parameters.",
"We follow previous work (Hewitt et al., 2020) to define how a language model can generate a formal language: Definition 3.1 (Language generation) .",
"Language model f over (cid:63) generates a language L (cid:63) if there exists (cid:15) > 0 such that L = { w 1: n (cid:63) | i [ n ] , f ( w i | w 1: i 1 ) (cid:15) } .",
"We also consider language recognition by a language classifier g ( w 1: i ) , where a decoder on x i,L instead predicts a binary label.",
"In this section we state our theoretical results along with some remarks.",
"Proof sketches are provided in the next section, and details in Appendix A,B,C.",
"Theorem 4.1 (Hard-attention, Dyck k,D recognition) .",
"For all k, D N + , there exists a ( D + 1) layer hard-attention network that can recognize Dyck k,D .",
"It uses both future and past positional masking heads, positional encoding of the form i/n for position i , O (log k ) memory size per layer, and O (log n ) precision, where n is the input length.",
"Theorem 4.2 (Soft-attention, Dyck k,D generation) .",
"For all k, D N + , there exists a 2-layer soft-attention network that can generate Dyck k,D .",
"It uses future positional masking, positional encoding of form i/n for position i , O (log k ) memory size per layer, and O (log n ) precision, where n is the input length.",
"The feed-forward networks use residual connection and layer normalization.",
"Theorem 4.3 (Precision lower bound) .",
"For all k N + , no hard-attention network with o (log n ) precision can recognize Dyck k, 2 where n is the input length.",
"Required precision Both constructions require a precision increasing with input length, as indicated by Theorem 4.3.",
"The proof of the lower bound is inspired by the proof in Hahn (2020), but several technical improvements are necessary; see Appendix C. Intuitively, a vector with a fixed dimension and o (log n ) precision cannot even represent n positions uniquely.",
"The required precision is not unreasonable, since log n is a small overhead to the n tokens the system has to store.",
"Comparison to recurrent processing Hewitt et al. (2020) constructs a 1-layer RNN to generate Dyck k,D with ( D log k ) memory, and proves it is optimal for any recurrent network.",
"Thus Theorem 4.2 establishes a memory advantage of self-attention networks over recurrent ones.",
"However, this is based on two tradeoffs:",
"(i) Precision .",
"Hewitt et al. (2020) assumes O (1) precision while we require O (log n ) .",
"(ii) Runtime .",
"Runtime of recurrent and self-attention networks usually scale linearly and quadratically in n , respectively.",
"Comparison between two constructions Theorem 4.2 requires fewer layers ( 2 vs. D ) and memory size ( O (log k ) vs. O ( D log k ) ) than Theorem 4.1, thanks to the use of soft-attention, residual connection and layer normalization.",
"Though the two constructions are more suited to the tasks of recognition and generation respectively (Section 5), each of them can also be modified for the other task.",
"Connection to Dyck k In Hahn (2020) it is shown that no hard-attention network can recognize Dyck k even for k = 1 .",
"Theorem 4.1 establishes that this impossibility can be circumvented by bounding the depth of the Dyck language.",
"Hahn (2020) also points out soft-attention networks can be limited due to bounded Lipschitz continuity.",
"In fact, our Theorem 4.2 construction can also work on Dyck k with some additional assumptions (e.g. feed n also in input embeddings), and we circumvent the impossibility by using laying normalization, which may have an O ( n ) Lipschitz constant.",
"More details are in Appendix B.4.",
"Our insight underlying the construction in Theorem 4.1 is that, by recursively removing matched brackets from innermost positions to outside, each token only needs to attend to nearest unmatched brackets to find its matching bracket or detect error within D layers.",
"Specifically, at each layer (cid:96) D , each token will be in one of three states (Figure 2",
"(c)):",
"(i) Matched ,",
"(ii) Error ,",
"(iii) Unmatched , and we leverage hard-attention to implement a dynamic state updating process to recognize Dyck k,D .",
"Representation For an input w 1: n , the representation at position i of layer (cid:96) has five parts x i,(cid:96) = [ t i , o i , p i , m i,(cid:96) , e i,(cid:96) ] :",
"(i) a bracket type embedding t i R (cid:100) log k (cid:101) that denotes which bracket type ( 1 k ) the token is (or if the token is start/end token);",
"(ii) a bracket openness bit o i { 0 , 1 } , where 1 denotes open brackets (or start token) and 0 denotes close one (or end token);",
"(iii) a positional encoding scalar p i = i/n ;",
"(iv) a match bit m i,(cid:96) { 0 , 1 } , where 1 denotes matched and 0 unmatched;",
"(v) an error bit e i,(cid:96) { 0 , 1 } , where 1 denotes error and 0 no error.",
"Token identity parts t i , o i , p i are maintained unchanged throughout layers.",
"The match and error bits are initialized as e i, 0 = m i, 0 = 0 .",
"The first D layers have identical self-attention blocks and feed-forward networks, detailed below.",
"Attention Consider the (cid:96) -th self-attention layer ( (cid:96) [ D ] ), and denote x i = x i,(cid:96) 1 , m i = m i,(cid:96) 1 , a i = a i,(cid:96) , y i = x i,(cid:96) for short.",
"We have 3 attention heads:",
"(i) an identity head Att id , where each token only attends to itself with attention output a id i = x i ;",
"(ii) a left head Att left with future positional masking;",
"(iii) a right head Att right with past positional masking.",
"The query, key, and value vectors for Att left are defined as Q x i = 1 R , K x i = p i m i R , V x i = x i R d model , so that a left i = x j 1 , j 1 = arg max j<i ( j/n m j ) is the representation of the nearest unmatched token to i on its left side.",
"is the representation of the nearest unmatched token to i on its right side.",
"The attention output for position i is the concatenation of these three outputs: a i = [ a id i , a left i , a right i ] = [ x i , x j 1 , x j 2 ] .",
"Feed-forward network (FFN) Following the notation above, the feed-forward network F : a i y i serves to update each position's state using information from x j 1 , x j 2 .",
"The high level logic (Fig-ure 2",
"(c)) is that, if w i is an open bracket, its potential matching half should be w j = w j 2 ( j 2 > i ), otherwise it should be w j = w j 1 ( j 1 < i ).",
"If w i and w j are one open and one close, they either match (same type) or cause error (different types).",
"If w i and w j are both open or both close, no state update is done for position i .",
"Besides, token identity parts t i , o i , p i are copied from a id i to pass on.",
"The idea can be translated into a language of logical operations ( , , ) plus a SAME ( t , t (cid:48) ) operation, which returns 1 if vectors t = t (cid:48) and 0 otherwise: y i = [ t i , o i , p i , m (cid:48) i , e (cid:48) i ] m (cid:48) i = m i ( o i o j 2 s 1 ) ( o i o j 1 s 2 ) e (cid:48) i = e i ( o i o j 2 s 1 ) ( o i o j 1 s 2 ) s 1 = SAME ( t i , t j 1 ) s 2 = SAME ( t i , t j 2 ) ( [ ] ( ] ) Layer 1 ( [ ] ( ] ) Layer 2 ( [ ] ( ] ) Layer 3 output: 0/1 ( [ ] ( ] ) FFN ([](]) ( [ ] ( ] ) ([](]) ( [ ] ( ] ) ([](]) ( [ ] ( ] ) ( [ ] ( ] ) matched error unmatched unmatched w 1: n x 1: n ( [ ] ( ] )",
"As we show in Appendix A, a multi-layer perception with ReLU activations can simulate all operations ( , , , SAME ) , thus the existence of our desired FFN.",
"Final layer At the ( D + 1) -th layer, the self attention is designed as Q x i = 1 R , K x i = e i +1 m i R , V x i = ( e i , m i ) R 2 .",
"If all brackets are matched without error ( ( e i , m i ) = (0 , 1) ), all keys would be 0 , and the attention output of the last token a n would be (0 , 1) .",
"If any bracket finds error ( e i = 1) or is not matched ( m i = 0) , the key would be at least 1 and a n would not be (0 , 1) .",
"An FNN that emulates ( a, b ) (cid:55) a b will deliver y n as the recognition answer.",
"Our Theorem 4.2 construction takes advantage of soft attention, residual connection, and layer normalization to calculate each token depth and translate it into a vector form at the first layer.",
"Using the depth information, at the second layer each w i can attend to the stack-top open bracket at the position, in order to decide if open brackets or which type of close brackets can be generated as the next token (Figure 3).",
"bracket type embedding t i , bracket openness bit o i , position encoding p i already specified in Section 5.1.",
"The last part d i,(cid:96) R 2 is used to store depth information for position i , and initialized as d i, 0 = (0 , 0) .",
"First Layer Depth Counting The first self-attention layer has two heads, where an Att id head is still used to inherit t i , o i , p i , and a future positional masking head 2 Att d aims to count depth with Q x i = K x i = 1 and V x i = 2 o i 1 , resulting in uniform attention scores and attention output a di = (cid:80) j i 1 i (2 o j 1) = d ( w 1: i ) /i .",
"However, our goal is to enable matching based on depth d i = d ( w 1: i ) , and the attention output d i /i isn't readily usable for such a purpose: the denominator i is undesirable, and even a scalar d i cannot easily attend to the same value using dot-product attention.",
"Thus in the first feed-forward network, we leverage residual connection and layer normalization to transform d i /i (cid:55) d i = (cos( ( d i )) , sin( ( d i ))) (6) where ( d ) = arctan (cid:16) d D +2 d (cid:17) has an unique 2 Here we assume w i +1: n is masked for position i , just for convenience of description.",
"value for every d { 0 , , D + 1 } , so that d i d j (cid:40) = 1 d i = d j < 1 1 10 D 2 d i (cid:54) = d j (7)",
"The representation by the end of first layer is x i, 1 = [ t i , o i , p i , d i ] .",
"The full detail for the first FFN is in Appendix B.1.",
"Second layer Depth Matching The second self-attention layer has a depth matching hard-attention head Att match , with query, key, value vectors as Q x i = [20 D 2 d i , 1 , 2] R 4 , K x i = [ d i , p i , o i ] R 4 , V x i = x i , so that attention score (cid:104) Q x i , K x j (cid:105) = 20 D 2 d i d j + j/n + 2 o j (cid:40) = 20 D 2 + 2 + j/n d i = d j , o j = 1 20 D 2 + 1 otherwise would achieve its maximum when w j ( j i ) is the open bracket (or start token) closest to w i with d j = d i .",
"The attention output is a i = [ a id i , a match i ] = [ x i , x j ] where j = max { j i | d i = d j o j = 1 } .",
"With such a [ x i , x j ] , the second-layer FFN can readily predict what w i +1 could be.",
"It could be any open bracket when d i < D (i.e. cos( ( d i )) > cos( ( D )) ), and it could be a close bracket with type as t j (or end token if w j is start token).",
"The detailed construction for such a FFN is in Appendix B.2.",
"On Dyck k Generation In fact, this theoretical construction can also generate Dyck k , as intuitively the O (log n ) precision assumption allows counting depth up to O ( n ) .",
"But it involves extra conditions like feeding n into network input, and may not be effectively learned in practice.",
"Please refer to details in Appendix B.4.",
"Connection to Empirical Findings Our theoretical construction explains the observation in Ebrahimi et al. (2020): the second layer of a two-layer Transformer trained on Dyck k often produces virtually hard attention, where tokens attend to the stack-top open bracket (or start token).",
"It also explains why such a pattern is found less systematically as input depth increases, as (6) is hard to learn and generalize to unbounded depth in practice.",
"Our constructions show the existence of self-attention networks that are capable of recognizing and generating Dyck k,D .",
"Now we bridge theoretical insights into experiments, and study whether such networks can be learned from finite samples and generalize to longer input.",
"The answer is af-firmative when the right positional encodings and memory size are chosen according to our theory.",
"We first present results on Dyck 8 , 10 (Sec-tion 6.1) as an example Dyck k,D language to investigate the effect of different positional encoding schemes, number of layers, and hidden size on the Transformer performance, and to compare with the LSTM performance.",
"We then extend the Transformer vs. LSTM comparison on more Dyck k,D languages ( k { 2 , 8 , 32 , 128 } , D { 3 , 5 , 10 , 15 } ) in Section 6.2.",
"Finally, we apply 1 2 3 4 5 10 # Layers 0.6 0.7 0.8 0.9 1.0 C l o s e A cc u r a cy",
"the novel scalar positional encoding to natural language modeling with some preliminary findings (Section 6.3).",
"Setup For Dyck 8 , 10 , we generate training and validation sets with input length n 700 , and test set with length 700 < n 1400 .",
"We train randomly initialized Transformers using the Huggingface library (Wolf et al., 2019), with one future positional masking head, L { 1 , 2 , 3 , 4 , 5 , 10 } layers, and a default memory size d model = 30 .",
"We search for learning rates in { 0 .",
"01 , 0 .",
"001 } , run each model with 3 trials, and report the average accuracy of generating close brackets, the major challenge of Dyck k,D .",
"More setup details are in Appendix D.1.",
"Positional Encodings We compare 3 types of positional encodings:",
"(i) Fourier features ( COS );",
"(ii) learnable features ( LEARN );",
"(iii) a scalar i/ 6000 for position i ( POS /N ).",
"Note that (i,",
"ii) are original proposals in Vaswani et al. (2017), where positional encoding vectors are added to the token embed-dings, while our proposal",
"(iii) encodes the position as a fixed scalar separated from token embeddings.",
"On the validation set of Dyck 8 , 10 (see Appendix D.2), all three models achieve near-perfect accuracy with L 2 layers.",
"On the test set (Fig-ure",
"4(a)) however, only POS /N maintains near-perfect accuracy, even with L = 10 layers.",
"Meanwhile, LEARN and COS fail to generalize, because encodings for position 700 < i 1400 are not learned (for LEARN ) or experienced (for COS ) during training.",
"The result validates our theoretical construction, and points to the need for separate and systemic positional encodings for processing long and order-sensitive sequences like Dyck k,D .",
"Memory Size and Comparison with LSTM We compare a two-layer Transformer ( POS /N ) with a one-layer LSTM 3 (Hochreiter and Schmidhu-ber, 1997) using varying per-layer memory sizes d model { 10 , 20 , , 100 } .",
"As Figure 4",
"(b) shows, the Transformer consistently outperforms the LSTM on the validation set.",
"On the test set (Figure 4",
"(c)), the Transformer and the LSTM first achieve a > 90% accuracy using d model = 20 and 40 respectively, and an accuracy of > 95% with d model = 30 and 50 , respectively.",
"These findings agree with our theoretical characterization that self-attention networks have a memory advantage over recurrent ones.",
"Setup In order to generalize some of the above results, we generate a wide range of Dyck k,D languages with different vocabulary sizes ( k { 2 , 8 , 32 , 128 } ) and recursion bounds ( D { 3 , 5 , 10 , 15 } ).",
"We continue to compare the one-layer LSTM versus the two-layer Transformer ( POS /N ).",
"For each model on each language, we perform a hyperparameter search for learning rate in {0.01, 0.001} and memory size d model { 10 , 30 , 50 } , and report results from the best setting based on two trials for each setting.",
"3 LSTMs only need one layer to process Dyck k,D (Hewitt et al., 2020), while Transformers at least need two in our constructions.",
"We also experimented with two-layer LSTMs but did not find improved performance.",
"Results The validation and test accuracy of the models are reported in Figure 5, and more fine-grained results for each d model { 10 , 30 , 50 } are in Appendix D.2.",
"The Transformer attains a > 99 .",
"9% validation accuracy and a > 94% test accuracy across all languages, strengthening the main claim that self-attention networks can learn Dyck k,D languages and generalize to longer input.",
"On the other hand, the validation and test accuracy of the LSTM model are less than 80% when the vocabulary size and recursion depth are large, i.e. ( k, D ) { (32 , 15) , (128 , 10) , (128 , 15) } 4 , which reconfirms Transformers' memory advantage under limited memory ( d model 50 ).",
"In Section 6.1, we show a Transformer with the scalar positional encoding scheme ( POS /N ) can learn Dyck k,D and generalize to longer input, while traditional positional encoding schemes (( COS ), ( LEARN )) lead to degraded test performance.",
"To investigate whether such a novel scheme is also useful in NLP tasks, we train two RoBERTa 5 models ( POS /N , LEARN ) from scratch on the WikiText-103 dataset (Merity et al., 2017) for 150 epochs.",
"Figure 6 shows the masked language modeling loss on both training and validation sets.",
"By the end of the training, POS /N has a slightly larger validation loss (1.55) than LEARN (1.31).",
"But throughout the optimization, POS /N shows a gradual decrease of loss while LEARN has a sudden drop of loss around 20-30 epochs.",
"We believe it will be interest-4 Note that Hewitt et al. (2020) only reports D { 3 , 5 } .",
"5 We also tried language modeling with GPT-2 models, and POS /N has slightly larger train/validation losses than LEARN throughout the training.",
"Interestingly, using no positional encoding leads to the same loss curves as LEARN , as positional masking leaks positional information.",
"ing for future work to explore how POS /N performs on different downstream tasks, and why POS /N seems slightly worse than LEARN (at least on this MLM task), though theoretically it provides the complete positional information for Transformers.",
"These topics will contribute to a deeper understanding of positional encodings and how Transformers leverage positional information to succeed on different tasks.",
"In this paper, we theoretically and experimentally demonstrate that self-attention networks can process bounded hierarchical languages Dyck k,D , even with a memory advantage over recurrent networks, despite performing distributed processing of sequences without explicit recursive elements.",
"Our results may explain their widespread success at modeling long pieces of text with hierarchical structures and long-range, nested dependencies, including coreference, discourse and narratives.",
"We hope these insights can enhance knowledge about the nature of recurrence and parallelism in sequence processing, and lead to better NLP models.",
"Our work is mainly theoretical with no foreseeable ethical issues."
] | [
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"result",
"method",
"abstain",
"abstain",
"result",
"abstain",
"objective",
"result",
"abstain",
"method",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain"
] |
[
"Frame-based state representation is widely used in modern task-oriented dialog systems to model user intentions and slot values.",
"However, a fixed design of domain ontology makes it difficult to extend to new services and APIs.",
"Recent work proposed to use natural language descriptions to define the domain ontology instead of tag names for each intent or slot, thus offering a dynamic set of schema.",
"In this paper, we conduct in-depth comparative studies to understand the use of natural language description for schema in dialog state tracking.",
"Our discussion mainly covers three aspects: encoder architectures, impact of supplementary training, and effective schema description styles.",
"We introduce a set of newly designed bench-marking descriptions and reveal the model robustness on both homogeneous and heterogeneous description styles in training and evaluation.",
"From early frame-driven dialog system GUS (Bo-brow et al., 1977) to virtual assistants (Alexa, Siri, and Google Assistant et al. ), frame-based dialog state tracking has long been studied to meet various challenges.",
"In particular, how to support an ever-increasing number of services and APIs spanning multiple domains has been a focal point in recent years, evidenced by multi-domain dialog modeling (Budzianowski et al., 2018; Byrne et al., 2019; Shah et al., 2018a) and transferable dialog state tracking to unseen intent/slots (Mrkic et al., 2017; Wu et al., 2019; Hosseini-Asl et al., 2020).",
"Recently, Rastogi et al. (2019) proposed a new paradigm called schema-guided dialog for transferable dialog state tracking by using natural language description to define a dynamic set of service schemata.",
"As shown in Figure 1, the primary motivation is that these descriptions can offer effective Work done when Jie Cao was an intern at Amazon Figure 1: An example dialog from Restaurant_1 service, along with its service/intent/slot descriptions and dialog state representation.",
"knowledge sharing across different services, e.g., connecting semantically similar concepts across heterogeneous APIs, thus allowing a unified model to handle unseen services and APIs.",
"With the publicly available schema-guided dialog dataset (SG-DST henceforward) as a testbed, they organized a state tracking shared task composed of four subtasks: intent classfication ( Intent ), requested slot identification ( Req ), categorical slot labeling ( Cat ), and noncategorical slot labeling ( NonCat ) (Rastogi et al., 2020).",
"Many participants achieved promising performance by exploiting the schema description for dialog modeling, especially on unseen services.",
"Despite the novel approach and promising results, current schema-guided dialog state tracking task only evaluates on a single dataset with limited variation in schema definition.",
"It is unknown how this paradigm generalizes to other datasets and other different styles of descriptions.",
"In this paper, we focus our investigation on the study of three aspects in schema-guided dialog state tracking: (1) schema encoding model architectures (2) supplementary training on intermediate tasks (3) various styles for schema description.",
"To make a more general discussion on the schema-guided dialog state tracking, we perform extensive empirical studies on both SG-DST and MULTIWOZ 2.2 datasets.",
"In summary, our contributions include: A comparative study on schema encoding architectures, suggesting a partial-attention encoder for good balance between inference speed and accuracy.",
"An experimental study of supplementary training on schema-guided dialog state tracking, via intermediate tasks including natural language inference and question answering.",
"An in-depth analysis of different schema description styles on a new suite of benchmarking datasets with variations in schema description for both SG-DST and MULTIWOZ 2.2.",
"A classic dialog state tracker predicts a dialog state frame at each user turn given the dialog history and predefined domain ontology.",
"As shown in Figure 1, the key difference between schema-guided dialog state tracking and the classic paradigm is the newly added natural language descriptions.",
"In this section, we first introduce the four subtasks and schema components in schema-guided dialog state tracking, then we outline the research questions in our paper.",
"Subtasks.",
"As shown in Figure 1, the dialog state for each service consists of 3 parts: active intent , requested slots , user goals (slot values) .",
"Without loss of generality, for both SG-DST and MULTIWOZ 2.2 datasets, we divide their slots into categorical and non-categorical slots by following previous study on dual-strategies (Zhang et al., 2019).",
"Thus to fill the dialog state frame for each user turn, we solve four subtasks: intent classification ( Intent ), requested slot identification ( Req ), categorical slot labeling ( Cat ), and non-categorical slot labeling ( NonCat ).",
"All subtasks require matching the current dialog history with candidate schema descriptions for multiple times.",
"Schema Components.",
"Figure 1 shows three main schema components: service, intent, slot.",
"For each intent, the schema also describes optional or required slots for it.",
"For each slot, there are flags indicating whether it is categorical or not.",
"Categorical means there is a set of predefined candidate values (Boolean, numeric or text).",
"For instance, has_live_music in Figure 1 is a categorical slot with Boolean values.",
"Non-categorical , on the other hand, means the slot values are filled from the string spans in the dialog history.",
"Q1.",
"How should dialogue and schema be encoded?",
"5 Q2.",
"How do different supplementary trainings impact each subtask?",
"6 Q3.",
"How do different description styles impact the state tracking performance?",
"7 3 Related Work Our work is related to three lines of research: multi-sentence encoding, multi-domain and transferable dialog state tracking.",
"However, our focus is on the comparative study of different encoder architectures, supplementary training, and schema description style variation.",
"Thus we adopt existing strategies from multi-domain dialog state tracking.",
"Multi-Sentence Encoder Strategies.",
"Similar to the recent study on encoders for response selection and article search tasks Humeau et al. (2019), we also conduct our comparative study on the two typical architectures Cross-Encoder (Bordes et al., 2014; Lowe et al., 2015) and Dual-Encoder (Wu et al., 2017; Yang et al., 2018).",
"However, they only focus on sentence-level matching tasks.",
"All subtasks in our case require sentence-level matching between dialog context and each schema, while the non-categorical slot filling task also needs to produce a sequence of token-level representation for span detection.",
"Hence, we study multi-sentence encoding for both sentence-level and token-level tasks.",
"Moreover, to share the schema encoding across subtasks and turns, we also introduce a simple Fusion-Encoder by caching schema token embeddings in 5.1, which improves efficiency without sacrificing much accuracy.",
"Multi-domain Dialog State Tracking.",
"Recent research on multi-domain dialog system have been largely driven by the release of large-scale multi-domain dialog datasets, such as MultiWOZ (Budzianowski et al., 2018), M2M (Shah et al., 2018a), accompanied by studies on key issues such as in/cross-domain carry-over (Kim et al., 2019).",
"In this paper, our goal is to understanding the design choice for schema descriptions in dialog state tracking.",
"Thus we simply follow the in-domain cross-over strategies used in TRADE (Wu et al., 2019).",
"Additionally, explicit cross-domain carryover (Naik et al., 2018) is difficult to generalize to new services and unknown carryover links.",
"We use longer dialog history to inform the model on the dialog in the previous service.",
"This simpli-fied strategy does impact our model performance negatively in comparison to a well-designed dialog state tracking model on seen domains.",
"However, it helps reduce the complexity of matching extra slot descriptions for cross-service carryover.",
"We leave the further discussion for future work.",
"Transferable Dialog State Tracking.",
"Another line of research focuses on how to build a transferable dialog system that is easily scalable to newly added intents and slots.",
"This covers diverse top-ics including e.g., resolving lexical/morphological variabilities by symbolic de-lexicalization-based methods (Henderson et al., 2014; Williams et al., 2016), neural belief tracking (Mrkic et al., 2017), generative dialog state tracking (Peng et al., 2020; Hosseini-Asl et al., 2020), modeling DST as a question answering task (Zhang et al., 2019; Lee et al., 2019; Gao et al., 2020, 2019).",
"Our work is similar with the last class.",
"However, we further investigate whether the DST can benefit from NLP tasks other than question answering.",
"Furthermore, without rich description for the service/intent/slot in the schema, previous works mainly focus on simple format on question answering scenarios, such as domain-slot-type compounded names (e.g., restaurant-food \"), or simple question template What is the value for slot i ? \".",
"We incorporate different description styles into a comparative discussion on 7.1.",
"To the best of our knowledge, at the time of our study, SG-DST and MULTIWOZ 2.2 are the only two publicly available corpus for schema-guided dialog study.",
"We choose both of them for our study.",
"In this section, we first introduce these two representative datasets, then we discuss the generaliz-ibility in domain diversity, function overlapping, data collecting methods.",
"Schema-Guided Dialog Dataset.",
"SG-DST dataset 1 is especially designed as a test-bed for schema-guided dialog, which contains well-designed heterogeneous APIs with overlapping functionalities between services (Rastogi et al., 2019).",
"In DSTC8 (Rastogi et al., 2020), SG-DST was introduced as the standard benchmark dataset for schema-guided dialog research.",
"SG-DST covers 20 domains, 88 intents, 365 slots.",
"2 However, previous research are mainly conducted based on this single dataset and the provided single description style.",
"In this paper, we further extended this dataset with other benchmarking description styles as shown in 7, and then we perform both homogenous and hetergenous evalution on it.",
"Remixed MultiWOZ 2.2 Dataset.",
"To eliminate potential bias from the above single SG-DST dataset, we further add MULTIWOZ 2.2 (Zang et al., 2020) to our study.",
"Among various extended versions for MultiWOZ dataset (2.0-2.3, Budzianowski et al., 2018; Eric et al., 2020; Zang et al., 2020; Han et al., 2020) , besides rectifying the annotation errors, MULTIWOZ 2.2 also introduced the schema-guided annotations, which covers 8 domains, 19 intents, 36 slots.",
"To evaluate performance on seen/unseen services with MultiWOZ, we remix the MULTIWOZ 2.2 dataset to include as seen services dialogs related to restaurant , attraction and train during training, and eliminate slots from other domains/services from training split.",
"For dev, we add two new domains hotel and taxi as unseen services.",
"For test, we add all remaining domains as unseen, including those that have minimum overlap with seen services, such as hospital , police , bus .",
"The statistics of data splits are shown in Appendix A.2.",
"Note that this data split is different from the previous work on zero-shot MultiWOZ DST which takes a leave-one-out approach in Wu et al. (2019).",
"By remixing the data in the way described above, we can evaluate the zero-shot performance on MultiWOZ in a way largely compatible with SG-DST .",
"Discussion.",
"First, the two datasets cover diverse domains.",
"MULTIWOZ 2.2 covers various possible dialogue scenarios ranging from requesting basic information about attractions through booking a hotel room or travelling between cities.",
"While SG-DST covers more domains, such as Payments', Calender', DoctorServices' and so on.",
"Second, they include different levels of overlapping functionalities.",
"SG-DST allows frequent function overlapping between multiple services, within the same domain (e.g. BookOneWayTicket v.s. BookRoundTripTicket), or across different domains (BusTicket v.s. TrainTicket).",
"However, the overlapping in MULTIWOZ 2.2 only exists across different domains, e.g., destination', leaveat' slots for Taxi and Bus services, pricerange', bookday' for Restaurant and Hotel services.",
"Third, they are collected by two different approaches which are commonly used in dialog collecting.",
"SG-DST is firstly collected by machine-to-machine self-play (M2M, Shah et al., 2018b) with dialog flows as seeds, then paraphrased by crowd-workers.",
"While MULTIWOZ 2.2 are human-to-human dialogs (H2H, Kelley, 1984), which are collected with the Wizard-of-Oz approach.",
"We summarize the above discussion in Table 1. We believe that results derived from these two representative datasets can guide future research in schema guided dialog.",
"In this section, we focus on the model architecture for matching dialog history with schema descriptions using pretrained BERT (Devlin et al., 2019) 3 .",
"To support four subtasks, we first extend Dual-Encoder and Cross-Encoder to support both sentence-level matching and token-level prediction.",
"Then we propose an additional Fusion-Encoder strategy to get faster inference without sacrificing much accuracy.",
"We summarize different architectures in Figure 2. Then we show the classification head and results for each subtask.",
"Dual-Encoder.",
"It consists of two separate BERTs to encode dialog history and schema description respectively, as Figure 2",
"(a).",
"We follow the setting in the official baseline provided by DSTC8 Track4 (Rastogi et al., 2020).",
"We first use a fixed BERT to encode the schema description once and cached the encoded schema CLSS .",
"Then for sentence-level representation, we concatenate dialog history representation CLSD and candidate schema representation CLSS as the whole sentence-level representation for the pair, denoted as CLSDE .",
"For token-level representation, we concatenate the candidate schema CLSS with each token embedding in the dialog history, denoted as TOKDE .",
"4 Because the candidate schema embeddings are encoded independently from the di-4 A schema-aware dialog token embedding can also be computed by attention or other method for span-based detection tasks (Humeau et al., 2019; Noroozi et al., 2020) alog context, they can be pre-computed once and cached for fast inference.",
"Cross-Encoder.",
"Another popular architecture as Figure 2",
"(b) is Cross-Encoder , which concatenates the dialog and schema as a single input, and encodes jointly with a single self-attentive encoder spanning over the two segments.",
"When using BERT to encode the concatenated sentence pair, it performs full (cross) self-attention in every transformer layers, thus offer rich interaction between the dialog and schema.",
"BERT naturally produces a summarized representation with [CLS] embedding CLSCE and each schema-attended dialog token embeddings TOKCE .",
"Since the dialog and schema encoding always depend on each other, it requires recomputing dialog and schema encoding for multiple times, thus much slower in inference.",
"Fusion-Encoder.",
"In Figure 2",
"(c), similar to Dual-Encoder , Fusion-Encoder also encodes the schema independently with a fixed BERT and finetuning another BERT for dialog encoding.",
"However, instead of caching a single [CLS] vector for schema representation, it caches all token representation for the schema including the [CLS] token.",
"What's more, to integrate the sequences dialog token representation with schema token representation, an extra stack of transformer layers are added on top to allow token-level fusion via self-attention, similar to Cross-Encoder .",
"The top transformer layers will produce embeddings for each token TOKFE including a schema-attended CLSFE of the input [CLS] from the dialog history.",
"With cached schema token-level representations, it can efficiently produce schema-aware sentenceand token-level representation for each dialog-schema pairs.",
"All the above 3 encoders will produce both sentenceand token-level representations for a given sentence pair.",
"In this section, we abstract them as two representations CLS and TOK , and present the universal classification heads to make decisions for each subtask.",
"Active Intent.",
"To decide the intent for current dialog turn, we match current dialog history D with each intent descriptions I 0 ...I k .",
"For each dialog-intent pair ( D, I k ) , we project the final sentence-level CLS representation to a single number P activeI k with a linear layer follows a sigmoid function.",
"We predict \"NONE\" if the P activeI k of all intents are less than a threshold 0.5, which means no intent is active.",
"Otherwise, we predict the intent with largest P activeI k .",
"We predict the intent for each turn independently without considering the prediction on previous turns.",
"Requested Slot.",
"As in Figure 1, mulitple requested slots can exist in a single turn.",
"We use the same strategy as in active intent prediction to predict a number P activereq .",
"However, to support the multiple requested slots prediction.",
"We predict all the requested slots with P activereq > 0 .",
"5 .",
"Categorical Slot.",
"Categorical slots have a set of candidate values.",
"We cannot predict unseen values via n-way classification.",
"Instead, we do binary classification on each candidate value.",
"Besides, rather than directly matching with values, we also need to check that whether the corresponding slot has been activated.",
"For Cross-Encoder and Fusion-Encoder , we use typical two-stage state tracking to incrementally build the state: Step 1. Using CLS to predict the slot status as none , dontcare or active .",
"When the status is active , we use the predicted slot value; Otherwise, it will be assigned to dontcare meaning no user preference for this slot, or none meaning no value update for the slot in current turn; Step 2. If Step 1 is active , we match the dialog history with each value and select the most related value by ranking.",
"We train on cross entropy loss.",
"Two-stage strategy is efficient for Dual-Encoder and Fusion-Encoder , where cached schema can be reused, and get efficiently ranked globally in a single batch.",
"However, it is not scalable for Cross-Encoder , especially for large number of candidate values in MultiWOZ dataset.",
"Hence, during training, we only use a binary cross-entropy for each single value and postpone the ranking only to the inference time.",
"Noncategorical Slot.",
"The slot status prediction for noncategorical slot use the same two-stage strategy.",
"Besides that, we use the token representation of dialog history TOK to compute two softmax scores f istart and f iend for each token i , to represent the score of predicting the token as start and end position respectively.",
"Finally, we find the valid span with maximum sum of the start and end scores.",
"To fairly compare all three models, we follow the same schema input setting as in Table 2. We trained separate models for SG-DST and the remixed MultiWOZ datasets for all the experiments in our pa-Intent",
"pers 5 .",
"Because there are very few intent and requested slots in MULTIWOZ 2.2 dataset, we ignore the intent and requested slots tasks for MULTIWOZ 2.2 in our paper.",
"Results.",
"As shown in Table 3, Cross-Encoder performs the best over all subtasks.",
"Our Fusion-Encoder with partial attention outperforms the Dual-Encoder by a large margin, epsecially on categorical and noncategorical slots predictions.",
"Additionally, on seen services, we found that Dual-Encoder and Fusion-Encoder can perform as good as Cross-Encoder on Intent and Req tasks.",
"However, they cannot generalize well on unseen services as Cross-Encoder .",
"Inference Speed.",
"To test the inference speed, we conduct all the experiments with a maximum affordable batch size to fully exploit 2 V100 GPUs (with 16GB GPU RAM each).",
"During training, we log the inference time of each evaluation on dev set.",
"Both Dual-Encoder and Fusion-Encoder can do joint inference across 4 subtasks to obtain an integral dialog state for a dialog turn example.",
"Dual-Encoder achieves the highest inference speed of 603.35 examples per GPU second, because the 5 Appendix A.1 shows the detailed experiment setup encoding for dialog and schema are fully separated.",
"A dialog only needed to be encoded for once during the inference of a dialog state example while the schema are precomputed once.",
"However, for Cross-Encoder , to predict a dialog state for a single turn, it need to encode more than 300 sentence pairs in a batch, thus only processes 4.75 examples per GPU second.",
"Fusion-Encoder performs one time encoding on dialog history, but it needs to jointly encode the same amount of dialog-schema pair ws Cross-Encoder , instead, however, with a two-layer transformer encoder.",
"Overall it achieves 10.54 examples per GPU second, which is 2.2x faster than Cross-Encoder .",
"With regarding to the accuracy in Table 3, Fusion-Encoder performs much better than Dual-Encoder , especially on unseen services.",
"Besides the pretrain-fintune framework used in 5, Phang et al. (2018) propose to add a supplementary training phase on an intermediate task after the pretraining, but before finetuning on target task.",
"It shows significant improvement on the target tasks.",
"Moreover, large amount pretrained and finetuned transformer-based models are publicly accessible, and well-organized in model hubs for sharing, training and testing 6 .",
"Given the new task of schema-guided dialog state tracking, in this section, we study our four subtasks with different intermediate tasks for supplementary training.",
"As described in 5.2, all our 4 subtasks take a pair of dialog and schema description as input, and predict with the summerized sentence-pair CLS representation.",
"While NonCat also requires span-based detection such as question answering.",
"Hence, they share the similar problem structure with the following sentence-pair encoding tasks.",
"Natural Language Inference.",
"Given a hypothe-sis/premise sentence pair, natural language inference is a task to determine whether a hypothesis is entailed, contradicted or neutral given that premise.",
"Question Answering.",
"Given a passage/question pairs, the task is to extract the span-based answer in the passage.",
"Hence, when finetuning BERT on our subtaks, instead of directly using the originally pretrained BERT, we use the BERT finetuned on the above 6 e.g., Huggingface(https://huggingface.co/models) and ParlAL(https://parl.ai/docs/zoo.html), etc.",
"two tasks for further finetuning.",
"Due to better pefor-mance of Cross-Encoder in 5, we directly use the finetuned Cross-Encoder version of BERT models on SNLI and SQuAD2.0 dataset from Huggingface model hub.",
"We add extra speaker tokens [user:] and [system:] into the vocabulary for encoding the multi-turn dialog histories.",
"Table 4 shows the performances gain when finetuning 4 subtasks based on models with the above",
"SNLI and SQuAD2.0 supplementary training.",
"We mainly find that SNLI helps on Intent task, SQuAD2 mainly helps on NonCat task, while neither of them helps much on Cat task.",
"Recently, Namazifar et al. (2020) also found that when modeling dialog understanding as question answering task, it can benefit from a supplementary training on SQuAD2 dataset, especially on few-shot scenarios, which is a similar findings as our NonCat task.",
"Result difference on Req task is minor, because it is a relatively easy task, adding any supplementary training did n't help much.",
"Moreover, for Cat task, the sequence 2 of the input pair is the slot description with a categorical slot value, thus the meaning overlapping between the full dialog history and the slot/value is much smaller than SNLI tasks.",
"On the other side, CLS token in SQuAD BERT is finetuned for null predictions via start and end token classifers, which is different from the the single CLS classifer in Cat task.",
"Previous work on schema-guided dialog (Rastogi et al., 2020) are only based on the provided descriptions in SG-DST dataset.",
"Recent work on modeling dialog state tracking as reading comprehension (Gao et al., 2019) only formulate the descriptions as simple question format with existing intent/slot names, it is unknown how it performs when compared to other description styles.",
"Moreover, they only conduct homogeneous evaluation where training and test data share the same description style.",
"In this section, We also investigate how a model trained on one description style will perform on other different styles, especially in a scenario where chat-bot developers may design their own descriptions.",
"We first introduce different styles of descriptions in our study, and then we train models on each description style and evaluate on tests with corresponding homogeneous and heterogeneous styles of descriptions.",
"Given the best performance of Cross-Encoder shown in the previous section and its popularity in DSTC8 challenges, we adopt it as our model architecture in this section.",
"For each intent/slot, we describe their functionalities by the following different descriptions styles: Identifer .",
"This is the least informative case of name-based description: we only use meaningless intent/slot identifiers, e.g. Intent_1, Slot_2.",
"It means we don't use description from any schema component.",
"We want to investigate how a simple identifier-based description performs in schema-guided dialog modeling, and the performance lower-bound on transferring to unseen services.",
"NameOnly .",
"Using the original intent/slot names in SG-DST and MULTIWOZ 2.2 dataset as descriptions, to show whether name is enough for schema-guided dialog modeling.",
"Q-Name .",
"This is corresponding to previous work by Gao et al. (2019).",
"For each intent/slot, it generate a question to inquiry about the intent and slot value of the dialog.",
"For each slot, it simply follows the template ' What is the value for slot i ?",
"'.",
"Besides that, our work also extend the intent description by following the template Is the user intending to intent j \". Orig . The original descriptions in SG-DST and MULTIWOZ 2.2 dataset. Q-Orig . Different from the Q-Name , firstly it is based on the original descriptions; secondly, rather than always use the what is\" template to inquiry the intent/slot value, We add what\", which\", how many\" or when\" depending on the entity type required for the slot. Same as Q-Name , we just add prefixes as Is the user intending to. . . in front of the original description.",
"In a sum, this description is just adding question format to original description.",
"The motivation of this description is to see whether the question format is helpful or not for schema-guided dialog modeling.",
"To test the model robustness, we also create two paraphrased versions Name-Para and Orig-Para for NameOnly and Orig respectively.",
"We first use nematus (Sennrich et al., 2017) to automatically paraphrase the description with back translation, from English to Chinese and then translate back, then we manually check the paraphrase to retain the main meaning.",
"Appendix A.5.1 shows examples for different styles of schema descriptions.",
"Unlike the composition used in Table 2, we don't use the service description to avoid its impact.",
"For each style, we train separate models on 4 subtasks, then we evaluate them on different target styles.",
"First, Table 5 summarizes the performance for homogeneous evaluation, while Table 6 shows how the question style description can benefit from SQuAD2 finetuning.",
"Then we also conduct heterogeneous evaluation on the other styles 7 as shown in Table 7.",
"Is name-based description enough?",
"As shown in Table 5, Identifer is the worst case of using name description, its extremely bad performance indicates name-based description can be very unstable.",
"However, we found that simple meaningful name-based description actually can perform the best in Intent and Req task, and they perform 7 We don't consider the meaningless Identifer style due to its bad performance worse on Cat and NonCat tasks comparing to the bottom two rich descriptions.",
"8 After careful analysis on the intents in SG-DST datasets, we found that most services only contains two kinds of intents, an information retrieval intent with a name prefix \"Find-\", \"Get-\", \"Search-\"; another transaction intent like \"Add-\", \"Reserve-\" or \"Buy-\".",
"Interestingly, we found that all the intent names in the original schema-guided dataset strictly follows an action-object template with a composition of words without abbreviation, such as \"FindEvents\", \"BuyEventTickets\".",
"This simple name template is good enough to describe the core functionality of an intent in SG-DST dataset.",
"9 Additionally, Req is a relaitively simper task, requesting information are related to specifial attributes, such as \"has_live_music\", \"has_wifi\", where keywords co-occured in the slot name and in the user utterance, hence rich explanation cannot help further.",
"On the other side, rich descriptions are more necessary for Cat and NonCat task.",
"Because in many cases, slot names are too simple to represent the functionalities behind it, for example, slot name \"passengers\" cannot fully represent the meaning \"number of passengers in the ticket booking\".",
"Does question format help?",
"As shown in Table 5, when comparing row Q-Orig v.s. Orig , we found extra question format can improve the performance on Cat and NonCat task on both SG-DST and MULTIWOZ 2.2 datasets, but not for Intent and Req tasks.",
"We believe that question format helps the model to focus more on specific entities in the dialog history.",
"However, when adding a simple question pattern to NameOnly , comparing row Q-Name and NameOnly , there is no consistent improvement on both of the two datasets.",
"Further more, we are curious about whether BERT finetuned on SQuAD2 (SQuAD2-BERT) can further help on the question format.",
"Because NonCat are similar with span-based question answering, we focus on NonCat here.",
"Table 6 shows that, after applying the supplementary training on SQuAD2 (6), almost all models get improved on unseen splits however slightly dropped on seen services.",
"Moreover, comparing to Q-Name , Q-8 Only exception happens in Cat on MULTIWOZ 2.2.",
"When creating MULTIWOZ 2.2 (Zang et al., 2020), the slots with less than 50 different slot values are classified as categorical slots, which leads to inconsistencies.",
"We put detailed discuss about MULTIWOZ 2.2 in the supplementary material 9 This action-object template has also been found efficient for open domain intent induction task(e.g., Vedula et al., 2020, OPINE).",
"Orig is more similar to the natural questions in the SQuAD2, we obverse that Q-Orig gains more than Q-Name from pretrained model on SQuAD2.",
"In this subsection, we first simulate a scenario when there is no recommended description style for the future unseen services.",
"Hence, unseen services can follow any description style in our case.",
"We average the evaluation performance on three other descriptions and summarized in Table 7.",
"The column shows the performance change compared to the homogeneous performance.",
"It is not surprising that almost all models perform worse on heterogeneous styles than on homogeneous styles due to different distribution between training and evaluation.",
"The bold number shows the best average performance on heterogeneous evaluation for each subtask.",
"The trends are similar with the analysis in homogeneous evaluation 7.2.1, the name-based descriptions perform better than other rich descriptions on intent classification tasks.",
"While on other tasks, the Orig description performs more robust, especially on NonCat task.",
"Furthermore, we consider another scenario where fixed description convention such as NameOnly and Orig are suggested to developers, they must obey the basic style convention but still can freely use their own words, such as abbreviation, synonyms, adding extra modifiers.",
"We train each model on NameOnly and Orig , then evaluate on the corresponding paraphrased version respectively.",
"In the last two rows of Table 7, the column para' shows performance on paraphrased schema, while shows the performance change compared to the homogeneous evaluation.",
"Orig still performs more robust than NameOnly when schema descriptions get paraphrased on unseen services.",
"In this paper, we studied three questions on schema-guided dialog state tracking: encoder architectures, impact of supplementary training, and effective schema description styles.",
"The main findings are as follows: By caching the token embedding instead of the single CLS embedding, a simple partial-attention Fusion-Encoder can achieve much better performance than Dual-Encoder , while still infers two times faster than Cross-Encoder .",
"We quantified the gain via supplementary training on two intermediate tasks.",
"By carefully choosing representative description styles according to recent works, we are the first of doing both homogeneous/hetero-geneous evaluations for different description style in schema-guided dialog.",
"The results show that simple name-based description performs well on Intent and Req tasks, while NonCat tasks benefits from richer styles of descriptions.",
"All tasks suffer from inconsistencies in description style between training and test, though to varying degrees.",
"Our study are mainly conducted on two datasets: SG-DST and MULTIWOZ 2.2, while the speed-accuracy balance of encoder architectures and the findings in supplementary training are expected to be dataset-agnostic, because they depend more on the nature of the subtasks than the datasets.",
"Based on our proposed benchmarking descriptions suite, the homogeneous and heterogeneous evaluation has shed the light on the robustness of cross-style schema-guided dialog modeling, we believe our study will provide useful insights for future research.",
"The authors wish to thank the anonymous reviewers and members of the Amazon LEX team for their valuable feedback."
] | [
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"result",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"method",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"method",
"objective",
"other"
] |
[
"Word embeddings are widely used in NLP for a vast range of tasks.",
"It was shown that word embeddings derived from text corpora reflect gender biases in society.",
"This phenomenon is pervasive and consistent across different word embedding models, causing serious concern.",
"Several recent works tackle this problem, and propose methods for signifi-cantly reducing this gender bias in word embeddings, demonstrating convincing results.",
"However, we argue that this removal is super-ficial.",
"While the bias is indeed substantially reduced according to the provided bias definition, the actual effect is mostly hiding the bias, not removing it.",
"The gender bias information is still reflected in the distances between gender-neutralized words in the debiased embeddings, and can be recovered from them.",
"We present a series of experiments to support this claim, for two debiasing methods.",
"We conclude that existing bias removal techniques are insufficient, and should not be trusted for providing gender-neutral modeling.",
"Word embeddings have become an important component in many NLP models and are widely used for a vast range of downstream tasks.",
"However, these word representations have been proven to reflect social biases (e.g. race and gender) that naturally occur in the data used to train them (Caliskan et al., 2017; Garg et al., 2018).",
"In this paper we focus on gender bias.",
"Gender bias was demonstrated to be consistent and pervasive across different word embeddings.",
"Bolukbasi et al. (2016b) show that using word embeddings for simple analogies surfaces many gender stereotypes.",
"For example, the word embedding they use (word2vec embedding trained on the Google News dataset 1 (Mikolov et al., 2013)) an-1 https://code.google.com/archive/p/word2vec/ swer the analogy man is to computer programmer as woman is to x with x = homemaker.",
"Caliskan et al. (2017) further demonstrate association between female/male names and groups of words stereotypically assigned to females/males (e.g. arts vs. science).",
"In addition, they demonstrate that word embeddings reflect actual gender gaps in reality by showing the correlation between the gender association of occupation words and labor-force participation data.",
"Recently, some work has been done to reduce the gender bias in word embeddings, both as a post-processing step (Bolukbasi et al., 2016b) and as part of the training procedure (Zhao et al., 2018).",
"Both works substantially reduce the bias with respect to the same definition: the projection on the gender direction (i.e. he she ), introduced in the former.",
"They also show that performance on word similarity tasks is not hurt.",
"We argue that current debiasing methods, which lean on the above definition for gender bias and directly target it, are mostly hiding the bias rather than removing it.",
"We show that even when drastically reducing the gender bias according to this definition, it is still reflected in the geometry of the representation of gender-neutral words, and a lot of the bias information can be recovered.",
"2 2 Gender Bias in Word Embeddings In what follows we refer to words and their vectors interchangeably.",
"Definition and Existing Debiasing Methods Bolukbasi et al. (2016b) define the gender bias of a word w by its projection on the gender di-rection: w ( he she ) , assuming all vectors are normalized.",
"The larger a word's projection is on 2 The code for our experiments is available at https://github.com/gonenhila/gender_bias_lipstick .",
"he she , the more biased it is.",
"They also quantify the bias in word embeddings using this definition and show it aligns well with social stereotypes.",
"Both Bolukbasi et al. (2016b) and Zhao et al. (2018) propose methods for debiasing word embeddings, substantially reducing the bias according to the suggested definition.",
"3 In a seminal work, Bolukbasi et al. (2016b) use a post-processing debiasing method.",
"Given a word embedding matrix, they make changes to the word vectors in order to reduce the gender bias as much as possible for all words that are not inherently gendered (e.g. mother, brother, queen).",
"They do that by zeroing the gender projection of each word on a predefined gender direction.",
"4 In addition, they also take dozens of inherently gendered word pairs and explicitly make sure that all neutral words (those that are not predefined as inherently gendered) are equally close to each of the two words.",
"This extensive, thoughtful, rigorous and well executed work surfaced the problem of bias in embeddings to the ML and NLP communities, defined the concept of debiasing word embeddings, and established the defacto metric of measuring this bias (the gender direction).",
"It also provides a perfect solution to the problem of removing the gender direction from non-gendered words.",
"However, as we show in this work, while the gender-direction is a great indicator of bias, it is only an indicator and not the complete manifestation of this bias.",
"Zhao et al. (2018) take a different approach and suggest to train debiased word embeddings from scratch.",
"Instead of debiasing existing word vectors, they alter the loss of the GloVe model (Pen-nington et al., 2014), aiming to concentrate most of the gender information in the last coordinate of each vector.",
"This way, one can later use the word representations excluding the gender coordinate.",
"They do that by using two groups of male/female seed words, and encouraging words that belong to different groups to differ in their last coordinate.",
"In addition, they encourage the representation of neutral-gender words (excluding the last coordinate) to be orthogonal to the gender direc-3 Another work in this spirit is that of Zhang et al. (2018), which uses an adversarial network to debias word embeddings.",
"There, the authors rely on the same definition of gender bias that considers the projection on the gender direction.",
"We expect similar results for this method as well, however, we did not verify that.",
"4 The gender direction is chosen to be the top principal component (PC) of ten gender pair difference vectors.",
"tion.",
"5 This work did a step forward by trying to remove the bias during training rather than in postprocessing, which we believe to be the right approach.",
"Unfortunately, it relies on the same definition that we show is insufficient.",
"These works implicitly define what is good gender debiasing: according to Bolukbasi et al. (2016b), there is no gender bias if each non-explicitly gendered word in the vocabulary is in equal distance to both elements of all explicitly gendered pairs.",
"In other words, if one cannot determine the gender association of a word by looking at its projection on any gendered pair.",
"In Zhao et al. (2018) the definition is similar, but restricted to projections on the gender-direction.",
"Both works provide very compelling results as evidence of reducing the bias without hurting the performance of the embeddings for standard tasks.",
"However, both methods and their results rely on the specific bias definition.",
"We claim that the bias is much more profound and systematic, and that simply reducing the projection of words on a gender direction is insufficient: it merely hides the bias, which is still reflected in similarities between gender-neutral words (i.e., words such as math or delicate are in principle gender-neutral, but in practice have strong stereotypical gender associations, which reflect on, and are reflected by, neighbouring words).",
"Our key observation is that, almost by definition, most word pairs maintain their previous similarity, despite their change in relation to the gender direction.",
"The implication of this is that most words that had a specific bias before are still grouped together, and apart from changes with respect to specific gendered words, the word embed-dings' spatial geometry stays largely the same.",
"6 In what follows, we provide a series of experiments that demonstrate the remaining bias in the debiased embeddings.",
"5 The gender direction is estimated during training by averaging the differences between female words and their male counterparts in a predefined set.",
"6 We note that in the extended arxiv version, Bolukbasi et al. (2016a) do mention this phenomenon and refer to it as indirect bias.",
"However, they do not quantify its extensiveness before and after debiasing, treat it mostly as a nuance, and do not provide any methods to deal with it.",
"We refer to the word embeddings of the previous works as HARD-DEBIASED (Bolukbasi et al., 2016b) and GN-GLOVE (gender-neutral GloVe) (Zhao et al., 2018).",
"For each debiased word embedding we quantify the hidden bias with respect to the biased version.",
"For HARD-DEBIASED we compare to the embeddings before applying the debiasing procedure.",
"For GN-GLOVE we compare to embedding trained with standard GloVe on the same corpus.",
"7 Unless otherwise specified, we follow Bolukbasi et al. (2016b) and use a reduced version of the vocabulary for both word embeddings: we take the most frequent 50,000 words and phrases and remove words with upper-case letters, digits, or punctuation, and words longer than 20 characters.",
"In addition, to avoid quantifying the bias of words that are inherently gendered (e.g. mother, father, queen), we remove from each vocabulary the respective set of gendered words as pre-defined in each work.",
"8 This yeilds a vocabulary of 26,189 words for HARD-DEBIASED and of 47,698 words for GN-GLOVE .",
"As explained in Section 2 and according to the definition in previous works, we compute the bias of a word by taking its projection on the gender direction: he she .",
"In order to quantify the association between sets of words, we follow Caliskan et al. (2017) and use their Word Embedding Association Test (WEAT): consider two sets of target words (e.g., male and female professions) and two sets of attribute words (e.g., male and female names).",
"A permutation test estimates the probability that a random permutation of the target words would produce equal or greater similarities to the attribute sets.",
"Maleand female-biased words cluster together We take the most biased words in the vocabulary according to the original bias (500 male-7",
"male-7 We use the embeddings provided by Bolukbasi et al. (2016b) in https://github.com/tolga-b/ debiaswe and by Zhao et al. (2018) in https:// github.com/uclanlp/gn_glove .",
"8 For HARD-DEBIASED we use first three lists from: https://github.com/tolga-b/debiaswe/tree/master/data and for GN-GLOVE we use the two lists from: https://github.com/uclanlp/gn_ glove/tree/master/wordlist",
"biased and 500 female-biased 9 ), and cluster them into two clusters using k-means.",
"For the HARDDEBIASED embedding, the clusters align with gender with an accuracy of 92.5% (according to the original bias of each word), compared to an accuracy of 99.9% with the original biased version.",
"For the GN-GLOVE embedding, we get an accuracy of 85.6%, compared to an accuracy of 100% with the biased version.",
"These results suggest that indeed much of the bias information is still embedded in the representation after debiasing.",
"Figure 1 shows the tSNE (Maaten and Hinton, 2008) projection of the vectors before and after debiasing, for both models.",
"Bias-by-projection correlates to bias-by-neighbours This clustering of gendered words indicates that while we cannot directly observe the bias (i.e. the word nurse will no longer be closer to explicitly marked feminine words) the bias is still manifested by the word being close to socially-marked feminine words, for example nurse being close to receptionist, caregiver and teacher.",
"This suggests a new mechanism for measuring bias: the percentage of male/female socially-biased words among the k nearest neighbors of the target word.",
"10 We measure the correlation of this new bias 9 highest on the two lists for HARD-DEBIASED are 'pe-tite', 'mums', 'bra', 'breastfeeding' and 'sassy' for female and 'rookie', 'burly', 'hero', 'training camp' and 'journey-man' for male.",
"Lowest on the two lists are 'watchdogs', 'watercolors', 'sew', 'burqa', 'diets' for female and 'teammates', 'playable', 'grinning', 'knee surgery', 'impersonation' for male.",
"10 While the social bias associated with a word cannot be observed directly in the new embeddings, we can approximate it using the gender-direction in non-debiased embeddings.",
"measure with the original bias measure.",
"For the HARD-DEBIASED embedding we get a Pearson correlation of 0.686 (compared to a correlation of 0.741 when checking neighbors according to the biased version).",
"For the GN-GLOVE embedding we get a Pearson correlation of 0.736 (compared to 0.773).",
"All these correlations are statistically significant with p-values of",
"0. Professions We consider the list of professions used in Bolukbasi et al. (2016b) and Zhao et al. (2018) 11 in light of the neighbours-based bias definition.",
"Figure 2 plots the professions, with axis X being the original bias and axis Y being the number of male neighbors, before and after debiasing.",
"For both methods, there is a clear correlation between the two variables.",
"We observe a Pearson correlation of 0.606 (compared to a correlation of 0.747 when checking neighbors according to the biased version) for HARD-DEBIASED and 0.792 (compared to 0.820) for GN-GLOVE .",
"All these correlations are significant with p-values < 1 10 30 .",
"Association between female/male and female/male-stereotyped words We replicate the three gender-related association experiments from Caliskan et al. (2017).",
"For these experiments we use the full vocabulary since some of the words are not included in the reduced one.",
"The first experiment evaluates the association between female/male names and family and career words.",
"The second one evaluates the association between female/male concepts and arts and mathematics words.",
"Since the inherently gendered words (e.g. girl, her, brother) in the second experiment are handled well by the debiasing models we opt to use female and male names instead.",
"The third one evaluates the association between fe-male/male concepts and arts and science words.",
"Again, we use female and male names instead.",
"12 For the HARD-DEBIASED embedding, we get a p-value of 0 for the first experiment, 0 .",
"00016 for the second one, and 0 .",
"0467 for the third.",
"For the GN-GLOVE embedding, we get p-values of 7 .",
"7 10 5 , 0 .",
"00031 and 0 .",
"0064 for the first, second and third experiments, respectively.",
"Classifying previously femaleand male-biased words Can a classifier learn to generalize from some gendered words to others based only on their 12 All word lists are taken from Caliskan et al. (2017): First experiment: Female names : Amy, Joan, Lisa, Sarah, Diana, Kate, Ann, Donna.",
"Male names : John, Paul, Mike, Kevin, Steve, Greg, Jeff, Bill.",
"Family words : home, parents, children, family, cousins, marriage, wedding, relatives.",
"Career words : executive, management, professional, corporation, salary, office, business, career.",
"Second experiment: Arts Words : poetry, art, dance, literature, novel, symphony, drama, sculpture.",
"Math words : math, algebra, geometry, calculus, equations, computation, numbers, addition.",
"Third experiment: Arts words : poetry, art, Shakespeare, dance, literature, novel, symphony, drama.",
"Science words : science, technology, physics, chemistry, Einstein, NASA, experiment, astronomy.",
"representations?",
"We consider the 5,000 most biased words according to the original bias (2,500 from each gender), train an RBF-kernel SVM classifier on a random sample of 1,000 of them (500 from each gender) to predict the gender, and evaluate its generalization on the remaining 4,000.",
"For the HARD-DEBIASED embedding, we get an accuracy of 88 .",
"88% , compared to an accuracy of 98 .",
"25% with the non-debiased version.",
"For the GN-GLOVE embedding, we get an accuracy of 96 .",
"53% , compared to an accuracy of 98 .",
"65% with the non-debiased version.",
"The experiments described in the previous section reveal a systematic bias found in the embeddings, which is independent of the gender direction.",
"We observe that semantically related words still maintain gender bias both in their similarities, and in their representation.",
"Concretely, we find that:",
"1. Words with strong previous gender bias (with the same direction) are easy to cluster together.",
"2. Words that receive implicit gender from social stereotypes (e.g. receptionist, hairdresser, captain) still tend to group with other implicit-gender words of the same gender, similar as for non-debiased word embeddings.",
"3. The implicit gender of words with prevalent previous bias is easy to predict based on their vectors alone.",
"The implications are alarming: while suggested debiasing methods work well at removing the gender direction, the debiasing is mostly superficial.",
"The bias stemming from world stereotypes and learned from the corpus is ingrained much more deeply in the embeddings space.",
"We note that the real concern from biased representations is not the association of a concept with words such as he, she, boy, girl nor being able to perform gender-stereotypical word analogies.",
"While these are nice party tricks, algorithmic discrimination is more likely to happen by associating one implicitly gendered term with other implicitly gendered terms, or picking up on gender-specific regularities in the corpus by learning to condition on gender-biased words, and generalizing to other gender-biased words (i.e., a resume classifier that will learn to favor male over female candidates based on stereotypical cues in an existingand biasedresume dataset, despite of being oblivious to gender).",
"Our experiments show that such classifiers would have ample opportunities to pick up on such cues also after debiasing w.r.t the gender-direction.",
"The crux of the issue is that the gender-direction provides a way to measure the gender-association of a word, but does not determine it.",
"Debiasing methods which directly target the gender-direction are for the most part merely hiding the gender bias and not removing it.",
"The popular definitions used for quantifying and removing bias are insufficient, and other aspects of the bias should be taken into consideration as well.",
"This work is supported by the Israeli Science Foundation (grant number 1555/15), and by the Israeli ministry of Science, Technology and Space through the Israeli-French Maimonide Cooperation program."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"other",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"result",
"abstain",
"abstain",
"abstain",
"other"
] |
[
"Natural Language Inference (NLI) has garnered significant attention in recent years; however, the promise of applying NLI breakthroughs to other downstream NLP tasks has remained unfulfilled.",
"In this work, we use the multiple-choice reading comprehension (MCRC) and checking factual correctness of textual summarization (CFCS) tasks to investigate potential reasons for this.",
"Our find-ings show that: (1) the relatively shorter length of premises in traditional NLI datasets is the primary challenge prohibiting usage in downstream applications (which do better with longer contexts); (2) this challenge can be addressed by automatically converting resource-rich reading comprehension datasets into longer-premise NLI datasets; and (3) models trained on the converted, longer-premise datasets outperform those trained using short-premise traditional NLI datasets on downstream tasks primarily due to the difference in premise lengths.",
"Large-scale, open Natural Language Inference (NLI) datasets (Bowman et al., 2015; Williams et al., 2018) have catalyzed the recent development of NLI models that exhibit close to human-level performance.",
"However, the use of these NLI models for other downstream Natural Language Processing (NLP) tasks has met with limited success.",
"Two of the most popular downstream tasks where NLI models' use has been explored are Multiple-choice Question Answering (MCRC) and Checking Factual Correctness of Summaries (CFCS) (Trivedi et al., 2019; Falke et al., 2019; Clark et al., 2018) both of which can easily be cast into the NLI form, as shown in Figure 1. Looking closely at the composition of these datasets, it is evident that there is a stark difference in the lengths of the con-texts/premises when compared to NLI datasets.",
"As seen in Table 1, traditional NLI datasets have much Passage/Premise/Full text The first time my father and I ever went fishing became a family legend.",
"shorter premises than the context texts from these downstream tasks.",
"Prior research has shown that the capabilities required for handling local inference are very different from those required to perform inference over longer forms of text (Cooper et al., 1996; Lai et al., 2017a).",
"In this work, we explore this conflict as a major bottleneck in the utility of NLI models (trained on traditional NLI datasets) for downstream NLP tasks.",
"We compare the usage of long and short-premise NLI datasets on the dowsntream tasks of MCRC and CFCS, which have inherently long contexts.",
"Such a comparison has not been possible thus far because traditional NLI datasets do not exhibit long premises.",
"We hence look towards recasting other tasks into NLI to generate datasets that can Task Dataset WordCount(Avg) NLI SNLI 14 Scitail 17 MNLI 22 RTE 42 ANLI 54 MCRC RACE 271 MultiRC 252 DREAM 110 CosmosQA 75 CFCS FactCC 546 Summary Reranking 738 Table 1: The average premise (context) length in various datasets.",
"be used to evaluate our conjecture.",
"The Question-Answering (QA) task can easily be cast into the NLI form, and QA datasets (Rajpurkar et al., 2016; Lai et al., 2017b; Khashabi et al., 2018; Sun et al., 2019; Huang et al., 2019) encompass a variety of semantic phenomena that only occur in longer (con)texts.",
"We leverage the resource-rich MCRC task to generate long-premise NLI datasets for our experiments via an automated conversion strategy.",
"We contrast the zero-shot model performance on the MCRC and CFCS tasks of a model pre-trained on our converted long-premise NLI dataset and a model trained on two short-premise NLI datasets MNLI and ANLI.",
"We show that the presence of longer premises is the primary factor for better performance on these two tasks.",
"We further discuss other potential confounding factors for this performance difference such as dataset vocabulary overlap and dataset conversion strategies and eliminate the possibility of their contribution through targeted experiments.",
"Performance on the NLI task has improved significantly due to the availability of large scale datasets (Bowman et al., 2015; Williams et al., 2018) that can be used to train data-hungry deep learning models (Kapanipathi et al., 2020; Wang and Jiang, 2015), including transformer-based architectures (Devlin et al., 2018).",
"However, there has been very limited success in translating this performance to downstream NLP tasks.",
"Work relevant to the use of these NLI models for downstream tasks can be categorized into two categories: (1) work focusing on using models trained on short-premise NLI datasets with fixed or learned aggregations over segmented premises to perform a target downstream task with long contexts (Falke et al., 2019; Trivedi et al., 2019); and (2) work addressing the need for task-specific NLI datasets (Kryscinski et al., 2019; Demszky et al., 2018; Welleck et al., 2019).",
"Despite several attempts, efforts to apply models trained on available NLI datasets to downstream NLP tasks such as MCRC and CFCS have had limited success.",
"Trivedi et al. (2019) use hand-crafted rules to first cast MCRC to NLI; and subsequently divide a long passage into smaller sentence-level premises.",
"They use a pre-trained NLI model to evaluate per-sentence relevance scores concerning one particular hypothesis, and then combine the resulting scores using a learned representation aggregation module to assess the answer given the long passage.",
"Falke et al. (2019) apply a similar approach for the CFCS task, and divide both the provided summary as well as the source documents into single-sentence premises and hypotheses.",
"They use a max pooling operation over the entailment scores of all sentence-level premise-hypothesis pairs to obtain the factual correctness score for each provided summary.",
"Both these works note that models trained on sentence-level NLI datasets do not transfer well to the MCRC and CFCS tasks.",
"We argue that this divide and conquer approach is not ideal for the problem, and highlight the need for an NLI dataset with longer premises.",
"Another line of research focuses on re-casting datasets from other tasks into an NLI form to facilitate the direct use of NLI models on downstream tasks like MCRC and CFCS.",
"Khot et al. (2018) use manual annotation to re-cast SciQ (a QA dataset) to SciTail an NLI dataset.",
"However, Clark et al. (2018) show that an NLI model trained on SciTail does not perform well on the task of MCRC.",
"Similarly, Kryscinski et al. (2019) create an automatically generated training dataset for CFCS.",
"Even though the generated data has relatively long contexts, analysis in Zhang et al. (2020) demonstrated that a model trained on the aforementioned data showed performance improvement only when the token overlap with the source is high.",
"Besides, Demszky et al. (2018) derive an NLI dataset by converting subsets of various QA datasets.",
"They try two approaches for the conversion rule-based and neural.",
"For the rule-based approach, they extract POS tags from the question-answer pair and apply hand-crafted rules on them to convert the pair to a hypothesis sentence.",
"Their neural approach uses a trained SEQ 2 SEQ BiLSTM-with-copy model (Gu et al., 2016) to convert each (cid:104) question, answer (cid:105) pair into a hypothesis sentence (the corresponding passage being the premise).",
"While their approach looks promising, they do not show the utility of these converted datasets by training an NLI model on them.",
"Thus, it remains unclear whether the NLI datasets generated by the conversion are beneficial for NLP tasks.",
"We posit that this direction of research is promising and largely unexplored.",
"In our work, we attempt to leverage the abundance of large and diverse MCRC datasets to generate long-premise NLI datasets, and show that such datasets are useful towards addressing downstream NLP tasks such as MCRC and CFCS which have inherently long contexts.",
"Typically, NLI is cast as a multi-class classification problem, where given a premise and a hypothesis, the model classifies the relation between them as entails , contradicts , or neutral .",
"For the two downstream tasks under consideration: (1) MCRC: Multiple Choice Reading Comprehension, and (2) CFCS: Checking Factual Correctness of Text-Summarization; differentiating between the neutral and contradicts class is often unnecessary.",
"The task is thus reduced to a two-class problem; where the contradicts and neutral classes are clubbed into a not-entails class.",
"MCRC can be cast as an NLI task by viewing the given context as the premise and the transformed question-answer combinations as different hypotheses (Trivedi et al., 2019).",
"The multiple answer-option setting can then be approached as:",
"(a) an individual option entailment task, where more than one answer-option can be correct; or",
"(b) a multi-class classification task across all the answer options, when only a single correct answer exists.",
"CFCS can also be reduced to a two-class NLI problem.",
"A factually correct summary should be entailed by the given source text it should not contain hallucinated facts , and it should also not contradict facts present in the source text.",
"Despite being ideally suited for reduction to NLI, both MCRC and CFCS have proved to be diffi-cult to solve using models trained on short-premise NLI datasets (Trivedi et al., 2019; Falke et al., 2019).",
"Datasets for these tasks contain significantly longer contexts than traditional short-premise NLI datasets (Table 1).",
"This shift in the text length brings about a fundamental change in the nature of the NLI problem.",
"Thus, models trained on short-premise NLI datasets are incapable of performing inference over longer texts, which we posit as the main cause for their poor performance on downstream tasks like CFCS and MCRC * .",
"The paucity of manually-annotated long-premise NLI datasets poses a barrier to assessing this conjecture.",
"We thus shift our focus towards leveraging the abundance of large and diverse MCRC datasets which can be easily recast into NLI form.",
"While the CFCS task also provides a similar opportunity, the sheer lack of annotated training instances inhibits its use.",
"Table 3 shows the abundance of training instances in MCRC datasets, and highlights the deficiency in CFCS datasets.",
"In the following section, we present our conversion strategy for reformatting MCRC datasets into long-premise NLI datasets, which are needed to test the long premise conjecture.",
"As shown in Figure 1, we can convert MCRC datasets into two-class NLI datasets by reusing the passage as a premise, and paraphrasing the question along with each answer option as individual hypothesis options.",
"We begin by using a rule-based conversion method.",
"A dependency parse of both the question and answer option is generated using the Stanford CoreNLP package (Qi et al., 2018).",
"This is followed by the application of conversion rules proposed by Demszky et al. (2018) to generate a hypothesis sentence.",
"However, due to the limited coverage of rules and errors in the dependency parse, some of the generated hypotheses sound unnatural (e.g. the first example in Table 2).",
"In order to generate more natural and diverse hypotheses and to get broader coverage in conversion, we implement a neural conversion strategy.",
"We use a sequence of datasets as a curriculum to finetune the BART conversion model: (1) starting with CNN/Daily Mail summarization dataset (Hermann et al., 2015), which makes the generated sentences coherent; (2) followed by Google's sentence compression dataset (Filippova and Altun, 2013), which limits the generated sequence to a single sentence; and (3) finally the annotated dataset provided by Demszky et al. (2018) which has around 71 , 000 (cid:104) question-answer, hypothesis (cid:105) pairs from various QA datasets.",
"Based on manual inspection, we find that the hypotheses generated by this method indeed sound more natural and diverse than the ones produced by the rule-based conversion .",
"In some cases, however, the generated hypotheses either discard crucial information, or contain hallucinated facts that do not convey the exact information in the source question-answer pair (Table 2).",
"We thus define a hybrid conversion strategy, combining the desirable aspects of the rule-based and neural conversion strategies.",
"We design a heuristic to compose a hybrid dataset to overcome the caveats in the neural conversion.",
"We use the number of words in the question-answer concatenation as a proxy for the expected length of the hypothesis.",
"We target the problems of hallucination and missing information in the neural conversions by accepting only those neural-generated hypotheses that lie in the range of 0 .",
"8 and 1 .",
"2 times the length of the question-answer concatenation.",
"We replace the rejected neural hypotheses with the rule-based hypothesis, if rule-based conversion is feasible; or with the More examples of conversion results are presented in Appendix D. question-answer concatenation otherwise; as seen in Table 2. The selection policy is driven by the need to get more natural and coherent conversions without compromising on the accuracy and preservation of factual information in the question and answer option.",
"The choice of the specific range is purely empirical in nature.",
"We use this hybrid conversion strategy to generate long-premise NLI datasets from MCRC datasets for our experiments and evaluate them in contrast to short-premise NLI datasets.",
"Our experiments involve zero-shot evaluations of pre-trained NLI models on downstream NLP tasks.",
"In this section, we describe the transfer learning setup and the datasets used in our experiments.",
"In order to use a pretrained NLI model for MCRC and CFCS, we need that model to be agnostic to the peculiarities of the downstream task.",
"We use a standard transfer learning setting where the model architecture is divided into two parts: (1) a transferable entailment scorer; and (2) a weight-free comparator on top of the scorer.",
"Each premise-hypothesis pair is encoded as a single sequence, and passed through the transferable entailment scorer to produce an entailment score.",
"Depending on the problem setup, the comparator can either be a sigmoid function (for a two-class entailment problem) as shown in Figure 2; or a softmax function (for multiple choice classification) as shown in Figure 3. This segmentation of the model makes it easy to transfer the model weights across different tasks.",
"For the entailment scorer, we use a 2-layer feed-forward network on top of the [CLS] token of Code available here: https://github.com/ nli-for-qa/transformers-nli pre-trained RoBERTa .",
"To evaluate the transferability of the entailment model, we perform various zero-shot evaluations.",
"This requires interpreting the entailment scores a bit differently for each task.",
"To transfer the weights from a multiple choice classification model (Fig-ure 3) to a two class entailment model (Figure 2), we copy the weights of the transferable entailment scorer as-is, and calibrate a threshold using a dev set to interpret the outputs from the sigmoid comparator for binary classification.",
"Since the softmax comparator does not need any calibration, the transfer in the other direction, i.e., from a two class entailment model to a multiple choice classification model is more straightforward we simply copy the weights of the transferable entailment scorer.",
"For our experiments, we use the NLI form of 4 MCRC datasets (created using the conversion method described in Section 4); 2 CFCS datasets; and 2 traditional short-premise NLI datasets.",
"These datasets are described below: MCRC Datasets: RACE (Lai et al., 2017b) broadly covers detail reasoning, whole-picture reasoning, passage summarization, and attitude analysis.",
"MultiRC (Khashabi et al., 2018) mainly contains questions which require multi-hop reasoning and co-reference resolution.",
"multi-party dialogue.",
"CosmosQA (Huang et al., 2019) focuses on commonsense and inductive reasoning, which require reading between the lines.",
"CFCS Datasets: FactCC (Kryscinski et al., 2019) consists of tuples of the form (cid:104) article, sentence (cid:105) , where the articles are taken from the CNN/DailyMail corpus, and sentences come from the summaries for these articles generated using several state-of-the-art abstractive summarization models.",
"Ranking Summaries for Correctness (evaluation set) (Falke et al., 2019) consists of articles and a set of summary alternatives for each article, where The RoBERTa model is pre-trained on the masked language modeling objective as described in Liu et al. (2019).",
"MNLI (Williams et al., 2018) is a large-scale general domain NLI dataset that is widely used to learn and evaluate short-premise NLI models.",
"ANLI (Nie et al., 2019) is a large-scale NLI dataset generated through an adversarial human-in-the-loop process; where the annotations are constrained such that models trained on MNLI and SNLI predict incorrect answers.",
"This dataset also has the longest premise lengths amongst the traditional NLI datasets compared in Table 1. Long-Premise NLI Datasets: We convert the following MCRC datasets to generate long-premise NLI datasets using the hybrid conversion strategy described in Section D. We refer to these datasets with a subscript converted attached to the source MCRC dataset.",
"As seen from Table 1 and Table 3, RACE is the largest dataset amongst the MCRC datasets, and also has the longest average premise length.",
"In line with this intuition, the model trained on the RACE converted dataset outperforms the converted forms of other MCRC datasets (Appendix B) on all the evaluation tasks.",
"Due to this, in the following section, we only discuss and report results on the RACE converted dataset for brevity and clarity of comparison.",
"Amongst the traditional NLI datasets, we use MNLI and ANLI for a good mix of average premise lengths along with a large number of training samples.",
"Our experiments aim to answer the following questions: (1) Are long premise NLI datasets more use-Figure",
"use-Figure 2: Two class entaiment model.",
"ful for downstream tasks compared to short premise NLI datasets?",
"(Section 6.1 & 6.2); (2) How much do possible confounding factors affect our empirical evaluations?",
"(Section 6.3).",
"To answer these, we perform zero-shot evaluation on the MCRC and CFCS tasks.",
"We contrast the performance of NLI models trained on the short-premise NLI datasets (MNLI, ANLI) with one that is trained on a long-premise NLI dataset (RACE converted ).",
"The models trained on short-premise NLI datasets are evaluated in two ways: (1) by treating the entire premise as input; and (2) by segmenting the premise into shorter segments and using a max aggregation over the entailment scores of all the segments (Falke et al., 2019).",
"Since the model architecture remains the same, we use the name of the training dataset to refer to the model trained on it.",
"For evaluating NLI models on the MCRC task, we use the hybrid conversion (Section 4) to create evaluation datasets.",
"The MultiRC dataset contains multiple correct answer options and hence is evaluated with each question-answer option posed as a separate example.",
"DREAM and CosmosQA datasets have only a single correct answer-option (out of 3 answer-options).",
"Hence, for these datasets, a multi-class classification problem is posed as described in Section 3, using the model architecture described in Figure 3. As seen in Table 4, the model trained on the long-premise RACE converted dataset outperforms the model trained on the short-premise NLI datasets in both regular and segmented forms of evaluation.",
"We assert that this difference in performance can Model Dataset * MultiRC DREAM CosmosQA Random Guess 50.00 33.33 33.33 MNLI 60.58 67.76 38.11 MNLI segmented 61.71 42.28 43.28 ANLI 67.95 74.12 49.71 ANLI segmented 63.45 61.42 49.60 RACE converted 77.43 83.58 73.58 * Datasets are in NLI form created using hybrid conversion method (Section 4).",
"be attributed to the difference in premise lengths of the datasets.",
"However, we allow for the possibility that using the same conversion strategy for the evaluation datasets could potentially benefit the model trained on RACE converted .",
"We discuss such confounding factors in Section 6.3.2.",
"(1) CFCS as classification: In this form, given a document and a corresponding summary sentence, the model needs to identify if the sentence is factually correct with respect to the document (entailed) or not.",
"In order to perform the classification, we first obtain our entailment scorer by fine-tuning the multiple choice classification model (Figure 3) on the RACE Converted dataset and use the dev set || to calibrate a threshold ** (described in Section 5.1) to obtain the two-class entailment model (Figure 2).",
"(2) CFCS as ranking: Given a source document and a set of five machine generated summaries, the model is required to rank at least one factually correct summary above all incorrect summary alternatives.",
"Note that a variable number of these five machine generated summaries can be factually correct (Falke et al., 2019).",
"However, there is always at least one incorrect summary in this set.",
"* These results are reported from Kryscinski et al. (2019).",
"# FactCC autogen is the automatically generated training data used by Kryscinski et al. (2019).",
"Table 5 and Table 6 present the results for CFCS as classification and CFCS as ranking, respectively.",
"Similar to the MCRC task, the model trained on the long-premise RACE converted dataset outperforms the models trained on the short-premise NLI datasets in both regular and segmented forms of evaluation on each of the CFCS task types.",
"Moreover, it also outperforms the FactCC model which uses the automatically generated long-premise training data (Kryscinski et al., 2019).",
"The results of evaluations on the MCRC and CFCS tasks which inherently contain long contexts provide strong evidence supporting our long premise conjecture.",
"Natural language experiments are often vulnerable to artifacts that may leak exploitable signals into the training data that the model can fit on.",
"Such extraneous factors, if present, can prevent the empirical isolation of the premise-length as a major factor.",
"We therefore discuss and eliminate the two most obvious potential confounding factors.",
"In the zero-shot evaluation setup, a high vocabulary overlap between the training data and the target data can potentially help a model perform better.",
"To eliminate this confounding factor from our experiments, we calculate the vocabulary overlap of RACE, MNLI and ANLI (training data) with the 3 MCRC datasets (evaluation data).",
"We define overlap as: # words in [Vocab(train data) Vocab(eval. data)] # words in Vocab(eval. data) Table 8 shows that all the datasets have similar vocabulary overlap with the three MCRC datasets.",
"However, from Table 4, we see that the model trained on RACE converted considerably outperforms the models trained on the short-premise NLI datasets.",
"This indicates that vocabulary overlap is not playing a big role in the model's performance.",
"To substantiate this claim, we further evaluate the two models on those subsets of the three MCRC datasets that consist only of examples where the vocabulary overlap is high ( 0 . 9) .",
"Table 7 shows that the performance of the two models on these high vocabulary overlap subsets is similar to their overall performances on the respective datasets.",
"We can thus conclude that vocabulary overlap is not helping either of the models in terms of predictive performance.",
"We evaluate the models trained on the short-premise NLI datasets and RACE converted on the converted forms of the MCRC datasets.",
"However, only the model trained on the RACE converted dataset is exposed to the same conversion strategy during training.",
"It is therefore possible that the conversion MultiRC DREAM CosmosQA Overall Subset Overall Subset Overall Subset RoBERTa+RACE converted 77.4 77.6 83.5 85.5 73.5 74.0 RoBERTa+MNLI 60.5 61.1 67.7 68.6 38.1 37.6 RoBERTa+ANLI 67.9 68.7 74.1 73.7 49.7 49.9 Table 7: Performance of the models on high vocabulary overlap subsets of the MCRC datasets.",
"mechanism itself becomes a confounding factor, enabling the RACE converted model to perform better on the MCRC task.",
"To assess this nuance, we manually annotate a subset of the MCRC datasets using Label Studio (Tkachenko et al., 2020), with a random set of examples annotated by each of the authors.",
"To create a setting where the difference is vivid, we design the annotation subsets such that the RACE converted model gives an accuracy of around 50% using the hybrid conversion strategy.",
"The independent manual annotations prevent any exploitable signal from leaking into the training data of the model through the conversion mechanism.",
"We compare the performance of models trained on converted forms of the RACE dataset using both our hybrid strategy as well as manual annotation.",
"We manually annotate 100 examples from MultiRC and 50 each from ComsosQA and DREAM.",
"MultiRC is evaluated at an option-level with each question-answer pair considered an individual example.",
"On the other hand, CosmosQA and DREAM are evaluated at a question-level, with each example consisting of three question-answer pairs, and one label corresponding to the correct answer option.",
"Table 9 shows that the RACE converted model performs better on the manually annotated subset; this eliminates the possibility of the conversion mechanism being a confounding factor in our results.",
"It is important to note that this setting is solely for the purpose of establishing the role of the hybrid conversion strategy as a potential confounding factor in the performance of the RACE converted model.",
"The absolute accuracy numbers are not reflective of the model performance on the overall dataset.",
"The difficulty of transferring entailment (NLI) knowledge to downstream NLP tasks can be largely attributed to the difference in data distributions, specifically the premise lengths.",
"Models trained on short-premise NLI datasets are not very good at performing inference over longer texts, which is a central feature of important downstream tasks such as QA and text summarization.",
"We leverage the abundance of large and diverse MCRC datasets and the ease of conversion from MCRC into the NLI format to automatically and scalably create a long-premise NLI dataset to test this long-premise conjecture.",
"We show that the long-premise nature of the converted dataset indeed helps achieve better performance on the downstream tasks of MCRC and CFCS when compared against models trained on traditional short-premise NLI datasets.",
"We further discuss and eliminate possible confounding factors in our experiments to ensure the validity of our results.",
"Our work highlights a major shortcoming in popular NLI datasets that limits their usefulness to downstream NLP applications; and emphasizes the need for long-premise NLI datasets.",
"Future work in this direction can take us closer to realizing the full potential of NLI as a fundamental task in natural language understanding.",
"In this work, we use open source datasets, libraries, and services which are freely available and appropriately cited.",
"We do not release the converted form of the MCRC dataset in respect of existing copyright; however, we provide all the information required to reproduce our experimental setup, datasets, and results in the content of the main paper as well as in the appendix.",
"All rules used in the conversion strategies (Sec-tion 4), as well as the manual annotations performed as part of the confounding factors analysis, were produced solely by the group of authors.",
"Our work did not involve any external human subjects; and did not require institutional review.",
"Looking forward, it is certainly possible that the neural conversion strategy proposed by us in Section 4 may be applied by readers of this work in other potentially scaled-up contexts.",
"Since the conversion is used as a means to an end (producing an appropriate long-premise dataset) rather than as the central contribution of the current work, we do not provide an extended analysis of the pros and cons of this strategy.",
"We thank Rajarshi Das, and Andrew McCallum for their constructive feedbacks in the 696DS class during Spring 2020.",
"We also thank the anonymous reviewers for their insightful suggestions.",
"Xiang Lorraine Li is supported in part by IBM Research AI through the AI Horizons Network and in part under the University of Southern California subcontract no. 123875727 under Office of Naval Research prime contract no.",
"N660011924032.",
"The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon.",
"The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of DARPA or the U.S. Government."
] | [
"abstain",
"objective",
"result",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"objective",
"result",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"other",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"result",
"objective",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other"
] |
[
"Although teacher forcing has become the main training paradigm for neural machine translation, it usually makes predictions only conditioned on past information, and hence lacks global planning for the future.",
"To address this problem, we introduce another decoder, called seer decoder, into the encoder-decoder framework during training, which involves future information in target predictions.",
"Meanwhile, we force the conventional decoder to simulate the behaviors of the seer decoder via knowledge distillation.",
"In this way, at test the conventional decoder can perform like the seer decoder without the attendance of it.",
"Experiment results on the Chinese-English, English-German and English-Romanian translation tasks show our method can outperform competitive baselines significantly and achieves greater improvements on the bigger data sets.",
"Besides, the experiments also prove knowledge distillation the best way to transfer knowledge from the seer decoder to the conventional decoder compared to adversarial learning and L2 regularization.",
"Neural machine translation (NMT) (Kalchbrenner and Blunsom, 2013; Sutskever et al., 2014; Bah-danau et al., 2014; Gehring et al., 2017; Vaswani et al., 2017) has achieved great success and is drawing larger attention recently.",
"Most NMT models are under the attention-based encoder-decoder framework which assumes there is a common semantic space between the source and target languages.",
"The encoder encodes the source sentence to the common space to get its meaning, and the decoder projects the source meaning to the target space to generate corresponding target words.",
"Whenever generating a target word at a time step, the decoder The code: https://github.com/ictnlp/SeerForcingNMT needs to retrieve the attended source information and then decodes into a target word.",
"The underline principle which makes sure the framework works is that the information hold by the source sentence and its target counterpart is equivalent.",
"Thus the translation procedure can be considered to decompose source information into different pieces and then to convert each piece to a proper target word according to bilingual context.",
"When all the information encoded in the source sentence is throughly processed, the whole translation has been generated.",
"Neural machine translation models are usually trained via maximum likelihood estimation (MLE) (Johansen and Juselius, 1990) and the operation form is known as teacher forcing (Williams and Zipser, 1989).",
"The teacher forcing strategy performs one-step-ahead predictions with the past ground truth words fed as context and forces the distribution of the next prediction to approach a 0-1 distribution where the probability of the next ground truth word corresponds to 1 and others to",
"0. In this way, the predicted sequence is trained to be close to the ground truth sequence.",
"From the perspective of information division, the function of teacher forcing is to teach the translation model how to segment source information and derive the ground truth word from the source information at a maximum probability.",
"However, teacher forcing can only provide up-to-now ground truth words for one-step-ahead predictions and hence lacks global planning for the future.",
"This will result in local optimization especially when the next prediction is highly related to the future.",
"Besides, as the translation grows, the previous prediction errors will be accumulated and affect later predictions (Zhang et al., 2019c).",
"This is the important reason why NMT models cannot always produce the ground truth sequence during training.",
"Therefore, it is more possible to achieve Figure 1: The architecture of the proposed method global optimization by getting to know the future ground truth words.",
"This can lead to better cross-attention to the source sentence and thus better information devision.",
"But unfortunately, ground truth can be only obtained during training and we cannot inference with future ground truth at test.",
"To address this problem, we introduce an additional seer decoder into the encoder-decoder framework to integrate future information.",
"During training, the seer decoder is used to guide the behaviors of the conventional decoder while at test the translation model only inferences with the conventional decoder without introducing any extra parameters and calculation cost.",
"Specifically, the conventional decoder only gets past information participating in the next prediction, while the seer decoder has both the past and future ground truth words engaged in the next prediction.",
"Both decoders are trained to generate ground truth via MLE and meanwhile the conventional decoder is forced to simulate the behaviors of the seer decoder via knowledge distillation (Bucilua et al., 2006; Hinton et al., 2015).",
"In this way, at test the conventional decoder can perform like the seer decoder as if it knew the future translation.",
"We conducted experiments on two small data sets (Chinese-English and English-Romanian) and two big data sets (Chinese-English and English-German) and the experiment results show that our method can outperform strong baselines on all the data sets.",
"In addition, we also compared different mechanisms of transferring knowledge and found that knowledge distillation is more effective than adversarial learning and L2 regularization.",
"To the best of our knowledge, this paper is the first to explore the effects of the three mechanisms simultaneously in machine translation.",
"We introduce our method on the basis of Transformer which is under the encoder-decoder framework (Vaswani et al., 2017).",
"Our model consists of three components: the encoder, the conventional decoder and the seer decoder.",
"The architecture is shown in Figure",
"1. The encoder and the conventional decoder work in the same way as the corresponding components of Transformer do.",
"The seer decoder integrates future ground truth information into its self-attention representation and calculates cross-attention over source hidden states with the self-attention representation as the query.",
"During training, the encoder is shared by the two decoders and both decoders perform predictions to generate ground truth.",
"The behaviors of the conventional decoder are guided by the seer decoder via knowledge distillation.",
"If the conventional decoder can predict a similar distribution as the seer decoder, we think the conventional decoder performs like the seer decoder.",
"Then we can only use the conventional decoder for test.",
"The details of the encoder and the conventional decoder can be got from Vaswani et al. (2017).",
"Assume the input sequence is x = ( x 1 , ..., x J ) , the ground truth sequence is y = ( y 1 , ..., y I ) and the generated translation is y = ( y 1 , ..., y I ) .",
"We will give more description to the seer decoder and the training in what follows.",
"Although we feed the future ground truth words to the seer decoder, we will not tell it the next ground truth word to be generated, in case it will only learn a copy operation, not how to derive a word.",
"Considering efficiency, the seer decoder does not integrate the past and future ground truth information with a unique decoder , but two separate subdecoders.",
"As a result, the seer decoder consists of three components: the past subdecoder, the future subdecoder and the fusion layer.",
"The architecture of the seer decoder is given in Figure",
"2. The past and future subdecoders are employed to decode the past and future ground truth information into hidden states respectively and the fusion layer is used to fuse the output of the past and future subdecoders and calculate the final hidden state for the next prediction.",
"The past subdecoder is composed of N 1 layers and each layer has three sublayers which are the multi-head sublayer, the cross-attention sublayer and the feed-forward network (FNN) sublayer, the same as Transformer.",
"The multi-attention sublayer accepts the whole ground truth sequence as the input and applies a mask matrix M p to make sure only the past ground truth words attend the self-attention.",
"Specifically, to generate the i -th target word, its corresponding mask vector in the mask matrix M p is set to mask the words y i , y i +1 , ..., y I .",
"Then after the cross-attention sublayer and the FFN sublayer, the past subdecoder output a sequence of past hidden states, the packed matrix of which is denoted as H p .",
"The future subdecoder has the same structure as the past subdecoder except for the mask matrix.",
"The future subdecoder also has the whole ground truth sequence as the input but employs a different mask matrix M f to only remain the future ground truth information.",
"To generate the i -th target word, the corresponding mask vector in M f masks the words y 1 , ..., y i 1 , y i .",
"The packed matrix of the future hidden states generated by the future subdecoder is denoted as H f .",
"The fusion layer is composed of four sublayers: the multi-head sublayer, the linear sublayer, the cross-attention sublayer and the FFN sublayer.",
"Except the linear sublayer, the rest three sublayers works in the same way as Transformer does.",
"The multi-head sublayer encodes the outputs of the past and future subdecoders separately with the mask matrix M p and M f , and the packed matrix of their output are denoted as H (cid:48) p and H (cid:48) f respectively.",
"Then we reverse the order of the vectors in H (cid:48) f to get H (cid:48)(cid:48) f , so that the same index in H (cid:48) p and H (cid:48)(cid:48) f can correspond to the past and future representation needed for the same prediction.",
"Assume H (cid:48) f = [ h (cid:48) f 1 ; h (cid:48) f 2 ; ... ; h (cid:48) fI ] , then its reversed matrix is H (cid:48)(cid:48) f = [ h (cid:48) fI ; ... ; h (cid:48) f 2 ; h (cid:48) f 1 ] .",
"The linear sublayer fuses H (cid:48) p and H (cid:48)(cid:48) f via a linear transformation as A = W p H (cid:48) p + W f H (cid:48)(cid:48) f (1) Now we can think each representation in the matrix A incorporates the past and future information for its corresponding prediction.",
"Then after the cross-attention sublayer over the outputs of the encoder and then the FFN sublayer, we can get the target hidden states produced by the seer decoder as S s = [ s s 1 ; ... ; s s I ] T .",
"Then the probability to generate the target word y i is p s ( y i | y >i , y <i , x ) exp ( W o s si ) (2) Note that the past and the future subdecoders share the same set of parameters, and the same linear transformation matrix W o is applied to the outputs of the conventional and seer decoders.",
"In our method, only the conventional decoder is employed for test and the seer decoder is only used to guide the conventional decoder during training.",
"Given a sentence pair (cid:104) x , y (cid:105) in the training set, the conventional decoder and the seer decoder can predict a distribution for target position i as p c ( y i | y <i , x ) and p s ( y i | y >i , y <i , x ) , respectively.",
"The two decoders are both trained by comparing its predicted distribution with the 0-1 distribution of the ground truth word by minimizing the cross entropy, that is to maximize the likelihood of the corresponding ground truth word.",
"As the two decoders involve different information for next prediction, we call the training strategy teacher forcing and seer forcing , respectively.",
"The cross-entropy loss for the conventional decoder is L c = K (cid:88) k =1 I k (cid:88) i =1 log p c ( y i | y <i , x ) , (3) and the cross-entropy loss for the seer decoder is L s = K (cid:88) k =1 I k (cid:88) i =1 log p s ( y i | y >i , y <i , x ) .",
"(4) where K is the size of the training set and I k is the length of the k -th target sentence.",
"The conventional decoder is further trained to get close to the distribution of the seer decoder via knowledge distillation.",
"In knowledge distillation, the conventional decoder ( the student ) has to not only match the one-hot ground truth word, but fit the distribution over the target vocabulary V drawn by the seer decoder ( the teacher ).",
"The knowledge distillation loss can be formalized as L kd = K (cid:88) k =1 I k (cid:88) i =1 | V | (cid:88) l =1 p s ( y i = l | y >i , y <i , x ) log p c ( y i = l | y <i , x ) (5) where | V | is the size of the target vocabulary.",
"The final training loss is L = L s + L c + (1 ) L kd .",
"(6) Different from the conventional knowledge distillation which first trains the teacher via cross entropy against ground truth, then fixes the teacher and only trains the student, we train all the parameters from the scratch, but we still follow the above rule to keep the teacher (i.e. the seer decoder) unchanged in the process of distillation.",
"To do this, we do not update the parameters of the seer decoder through the loss L kd , that is, we only back propagate gradients to the seer decoder through L s , but not through L kd .",
"Reinforcement-learning-based methods also encode future information in the rewards to supervise fine-tuning of the translation model.",
"The rewards are worked out either by sampling future translation with the REINFORCE algorithm (Williams, 1992; Yu et al., 2017; Yang et al., 2018; Shao et al., 2019), or by directly calculating a value with the actor-critic algorithm (Bahdanau et al., 2016; Li et al., 2017).",
"This set of methods only give a weak supervision to the NMT model through rewards and suffer from unstable training.",
"In contrast, Shao et al. (2018) propose to train autoregressive NMT with the probabilistic n-gram based GLEU (Wu et al., 2016) and Shao et al. (2020) propose to minimize the bag-of-ngrams difference for non-autoregressive NMT so that the two methods can abandon reinforcement learning and perform training directly by gradient descent.",
"Another set of methods introduce future information into inference with additional pass of decoding or extra components at test.",
"Niehues et al. (2016), Xia et al. (2017), Hassan et al. (2018) and Zhang et al. (2018) proposed a two-pass decoding algorithm to first generate a draft translation and then generate final translation referring to the draft.",
"Geng et al. (2018) expand this line of methods by performing an adaptive multi-pass decoding where the number of decoding passes is determined by a policy network.",
"Liu et al. (2016a), Liu et al. (2016b), Hoang et al. (2017), Zhang et al. (2019d) and He et al. (2019) perform bidirectional decoding simultaneously and the two decoders correlate to each other via an agreement term or a regularization term in the loss.",
"Zhou et al. (2019a) , Zhou et al. (2019b) and Zhang et al. (2019b) also maintain a forward decoder and a backward decoder to decode simultaneously but they interact to each other when making predictions.",
"Zhang et al. (2019a) introduce a future-aware vector at test which is learned via the knowledge distillation framework during training.",
"The difference between this set of methods and our method is that our method does not require any other cost at test and is easy to use.",
"There are some other works which integrate future information during training while only perform one-pass decoding.",
"Serdyuk et al. (2018) introduce a twin network to perform bidirectional decoding simultaneously during training and force the hidden states generated by the two decoders to be consistent, then at inference it can only use the forward decoder.",
"But in this method the two decoders act as a counterpart to each other and no decoder plays a role of teacher, which determines that it can only be trained via L 2 regularization, not knowledge distillation which has proven in the experiments more effective than L 2 regularization.",
"Feng et al. (2020) introduce an evaluation module to give each translation more reasonable evaluation when it cannot match the ground truth.",
"The evaluation is conducted from the perspective of fluency and faithfulness which both need the participation of past and future information.",
"The difference from the method proposed in this paper is their method uses self-generated translation as past information and does not train with knowledge distillation.",
"Some researchers work in another perspective by introducing future information.",
"Zhang et al. (2020b) propose to employ future source information to guide simultaneous machine translation with knowledge distillation, so that the incompleteness of source can be mitigated.",
"Zheng et al. (2018) and Zheng et al. (2019) propose to model past and future information for the source to help the decoder focus on untranslated source information.",
"5.1.1 Data Preparation",
"Chinese English The training set consists of about 1.25M sentence pairs from LDC corpora with 27.9M Chinese words and 34.5M English words respectively 1 .",
"We used MT02 for validation and MT03, MT04, MT05, MT06, MT08 for test.",
"We tokenized and lowercased English sentences using the Moses scripts 2 , and segmented the Chinese sentences with the Stanford Segmentor 3 .",
"The two sides were further segmented into subword units using Byte-Pair Encoding(BPE) (Sennrich et al., 2016) with 30K merge operations.",
"32K size of the Chinese dictionary and 29K size of the English dictionary were built for the two sides.",
"English Romanian We used the preprocessed version of WMT16 En-Ro dataset released by Lee et al. (2018) which includes 0.6M sentence pairs.",
"We used news-dev 2016 for validation and news-test 2016 for test.",
"The two languages share the 35K size of the joint vocabulary generated with 40K merge operations of BPE on the combined data.",
"Big Data Sets Chinese English The training data is from WMT 2017 Zh-En translation tasks that contains 20.18M sentence pairs after deleting duplicate ones.",
"The newsdev2017 was used as the development set and newstest2017 was used as the test set.",
"To avoid the effects of the translationese (Graham et al., 2019), we also tested the methods on the newstest2019 test set.",
"We tokenized and truecased the English sentences with Moses scripts.",
"For the Chinese data, we performed word segmentation by using Stanford Segmenter.",
"32K BPE sizes were applied to the training data seperately and then we filtered out the sentences which are longer than 128 sub-words.",
"44K size of the Chinese dictionary and 1 The corpora include LDC2002E18, LDC2003E07, LDC2003E14, Hansards portion of LDC2004T07, LDC2004T08 and LDC2005T06.",
"33K size of the English dictionary were built based on the corresponding data.",
"English German The training data is from WMT2016 which consists of about 4.5M sentences pairs with 118M English words and 111M German words.",
"The newstest2014 was used as the development set and newstest2016 and newstest2019 were used as the test sets.",
"The two languages share the 32K size of the joint vocabulary generated with 30K merge operations of BPE on the combined data.",
"TRANSFORMER We used an open-source toolkit called Fairseq-py released by Facebook (Ott et al., 2019) which was implemented strictly following Vaswani et al. (2017).",
"RL-NMT We trained Transformer under the reinforcement learning framework using the REINFORCE algorithm (Williams, 1992) with the BLEU as the rewards.",
"The implementation details for the RL part is the same as Yang et al. (2018).",
"ABDNMT Our implementation of Zhang et al. (2018) based on Transformer.",
"TWINNET Our implementation of Serdyuk et al. (2018) based on Transformer.",
"The weight of L 2 loss was 0 .",
"2 .",
"EVANMT Our implementation of Feng et al. (2020).",
"SEER +L 2 Seer forcing with L 2 regularization.",
"Similar to TWINNET , we set L 2 = (cid:80) K k =1 (cid:80) I k i =1 (cid:107) g ( s ti ) s si ) (cid:107) 2 where g is a linear transformation.",
"We first pretrained the two decoders together only with L = L t + L s , then trained them with the loss of L = L t + L s + L 2 where = 0 .",
"2 , too.",
"Please note that the L 2 loss did not update the seer decoder and the encoder so that the conventional decoder would approach the seer decoder, which followed Serdyuk et al. (2018).",
"SEER +AL Seer forcing with adversarial learning.",
"A discriminator is employed to distinguish the hidden state sequences generated by the conventional decoder and the seer decoder.",
"The discriminator is based on CNN, implemented according to Gu et al. (2019).",
"The translation model and the discriminator are trained jointly via a gradient reversal layer just like our method.",
"The loss is L = L t + L s + L d where L d is the loss of the discriminator and = 0 .",
"3 on the EN RO data set and = 0 .",
"2 on the other data sets.",
"Our Method Implemented based on Fairseq-py.",
"The weight in Equation 6 for the small Chinese English data set is set to 0.25, and for other data sets is set to 0.5.",
"All the Transformer-based systems have the same configuration as the base model described in Vaswani et al. (2017) except that dropout rate is 0 .",
"3 .",
"The translation quality was evaluated with BLEU (Papineni et al., 2002) with n =4 using the SacreBLEU tool (Post, 2018) 4 , where small data sets employ case-insensitive BLEU while big data sets use case-sensitive BLEU.",
"We compare our method with other methods that can make global planning, including the reinforcement-based method (RL-NMT), the two-pass decoding method (ABDNMT), twin networks which match past and future information (TWINNET ) and the NMT model with an evaluate module to evaluate fluency and faithfulness (EVANMT).",
"In addition, we also explore learning mechanisms which can transfer knowledge from the seer decoder to the conventional decoder, including L 2 regularization (SEER +L 2 ), adversarial learning (SEER +AL) and knowledge distillation (Our Method).",
"We report results together with training time on the small and big data sets in Table 1 and Table 2, respectively.",
"5 As for different methods, in the small data sets, RL-NMT can only get small improvements over Transformer which are in line with the results reported in Wu et al. (2018), and ABDNMT cannot get consistent improvements over Transformer with an obvious difference on the EN RO data set and a small difference on the CN EN data set.",
"TWINNET can get comparable BLEU scores with our method on the small data sets but mostly negative difference on the big data sets.",
"EVANMT can achieve consistent improvements and greater improvements on the EN DE data set.",
"For the learning mechanisms, knowledge distillation show consistent superiority over L 2 regularization and adversarial learning, which is remarkable especially on the big data sets.",
"Adversarial learning can bring improvements over Transformer on all the data sets while L 2 regularization acts unstable on the big data sets.",
"In summary, our method proved to be effective not only in the term of the architecture but also in the learning mechanism.",
"To use seer forcing to guide teacher forcing, it should be ensured that the seer decoder can outperform the conventional decoder.",
"To verify this, we trained the two decoders together with the loss L = L t + L s without knowledge distillation.",
"Then we evaluated their performance on the small Chinese-English translation task as follows.",
"Both decoders are fed with ground truth words as context at test so that they can inference in the same way as at training, where the conventional decoder uses the past ground truth as context and the seer decoder employs the past and future ground truth words as context in the past and future subdecoders.",
"Besides translation performance, we also check the superiority of seer decoder in target language modeling.",
"We do this by dropping out cross-attention so that the decoder can only generate translation based on target language model.",
"In this way, the translation performance without cross-attention can demonstrate the ability of the two decoders in target language modeling.",
"We used the first reference of the test set as ground truth and calculated BLEU scores only with this reference.",
"From the results in Table 3, we can see that whether with or without cross-attention the seer decoder can make super large improvements over the conventional decoder consistently on all the test sets.",
"However, without cross-attention, the BLEU scores of both decoders decrease dramatically which means language model information is not enough for the translation task.",
"Therefore, we can conclude the seer decoder acts much better in target language modeling and cross-language projection and it is reasonable to use the seer decoder as the guider.",
"As the seer decoder achieves its superiority with the help of future target information, we hope that the conventional decoder can learn future information from the seer decoder with knowledge distillation.",
"To check this, we tested whether the hidden states of the conventional decoder could derive more future ground truth words after knowledge distillation.",
"The underlying belief is that the future ground information transferred from the seer decoder can help the conventional decoder derive more future ground truth words.",
"Assuming the hidden states generated by the conventional decoder are S t = [ s t 1 ; ... ; s t I ] T , the future words for each target position i can be predicted with the distribution P wi softmax( W w s ti ) (7) where W w is the weight matrix.",
"During training, we can get the bag of ground truth words for position i as y i = { y i +1 , ..., y I } and train W w with other parameters fixed by maximizing the likelihood of y i as L w = K (cid:88) k =1 I k (cid:88) i =1 (cid:88) w y i log p wi ( w ) (8) where K is the size of training sentences, I k is the length of the target sentence and log p wi ( w ) is the probability of the word w in Equation 7.",
"At test, we select the top best I b i words according to Equation 7 as the bag of future words b i for position i .",
"As we cannot get the ground truth, the size of b i is calculated approximately as I b i = max { 2 , ( J i ) 2 } where J is the length of source sentence.",
"As we do not know the target length during prediction, it may occur that i is greater than J and calculating I b i in this way can ensure b i contains 2 words at least.",
"We conducted experiments on Chinese-English translation and used MT02 as the test set only Figure 3: The similarity of the past and future information to the fused information AVG Our Method 46.24 -FUTURE 45.38 -0.86 -PAST 45.42 -0.82 -KD 44.84 -1.40 TRANSFORMER 44.40 -1.84 Table 5: Ablation study on NIST CN EN translation.",
"with the first reference as ground truth.",
"We calculated the accuracy and recall by comparing each b i against each y i .",
"The results in Table 4 show the conventional decoder in our method can achieve higher accuracy and recall compared to the decoder of Transformer.",
"This means knowledge distillation does transfer future information from the seer decoder to the conventional decoder.",
"In the seer decoder of our method, the information from the past and future subdecoders is fused (as shown in Equation 1) to get the final cross-attention.",
"The intuition is that at the beginning stage, the past subdecoder contains less information than the future subdecoder, so the fused information should rely more on the future subdecoder.",
"As the translation gets longer, the information embodied in the past subdecoder grows, and the fused information should depend more on the past subdecoder.",
"To confirm this hypothesis, we calculate the cosine similarity of the vectors in A given in Equation 1 with the corresponding weighted vectors of W p H (cid:48) p and W f H (cid:48)(cid:48) f .",
"We selected 205 sentences the length of which Figure 4: The BLEU scores on sentence bins with different lengths.",
"ranges [15 , 25] , then calculated the cosine similarities word by word.",
"Then the similarities at the same target position will be averaged and the chart over all the target positions is given in Figure 3.",
"The figure confirms our conjecture that at first, the fused information is highly related to the future information, and over time the similarity to past information increases gradually while the similarity to future information decreases faster.",
"We have proven that in our method the past and future information collaborate to achieve better global planning.",
"In this section, we will explore the influence of past and future information by separately deleting the future and past subdecoders from the seer decoder.",
"In both cases, only the structure of the seer decoder changes and the whole model is trained with knowledge distillation in the same way.",
"We also remove knowledge distillation loss in which case the seer and conventional decoders only interact via the shared encoder and only optimize their own cross-entropy losses during training.",
"The results are given in Table 5.",
"When we exclude future or past information, the translation performance decreases dramatically at almost the same extent, but they still have an obvious gain compared to Transformer.",
"This demonstrates that both the past and future information are necessary for global planning.",
"It is interesting that the translation performance still rise without future subdecoder where there is no additional information fed compared to Transformer.",
"The reason may be the conventional and seer decoder can restrict each other to avoid bad behaviors.",
"When knowledge distillation is dropped, the performance decline greatly which means only communicating via the encoder the conventional and seer decoders is not enough.",
"As the translation is generated word by word, the translation errors will be accumulated while the the translation grows, which will influence the later prediction.",
"In our method, the conventional decoder can learn future information from the seer decoder and hence it should make better global planning for the whole sequence.",
"From this, we deduce that our method performs better on long sentences than Transformer.",
"We checked this on the NIST CN EN translation task and split the sentences in all the test sets into 8 bins according to their length.",
"Then we translated for each bin and tested the BLEU scores.",
"The results in Figure 4 show that our method can achieve bigger improvements on longer sentences, especially in the last three bins.",
"In order to help the NMT model to make good global planning at inference, we propose to introduce a seer decoder which embodies future ground truth to guide the behaviors of the conventional decoder.",
"To this end, we employ the method of knowledge distillation to transfer future information from the seer decoder to the conventional decoder.",
"At test, the conventional decoder can perform translation on its own as if it knew some future information.",
"The experiments indicate our method can outperform strong baselines significantly on four data sets.",
"We are also the first to explore learning mechanisms of knowledge distillation, adversarial learning and L 2 regularization and knowledge distillation has proven to be the most effective one.",
"This paper was supported by National Key R&D Program of China (NO. 2017YFE0192900).",
"Thank Wanying Xie for running experiments of EVANMT.",
"Thank all the anonymous reviewers for the insightful and valuable comments."
] | [
"abstain",
"abstain",
"method",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"objective",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"method",
"abstain",
"result",
"objective",
"other",
"other",
"other"
] |
[
"Coreference resolution is an important component in analyzing narrative text from administrative data (e.g., clinical or police sources).",
"However, existing coreference models trained on general language corpora suffer from poor transferability due to domain gaps, especially when they are applied to gender-inclusive data with lesbian, gay, bisexual, and transgender (LGBT) individuals.",
"In this paper, we analyzed the challenges of coreference resolution in an exemplary form of administrative text written in English: violent death narratives from the USA's Centers for Disease Control's (CDC) National Violent Death Reporting System.",
"We developed a set of data augmentation rules to improve model performance using a probabilistic data programming framework.",
"Experiments on narratives from an administrative database, as well as existing gender-inclusive coreference datasets, demonstrate the effectiveness of data augmentation in training coreference models that can better handle text data about LGBT individuals.",
"Coreference resolution (Soon et al., 2001; Ng and Cardie, 2002) is the task of identifying denotative phrases in text that refer to the same entity.",
"It is an essential component in Natural Language Processing (NLP).",
"In real world applications of NLP, coreference resolution is crucial for analysts to extract structured information from text data.",
"Like all components of NLP, it is important that coreference resolution is robust and accurate, as applications of NLP may inform policy-making and other decisions.",
"This is especially true when coreference systems are applied to administrative data, since results may inform policy-making decisions.",
"from an important administrative database written in English: the National Violent Death Reporting System (NVDRS), maintained by the Centers for Disease Control (CDC) in the USA.",
"Violent death narratives document murders, suicides, murder-suicides, and other violent deaths.",
"These narratives are complex, containing information on one or more persons; some individuals are victims, others are partners (heterosexual or same-sex), family members, witnesses and law enforcement.",
"Specifically, we apply the End-to-End Coreference Resolution (E2E-Coref) system (Lee et al., 2017, 2018), which has achieved high performance on the OntoNotes 5.0 (Hovy et al., 2006) corpus.",
"We observe that when a model trained on OntoNotes is applied to violent death narratives, the performance drops significantly for the following reasons.",
"First, despite the fact that OntoNotes contains multiple genres 1 , it does not include administrative data.",
"Administrative text data is terse and contains an abundance of domain-specific jargon.",
"Because of the gap between training and administrative data, models trained on OntoNotes are poorly equipped to handle administrative data that are heavily skewed in vocabulary, structure, and style, such as violent death narratives.",
"Second, approximately 5% of the victims in the NVDRS are lesbian, gay, bisexual, or transgender (LGBT).",
"This is a vulnerable population; for example, existing data show LGB youth are 5 times more likely to attempt suicide than heterosexual youth (Clark et al., 2020) and are more likely to be bullied prior to suicide 2 .",
"It is essential that data-analytic models work well with these hard to identify but highly vulnerable populations; indeed correctly processing text data is an important step in revealing the true level of elevated risk for 1 OntoNotes contains news sources, broadcasts, talk shows, bible and others.",
"LGBT populations.",
"This remains challenging because of limitations of existing coreference systems.",
"Close relationship partners provide a marker of sexual orientations and can be used (Lira et al., 2019; Ream, 2020) by social scientists to identify relevant information in LGBT deaths.",
"However, OntoNotes is heavily skewed towards male entities (Zhao et al., 2018) and E2E-Coref relies heavily on gender when deciphering context (Cao and Daum III, 2020).",
"Consequently, E2E-Coref has a trouble dealing with narratives involving LGBT individuals where gender referents do not follow the modal pattern.",
"Figure 1 illustrates a scenario where coreference systems struggle.",
"The model mislabels the pronoun he and this error will propagate to downstream analysis.",
"Specifically, the model takes the context and resolves the coreference based on gender; it makes a mistake partially due to an incorrect presumption of the sexual orientation of the 50 year old male victim.",
"To study coreference resolution on violent death narratives (VDN), we created a new corpus that draws on a subset of cases from NVDRS where CDC has reported the sex of both victims and their partners.",
"We assigned ground truth labels using experienced annotators trained by social scientists in public health.",
"3 To bridge the domain gap, we further adapted E2E-coref by using a weakly supervised data creation method empowered by the Snorkel toolkit (Ratner et al., 2017).",
"This toolkit is often used to apply a set of rules to augment data by probabilistic programming.",
"Inspired by Snorkel, we designed a set of rules to 1) bridge the vocabulary difference between the source and target domains and 2) to mitigate data bias by augmenting data with samples from a more diverse population.",
"Because labeling public health data requires arduous human labor, data augmentation provide a promising method to enlarge datasets while covering a broader range of scenarios.",
"3 All annotators have signed the release form for accessing the NVDRS data.",
"We verified our adaptation approach on both the in-house VDN dataset as well as two publicly available English datasets, GICoref (Cao and Daum III, 2020) and MAP (Cao and Daum III, 2020).",
"We then measured the performance of our approach on documents heavily skewed toward LGBT individuals and on documents in which gendered terms were swapped with non-gendered ones (pronouns, names, etc.).",
"On all datasets, we achieved an improvement.",
"For LGBT specific datasets, we see much larger improvements, highlighting how poor the OntoNotes model performed on these underrepresented populations before.",
"Models trained on the new data prove more applicable in that domain.",
"Our experiments underscore the need for a modifiable tool to train specialized coreference resolution models across a variety of specific domains and use-cases.",
"Researchers have shown coreference systems exhibit gender bias and resolve pronouns by relying heavily on gender information (Cao and Daum III, 2020; Zhao et al., 2018; Rudinger et al., 2018; Webster et al., 2018; Zhao et al., 2019).",
"In particular, Cao and Daum III (2020) collected a gender-inclusive coreference dataset and evaluated how state of the art coreference models performed against them.",
"As NLP systems are deployed in social science, policy making, government, and industry, it is imperative to keep inclusivity in mind when working with models that perform downstream tasks with text data.",
"For example, Named Entity Recognition (NER) was used in processing Portuguese police reports to extract people, location, organization, and time from the set of documents (Carnaz et al., 2019).",
"These authors noted the need for a better training corpus with more NER entities.",
"Other NLP models face challenges in domain-adaptation like the one demonstrated in this paper.",
"One example from the biomedical field is BioBERT (Lee et al., 2019), in which the authors achieved better results on biomedical text mining tasks by pretraining BERT on a set of biomedical documents.",
"Likewise, even when evaluating a model on a general set, Babaeianjelodar et al. (2020) showed that many general-domain datasets include as much bias as datasets designed to be toxic and biased.",
"All these cases required re-evaluation of the corpus used to train the model.",
"This underscores the need for methodology that can evaluate, debias, and increase the amount of data used.",
"We first applied for and were given access to the CDC's National Violent Death Reporting-system's (NVDRS) Restricted Access Database.",
"From this, we sampled a total 115 of violent death cases 4 each over 200 words in length.",
"In these 115 cases, we had a total of 6,134 coreference links and 44,074 tokens, with a vocabulary size of 3,653.",
"Each case had information about the victim, the victim's partner, and the type of death.",
"We randomly sampled 30 cases from three strata: 1) the victim is male and the partner is female, 2) the victim is female and the partner is male, and 3) it was an LGBT case.",
"We also included 25 cases that were particularly challenging for the general E2E model.",
"The cases used were spell-checked and cleaned thoroughly.",
"To obtain gold-standard labels, we tasked a team of three annotators 5 to label the coreference ground truth, under the guidance of senior experts in suicide and public health.",
"Annotators were told that every expression referring to a specific person or group was to be placed into that person's or group's cluster.",
"From there, we resolved the three label sets into one by a majority voting method if two out of three annotators put the phrase in a cluster, we assigned it to that cluster.",
"Two of the annotators had previous experience with coding the NVDRS narratives for other tasks, while one was inexperienced.",
"Agreement was typically unanimous.",
"Reproducibility To get access to the NVDRS, Users must apply for access and follow a data management agreement executed directly with CDC.",
"We cannot release VDN or the annotations but we will provide the augmentation code and instructions on how reproducing the experiments.",
"To allow reproduction of our approaches on data without access-restriction, we perform evaluations on MAP and GICoref which are readily available.",
"Our next step was to build a pipeline for adapting E2E-Coref to resolve coreference on VDN.",
"The key component of this pipeline is the Snorkel toolkit 4 Homicides and Suicides 5 All annotators signed the release form for accessing the NVDRS data.",
"and its capacity to design rules that programmatically label, augment, and slice data.",
"We looked to adapt E2E-Coref to process domain-specific data by creating a set of augmentation rules that would improve training data performance.",
"Our rules can generate augmented data with diverse genders and then challenged our model to predict the coreference clusters.",
"Data Augmentation by Rules With Snorkel, we assessed the weakness of the current coreference model systems.",
"These experiments helped us to develop effective augmentation rules to create training data that mimics challenging data to guide the model going forward.",
"Specifically, we split data into groups and evaluated our model on split data.",
"In the case of VDN, we split a larger set of data into two groups (LGBT and non-LGBT) and gauged model performance on both groups.",
"We then isolated specific groups of data that posed a problem and came up with sets of augmentation rules that can be used to generate difficult training data from easier training cases.",
"For example, in our case, we sought to augment documents that contained more precisely defined gender into cases with vaguer language regarding gender often seen in gender-inclusive documents and LGBT violent death narratives.",
"This was seen in each rule's effort to strip gender from key phrases, leaving it more ambiguous to the model.",
"For example, our model struggles when terms like partner' are used to describe relationships.",
"To address this, we introduced a rule where gendered relationship terms like 'girlfriend' in one cluster were replaced by non-gendered terms like 'partner'.",
"In this manner, our model was forced to train against these examples.",
"Often, the model performance improved when training against these augmented examples.",
"We conducted experiments to analyze E2E-Coref on VDN and verified the effectiveness of the data augmentation method.",
"We used the following corpora 6 .",
"Figure 2: The proposed rules for GI data applied to a sample paragraph.",
"GICoref (Cao and Daum III, 2020) consists of 95 documents from sources that include articles about non-binary people, fan-fiction from Archive of Our Own, and LGBTQ periodicals with a plethora of neopronouns (e.g., zie).",
"MAP (Cao and Daum III, 2020) consists of snippets of Wikipedia articles with two or more people and at least one pronoun.",
"by domain experts and used as the test set for measuring model performance.",
"We split VDN into train/dev/test with a 20/5/90 document split.",
"We are interested in the setting where only a small set of training data is available, to emulate use-cases in which annotating a large amount of data is impractical.",
"We reserve more articles in the test set to ensure the evaluation is reliable.",
"We followed Cao and Daum III (2020) to use LEA (Moosavi and Strube, 2016) as the evaluation metric for coreference clusters.",
"We created 3 rules based on the approach described in Sec. 4: (R1) Replace gendered terms with another gender.",
"(R2)",
"Replace gendered relationship terms with non-gendered terms.",
"(R3)",
"Replace terms describing gender with non-gendered terms.",
"Examples of the generated data are in Fig.",
"2. 7 When applying the augmented rules to the current 20/5 document split of the train/development (dev), we ended up with 100/25 train/dev documents enlarging both sets by 5 times.",
"We compared the following models.",
"E2E 8 The E2E-Coref (Lee et al., 2018) model trained on the OntoNotes 5.0 corpus.",
"We used the implementation provided in the AllenNLP library (Gardner et al., 2017).",
"E2E-FT E2E-FT is a variant of E2E-Coref.",
"It was trained on OntoNotes first and then fine-tuned on the 20 target training documents.",
"E2E-Aug E2E-Aug trained on OntoNotes first and then fine-tuned on the augmented target training documents.",
"Results are shown in Table",
"1. By fine-tuning with a modest amount of in-domain data, E2E-FT significantly improved E2E in LEA F1.",
"We saw E2E-Aug further improved E2E-FT by 5% on LEA F1 with the 30 LGBT narratives in VDN's test set 9 .",
"Our results meaningfully improved the classifica-tion of LGBT-related data, and show the need for a more careful approach with data from underrepresented groups.",
"Further, this improvement extended beyond our domain-specific data: E2E-Aug further improved the E2E F1 score by 1.4% in LEA F1 on the overall set.",
"Overall, we saw a significant improvement when training coreference models with our augmented data, on both the overall and gender-neutral LGBT set.",
"We then evaluated the data augmentation approach on two publicly available datasets GICoref and MAP.",
"We experimented with the following 3 rules.",
"(R4)",
"Randomly pick a person-cluster in the document and replace all pronouns in the cluster with a gender neutral pronoun (e.g., his zir).",
"(R5)",
"Truncate the first name of each person.",
"(R6)",
"Same as the R4 but replacing only one pronoun in the cluster to the corresponding gender neutral pronoun.",
"We followed Zhao et al. (2018) and used GICoref and MAP only as the test data.",
"We compared E2E with its variant E2E-Aug.",
"The latter was trained on the union of the original dataset and variants of OntoNotes augmented using the above rules.",
"We also compared our results with those from a E2E-Coref model trained on the union of 9 Not found in tables Precision Recall F1 E2E 39.9 34.0 36.7 Zhao et al. (2018) 38.8 38.0 38.4 E2E-Aug-R4 40.5 43.8 42.1 E2E-Aug-R5 41.2 41.1 41.2 E2E-Aug-R6 40.55 41.5 41.0 E2E-Aug-R456 40.7 41.9 41.3 Table 2: Evaluation on GICoref.",
"the original and augmented data with the gender swapping rules described in (Zhao et al., 2018).",
"Results on GICoref Results on GICoref are shown Table",
"2. Few documents (0.3%) in Ontonotes contained neopronouns.",
"Therefore, E2E struggled with resolving pronouns refering to LGBT individuals.",
"Zhao et al. (2018) had proposed to apply gender-swapping and entity anonymiza-tion to mitigate bias towards binary genders.",
"However, their approach does not handle neopronouns and performs poorly compared to our models.",
"In contrast, E2E-Aug improved E2E from a range of 4% to 6% in F1 with various data augmentation rules.",
"When all the rules were applied, the performance was not superior to using only R4.",
"We further investigated the performance improvement of E2E-Aug-R4 on clusters containing binary pronouns and neopronouns.",
"As shown in Table 3, E2E-Aug-R4 yielded a 4% increase in recall among binary-gender pronouns and 12% among neopronouns as compared with E2E.",
"This reduced the performance gap between binary-gender pronouns and neopronouns from 12% to 3%.",
"Our results show that R4 is highly effective, despite its simplicity.",
"Results on MAP The core of MAP is constructed through different ablation mechanisms (Cao and Daum III, 2020).",
"Each ablation is a method for hiding various aspects of gender and then investigating the performance change of a model.",
"Performance was evaluated based on the accuracy of pronoun resolution over the four label classes: person A, B, both, or neither.",
"We considered four ablation mechanisms as described in the Appendix.",
"With these four Figure 3: Performance in accuracy on the ablations of MAP.",
"possible ablations, each document was ablated a total nine times with each possible combination of ablations, producing a separate document.",
"We compared E2E with E2E-R4 and showed the results in Figure",
"3. E2E-R4 was better than or competitive with E2E in all the ablation scenarios.",
"E2E-R4 especially outperformed E2E on the original set and the +Pro.",
"set, where the performance was improved by 30%.",
"With policy decisions increasingly informed by computational analysis, it is imperative that methods used in these analyses be robust and accurate especially for marginalized groups.",
"Our contributions improved coreference resolution for LGBT individuals, a historically underrepresented and marginalized population at high risk for suicide; they may improve the identification of LGBT individuals in NVDRS and hence inform better policy aimed to reduce LGBT deaths.",
"More generally, we show how to use augmentation rules to adapt NLP models to real-world application domains where it is not feasible to obtain annotated data from crowdworkers.",
"Finally, we introduced a novel dataset, VDN, which provide a challenging and consequential corpus for coreference resolution models.",
"Our studies demonstrate the challenges of applying NLP techniques to real-world data involving diverse individuals (in-cluding LGBT individuals and their families) and suggest ways to make these methods more accurate and robustthus contributing to algorithmic equity.",
"Our research was exempted from human subjects review by the UCLA IRB.",
"We applied for and were given access to the CDC's National Violent Death Reporting-System's Restricted Access Database.",
"As the data contain private information, we strictly follow their guidelines in our use of the dataset.",
"Despite our goal to improve gender inclusion in the coreference resolution system, we admit that our augmentation rules and data analyses may not fully address the diversities of sexual orientation in the population.",
"Although our approach improves the performance of coreference systems, the final system is still not perfect and may exhibit some bias in its predictions.",
"We thank all anonymous reviewers for their valuable feedback.",
"We want to thank our three annotators Mika Baumgardner, Vivian Nguyen, and Mikaela Gareeb for their contributions to our paper.",
"It is thanks to the amount of time they spent on annotations that we were able to produce this work.",
"Our work for this study was partially funded by NIH( MH115344, MD006923).",
"JGF was supported by an Infosys Membership in the School of Social Science at the Institute for Advanced Study."
] | [
"abstain",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"method",
"abstain",
"objective",
"abstain",
"abstain",
"result",
"method",
"result",
"result",
"abstain",
"method",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"objective",
"objective",
"abstain",
"method",
"method",
"abstain",
"result",
"other",
"other",
"other",
"other",
"other"
] |
[
"Existed pre-training methods either focus on single-modal tasks or multi-modal tasks, and cannot effectively adapt to each other.",
"They can only utilize single-modal data (i.e., text or image) or limited multi-modal data (i.e., image-text pairs).",
"In this work, we propose a UNIfied-MOdal pre-training architecture, namely UNIMO, which can effectively adapt to both single-modal and multi-modal understanding and generation tasks.",
"Large scale of free text corpus and image collections are utilized to improve the capability of visual and textual understanding, and cross-modal contrastive learning (CMCL) is leveraged to align the textual and visual information into a unified semantic space, over a corpus of image-text pairs augmented with related images and texts.",
"With the help of rich non-paired single-modal data, our model is able to learn more generalizable representations, by allowing textual knowledge and visual knowledge to enhance each other in the unified semantic space.",
"The experimental results show that UNIMO greatly improves the performance of several single-modal and multi-modal downstream tasks.",
"Our code and pre-trained models are public at https://github.com/PaddlePaddle/ Research/tree/master/NLP/UNIMO .",
"Large-scale pre-training has drawn much attention in both the community of Compute Vision (CV) and Natural Language Processing (NLP) due to its strong capability of generalization and effi-cient usage of large-scale data.",
"Firstly in CV, a series of models were designed and pre-trained on the large-scale dataset ImageNet, such as AlexNet (Krizhevsky et al., 2017), VGG (Simonyan and These authors contribute equally to this study and are listed with random order.",
"Zisserman, 2014) and ResNet (He et al., 2016), which effectively improved the capability of image recognition for numerous tasks.",
"Recent years have witnessed the burst of pre-training in NLP, such as BERT (Devlin et al., 2019), RoBERTa (Liu et al., 2019), XLNet (Yang et al., 2019) and UniLM (Dong et al., 2019), which greatly improve the capabilities of language understanding and generation.",
"However, the above researches focus on the single-modal learning and can only be effectively used in single-modal (i.e., only text or image) scenarios.",
"In order to adapt to multi-modal scenarios, a series of multi-modal pre-training methods were proposed and pre-trained on the corpus of image-text pairs, such as ViLBERT (Lu et al., 2019), VisualBERT (Li et al., 2019b) and UNITER (Chen et al., 2020b), which greatly improve the ability to process multimodal information.",
"However, these models can only utilize the limited corpus of image-text pairs and cannot be effectively adapted to single-modal scenarios (Lin et al., 2020b).",
"A smarter AI system should be able to process different modalities of information effectively.",
"There are large scale of data in different modalities on the Web, mainly textual and visual information.",
"The textual knowledge and the visual knowledge [IMG] [ROI1] [ROI2] [ROI3] [ROI4] [ROI5] [ROI6] [ROIN ] Image Collections Text Corpus Any baseball game involves one or more umpires At a minimum, one umpire will stand behind the catcher Semantic Space Image representation Text representation [CLS] [tok1] [tok2] [tok3] [tok4] [tok5] [tokN] [SEP] Unified-Modal Transformer Image-Text Pairs The baseball player readies to swing at the pitch while the umpire behind him looks on.",
"usually can enhance and complement each other.",
"As the example shown in Figure 1, it's difficult to answer the question correctly only with the visual information in the image.",
"However, if we connect the visual information to the textual information which describes the background of a baseball game, it's very easy to determine the correct answer.",
"Also, the visual information can make it easier to understand the scene described by the text.",
"The research in neuroscience by Van Ackeren et al. (2018) reveals that the parts of the human brain responsible for vision can learn to process other kinds of information, including touch and sound.",
"Inspired by this research, we propose to design a unified-modal architecture UNIMO which aims to process multi-scene and multi-modal data input with one model, including textual, visual and vision-and-language data, as shown in Figure",
"2. The greatest challenge to unify different modalities is to align and unify them into the same semantic space which are generalizable to different modalities of data.",
"Existed cross-modal pretraining methods try to learn cross-modal representations based on only limited image-text pairs by simple image-text matching and masked language modeling (Chen et al., 2020b).",
"They can only learn specific representations for image-text pairs, and thus fail to generalize to single-modal scenarios.",
"So their performance will drop dramatically when applied to language tasks (Lin et al., 2020b), which has also been revealed by our experiments (see Section 4.2).",
"In this work, UNIMO learns visual representations and textual representations simultaneously, and unifies them into the same semantic space via cross-modal contrastive learning (CMCL) based on a large-scale corpus of image collections, text corpus and image-text pairs.",
"UNIMO effectively utilizes the large-scale of text corpus and image collections to learn general textual and visual representations.",
"The CMCL aligns the visual representations and textual representations, and unifies them into the same semantic space based on image-text pairs.",
"As shown in Figure 3, to facilitate different levels of semantic alignment between vision and language, we propose to utilize a series of text rewriting techniques to improve the diversity of cross-modal information.",
"Specifically, for an image-text pair, various positive examples and hard negative examples can be obtained by rewriting the original caption at different levels.",
"Moreover, to incorporate more background information from the single-modal data, text and image retrieval are also applied to augment each image-text pair with various related texts and images.",
"The positive pairs, negative pairs, related images and texts are learned jointly by CMCL.",
"In this way, our model can effectively unify different levels of visual and textual representations into the same semantic space, and incorporate more single-modal knowledge to enhance each other.",
"We can utilize large scale of non-paired text corpus and image collections on the Web to learn more generalizable textual and visual representations, and improve the capability of vision and language understanding and generation.",
"Our model can be effectively fine-tuned for both single-modal and multi-modal understanding and generation downstream tasks.",
"The visual knowledge and textual knowledge can enhance each other to achieve better performance on several single-modal and multimodal tasks than previous methods.",
"Humans perceive the world through many modalities, such as sound, vision and language.",
"Even Unified-Modal Transformer [IMG] [CLS] Unified-Modal Transformer [IMG] [CLS] Unified-Modal Transformer Cross-Modal Contrastive Learning When the referee looked up, the player was about to swing a baseball bat on the pitch.",
"though any individual modality might be incomplete or noisy, important information is still perceivable since they tend to be shared or enhanced each other.",
"With this motivation, we propose a unified-modal pre-training method UNIMO to learn representations that capture modality-invariant information at the semantic level.",
"Different from previous methods, UNIMO learns from different modalities of data, including images, texts and image-text pairs, thus achieving more robust and generalizable representations for both textual and visual input.",
"As shown in Figure 2, UNIMO employs multi-layer self-attention Transformers to learn unified semantic representations for both textual and visual data.",
"For a textual input W, it is firstly split into a sequence of subwords W = { [ CLS ] , w 1 , ..., w n , [ SEP ] } by Byte-Pair Encoding (BPE) (Sennrich et al., 2016), and then the self-attention mechanism is leveraged to learn contextual token representations { h [ CLS ] , h w 1 , ..., h w n , h [ SEP ] } .",
"The special tokens [ CLS ] and [ SEP ] denote the start and end of the textual sequence, respectively.",
"Similarly, for an image V, it is firstly converted to a sequence of region features V = { [ IMG ] , v 1 , ..., v t } ( [ IMG ] denotes the representation of the entire image), and then the self-attention mechanism is leveraged to learn contextual region representations { h [ IMG ] , h v 1 , ..., h v t } .",
"Similar to previous work (Chen et al., 2020b), we use Faster R-CNN (Ren et al., 2016) to detect the salient image regions and extract the visual features (pooled ROI features) for each region.",
"For an image-text pair ( V, W ) , its visual features and textual tokens are concatenated as a sequence { [ IMG ] , v 1 , ..., v t , [ CLS ] , w 1 , ..., w n , [ SEP ] } .",
"Then the sequence is feed into the multi-layer Transformer network to learn cross-modal contextual representations for both the textual tokens and image regions.",
"We extract the representations h [ IMG ] and h [ CLS ] as the semantic representations of image V and text W , respectively.",
"Based on large volumes of image collections { V } , text corpus { W } and image-text pairs { ( V, W ) } , UNIMO learns generalizable visual and textual representations in similar ways by masked prediction, and unify them into the same semantic space via CMCL.",
"Joint visual learning on image collections, language learning on text corpus and cross-modal learning on image-text pairs not only improve the capability of visual and language understanding and generation, but also enable the textual knowledge and visual knowledge to enhance each other in the unified semantic space.",
"The greatest challenge to unify different modalities is to align and unify their representations at different levels.",
"For the example shown in Figure 2, the model not only needs to connect the scene shown in the whole image to an article describing a baseball game, but also needs to align the two men and their location relationship in the image with baseball player, umpire and behind in the text, respectively.",
"Several existing cross-modal pre-training methods try to align visual and textual representations by simply image-text matching (Li et al., 2019a; Chen et al., 2020b) based on a limited corpus of image-text pairs.",
"They randomly sample a negative image or text from the same training batch for each image-text pair, and utilize a clas-sifier to determine whether the image and text are matching.",
"As the randomly sampled negative text or image is usually very different from the original text or image, they can only learn very coarse alignment between textual and visual representations.",
"In this work, we propose a novel CMCL method to align and unify different levels of textual and visual representations into the same semantic space.",
"The main idea is to let the representations of the paired image and text near in the representation space while the non-paired far away.",
"The representations of image V and text W are used to compute the similarity between them to measure their distance d ( V, W ) .",
"As shown in Figure 3, to facilitate semantic alignment between vision and language at different levels, we design several novel text rewriting techniques to rewrite the original caption of an image either at word, phrase or sentence level.",
"In this way, we can create large volumes of positive examples X + and negative examples X for each image-text pair ( V, W ) .",
"Moreover, to augment cross-modal learning with single-modal information, text and image retrieval are applied to obtain various related texts XT and images XI for each image-text pair ( V, W ) .",
"Different from the positive and negative image-text pairs, the retrieved images and texts are encoded individually as they mainly carry weak correlations, as shown in the right part of Figure",
"3. Based on these positive and negative examples, the following contrastive loss LCMCL is utilized to learn detailed semantic alignments across vision and language: E V,W (cid:34) log (cid:80) ( V + ,W + ) X { + ,I,T } exp ( d ( V + , W + ) / ) (cid:80) ( V (cid:48) ,W (cid:48) ) X { , + ,I,T } exp ( d ( V (cid:48) , W (cid:48) ) / ) (cid:35) (1) where denotes the temperature parameter.",
"Note that, for single-modal images XI and texts XT , the original text W and image V are used to compute the cross-modal relevance, respectively.",
"To the best of our knowledge, this is the first work that explores CMCL to unify visual and textual semantic space.",
"Text Rewriting To enhance multi-granularity of semantic alignment between image and text, we rewrite the caption of an image at different levels, including sentence-level, phrase-level and word-level.",
"For sentence-level rewriting, we utilize the back-translation techniques (Edunov et al., 2018) to obtain several positive samples for each image-text pair.",
"Specifically, each caption of an image is translated into another language and then translated back to the original language.",
"In this way, several similar captions can be obtained for an image.",
"Furthermore, for each image-text pair, the most similar captions of other images are retrieved based on TF-IDF similarity.",
"The retrieved results are very similar to the original caption but doesn't accurately describe the corresponding image, so they can be used as hard negative samples to enhance the sentence-level alignment between image and text.",
"For phrase-level and word-level rewriting, we first parse the image caption into a scene graph (Wang et al., 2018), then randomly replacing the object, attribute or relation nodes of the scene graph with a different object, attribute or relation from the corresponding vocabularies.",
"Instead of randomly sampling negative samples as previous methods, text rewriting can generate large volumes of hard negative samples.",
"In this way, we can help the model to learn more detailed semantic alignment from different levels between image and text.",
"Image/Text Retrieval In order to incorporate more single-modal information during cross-modal learning, each image-text pair is further augmented with various related images and texts that retrieved from the single-modal data.",
"Specifically, for an image, other images in the image collections will be ordered by their visual similarities.",
"Those images that have highly overlapped objects with the original image will be extracted to provide relevant visual information.",
"Similarly, sentences that are semantically related with the original caption are extracted based on semantic similarity to provide background language information.",
"The retrieved images and texts are encoded individually by the unified-modal Transformer as shown in Figure 3, then their representations are extracted to compute the cross-modal contrastive loss in Equation 1.",
"These retrieved single-modal information provide rich background information for better cross-modal learning.",
"Similar to the masked language modeling in BERT, we sample image regions and mask their visual features with a probability of 15% .",
"The visual features of the masked regions are replaced by zeros.",
"As the regions from an image usually are highly overlapped with each other, we choose to mask all regions that have a high proportion of mutual intersection to avoid information leakage.",
"Similar to Lin et al. (2020b), we randomly choose regions as masking anchors and mask the regions whose overlapping ratios with the anchors are larger than 0.3.",
"For an image V , the model is trained to reconstruct the masked regions v m given the remaining regions v \\ m : LV = EV D f ( v m | v \\ m ) (2) Similarly, for an image-text pair ( V, W ) , the model is trained to reconstruct the masked regions v m given the text W and the remaining regions v \\ m : LV = E V,W D f ( v m | v \\ m , W ) (3) As the visual features are high-dimensional and continuous, we utilize both feature regression and region classification objective to learn better visual representations.",
"The feature regression learns to regress the contextualized visual representations h v i to its visual features v i , which can be formulated as: f ( v m | v \\ m ) = (cid:80) Mi =1 (cid:107) r ( h v i ) v i (cid:107) 2 , where r indicates an FC layer to convert h v i into a vector of the same dimension as v i .",
"The region classification learns to recognize the object semantic class of each masked region based on its contextualized visual representation h v i .",
"An FC layer is utilized to compute the scores for K object classes s ( h v i ) , which further goes through a softmax function to obtain the normalized distribution.",
"The final objective minimizes the cross-entropy (CE) loss between the predicted distribution and the object detection output c ( v i ) from Faster R-CNN: f ( v m | v \\ m ) = (cid:80) Mi =1 CE ( softmax ( s ( h v i )) , c ( v i )) .",
"The score function f ( v m | v \\ m , W ) is formulated similarly.",
"To learn general language representations for both language understanding and generation tasks, our model is trained as a unified encoder-decoder model with two types of language modeling tasks: bidirectional prediction and sequence-to-sequence (Seq2Seq) generation.",
"The unified modeling is achieved by utilizing specific self-attention masks to control what context the prediction conditions on, inspired by Dong et al. (2019).",
"To improve the language learning process, we firstly detect seman-ticly complete phrases from the text, such as name entities by syntactic parsing, and then treat them as a whole in the following masking strategies.",
"Different from previous work, we always sample a sequence of complete words or phrases instead of subword tokens, for both bidirectional prediction and Seq2Seq generation.",
"Bidirectional prediction.",
"Given a sequence of tokens W = { [ CLS ] , w 1 , ..., w n , [ SEP ] } , we iteratively sampling spans of text until totally 15% tokens have been selected.",
"We sample the span length from a geometric distribution l Geo ( p ) , where p is set as 0.2, similar to SpanBERT (Joshi et al., 2020).",
"All tokens in the selected spans are replaced with either a special [ MASK ] token, a random token or the original token with probability 80% , 10% and 10% , respectively.",
"The goal is to predict these masked tokens w m based on their surrounding context w \\ m , by minimizing the negative log-likelihood: L Bidirectional = EW D logP ( w m | w \\ m ) (4) Seq2Seq generation.",
"For the Seq2Seq generation task, we iteratively sample fragments from the token sequence until the 25% budget has been spent, inspired by Xiao et al. (2020).",
"For each iterate, we first sample a fragment length from a uniform distribution l U (4 , 32) , and then sample a fragment with the specified length.",
"Every selected fragment { w i , ..., w j } is further appended with two special tokens [ CLS ] and [ SEP ] (i.e., { [ CLS ] , w i , ..., w j , [ SEP ] } ), which denotes the beginning and end of the fragment.",
"All selected fragments are removed from the text and concatenated as the target sequence T while the remaining parts are concatenated as the source sequence S .",
"The model is trained to generate the target sequence auto-regressively condition on the source sequence: L Seq 2 Seq = E ( S,T ) D logP ( T | S ) (5) where P ( T | S ) = (cid:81) | T | j =1 P ( T j | T <j , S ) .",
"During pre-training, we alternate between the bidirectional prediction objective and the Seq2Seq generation objective uniformly.",
"For image-text pairs, the two objectives are applied to the captions similarly to learn cross-modal understanding and generation.",
"In this section, we introduce the pre-training and finetuning experimental settings.",
"Our pre-training datasets consist of three types: text corpus, image collections and image-text pairs.",
"The text corpus includes two large-scale corpora: BookWiki and OpenWebText, which are part of the training dataset of RoBERTa.",
"BookWiki is composed of English Wikipedia and BookCorpus (Zhu et al., 2015), and OpenWebText is an open recreation of the WebText corpora.",
"The image collections are images without textual descriptions, including a subset of OpenImages (Krasin et al., 2017) and COCO unlabel.",
"The image-text pairs are composed of four existing multi-modal datasets: COCO (Lin et al., 2014), Visual Genome (VG) (Krishna et al., 2017), Conceptual Captions (CC) (Sharma et al., 2018) and SBU Captions (Ordonez et al., 2011), which have also been widely used in previous multi-modal pre-training models.",
"The statistics of them are shown in Appendix A. 3.2 Implementation Detail We evaluate UNIMO on two model sizes: UNIMO-base with 12 layers of Transformer block and UNIMO-large with 24 layers of Transformer block.",
"The maximum sequence length of text tokens and image-region features are set as 512 and 100, respectively.",
"We pre-train UNIMO-base by initializing from RoBERTa-base, and UNIMO-large by initializing from RoBERTa-large.",
"Both UNIMO-base and UNIMO-large are trained for at least 500K steps.",
"An Adam optimizer with initial learning rate 5e-5 and a learning rate linear decay schedule is utilized.",
"By virtue of float16 mixed precision training, it takes almost 7 days for training UNIMO-base with 32 Nvidia Telsa V100 32GB GPU and 10 days for UNIMO-large with 64 Nvidia Telsa V100 32GB GPU.",
"For visual learning, we adopt Faster R-CNN (Ren et al., 2016) pre-trained on the Visual-Genome dataset to select salient image regions and extract region features from images.",
"The regions with class detection probability exceeds a confi-dence threshold of 0.2 are selected and 100 boxes are kept.",
"For CMCL, we utilize back-translation to create 3 positive samples and apply rewriting to obtain 100 hard negative samples for each image-text pair.",
"The most similar of 100 images and 100 sentences are retrieved from the single-modal image collections and text corpus for each image-text pair, respectively.",
"More details are described in Appendix A. 3.3 Finetuning Tasks We fine-tune our model on two categories of downstream tasks: (1) single-modal language understanding and generation tasks; (2) multimodal vision-language understanding and generation tasks.",
"The single-modal generation tasks include: generative conversational question answering on the CoQA dataset (Reddy et al., 2019), question generation on the SQuAD 1.1 dataset (Ra-jpurkar et al., 2016), abstractive summarization on the CNN/DailyMail (CNNDM) dataset (Hermann et al., 2015), and sentence compression on the Gigaword dataset (Rush et al., 2015).",
"The single-modal understanding tasks include: sentiment classification on the SST-2 dataset (Socher et al., 2013), natural language inference on the MNLI dataset (Williams et al., 2017), linguistic acceptability analysis on the CoLA dataset (Warstadt et al., 2019) and semantic similarity analysis on the STS-B dataset (Cer et al., 2017).",
"The multi-modal tasks include: visual question answering (VQA) on the VQA v2.0 dataset (Goyal et al., 2017), image caption on the Microsoft COCO Captions dataset (Chen et al., 2015), visual entailment on the SNLI-VE dataset (Xie et al., 2019) and image-text retrieval on Flickr30k datasets (Young et al., 2014).",
"The detail statistics of the datasets and hyper-parameter settings for the above tasks are described in Appendix B. Model Flickr30k-IR Flickr30k-TR SNLI-VE VQA CoCo Caption R@1 / R@5 / R@10 R@1 / R@5 / R@10 Val / Test test-dev / -std BLUE4 / CIDEr ViLBERT-base 58.20 / 84.90 / 91.52 -70.55 / 70.92 VLP-base --70.5 / 70.7 36.5 / 116.9 UNITER-base 72.52 / 92.36 / 96.08 85.90 / 97.10 / 98.80 78.59 / 78.28 72.70 / 72.91 Oscar-base --73.16 / 73.44 36.5 / 123.7 Villa-base 74.74 / 92.86 / 95.82 86.60 / 97.90 / 99.20 79.47 / 79.03 73.59 / 73.67 Ernie-ViL-base 74.44 / 92.72 / 95.94 86.70 / 97.80 / 99.00 -72.62 / 72.85 UNIMO-base 74.66 / 93.40 / 96.08 89.70 / 98.40 / 99.10 80.00 / 79.10 73.79 / 74.02 38.8 / 124.4 UNITER-large 75.56 / 94.08 / 96.76 87.30 / 98.00 / 99.20 79.39 / 79.38 73.82 / 74.02 Oscar-large --73.61 / 73.82 37.4 / 127.8 Villa-large 76.26 / 94.24 / 96.84 87.90 / 97.50 / 98.80 80.18 / 80.02 74.69 / 74.87 ERNIE-ViL-large 76.70 / 93.58 / 96.44 88.10 / 98.00 / 99.20 -74.75 / 74.93 UNIMO-large 78.04 / 94.24 / 97.12 89.40 / 98.90 / 99.80 81.11 / 80.63 75.06 / 75.27 39.6 / 127.7 Table 1: Evaluation results on the multi-modal downstream tasks.",
"In this section, we report the evaluation results on both the multi-modal and single-modal tasks to show the adaptability and generalizability of UNIMO to different scenarios.",
"We further make several ablation studies to validate that textual knowledge and visual knowledge can enhance each other in the unified semantic space.",
"The visualization and case analysis of the model results are appended in Appendix C. 4.1 Multi-Modal tasks The evaluation results on the multi-modal tasks are shown in Table 1.",
"We compare with most of the existed multi-modal pre-training models, including ViLBERT (Lu et al., 2019), VLP (Zhou et al., 2020), UNITER (Chen et al., 2020b), Oscar (Li et al., 2020), Villa (Gan et al., 2020) and ERNIE-ViL (Yu et al., 2020).",
"The results show that UNIMO achieves the best results against almost all benchmarks under both the base and large size of models.",
"Particularly, UNIMO-large outperforms previous best performing model ERNIE-ViL-large by 1.34 R@1 on image retrieval and 1.3 R@1 on text retrieval, which are great improvements for the image-text retrieval tasks.",
"On the image caption task, UNIMO outperforms the best performing model Oscar by more than 2 BLUE4 score.",
"UNIMO achieves better performance on both the multi-modal understanding and generation tasks, while previous methods usually focus on either the understanding or generation tasks.",
"The above results demonstrate the effectiveness of the unified-modal learning architecture that takes advantage of the large scale of single-modal images and texts for cross-modal learning.",
"Previous multi-modal pre-training models usually cannot effectively adapt to single-modal scenar-ios.To further validate that, we remove the single-modal learning processes on the text corpus and",
"image collections (i.e., w/o single-modal) from UNIMO and replace the CMCL with an image-text matching objective.",
"Then, the model w/o single-modal is just a multi-modal pre-training method similar to UNITER (Chen et al., 2020b).",
"As shown in Table 2, the performance of the model on all the language understanding and generation tasks drop dramatically compared to UNIMO, which demonstrates that multi-modal pre-training only on image-text pairs cannot effectively adapt to the single-modal tasks.",
"To show the effectiveness of UNIMO on the language understanding and generation tasks, we further compare with existed pre-trained language models (PLMs), including BERT (Devlin et al., 2019), RoBERTa (Liu et al., 2019), XLNet (Yang et al., 2019) and UniLM (Dong et al., 2019).",
"The comparison results in Table 2 demonstrate that UNIMO achieves better or comparable performance than existed PLMs on both the language understanding and generation tasks.",
"Specifically, UniLM (Dong et al., 2019) is designed for both natural language understanding and generation.",
"UNIMO outperforms UniLM on most of the tasks with a large margin, which demonstrates the effectiveness of UNIMO on the single-modal scenarios.",
"In all, UNIMO not only achieves the best performance on the multi-modal tasks, but also performs very well on the single-modal tasks, which demonstrate the superiority of our unified-modal learning architecture.",
"We further make several ablation studies to show that the unified-modal architecture can help textual knowledge and visual knowledge mutually enhance each other in the unified semantic space.",
"the cross-modal learning, we remove the language learning process on the text corpus from UNIMO (i.e., w/o texts), and compare their performance on the multi-modal tasks.",
"Table 3 summarizes the comparison results, which show that the performance of the model w/o texts declines consistently on both the multi-modal understanding and generation tasks.",
"The results demonstrate that the textual knowledge in the text corpus benefit the vision-language tasks by enhancing the cross-modal learning with more textual information.",
"Vision Enhance Text To further validate that the visual knowledge in the image collections and image-text pairs facilitates the language learning, we remove the images and image-text pairs from the pre-training dataset (i.e., w/o pairs&images) and compare their performance on the single-modal language tasks.",
"After removing the images and image-text pairs, our model is trained by only the language learning objectives, which are similar to previous pre-trained language models BERT and UniLM.",
"Table 4 summarizes the comparison results, which demonstrate that after removing the visual data, the performance of the model w/o pairs&images drops obviously on most of the language understanding tasks and all the language generation tasks.",
"The results reveal that visual knowledge can enhance the language tasks by enabling the model to learn more robust and generalizable representations in a unified semantic space.",
"Existing researches on pre-training can be mainly classified into two categories: single-modal pretraining and multi-modal pre-training.",
"The single-modal pre-training methods only focus on single-modal tasks, while the multi-modal pre-training methods only focus on multi-modal tasks.",
"Single-Modal Pre-training The single-modal pre-training methods mainly consist of visual pretraining and language pre-training.",
"Most visual pre-training methods are based on the multi-layer CNN architecture such as VGG (Simonyan and Zisserman, 2014) and ResNet (He et al., 2016), and trained on the ImageNet dataset.",
"Recently, contrastive self-supervised learning like SimCLR (Chen et al., 2020a) and MoCo (He et al., 2020) also greatly improve the performance of visual representation learning.",
"These pre-trained models only focus on visual tasks (e.g. image classification etc.), however, they cannot be used in textual or multimodal (i.e., with both text and image) tasks.",
"The language pre-training methods based on the Transformer architecture are also very popular in NLP models, such as GPT (Radford et al., 2018), BERT (Devlin et al., 2019), XLNet (Yang et al., 2019) and BART (Lewis et al., 2020).",
"However, they mainly focus on textual tasks.",
"They cannot effectively deal with the multi-modal tasks, such as image-text retrieval, image captioning, multimodal machine translation (Lin et al., 2020a; Su et al., 2021) and visual dialog (Murahari et al., 2020).",
"Multi-Modal Pre-training Recently, multimodal pre-training methods have been more and more popular for solving the multi-modal tasks.",
"All of them are trained on a corpus of image-text pairs, such as ViLBERT (Lu et al., 2019), VisualBERT (Li et al., 2019b), VL-BERT (Su et al., 2019), Unicoder-VL (Li et al., 2019a) and UNITER (Chen et al., 2020b).",
"Based on the multi-layer Transformer network, they all employ the BERT-like objectives to learn multi-modal representations from a concatenated-sequence of vision features and language embeddings.",
"Their architectures can be mainly classified into two categories: single-stream and two-stream.",
"The two-stream methods, such as ViLBERT, utilize two single-modal Transformer to process visual features and language embeddings respectively, and then learn their interactions based on a cross-modal Transformer.",
"The single-stream methods directly utilize a single Transformer network to model both the visual features and the language embeddings.",
"VisualBERT, VL-BERT, Unicoder-VL and UNITER all utilize the single-stream architecture, which show that fusing cross-modal information early and freely by a single-stream network can achieve better performance.",
"Recently, several contrastive learning-based multi-modal pre-training methods have also been proposed.",
"OpenAI CLIP (Radford et al., 2021) leverages large-scale image-text pairs to learn transferrable visual representations by image-text matching, which enables zero-shot transfer of the model to various visual classification tasks.",
"WenLan (Huo et al., 2021) further proposes a similar two-tower Chinese multi-modal pre-training model and adapts MoCo (He et al., 2020) to improve the contrastive cross-modal learning process.",
"Instead of extracting salient image regions by pre-trained object detection models like Faster-RCNN (Ren et al., 2016), the end-to-end vision-language pre-training architecture SOHO (Huang et al., 2021) proposes to jointly learn Convolutional Neural Network (CNN) and Transformer for cross-modal alignments from millions of image-text pairs.",
"All existed multi-modal pre-training methods only focus on multi-modal tasks with both vision and language inputs.",
"However, they cannot be effectively adapted to single-modal tasks.",
"Moreover, they can only utilize the limited corpus of image-text pairs.",
"By contrast, our unified-modal pre-training method UNIMO can employ large volumes of text corpus and image collections to enhance each other, and can be effectively adapted to both textual and multi-modal scenarios.",
"UNIMO also achieves the best performance on multi-modal tasks including image-text retrieval, visual entailment, VQA and image caption.",
"In this work, we propose UNIMO, a unified-modal pre-training architecture to leverage the large scale of non-paired text corpus and image collections for cross-modal learning.",
"We verify that UNIMO provides an effective way for textual knowledge and visual knowledge to mutually enhance each other in a unified semantic space, and UNIMO successfully adapts to both single-modal and multi-modal understanding and generation tasks.",
"In this way, UNIMO outperforms previous methods on both the multi-modal and single-modal downstream tasks.",
"In the future work, we will focus on end-to-end visual and language unified learning, and much larger scale of model size and data volumes.",
"This work was supported by the National Key Research and Development Project of China (No. 2018AAA0101900)"
] | [
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"objective",
"result",
"abstain",
"method",
"other"
] |
[
"Pre-trained word vectors are ubiquitous in Natural Language Processing applications.",
"In this paper, we show how training word embeddings jointly with bigram and even trigram embeddings, results in improved unigram embeddings.",
"We claim that training word embeddings along with higher n-gram embeddings helps in the removal of the contextual information from the unigrams, resulting in better stand-alone word embeddings.",
"We empirically show the validity of our hypothesis by outperforming other competing word representation models by a significant margin on a wide variety of tasks.",
"We make our models publicly available.",
"Distributed word representations are essential building blocks of modern NLP systems.",
"Used as features in downstream applications, they often enhance generalization of models trained on a limited amount of data.",
"They do so by capturing relevant distributional information about words from large volumes of unlabeled text.",
"Efficient methods to learn word vectors have been introduced in the past, most of them based on the distributional hypothesis of Harris (1954); Firth (1957): a word is characterized by the company it keeps .",
"While a standard approach relies on global corpus statistics (Pennington et al., 2014) formulated as a matrix factorization using mean square reconstruction loss, other widely used methods are the bilinear word2vec architectures introduced by Mikolov et al. (2013a): While skip-gram aims to predict nearby words from a given word, CBOW predicts a target word from its set of context words.",
"augmenting word-context pairs with sub-word information in the form of character n-grams (Bo-janowski et al., 2017), especially for morphologically rich languages.",
"Nevertheless, to the best of our knowledge, no method has been introduced leveraging collocations of words with higher order word n-grams such as bigrams or trigrams as well as character n-grams together.",
"In this paper, we show how using higher order word n-grams along with unigrams during training can significantly improve the quality of obtained word embeddings.",
"The addition furthermore helps to disentangle contextual information present in the training data from the unigrams and results in overall better distributed word representations.",
"To validate our claim, we train two modifica-tions of CBOW augmented with word-n-gram information during training.",
"One is a recent sentence embedding method, Sent2Vec (Pagliardini et al., 2018), which we repurpose to obtain word vectors.",
"The second method we propose is a modification of CBOW enriched with character ngram information (Bojanowski et al., 2017) that we again augment with word n-gram information.",
"In both cases, we compare the resulting vectors with the most widely used word embedding methods on word similarity and analogy tasks and show significant quality improvements.",
"The code used to train the models presented in this paper as well as the models themselves are made available to the public 1 .",
"Before introducing our model, we recapitulate fundamental existing word embeddings methods.",
"CBOW and skip-gram models .",
"Continuous bag-of-words (CBOW) and skip-gram models are 1 publicly available on http://github.com/ epfml/sent2vec standard log-bilinear models for obtaining word embeddings based on word-context pair information (Mikolov et al., 2013a).",
"Context here refers to a symmetric window centered on the target word w t , containing the surrounding tokens at a distance less than some window size ws : C t = { w k | k [ t ws , t + ws ] } .",
"The CBOW model tries to predict the target word given its context, maximizing the likelihood (cid:81) Tt =1 p ( w t | C t ) , whereas skip-gram learns by predicting the context for a given target word maximizing (cid:81) Tt =1 p ( C t | w t ) .",
"To model those probabilities, a softmax activation is used on top of the inner product between a target vector u w t and its context vector 1 | C t | (cid:80) w C t v w .",
"To overcome the computational bottleneck of the softmax for large vocabulary, negative sampling or noise contrastive estimation are well-established (Mikolov et al., 2013b), with the idea of employing simpler pairwise binary classifier loss functions to differentiate between the valid context C t and fake contexts NC t sampled at random.",
"While generating target-context pairs, both CBOW and skip-gram also use input word subsampling, discarding higher-frequency words with higher probability during training, in order to prevent the model from overfitting the most frequent tokens.",
"Standard CBOW also uses a dynamic context window size: for each subsampled target word w , the size of its associated context window is sampled uniformly between 1 and ws (Mikolov et al., 2013b).",
"Adding character n-grams .",
"Bojanowski et al. (2017) have augmented CBOW and skip-gram by adding character n-grams to the context representations.",
"Word vectors are expressed as the sum of its unigram and average of its character n-gram embeddings W w : v := v w + 1 | W w | (cid:88) c W w v c Character n-grams are hashed to an index in the embedding matrix .",
"The training remains the same as for CBOW and skip-gram.",
"This approach greatly improves the performances of CBOW and skip-gram on morpho-syntactic tasks.",
"For the rest of the paper, we will refer to the CBOW and skip-gram methods enriched with subword-information as CBOW-char and skip-gram-char respectively.",
"GloVe .",
"Instead of training online on local window contexts, GloVe vectors (Penning-ton et al., 2014) are trained using global co-occurrence statistics by factorizing the word-context co-occurrence matrix.",
"Ngram2vec .",
"In order to leverage the performance of word vectors, training of word vectors using the skip-gram objective function with negative sampling is augmented with n-gram co-occurrence information (Zhao et al., 2017).",
"CBOW-char with word n-grams .",
"We propose to augment CBOW-char to additionally use word ngram context vectors (in addition to char n-grams and the context word itself).",
"More precisely, during training, the context vector for a given word w t is given by the average of all word-n-grams N t , all char-n-grams, and all unigrams in the span of the current context window C t : v := (cid:80) w C t v w + (cid:80) n N t v n + (cid:80) w C t (cid:80) c W w v c | C t | + | N t | + (cid:80) w C t | W w | (1) For a given sentence, we apply input subsampling and a sliding context window as for standard CBOW.",
"In addition, we keep the mapping from the subsampled sentence to the original sentence for the purpose of extracting word n-grams from the original sequence of words, within the span of the context window.",
"Word n-grams are added to the context using the hashing trick in the same way char-n-grams are handled.",
"We use two different hashing index ranges to ensure there is no collision between char n-gram and word n-gram representations.",
"Sent2Vec for word embeddings .",
"Initially implemented for sentence embeddings, Sent2Vec (Pagliardini et al., 2018) can be seen as a derivative of word2vec's CBOW.",
"The key differences between CBOW and Sent2Vec are the removal of the input subsampling, considering the entire sentence as context, as well as the addition of word-n-grams.",
"Here, word and n-grams embeddings from an entire sentence are averaged to form the corresponding sentence (context) embedding.",
"For both proposed CBOW-char and Sent2Vec models, we employ dropout on word n-grams during training.",
"For both models, word embeddings are obtained by simply discarding the higher order n-gram embeddings after training.",
"We train all competing models on a wikipedia dump of 69 million sentences containing 1.7 billion words, following (Pagliardini et al., 2018).",
"Sentences are tokenized using the Stanford NLP library (Manning et al., 2014).",
"All algorithms are implemented using a modified version of the fast-text (Bojanowski et al., 2017; Joulin et al., 2017) and sent2vec (Pagliardini et al., 2018) libraries respectively.",
"Detailed training hyperparameters for all models included in the comparison are provided in Table 3 in the supplementary material.",
"During training, we save models checkpoints at 20 equidistant intervals and found out that the best performance for CBOW models occurs around 60 80% of the total training.",
"As a result, we also indicate the checkpoint at which we stop training the CBOW models.",
"We use 300-dimension vectors for all our word embedding models.",
"For the Ngram2vec model, learning source and target embeddings for all the n-grams upto bigrams was the best performing model and is included in the comparison.",
"For each method, we extensively tuned hyperparameters starting from the recommended values.",
"For each model, we select the parameters which give the best averaged results on our word-similarity and analogy tasks.",
"After selecting the best hyperparameters, we train 5 models for each method, using a different random seed.",
"The reported results are given as mean and standard deviation for those five models.",
"Word-similarity tasks .",
"Word-similarity tasks consist of word pairs along with their human annotated similarity scores.",
"To evaluate the performance of our models on pair-wise word-similarity tasks, we use WordSim353 (353 word-pairs) (Finkelstein et al., 2002) divided into two datasets, WordSim Similarity (203 word-pairs) and Model WS 353 WS 353 Relatedness WS 353 Similarity CBOW-char + bi.",
"WordSim Relatedness (252 word-pairs) (Agirre et al., 2009); MEN (3000 word-pairs) (Bruni et al., 2012); Mechanical Turk dataset (Radinsky et al., 2011) (287 word-pairs); Rare words dataset (2034 word-pairs) (Luong et al., 2013); and SimLex-999 (999 word-pairs) (Hill et al., 2015) dataset.",
"To calculate the similarity between two words, we use the cosine similarity between their word representations.",
"The similarity scores then, are compared to the human ratings using Spear-man's (Spearman, 1904) correlation scores.",
"Word-analogy tasks .",
"Word analogy tasks pose analogy relations of the form x is to y as x (cid:63) is to y (cid:63) , where y is hidden and must be guessed from the dataset vocabulary.",
"We use the MSR (Mikolov et al., 2013c) and the Google (Mikolov et al., 2013a) analogy datasets.",
"The MSR dataset contains 8000 syntactic analogy quadruplets while the Google set has 8869 semantic and 10675 syntactic relations.",
"To calculate the missing word in the relation, we use the 3CosMul method (Levy and Goldberg, 2014): y (cid:63) := arg max z V\\{ x,y,x (cid:63) } cos ( v z , v y ) cos ( v z , v x (cid:63) ) cos ( v z , v x ) + (2) where = 0 .",
"We remove all the out of vocabulary words and are left with 6946 syntactic relations for the MSR dataset and 1959 word-pairs for the Rare Words dataset.",
"All other datasets do not have any out of vocabulary words.",
"Impact of word n-grams .",
"In Table 1, we evaluate the impact of adding contextual word ngrams to two CBOW variations: CBOW-char and Sent2Vec.",
"By adding n-gram information, we consistently observe a boost in the Spearman correlation on the word similarity tasks.",
"On the few datasets where we do not observe an improvement, the results for word-n-gram augmented methods are within standard deviation reach.",
"The Rare Words dataset for Sent2Vec is the only exception, despite getting some improvement for CBOW-char based methods.",
"This observation can be attributed to the fact that character ngrams are shared between unigrams, enhancing generalization to infrequent words.",
"Without char n-grams, the model might underfit those rare words, even more so with word n-grams.",
"We also see that the boost obtained by adding ngrams on word-similarity tasks is much larger for Sent2Vec models as compared to the CBOW-char ones possibly due to the fact that during training, Sent2Vec models use a much larger context and hence can use much more n-gram information for obtaining a better context representation.",
"For analogy tasks, we see an improvement in the augmented CBOW-char methods for morpho-syntactic analogy datasets with little or no gain for semantic analogy datasets.",
"Yet, for Sent2Vec models, the gain is the other way around.",
"This observation indicates the strong role played by character n-grams in boosting the performance on the syntactic tasks as well as restricting the word ngrams from improving the performance on semantic analogies.",
"Comparison with competing methods .",
"In Table 2, we compare word n-gram augmented methods with the most prominent word embedding models.",
"We obtain state-of-the-art results for standalone unigram embeddings on most of the datasets confirming our hypothesis.",
"The Mechanical Turk dataset is the only exception.",
"We notice that Sent2Vec trigrams model dominates the word-similarity tasks as well as the semantic analogy tasks.",
"However, character n-grams are quite helpful when it comes to syntactic analogy tasks underlining the importance of subword information.",
"We also note that the Ngram2vec model outperforms our augmented CBOW-char model in some of the tasks but is always inferior to Sent2Vec in those cases.",
"We empirically show how augmenting the context representations using higher-order word ngrams improves the quality of word representations.",
"The empirical success also calls for a new theoretical model on the composite effect of training higher order n-grams simultaneously with unigrams.",
"Also, the success of Sent2Vec on word-level tasks, a method originally geared towards obtaining general purposed sentence embeddings, hints towards the additional benefits of using compositional methods for obtaining sentence/phrase representations."
] | [
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"objective",
"result",
"abstain",
"result",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"other",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain"
] |
[
"Challenging problems such as open-domain question answering, fact checking, slot filling and entity linking require access to large, external knowledge sources.",
"While some models do well on individual tasks, developing general models is difficult as each task might require computationally expensive indexing of custom knowledge sources, in addition to dedicated infrastructure.",
"To catalyze research on models that condition on specific information in large textual resources, we present a benchmark for knowledge-intensive language tasks (KILT).",
"All tasks in KILT are grounded in the same snapshot of Wikipedia, reducing engineering turnaround through the reuse of components, as well as accelerating research into task-agnostic memory architectures.",
"We test both task-specific and general baselines, evaluating downstream performance in addition to the ability of the models to provide provenance.",
"We find that a shared dense vector index coupled with a seq2seq model is a strong baseline, outperforming more tailor-made approaches for fact checking, open-domain question answering and dialogue, and yielding competitive results on entity linking and slot filling, by generating disambiguated text.",
"KILT data and code are available at https://github.com/ facebookresearch/KILT .",
"1 1 Introduction There has been substantial progress on natural language processing tasks where the inputs are short textual contexts such as a sentences, paragraphs, or perhaps a handful of documents.",
"Critically, we have seen the emergence of general-purpose architectures and pre-trained models that can be applied to a wide range of such tasks (Devlin et al., 2019).",
"However, for many real world problems, processing at this local level is insufficient.",
"For example, 1 and at https://huggingface.co/datasets?",
"in open-domain question answering (Chen et al., 2017) models need to find answers within a large corpus of text.",
"Fact checking a claim (Thorne et al., 2018a) requires models to find evidence, often on the web.",
"In knowledgeable open dialogue (Dinan et al., 2019), models need access to knowledge from large corpora to sustain informed conversations.",
"In general, solving knowledge-intensive tasks requireseven for humansaccess to a large body of information.",
"Like in Information Retrieval (IR) this involves satisfying an information need leveraging large collections of text (Manning et al., 2008).",
"However, while IR focuses of finding relevant material (usually documents), the tasks we consider focus on more fine-grained behavior, such as producing specific answers to queries.",
"For such knowledge-intensive tasks, general infrastructure and architectures across tasks have yet to emerge, and fundamental research questions remain open.",
"For example, while it was long assumed that nonparametric and explicit memory accessed through retrieval is strictly required for competitive results (Chen et al., 2017), recent large pre-trained sequence-to-sequence models such as T5 (Raffel et al., 2019a) and BART (Lewis et al., 2019) store all knowledge in their parameters while performing remarkably well (Petroni et al., 2019).",
"Likewise, while the classical approach of information extraction for populating a Knowledge Base (KB, Riedel et al., 2013; Surdeanu and Ji, 2014) seems out-of-fashion, recent results show that they remain contenders (Fan et al., 2019a; Xiong et al., 2019).",
"While there are numerous datasets for knowledge-intensive tasks (e.g. Thorne et al., 2018a; Dinan et al., 2019; Kwiatkowski et al., 2019, to name just a few), it is difficult to answer the above questions generally across them.",
"Each dataset comes in a different format, is pre-processed with different assumptions, and requires different loaders, evaluations, and analysis Figure 1: Common KILT interface for knowledge intensive language tasks: each instance consists of input and output with a provenance (text span) from the common KILT knowledge source.",
"Critically, they all use different knowledge sources, from different versions of Wikipedia to entirely different corpora.",
"This makes task-to-task comparisons difficult and substantially increases computational overhead.",
"For example, one cannot easily assess whether the same knowledge representation can be re-used if each dataset is tied to a different source.",
"Moreover, if one decides to work with different sources across different tasks, many approaches require re-indexing and re-encoding large numbers of documents.",
"If a language model is pre-trained on one snapshot of Wikipedia to capture its knowledge, tasks that use other snapshots might require re-training.",
"To facilitate research on models that must access specific information in a knowledge source, we introduce KILT , a benchmark and library for K nowledge I ntensive L anguage T asks.",
"KILT aims to lower the entry barrier for such research by formulating several knowledge-intensive NLP tasks with respect to a common interface and the same unified knowledge sourcea single Wikipedia snapshot.",
"The KILT benchmark consists of eleven datasets spanning five distinct tasks, and includes the test set for all datasets considered.",
"2 An important aim of KILT is to cover many different ways of seeking knowledge.",
"For this reason, we select tasks that provide a variety of ways to formulate both the input query (e.g., a claim to verify, a text chunk to annotate, a structured query, a natural question or a conversation) and the expected output (e.g., discrete, extractive, or abstractive).",
"Moreover, while some tasks are factoid in nature (e.g., slot filling), others require using background knowledge to answer more complex questions (e.g, ELI5) or to sustain a conversation (e.g,. Wizard of Wikipedia).",
"The format of the KILT benchmark is model-agnostic, so any system capable of producing a textual output given a textual input is eligible to participate.",
"KILT is an in-KB resource (Petroni et al., 2015), i.e., the evidence required to answer each of the ~3.2M instances in KILT is present somewhere in the knowledge source.",
"Hence there are no unanswerable instances in KILT.",
"Although recognizing unanswerable instances is important, we believe the in-KB setting already poses an hard 2 A brand new portion of the Natural Question (NQ) dataset, originally held out, is used as the KILT test set for NQ.",
"challenge to current state-of-the-art techniques, and thus leave unanswerable instances as future work.",
"KILT enables researchers to develop general-purpose models and evaluate them across multiple domains, testing hypotheses around task-agnostic memory and knowledge representations without indexing different large-scale textual corpora or writing new IO routines.",
"Furthermore, the KILT library provides general building blocks to ease research on knowledge intensive NLP.",
"We provide various state-of-the-art information retrieval systems (both neural and non-neural) coupled with different models that read text in the knowledge source and make predictions for different tasks.",
"We evaluate several state-of-the-art models that represent diverse approaches to knowledge-intensive NLP, and find that a hybrid approach combining a neural retriever with a pretrained sequence-to-sequence model outperforms most task-specific solutions when trained end-to-end.",
"We additionally evaluate whether systems can provide evidence for their predictions.",
"With this aim, we augment every instance in KILT with provenance information in the form of textual spans in specific Wikipedia pages to corroborate the output.",
"We additionally perform an annotation campaign via Amazon Mechanical Turk to increase the provenance coverage.",
"Lastly, in addition to evaluating downstream performance with popular metrics we formulate novel KILT variants for those that award points only if systems find provenance Wikipedia pages for the output given the input.",
"The poor absolute performance of our baselines for those metrics indicates the need for focused research on systems able to explain their decisions.",
"In summary, we contribute:",
"1. a publicly-available benchmark of knowledge-intensive tasks aligned to a single Wikipedia snapshot, to spur the development of general-purpose models and enable their comparison;",
"2. an open-source library to facilitate the development of new architectures for knowledge-intensive tasks;",
"3. a provenance indication for all instances in KILT, made more comprehensive with an annotation campaign, which allows to jointly assess output accuracy and ability to provide supporting evidence in the knowledge source;",
"4. a comparative performance of various modeling approaches, showing promising results for general baselines across all tasks.",
"A main feature of the KILT benchmark is the use of a unified knowledge source that contains all information necessary for all tasks.",
"Defining a unified knowledge source is a challenging problem although all tasks use Wikipedia, they consider different snapshots.",
"As Wikipedia pages are constantly modified, added, and removed, the knowledge can differ drastically from snapshot to snapshot.",
"Concretely, the KILT knowledge source is based on the 2019/08/01 Wikipedia snapshot and contains 5.9M articles.",
"We describe how each dataset is represented in KILT, and our mapping strategy for aligning data to our chosen snapshot.",
"Additional details are in the appendix.",
"Mapping Datasets to a Fixed Snapshot The main challenge in defining a unified knowledge source is ensuring the knowledge for all task examples is available.",
"We assume tasks provide an input (e.g. a question in question answering, or a conversation in dialogue) needed to produce an output (e.g. an answer or a subsequent utterance).",
"In addition, tasks provide provenance , defined as a set of textual spans in Wikipedia that contain evidence for producing an output given a specific input.",
"These provenance spans range from single entities, short answers, sentences, paragraphs, to whole articles.",
"The idea of our mapping strategy is to identify provenance spans in the KILT knowledge sourceif we find all the provenance spans for an input-output pair, the knowledge needed to produce the output is available in our snapshot.",
"The provenance can be a span of any size, from a single token to a paragraph to an entire document.",
"Concretely, the mapping strategy operates as follows.",
"3 First, we try to match Wikipedia pages in each dataset to our snapshot, relying on Wikipedia URL redirections for pages that changed title.",
"Second, we look for the provenance span in the matched page.",
"We scan the whole page and return the span with the highest BLEU (Papineni et al., 2002) with the given provenance span.",
"4 Third, we replace the original provenance in a task's input-output pair with the span from the KILT knowledge source, and we report the BLEU score between the two.",
"Finally, we remove from the dev and test sets all outputs for which the BLEU score is lower than a threshold for at least one provenance span (we 3 Scripts for the mapping algorithm available on GitHub. 4 We return the shortest span if there's a tie in BLEU score. use 0.5 as threshold) this is meant to ensure high quality mappings in the evaluation sets discarding on average 18% of test and dev data (for all tasks except entity linking).",
"We keep all input-output pairs in the train sets (see Figure 5 in the appendix for more details).",
"We consider five tasks that use Wikipedia as a knowledge source for KILT: fact checking, open domain question answering, slot filling, entity linking, and dialogue.",
"The diversity of these tasks challenge models to represent knowledge flexibly.",
"Some tasks require a discrete prediction (e.g., an entity), others, such as extractive question answering, can copy the output directly from a Wikipedia page, while still other tasks must synthesize multiple pieces of knowledge in an abstractive way to produce an output.",
"KILT also provides a variety of ways to seek knowledge, from a claim to verify to a text chunk to annotate, from a structured or natural question to a conversation (see Table 1 for details).",
"We are able to include the test set for all datasets in KILT, either because the test set is public, or because we were able to obtain the test set from the authors of the original dataset.",
"These test sets are not publicly released, but are used for the KILT challenge on EvalAI (Yadav et al., 2019) where participants can upload their models' predictions and be listed on the public leaderboard.",
"5 To facilitate experimentation, we define a consistent interface for all datasets in the KILT Benchmark.",
"Each dataset is represented in JSON Line format , where each record contains three fields: id , input , output .",
"The input is a natural language string and the output a non-empty list of equally-valid outputs (e.g. if multiple answers to a question are valid in a question answering dataset).",
"Each output is a string and it is accompanied by a non-empty list of complementary provenance spans (all should be used to acquire the knowledge needed to provide a valid output).",
"Figure 1 displays an example for all considered tasks (Figure 3 in the appendix contains further details on the common interface).",
"Fact checking verifies a claim against a collection of evidence.",
"It requires deep knowledge about the claim and reasoning over multiple documents.",
"We 5 available at https://evalai.cloudcv.org/ web/challenges/challenge-page/689 .",
"consider the claim as input and the classification label as output .",
"Each label is accompanied by a set of provenance spans that corroborate the classification label.",
"We model multiple equally-valid provenance sets per label.",
"FEVER (Thorne et al., 2018a) is a large dataset for claim veracity that requires retrieving sentence-level evidence to support if a claim is supported or refuted.",
"Additional details are in the appendix.",
"Entity Linking (EL) assigns a unique Wikipedia page to entities mentioned in text.",
"Each KILT record for EL has text in the input (max 256 tokens) where a single entity mention is tagged with two special tokens (i.e., [START_ENT] and [END_ENT] see Figure 1 for an example).",
"The output is the title of the Wikipedia page for the entity mention plus provenance pointing to the entire page (through a unique identifier).",
"Since Wikipedia associates unambiguous titles to entities 6 , finding the correct output is enough to link entity mention and Wikipedia page.",
"The provenance mimics the canonical approach to EL, that is to produce an identifier for each mention (Wu et al., 2019).",
"To map the provenance (whole Wikipedia page), we simply match Wikipedia pages specified in various datasets to the KILT knowledge source.",
"We consider three popular EL datasets in KILT, two of which do not contain a train set but should be assessed in a zero-shot fashion.",
"Note that, in addition to the AY2 train set, the whole knowledge source can be used as training data by exploiting hyperlinks.",
"To facilitate experimentation, we release such data in KILT format (9M train instances), following the splits of Wu et al. (2019).",
"AIDA CoNLL-YAGO (Hoffart et al., 2011b) supplements the CoNLL 2003 dataset (Sang and De Meulder, 2003) with Wikipedia URL annotations for all entities using the YAGO2 system (Hof-fart et al., 2011a).",
"The original data is split into three parts: train , testa , testb .",
"Following Hoffart et al. (2011b) we consider testa as dev and testb as test.",
"WNED-WIKI (Guo and Barbosa, 2018) is a dataset automatically created by sampling document from the 2013/06/06 Wikipedia dump, and balancing the difficulty of linking each mention (using a baseline as proxy).",
"We randomly split the dataset into dev and test.",
"WNED-CWEB (Guo and Barbosa, 2018) is a dataset created with the same strategy as WNED-WIKI, but sampling from the ClueWeb 2012 corpora annotated with the FACC1 system.",
"7 Similarly, we randomly split into dev and test.",
"The goal of the Slot Filling (SF) is to collect information on certain relations (or slots) of entities (e.g., subject entity Albert Einstein and relation educated_at ) from large collections of natural language texts.",
"A potential application is structured Knowledge Base Population (KBP Surdeanu and Ji, 2014).",
"SF requires (1) disambiguation of the input entity and (2) acquiring relational knowledge for that entity.",
"For KILT, we model the input as a structured string subject entity [SEP] relation , the output as a list of equally-valid object-entities, each one accompanied with provenance where the subject-relation-object fact manifests.",
"Zero Shot RE (Levy et al., 2017) is a dataset designed to translate relation extraction into a reading comprehension problem.",
"We consider the open-domain version of this dataset and align the input/output with the KILT interface.",
"Additional details are in the appendix.",
"T-REx (Elsahar et al., 2018) provides a large-scale collection of facts aligned to sentences in Wikipedia abstracts through distant supervision.",
"We consider each sentence as provenance and formulate the input as above (details in the appendix).",
"Open domain Question Answering (Chen et al., 2017) is the task of producing the correct answer for a question, without a predefined location for the",
"answer.",
"Standard tasks such as SQuAD (Rajpurkar et al., 2016) provide an evidence document, but in open domain tasks, models must reason over an entire knowledge source (such as Wikipedia).",
"We consider the question as input and the answer as output with dataset-specific provenance .",
"Natural Questions (Kwiatkowski et al., 2019) is a corpus of real questions issued to the Google search engine.",
"Each question comes with an accompanied Wikipedia page with an annotated long answer (a paragraph) and a short answer (one or more entities).",
"We consider the open-version of the dataset and use both long and short answers spans as provenance .",
"We collaborated with the authors of Natural Questions to access a held out, unpublished portion of the original dataset to form a new test set for KILT.",
"By construction each QA pair is associated with a single Wikipedia page, although other pages might contain enough evidence to answer the question.",
"To increase the provenance coverage we perform an Amazon Mechanical Turk campaign for the dev and test sets and increase the average number of provenance pages per question from 1 to 1.57 (details in section 4).",
"HotpotQA (Yang et al., 2018) requires multihop reasoning over multiple Wikipedia pages to answer each question.",
"For each question-answer pair, a set of supporting sentences are provided, and we consider these as provenance .",
"We focus on the fullwiki setting, where systems are required to retrieve and reason over the whole Wikipedia.",
"TriviaQA (Joshi et al., 2017) is a collection of question-answer-evidence triples.",
"Evidence documents are automatically gathered from Wikipedia or the Web.",
"We consider only the Wikipedia case.",
"We use the answer span as provenance and consider the full version of the dev and test set.",
"ELI5 (Fan et al., 2019b) 8 is a collection of question-answer-evidence triples where the questions are complex, and the answers are long, explanatory, and free-form.",
"For dev and test, we collect annotations using Amazon Mechanical Turk, asking evaluators to select which supporting documents from Wikipedia can be used to answer the question.",
"We treat these as gold provenance annotations for evaluation (details in section 4).",
"Chitchat dialogue is the task of developing an engaging chatbot that can discuss a wide array of topics with a user, which often relies on topical, factual knowledge.",
"For example, it would be difficult to have a conversation about grayhounds without any information about that dog breed.",
"We consider the conversation history as input and the next utterance as output .",
"Wizard of Wikipedia (Dinan et al., 2019) is a large dataset of conversation grounded with knowledge retrieved from Wikipedia.",
"One speaker in the conversation must ground their utterances in a specific knowledge sentence, chosen from a Wikipedia page.",
"The chosen sentence forms the provenance for KILT.",
"We perform an Amazon Mechanical Turk campaign on the NQ and ELI5 datasets for the dev and test splits.",
"While for the NQ our aim is to increase the provenance coverage (i.e., we already have a provenance page for each qa pair) for ELI5 we want to collect provenance information from scratch.",
"For each question we ask annotators to indicate if four pre-determined passages contain enough evidence to answer the question and additionally highlight a salient span in them.",
"We select the passages to annotate using our baseline retrieval models, namely Tf-idf, DPR, RAG and BLINK + flair (details in the Appendix).",
"9 We only consider passages with some tokens overlap with the gold answers (at least 10%).",
"For NQ, we additionally include gold passages among those to annotate, with the twofold objective of controlling the quality of the annotation process and filter out questions that can't be an-8 https://yjernite.github.io/lfqa.html 9 for Tf-idf and BLINK + flair we consider the first passage in the retrieved page swered given the KILT Wikipedia snapshot.",
"10 If no passage is selected by an annotator we ask to provide either another one from Wikipedia or an explanation.",
"We collect three annotations for each passage, and insert the passage as new provenance for the question if at least two annotators found enough evidence to answer in it.",
"The average inter-annotator agreement is 0.3 and 0.1 Cohen's kappa for NQ and ELI5 respectively.",
"Note that ELI5 questions are in general more complex than NQ ones, the required answer is not an extracted span from a page but a free-form explanation that not always can be grounded in Wikipedia.",
"To make ELI5 data more robust we computed the overlap between provenance passages and answers for each instance using ROUGE-L and manually annotate instances with low overlap (ROUGE-L < 0.15).",
"Overall, we were able to collect provenance information for 1507 dev instances (3000 annotated) and 600 test instances (2000 annotated) for ELI5, with an average of 1.18 Wikipedia pages as provenance per instance.",
"For NQ, we filter out on average 8% of data (258 dev and 110 test instances) and include on average 1.57 Wikipedia pages as provenance per instance.",
"Additional details in the Appendix, table 6.",
"Various tasks in the KILT Benchmark need to be evaluated differently, which can make task-wide comparison challenging.",
"Further, there are multiple aspects of each system that we want to assess, namely (1) downstream results, (2) performance in retrieving relevant evidence to corroborate a prediction and (3) a combination of the two.",
"We report different metrics to capture these aspects.",
"11 Downstream performance.",
"We consider different metrics to capture the uniqueness of the different tasks in KILT and mimic the typical way to assess performance for each dataset.",
"We use Accuracy for tasks that require a discrete output (e.g., an entity); Exact Match (EM) for tasks with extractive (i.e., Natural Questions, TriviaQA) or short abstractive output format (i.e., HotpotQA); finally, for tasks with long abstractive output format, we use ROUGE-L (Lin, 2004) for ELI5 and F1-score for Wizard of Wikipedia.",
"For EM and F1-score we follow standard post-processing to lowercase, 10 we present passages in random order to the annotator to exclude biases.",
"strip articles, punctuation, and duplicate whitespace from gold and predicted output (Rajpurkar et al., 2016).",
"Note that Accuracy is equivalent to strict exact match, without post-processing.",
"We report additional metrics for some datasets in the appendix (Table 7-17).",
"Retrieval.",
"We adopt a page-level formulation and measure the ability of a model to provide a set of Wikipedia pages as evidence for a prediction.",
"12 For most datasets in KILT a single page is enough to provide complete evidence, with the exception of FEVER (~12% which requires more than one page) and HotpotQA (two pages are always required).",
"We consider the following retrieval metrics in KILT: R-precision , calculated as rR , where R is the number of Wikipedia pages inside each provenance set and r is the number of relevant pages among the topR retrieved pages.",
"For most of the datasets R = 1 and this formulation is equivalent to Precision@1 .",
"Concretely, R-precision=1 if all Wikipedia pages in a provenance set are ranked at the top.",
"We report the maximum value among all provenance sets for any given input.",
"Recall@k , calculated as wn , where n is the number of distinct provenance sets for a given input and w is the number of complete provenance sets among the topk retrieved pages.",
"For datasets that require more than one page of evidence (e.g., FEVER and HotpotQA), we use the lowest ranked page in each provenance set to determine its position and remove the other pages in the set from the rank.",
"For both metrics, we report the mean over all test datapoints.",
"KILT scores.",
"We propose a KILT version for downstream metrics that, inspired by the FEVER-score (Thorne et al., 2018a), takes into account the provenance supporting the output.",
"For each datapoint, we only award Accuracy, EM, ROUGE-L, and F1 points to KILT-AC , KILT-EM , KILT-RL and KILT-F1 respectively, if the R-precision is",
"1. This is equivalent to awarding points if the system finds (and ranks at the top) a complete set of provenance Wikipedia pages for at least one ground truth output given the input.",
"We choose this metric to emphasize that systems must be able to explain their output with proper evidence, not simply answer.",
"The KILT tasks provide a dual challenge of retrieving information and conditioning upon that to create an output.",
"Various directions could be applied to these.",
"For example, the Wikipedia knowledge could be represented explicitly , as natural language or in a structured form, or represented implicitly , as knowledge stored in model parameters.",
"Models could be discriminative , extractive , where a specific span is selected as output, or generative , where the model writes an output.",
"We consider retrieval, task-specific, and general baselines for KILT (see Table 2).",
"Additional details are in the appendix.",
"We summarize the main results in three tables: downstream performance in Table 3, retrieval in Table 4 and KILT scores in Table 5.",
"Additional results, as well as comparisons with recent works reported numbers, can be found in the appendix.",
"It's possible to get the performance of a system for the KILT test sets by uploading its predictions to our EvalAI challenge.",
"5 When considering downstream performance (Ta-ble 3), although pre-trained sequence-to-sequence models can embed knowledge implicitly in their parameters to some extent (Petroni et al., 2019; Roberts et al., 2020), they clearly lag behind models with explicit knowledge access in almost all datasets.",
"The BART+DPR baseline that incorporates an explicit retrieval step in addition to the generative pretraining, works well.",
"It outperforms some of the task-specific solutions, and gets close to others.",
"Performance are even stronger when the retriever and reader components are trained end-to-end, as in the case of RAG.",
"We find this a promising direction for knowledge intensive tasks.",
"By formulating Entity Linking within KILT, we can evaluate the ability of seq2seq models at this task.",
"They perform surprisingly well, even without any explicit access to knowledge (i.e., BART and T5).",
"These solutions are able to link entity mentions by either leaving them untouched (if they match the correct Wikipedia title), completely altering mention text (e.g., European Cup UEFA Champions League), or adding disambiguation tokens (e.g., Galatasaray Galatasaray S.K. (football)).",
"We report an example in Figure",
"4. When considering retrieval alone (Table 4) there is no clear winnerentity-centric tasks (Entity Linking and Slot Filling) clearly benefit from entity-based retrieval, while DPR works better for NQ, FEV and ELI5, that require more fine grained passages supervision.",
"We believe that combining all these ingredients (i.e., dense representations, fine grained supervision, entity awareness) will be necessary for general task-agnostic memories.",
"Moreover, jointly training a single DPR model on all KILT training data (Multi-task DPR) led to strong performance gains on all datasets compared with the original model (DPR), that considers only NQ and TQA as training data (Karpukhin et al., 2020).",
"This suggests synergies between KILT datasets that are beneficial in terms of model performance.",
"Finally, the KILT scores formulation allows us to systematically assesses the performance for output and provenance jointly (Table 5).",
"We don't report results for BART and T5 since answers are generated solely from the input with no explicit retrieval and there is no straightforward way to access provenance for each prediction.",
"The relative performance of the other baselines with respect to KILT scores is consistent with downstream results.",
"However, the generally low absolute numbers leave a large room for improvement for systems able to provide the correct output but also successfully justify their decision.",
"There are custom solutions that can easily simplify the slot filling task.",
"For instance, subject entities can be used for lookups by title in Wikipedia to retrieve knowledge (this heuristic will always work for zsRE), and structured human-curated resources (such as Wikidata 13 ) could be used to get all answers right.",
"Nevertheless, we are interested in testing if a general model can extract attributes about specific entities from a large body of text.",
"The provenance to justify each system prediction can come from anywhere, including a different system, and this is difficult to detect.",
"Moreover our provenance might not be exhaustivegiven the redundancy of information in Wikipedia there could be other pages with the knowledge needed to solve a KILT instance.",
"We conduct an annotation campaign to mitigate the problem.",
"Several natural language benchmarks have been introduced to track and support NLP progress, including natural language understanding (Wang et al., 2018, 2019), multitask question answering (Mc-Cann et al., 2018), reading comprehension (Dua et al., 2019), question understanding (Wolfson et al., 2020), and dialogue (Shuster et al., 2019).",
"We focus on multi-domain tasks that need to seek knowledge in a large body of documents to produce an output.",
"Although there exist several tasks and resources that define large-scale external knowledge sourcesincluding the TAC-KBP challenges (Mc-Namee and Dang, 2009; Ji et al., 2010; Surdeanu, 2013; Surdeanu and Ji, 2014), ARC (Clark et al., 13 https://www.wikidata.org 2018), TriviaQA-web (Joshi et al., 2017), Quasar-T (Dhingra et al., 2017), WebQuestions (Berant et al., 2013) and ComplexWebQuestions (Talmor and Berant, 2018)in KILT we exclusively consider publicly available Wikipedia-based datasets in order to merge and unify the knowledge source.",
"We introduce KILT, a benchmark for assessing models that need to condition on specific knowledge in a defined snapshot of Wikipedia to solve tasks spanning five domains.",
"The goal is to catalyze and facilitate research towards general and explainable models equipped with task-agnostic representations of knowledge.",
"Our experiments show promising results for a general solution combining dense retrieval and seq2seq generations, although there is large room for improvements.",
"In particular, we find that provenance of current models is generally low.",
"The authors would like to greatly thank the team behind Natural Questions 14 for the held out data, that defines our NQ test set; FEVER 15 , HotpotQA 16 and TriviaQA 17 teams for sharing official test data for the KILT leaderboard; Luke Zettlemoyer and Scott Wen-tau Yih for helpful discussions; Rishabh Jain for the help in setting up the EvalAI challenge."
] | [
"abstain",
"abstain",
"method",
"abstain",
"method",
"result",
"other",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"result",
"method",
"objective",
"method",
"objective",
"abstain",
"objective",
"objective",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"other",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"other",
"method",
"other",
"objective",
"abstain",
"result",
"result",
"other"
] |
[
"Named Entity Recognition (NER) performance often degrades rapidly when applied to target domains that differ from the texts observed during training.",
"When in-domain labelled data is available, transfer learning techniques can be used to adapt existing NER models to the target domain.",
"But what should one do when there is no hand-labelled data for the target domain?",
"This paper presents a simple but powerful approach to learn NER models in the absence of labelled data through weak supervision .",
"The approach relies on a broad spectrum of labelling functions to automatically annotate texts from the target domain.",
"These annotations are then merged together using a hidden Markov model which captures the varying accuracies and confusions of the labelling functions.",
"A sequence labelling model can finally be trained on the basis of this unified annotation.",
"We evaluate the approach on two English datasets (CoNLL 2003 and news articles from Reuters and Bloomberg) and demonstrate an improvement of about 7 percentage points in entity-level F 1 scores compared to an out-of-domain neural NER model.",
"Named Entity Recognition (NER) constitutes a core component in many NLP pipelines and is employed in a broad range of applications such as information extraction (Raiman and Raiman, 2018), question answering (Molla et al., 2006), document de-identification (Stubbs et al., 2015), machine translation (Ugawa et al., 2018) and even conversational models (Ghazvininejad et al., 2018).",
"Given a document, the goal of NER is to identify and classify spans referring to an entity belonging to pre-specified categories such as persons, organisations or geographical locations.",
"NER models often rely on convolutional or recurrent neural architectures, sometimes completed by a CRF layer (Chiu and Nichols, 2016; Lample et al., 2016; Yadav and Bethard, 2018).",
"More recently, deep contextualised representations relying on bidirectional LSTMS (Peters et al., 2018), transformers (Devlin et al., 2019; Yan et al., 2019) or contextual string embeddings (Akbik et al., 2019) have also been shown to achieve state-of-the-art performance on NER tasks.",
"These neural architectures require large corpora annotated with named entities, such as Ontonotes (Weischedel et al., 2011) or ConLL 2003 (Tjong Kim Sang and De Meulder, 2003).",
"When only modest amounts of training data are available, transfer learning approaches can transfer the knowledge acquired from related tasks into the target domain, using techniques such as simple transfer (Rodriguez et al., 2018), discriminative fine-tuning (Howard and Ruder, 2018), adversarial transfer (Zhou et al., 2019) or layer-wise domain adaptation approaches (Yang et al., 2017; Lin and Lu, 2018).",
"However, in many practical settings, we wish to apply NER to domains where we have no labelled data, making such transfer learning methods difficult to apply.",
"This paper presents an alternative approach using weak supervision to bootstrap named entity recognition models without requiring any labelled data from the target domain.",
"The approach relies on labelling functions that automatically annotate documents with named-entity labels.",
"A hidden Markov model (HMM) is then trained to unify the noisy labelling functions into a single (probabilistic) annotation, taking into account the accuracy and confusions of each labelling function.",
"Finally, a sequence labelling model is trained using a cross-entropy loss on this unified annotation.",
"As in other weak supervision frameworks, the labelling functions allow us to inject expert knowledge into the sequence labelling model, which is often critical when data is scarce or non-existent (Hu et al., 2016; Wang and Poon, 2018).",
"New labelling functions can be easily inserted to leverage the knowledge sources at our disposal for a given textual domain.",
"Furthermore, labelling functions can often be ported across domains, which is not the case for manual annotations that must be reiterated for every target domain.",
"The contributions of this paper are as follows: 1. A broad collection of labelling functions for NER, including neural models trained on various textual domains, gazetteers, heuristic functions, and document-level constraints.",
"2. A novel weak supervision model suited for sequence labelling tasks and able to include probabilistic labelling predictions.",
"3. An open-source implementation of these labelling functions and aggregation model that can scale to large datasets 1 .",
"Unsupervised domain adaptation: Unsupervised domain adaptation attempts to adapt knowledge from a source domain to predict new instances in a target domain which often has substantially different characteristics.",
"Earlier approaches often try to adapt the feature space using pivots (Blitzer et al., 2006, 2007; Ziser and Reichart, 2017) to create domain-invariant representations of predictive features.",
"Others learn low-dimensional transformation features of the data (Guo et al., 2009; Glorot et al., 2011; Chen et al., 2012; Yu and Jiang, 2016; Barnes et al., 2018).",
"Finally, some approaches divide the feature space into general and domain-dependent features (Daume III, 2007).",
"Multi-task learning can also improve cross-domain performance (Peng and Dredze, 2017).",
"Recently, Han and Eisenstein (2019) proposed domain-adaptive fine-tuning , where contextualised embeddings are first fine-tuned to both the source and target domains with a language modelling loss and subsequently fine-tuned to source domain labelled data.",
"This approach outperforms several strong baselines trained on the target domain of the WNUT 2016 NER task (Strauss et al., 2016).",
"Aggregation of annotations: Approaches that aggregate annotations from multiples sources have largely concentrated on noisy data from crowd sourced annotations, with some annotators possibly 1 https://github.com/NorskRegnesentral/ weak-supervision-for-NER .",
"being adversarial.",
"The Bayesian Classifier Combination approach of Kim and Ghahramani (2012) combines multiple independent classifiers using a linear combination of predictions.",
"Hovy et al. (2013) learn a generative model able to aggregate crowd-sourced annotations and estimate the trustworthiness of annotators.",
"Rodrigues et al. (2014) present an approach based on Conditional Random Fields (CRFs) whose model parameters are learned jointly using EM.",
"Nguyen et al. (2017b) propose a Hidden Markov Model to aggregate crowd-sourced sequence annotations and find that explicitly modelling the annotator leads to improvements for POS-tagging and NER.",
"Finally, Simpson and Gurevych (2019) proposed a fully Bayesian approach to the problem of aggregating multiple sequential annotations, using variational EM to compute posterior distributions over the model parameters.",
"Weak supervision: The aim of weakly supervised modelling is to reduce the need for hand-annotated data in supervised training.",
"A particular instance of weak supervision is distant supervision , which relies on external resources such as knowledge bases to automatically label documents with entities that are known to belong to a particular category (Mintz et al., 2009; Ritter et al., 2013; Shang et al., 2018).",
"Ratner et al. (2017, 2019) generalised this approach with the Snorkel framework which combines various supervision sources using a generative model to estimate the accuracy (and possible correlations) of each source.",
"These aggregated supervision sources are then employed to train a discriminative model.",
"Current frameworks are, however, not easily adaptable to sequence labelling tasks, as they typically require data points to be independent.",
"One exception is the work of Wang and Poon (2018), which relies on deep probabilistic logic to perform joint inference on the full dataset.",
"Finally, Fries et al. (2017) presented a weak supervision approach to NER in the biomedical domain.",
"However, unlike the model proposed in this paper, their approach relies on an ad-hoc mechanism for generating candidate spans to classify.",
"The approach most closely related to this paper is Safranchik et al. (2020), which describe a similar weak supervision framework for sequence labelling based on an extension of HMMs called linked hidden Markov models.",
"The authors introduce a new type of noisy rules, called linking rules, to determine how sequence elements should be grouped into spans of same tag.",
"The main differences be-x 1 y 1 h 1 x 2 y 2 h 2 x 3 y 3 h 3 x t y t h t ...",
"tween their approach and this paper are the linking rules, which are not employed here, and the choice of labelling functions, in particular the document-level relations detailed in Section 3.1.",
"Ensemble learning: The proposed approach is also loosely related to ensemble methods such bagging, boosting and random forests (Sagi and Rokach, 2018).",
"These methods rely on multiple classifiers run simultaneously and whose outputs are combined at prediction time.",
"In contrast, our approach (as in other weak supervision frameworks) only requires labelling functions to be aggregated once, as an intermediary step to create training data for the final model.",
"This is a non-trivial difference as running all labelling functions at prediction time is computationally costly due to the need to run multiple neural models along with gazetteers extracted from large knowledge bases.",
"The proposed model collects weak supervision from multiple labelling functions .",
"Each labelling function takes a text document as input and outputs a series of spans associated with NER labels.",
"These outputs are then aggregated using a hidden Markov model (HMM) with multiple emissions (one per labelling function) whose parameters are estimated in an unsupervised manner.",
"Finally, the aggregated labels are employed to learn a sequence labelling model.",
"Figure 1 illustrates this process.",
"The process is performed on documents from the target domain, e.g. a corpus of financial news.",
"Labelling functions are typically specialised to detect only a subset of possible labels.",
"For instance, a gazetteer based on Wikipedia will only detect mentions of persons, organisations and geographical locations and ignore entities such as dates or percents.",
"This marks a departure from existing aggregation methods, which are originally designed for crowd-sourced data and where annotators are supposed to make use of the full label set.",
"In addition, unlike previous weak supervision approaches, we allow labelling functions to produce probabilistic predictions instead of deterministic values.",
"The aggregation model described in Section 3.2 directly captures these properties in the emission model associated with each labelling function.",
"We first briefly describe the labelling functions integrated into the current system.",
"We review in Section 3.2 the aggregation model employed to combine the labelling predictions.",
"The final labelling model is presented in Section 3.3.",
"The complete list of 52 labelling functions employed in the experiments is available in Appendix A. 3.1 Labelling functions Out-of-domain NER models The first set of labelling functions are sequence labelling models trained in domains from which labelled data is available.",
"In the experiments detailed in Section 4, we use four such models, respectively trained on Ontonotes (Weischedel et al., 2011), CoNLL 2003 (Tjong Kim Sang and De Meulder, 2003) 2 , the Broad Twitter Corpus (Derczynski et al., 2016) and a NER-annotated corpus of SEC filings (Sali-nas Alvarado et al., 2015).",
"For the experiments in this paper, all aforementioned models rely on a transition-based NER model (Lample et al., 2016) which extracts features with a stack of four convolutional layers with filter size of three and residual connections.",
"The model uses attention features and a multi-layer percep-tron to select the next transition.",
"It is initialised with GloVe embeddings (Pennington et al., 2014) and implemented in Spacy (Honnibal and Montani, 2017).",
"However, the proposed approach does not impose any constraints on the model architecture and alternative approaches based on e.g. contextualised embeddings can also be employed.",
"Gazetteers As in distant supervision approaches, we include a number of gazetteers from large knowledge bases to identify named entities.",
"Concretely, we use resources from Wikipedia (Gei et al., 2018), Geonames (Wick, 2015), the Crunchbase Open Data Map, DBPedia (Lehmann et al., 2015) along with lists of countries, languages, nationalities and religious or political groups.",
"To efficiently search for occurrences of these entities in large text collections, we first convert each knowledge base into a trie data structure.",
"Prefix search is then applied to extract matches (using 2 The ConLL 2003 NER model is of course deactivated for the experimental evaluation on ConLL 2003. both case-sensitive and case-insensitive mode, as they have distinct precision-recall trade-offs).",
"Heuristic functions We also include various heuristic functions, each specialised in the recognition of specific types of named entities.",
"Several functions are dedicated to the recognition of proper names based on casing, part-of-speech tags or dependency relations.",
"In addition, we integrate a variety of handcrafted functions relying on regular expressions to detect occurrences of various entities (see Appendix A for details).",
"A probabilistic parser specialised in the recognition of dates, times, money amounts, percents, and cardinal/ordinal values (Braun et al., 2017) is also incorporated.",
"Document-level relations All labelling functions described above rely on local decisions on tokens or phrases.",
"However, texts are not loose collections of words, but exhibit a high degree of internal coherence (Grosz and Sidner, 1986; Grosz et al., 1995) which can be exploited to further improve the annotations.",
"We introduce one labelling function to capture label consistency constraints in a document.",
"As noted in (Krishnan and Manning, 2006; Wang et al., 2018), named entities occurring multiple times through a document have a high probability of belonging to the same category.",
"For instance, while Komatsu may both refer to a Japanese town or a multinational corporation, a text including this mention will either be about the town or the company, but rarely both at the same time.",
"To capture these non-local dependencies, we define the following label consistency model: given a text span e occurring in a given document, we look for all spans Z e in the document that contain the same string as e .",
"The (probabilistic) output of the labelling function then corresponds to the relative frequency of each label l for that string in the document: P doc majority ( e ) ( l ) = (cid:80) z Z e P label ( z ) ( l ) | Z e | (1) The above formula depends on a distribution P label ( z ) , which can be defined on the basis of other labelling functions.",
"Alternatively, a two-stage model similar to (Krishnan and Manning, 2006) could be employed to first aggregate local labelling functions and subsequently apply document-level functions on aggregated predictions.",
"Another insight from Grosz and Sidner (1986) is the importance of the attentional structure .",
"When introduced for the first time, named entities are often referred to in an explicit and univocal manner, while subsequent mentions (once the entity is a part of the focus structure) frequently rely on shorter references.",
"The first mention of a person in a given text is for instance likely to include the person's full name, and is often shortened to the person's last name in subsequent mentions.",
"As in Ratinov and Roth (2009), we determine whether a proper name is a substring of another entity mentioned earlier in the text.",
"If so, the labelling function replicates the label distribution of the first entity.",
"The outputs of these labelling functions are then aggregated into a single layer of annotation through an aggregation model .",
"As we do not have access to labelled data for the target domain, this model is estimated in a fully unsupervised manner.",
"Model We assume a list of J labelling functions { 1 , ... J } and a list of S mutually exclusive NER labels { l 1 , ...l S } .",
"The aggregation model is represented as an HMM, in which the states correspond to the true underlying labels.",
"This model has multiple emissions (one per labelling function) assumed to be mutually independent conditional on the latent underlying label.",
"Formally, for each token i { 1 , ..., n } and labelling function j , we assume a Dirichlet distribution for the probability labels P ij .",
"The parameters of this Dirichlet are separate vectors s i j RS [0 , 1] , for each of the latent states s i { 1 , ..., S } .",
"The latent states are assumed to have a Markovian dependence structure between the tokens { 1 , ..., n } .",
"This results in the HMM represented by a dependent mixtures of Dirichlet model: P ij | s i j ind Dirichlet (cid:16) s i j (cid:17) , (2) p ( s i | s i 1 ) = logit 1 (cid:16) ( s i ,s i 1 ) (cid:17) , (3) logit 1 (cid:16) ( s i ,s i 1 ) (cid:17) = e ( si,si 1) 1+ e ( si,si 1) .",
"(4) Here, ( s i ,s i 1 ) R are the parameters of the transition probability matrix controlling for a given state s i 1 the probability of transition to state s i .",
"Figure 2 illustrates the model structure.",
"Parameter estimation The learnable parameters of this HMM are",
"(a) the transition matrix between states and",
"(b) the vectors of the Dirichlet distribution associated with each labelling function.",
"The The plugged wells have ... s i 1 s i s i +1 s i +2 ... s i j P ij Labelling function j { 1 , ...J } Figure 2: Aggregation model using a hidden Markov model with multiple probabilistic emissions.",
"transition matrix is of size | S | | S | , while we have | S | | J | vectors, each of size | S | .",
"The parameters are estimated with the Baum-Welch algorithm, which is a variant of EM algorithm that relies on the forward-backward algorithm to compute the statistics for the expectation step.",
"To ensure faster convergence, we introduce a new constraint to the likelihood function: for each token position i , the corresponding latent label s i must have a non-zero probability in at least one labelling function (the likelihood of this label is otherwise set to zero for that position).",
"In other words, the aggregation model will only predict a particular label if this label is produced by least one labelling function.",
"This simple constraint facilitates EM convergence as it restricts the state space to a few possible labels at every time-step.",
"Prior distributions The HMM described above can be provided with informative priors.",
"In particular, the initial distribution for the latent states can be defined as a Dirichlet based on counts for the most reliable labelling function 3 : p ( s i ) d = Dirichlet ( ) .",
"The prior for each row k of the transition probabilities matrix is also a Dirichlet based on the frequencies of transitions between the observed classes for the most reliable labelling function k :",
"(The most reliable labelling function was found in our experiments to be the NER model trained on Ontonotes 5.0.)",
"Assuming we can provide rough estimates of the recall r jk and precision jk for the labelling function j on label k , the initial values for the parameters of the emission model are expressed as: s i jk (cid:40) r jk , if s i = k, (1 r s i k ) (1 jk ) k , if s i (cid:54) = k.",
"The probability of observing a given label k emitted by the labelling function j is thus proportional to its recall if the true label is indeed k .",
"Otherwise (i.e. if the labelling function made an error), the probability of emitting k is inversely proportional to the precision of the labelling function j .",
"Decoding Once the parameters of the HMM model are estimated, the forward-backward algorithm can be employed to associate each token marginally with a posterior probability distribution over possible NER labels (Rabiner, 1990).",
"Once the labelling functions are aggregated on documents from the target domain, we can train a sequence labelling model on the unified annotations, without imposing any constraints on the type of model to use.",
"To take advantage of the posterior marginal distribution p s over the latent labels, the optimisation should seek to minimise the expected loss with respect to p s : = arg min n (cid:88) i E y p s [ loss ( h ( x i ) , y )] (7) where h ( ) is the output of the sequence labelling model.",
"This is equivalent to minimising the cross-entropy error between the outputs of the neural model and the probabilistic labels produced by the aggregation model.",
"We evaluate the proposed approach on two English-language datasets, namely the CoNLL 2003 dataset and a collection of sentences from Reuters and Bloomberg news articles annotated with named entities by crowd-sourcing.",
"We include a second dataset in order to evaluate the approach with a more fine-grained set of NER labels than the ones in CoNLL 2003.",
"As the objective of this paper is to compare approaches to unsupervised domain adaptation, we do not rely on any labelled data from these two target domains.",
"CoNLL 2003 The CoNLL 2003 dataset (Tjong Kim Sang and De Meulder, 2003) consists of 1163 documents, including a total of 35089 entities spread over 4 labels: ORG , PER , LOC and MISC .",
"Reuters & Bloomberg We additionally crowd annotate 1054 sentences from Reuters and Bloomberg news articles from Ding et al. (2014).",
"We instructed the annotators to tag sentences with the following 9 Ontonotes-inspired labels: PERSON , NORP , ORG , LOC , PRODUCT , DATETIME , PERCENT , MONEY , QUANTITY .",
"Each sentence was annotated by at least two annotators, and a qualifying test with gold-annotated questions was conducted for quality control.",
"Cohen's for sentences with two annotators is 0.39, while Krippendorff's for three annotators is 0.44.",
"We had to remove QUANTITY labels from the annotations as the crowd results for this label were highly inconsistent.",
"Ontonotes-trained NER The first baseline corresponds to a neural sequence labelling model trained on the Ontonotes 5.0 corpus.",
"We use here the same model from Section 3.1, which is the single best-performing labelling function (that is, without aggregating multiple predictions).",
"We also experimented with other neural architectures but these performed similar or worse than the transition-based model, presumably because they are more prone to overfitting on the source domain.",
"Majority voting (MV) The simplest method for aggregating outputs is majority voting, i.e. outputting the most frequent label among the ones predicted by each labelling function.",
"However, specialised labelling functions will output O for most tokens, which means that the majority label is typically O .",
"To mitigate this problem, we first look at tokens that are marked with a nonO label by at least T labelling functions (where T is a hyper-parameter tuned experimentally), and then apply majority voting on this set of nonO labels.",
"Snorkel model The Snorkel framework (Ratner et al., 2017) does not directly support sequence labelling tasks as data points are required to be independent.",
"However, heuristics can be used to extract named-entity candidates and then apply labelling functions to infer their most likely labels (Fries et al., 2017).",
"For this baseline, we use the three functions nnp detector , proper detector and compound detector (see Appendix A) to generate candidate spans.",
"We then create a matrix expressing the output of each labelling function for each span (in-cluding a specific abstain value to denote the absence of prediction) and run the matrix-completion-style approach of Ratner et al. (2019) to aggregate the predictions from all functions.",
"mSDA is a strong domain adaptation baseline (Chen et al., 2012) which augments the feature space of a model with intermediate representations learned using stacked denoising autoencoders.",
"In our case, we learn the mSDA representations on the unlabeled source and target domain data.",
"These 800 dimensional vectors are concatenated to 300 dimensional word embeddings and fed as input to a two-layer LSTM with a skip connection.",
"Finally, we train the LSTM on the labeled source data and test on the target domain.",
"AdaptaBERT This baseline corresponds to a state-of-the-art unsupervised domain adaptation approach (AdaptaBERT) (Han and Eisenstein, 2019).",
"The approach first uses unlabeled data from both the source and target domains to domain-tune a pretrained BERT model.",
"The model is finally task-tuned in a supervised fashion on the source domain labelled data (Ontonotes).",
"At inference time, the model makes use of the pretraining and domain tuning to predict entities in the target domain.",
"In our experiments, we use the cased-version of the base BERT model and perform three fine-tuning epochs for both domain-tuning and task-tuning.",
"We additionally include an ensemble model, which averages the predictions of five BERT models fine-tuned with different random seeds.",
"Following the notation from Section 3.2, we define Y i,j,k = I ( P i,j,k = max k (cid:48) { 1 ,...,S } P i,j,k (cid:48) ) to be the most probable label for word i by source j .",
"One can model Y ij with a Multinomial probability distribution.",
"The first four baselines (the fifth one assumes Markovian dependence between the latent states) listed below use the following independent, i.e. p ( s i , s i 1 ) = p ( s i ) p ( s i 1 ) , mixtures of Multinomials model for Y ij : Y ij | p s i j ind Multinomial ( p s i j ) , s i ind Multinomial ( ) .",
"Confusion vector (CV) (Nguyen et al., 2017a) extends ACC by relying on separate success probabilities for each token label: p s i jk = (cid:40) jk , if s i = k, 1 jk J 1 s i (cid:54) = k.",
"Sequential Confusion Matrix (SEQ) extends the CM model of Simpson and Gurevych (2019), where an auto-regressive component is included in the observed part of the model.",
"We assume dependence on a covariate indicating that the label has not changed for a given source, i.e.: p s i jk = logit 1 ( s i jk + I ( Y Ti 1 ,j,k = Y Ti,j,k ) s i jk ) .",
"the CM-distinct accuracies conditional on the latent states of (8) and the Markovian dependence of (3).",
"The evaluation results are shown in Tables 1 and 2, respectively for the CoNLL 2003 data and the crowd-annotated sentences.",
"The metrics are the (micro-averaged) precision, recall and F 1 scores at both the token-level and entity-level.",
"In addition, we indicate the token-level cross-entropy error (in log-scale).",
"As the labelling functions are defined on a richer annotation scheme than the four labels of ConLL 2003, we map GPE to LOC and EVENT , FAC , LANGUAGE , LAW , NORP , PRODUCT and WORK OF ART to MISC .",
"The results for the ACC and CV baselines are not included as the parameter estimation did not converge and hence did not provide reliable posteriors over parameters.",
"entity-level F 1 from 0.702 to 0.716.",
"This highlights the importance of these relations in NER.",
"The last line of the two tables reports the performance of the sequence labelling model (Section 3.3) trained on the aggregated labels.",
"We observe that its performance remains close to the HMM-aggregated labels.",
"This shows that the knowledge from the labelling functions can be injected into a standard neural model without substantial loss.",
"Although not shown in the results due to space constraints, we also analysed whether the informative priors described in Section 3.2 influenced the performance of the aggregation model.",
"We found informative and non-informative priors to yield similar performance for CoNLL 2003.",
"However, the performance of non-informative priors was very poor on the Reuters and Bloomberg sentences ( F 1 at 0.12), thereby demonstrating the usefulness of informative priors for small datasets.",
"We provide in Figure 3 an example with a few selected labelling functions.",
"In particular, we can observe that the Ontonotes-trained NER model mistakenly labels Heidrun as a product.",
"This erroneous label, however, is counter-balanced by other labelling functions, notably a document-level function looking at the global label frequency of this string through the document.",
"We do, however, notice a few remaining errors, e.g. the labelling of Status Weekly as an organisation.",
"Figure 4 illustrates the pairwise agreement and disagreement between labelling functions on the CoNLL 2003 dataset.",
"If both labelling functions make the same prediction on a given token, we count this as an agreement, whereas conflicting predictions (ignoring O labels), are seen as disagreement.",
"Large differences may exist between these functions for specific labels, especially MISC .",
"The functions with the highest overlap are those making predictions on all labels, while labelling functions specialised to few labels (such as legal detector ) often have less overlap.",
"We also observe that the two gazetteers from Crunchbase and Geonames disagree in about 15% of cases, presumably due to company names that are also geographical locations, as in the earlier Komatsu example.",
"In terms of computational efficiency, the estimation of HMM parameters is relatively fast, requiring less than 30 mins on the entire CoNLL 2003 data.",
"Once the aggregation model is estimated, it Token-level Entity-level Model: P R F 1 CEE P R F 1 Ontonotes-trained NER 0.719 0.706 0.712 2.671 0.694 0.620 0.654 Majority voting (MV) 0.815 0.675 0.738 2.047 0.751 0.619 0.678 Confusion Matrix (CM) 0.786 0.746 0.766 1.964 0.713 0.700 0.706 Sequential Confusion Matrix (SEQ) 0.736 0.716 0.726 2.254 0.642 0.668 0.654 Dependent Confusion Matrix (DCM) 0.785 0.744 0.764 1.983 0.710 0.698 0.704 Snorkel-aggregated labels 0.710 0.661 0.684 2.264 0.714 0.621 0.664 mSDA (OntoNotes) 0.640 0.569 0.603 2.813 0.560 0.562 0.561 AdaptaBERT (OntoNotes) 0.693 0.733 0.712 2.280 0.652 0.736 0.691 AdaptaBERT (Ensemble) 0.704 0.754 0.729 2.103 0.684 0.743 0.712 HMM-agg.",
"can be directly applied to new texts with a single forward-backward pass, and can therefore scale to datasets with hundreds of thousands of documents.",
"This runtime performance is an important advantage compared to approaches such as AdaptaBERT (Han and Eisenstein, 2019) which are relatively slow at inference time.",
"The proposed approach can also be ported to other languages than English, although heuristic functions and gazetteers will need to be adapted to the target language.",
"This paper presented a weak supervision model for sequence labelling tasks such as Named Entity Recognition.",
"To leverage all possible knowledge sources available for the task, the approach uses a broad spectrum of labelling functions, including data-driven NER models, gazetteers, heuristic functions, and document-level relations between entities.",
"Labelling functions may be specialised to recognise specific labels while ignoring oth-Well repairs to lift Heidrun PRODUCT LOC oil output Statoil COMPANY .",
"Document level functions: doc majority uncased ; Aggregated predictions: HMM-aggregated model Figure 3: Extended example showing the outputs of 6 labelling functions, along with the HMM-aggregated model.",
"ers.",
"Furthermore, unlike previous weak supervision approaches, labelling functions may produce probabilistic predictions.",
"The outputs of these labelling functions are then merged together using a hidden Markov model whose parameters are estimated with the Baum-Welch algorithm.",
"A neural sequence labelling model can finally be learned on the basis of these unified predictions.",
"Evaluation results on two datasets (CoNLL 2003 and news articles from Reuters and Bloomberg) show that the method can boost NER performance by about 7 percentage points on entity-level F 1 .",
"In particular, the proposed model outperforms the unsupervised domain adaptation approach through contextualised embeddings of Han and Eisenstein (2019).",
"Of specific linguistic interest is the contribution of document-level labelling functions, which take advantage of the internal coherence and narrative structure of the texts.",
"functions in the aggregation model, as done in e.g. (Bach et al., 2017).",
"Furthermore, some of the labelling functions can be rather noisy and model selection of the optimal subset of the labelling functions might well improve the performance of our model.",
"Model selection approaches that can be adapted are discussed in Adams and Beling (2019); Hubin (2019).",
"We also wish to evaluate the approach on other types of sequence labelling tasks beyond Named Entity Recognition.",
"The research presented in this paper was conducted as part of the innovation project FinAI: Artificial Intelligence tool to monitor global financial mar-kets in collaboration with Exabel AS 4 .",
"Additionally, this work is supported by the SANT project (Sentiment Analysis for Norwegian Text), funded by the Research Council of Norway."
] | [
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"objective",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"abstain",
"other",
"other",
"method",
"other",
"other",
"objective",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"method",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"method",
"other",
"other"
] |
[
"Fake news detection is crucial for preventing the dissemination of misinformation on social media.",
"To differentiate fake news from real ones, existing methods observe the language patterns of the news post and zoom in to verify its content with knowledge sources or check its readers' replies.",
"However, these methods neglect the information in the external news environment where a fake news post is created and disseminated.",
"The news environment represents recent mainstream media opinion and public attention, which is an important inspiration of fake news fabrication because fake news is often designed to ride the wave of popular events and catch public attention with unexpected novel content for greater exposure and spread.",
"To capture the environmental signals of news posts, we zoom out to observe the news environment and propose the News Environment Perception Framework (NEP).",
"For each post, we construct its macro and micro news environment from recent mainstream news.",
"Then we design a popularity-oriented and a novelty-oriented module to perceive useful signals and further assist final prediction.",
"Experiments on our newly built datasets show that the NEP can efficiently improve the performance of basic fake news detectors.",
"1 1 Introduction The wide spread of fake news on online social media has influenced public trust (Knight Foundation, 2018) and poses real-world threats on politics (Fisher et al., 2016), finance (ElBoghdady, 2013), public health (Naeem and Bhatti, 2020), etc.",
"Under such severe circumstances, automatically detecting fake news has been an important countermeasure in practice.",
"Besides directly observing the post's content patterns (Volkova et al., 2017; Wang et al., 2018) (FigCorresponding author.",
"ure",
"1(a)), most existing methods for fake news detection zoom in for finding richer post-level signal by checking user replies to the post (Shu et al., 2019a; Zhang et al., 2021) and verifying the claim with knowledge sources (Popat et al., 2018; Wang et al., 2020) (Figure",
"1(b)).",
"However, these methods neglect a different line of zooming out to observe the external news environment where a fake news post is created and disseminated.",
"Our starting point is that a news environment, which represents recent mainstream media opinion and public attention, is an important inspiration of the fabrication of contemporary fake news.",
"Since any gains of fake news achieve only if it widely exposes and virally spreads, a fake news creator would carefully design how to improve the post's visibility and attract audiences' attention in the context (environ-ment) of recently published news.",
"Such intentional design connects fake news with its news environment and conversely, we might find useful signals from the news environment to better characterize and detect fake news.",
"Figure 2 shows an example, where we name the whole set of recent news items the macro news environment and the event-similar subset as the 4543 POST: Syriaannounced a48-hour ceasefire tocelebratethewin over ChinaMen'sNational FootballTeam.",
"micro news environment.",
"For the fake news post p on Syria's ceasefire thanks to a win over China in a football match, we observe two important signals from its news environments: 1) Popularity.",
"In the macro news environment that contains all recent news items, p is related to a relatively popular event (Syria-China football match) among the five events in different domains.",
"This would bring p greater exposure and further greater impact.",
"2) Novelty.",
"In the micro news environment, the items mostly focus on the game itself (e.g., Wu Lei had a shot), while p provides novel side information about Syria's unusual celebration.",
"This would help catch audiences' attention and boost the spread of p (Vosoughi et al., 2018).",
"Unfortunately, these potentially useful signals could be hardly considered by post-only and zoom-in methods, as they focus on digging in the direction towards inherent properties of a single post (e.g., styles, emotions and factual correct-ness), rather than observing the surrounding environments of the post.",
"To enable fake news detection systems to exploit information from news environments, we propose the News Environment Perception Framework (NEP).",
"As presented in Figure 3, for the post p , we construct two news environments, MACROENV and MICROENV , using recent mainstream news data to facilitate the perception from different views.",
"We then design a popularity-oriented and a novelty-oriented perception module to depict the relationship between p and these recent news items.",
"The environment-perceived vectors are fused into an existing fake news detector for prediction.",
"Our contributions are as follows: Problem : To the best of our knowledge, we are the first to incorporate news environment perception in fake news detection.",
"Method : We propose the NEP framework which exploits the perceived signals from the macro and micro news environments of the given post for fake news detection.",
"Data & Experiments : We construct the first dataset which includes contemporary mainstream news data for fake news detection.",
"Experiments on offline and online data show the effectiveness of NEP.",
"Fake news detection is mostly formulated as a binary classification task where models are expected to accurately judge the given post as real or fake.",
"Existing works focus on discovering distinctive features in the post from various aspects as Figure 2 shows, which we roughly group them as: Post-only methods aim at finding shared patterns in appearances across fake news posts (Fig-ure",
"1(a)).",
"Text-based studies focus on better constructing features based on sentiment (Ajao et al., 2019), writing style (Przybyla, 2020), language use (Volkova et al., 2017), discourse (Karimi and Tang, 2019), etc.",
"Other works rely on deep neural models to encode contents and handle certain scenarios, such as visual-based (Qi et al., 2019; Cao et al., 2020), multi-modal (Wang et al., 2018; Qi et al., 2021) and multi-domain (Nan et al., 2021) detection.",
"Our NEP provides additional news environmental information and can coordinate with post-only methods (will show in Section 4).",
"Zoom-in methods introduce related sources to understand the post delicately.",
"One line is to use social contexts (bottom of Figure",
"1(b)).",
"Some directly analyze the network information to find patterns shaped by user relationship and information diffusion (Shu et al., 2019b; Zhou and Zafarani, 2019; Nguyen et al., 2020; Silva et al., 2021), and others leverage collective wisdom reflected by user responses (Ma et al., 2018; Kochkina et al., 2018; Shu et al., 2019a; Zhang et al., 2021).",
"For example, a refuting reply saying FYI, this is false would be an important reference to make a prediction.",
"Another line refers to knowledge sources (top of Figure",
"1(b)) and aims at verifying the post with retrieved evidence for detection.",
"The knowledge sources can be webpages (Popat et al., 2018; Ma 4544 Figure 3: Architecture of the News Environment Perception Framework (NEP).",
"et al., 2019; Vo and Lee, 2021; Wu et al., 2021; Sheng et al., 2021b), knowledge graphs (Cui et al., 2020), online encyclopedias (Thorne et al., 2018; Aly et al., 2021), fact-checking article bases (Au-genstein et al., 2019; Shaar et al., 2020), etc.",
"Our NEP starts from a different view, for it zooms out to observe the news environment where the post spreads.",
"Note that our method is not equivalent to a knowledge-based method that uses news environments as evidence bases, as it does not pick evidential news items to prove or disprove the given post, but aims at reading the news atmosphere when the post is published.",
"In that sense, zoom-in and zoom-out methods can actually be integrated for comprehensively detecting fake news (will also show in Section 4).",
"Figure 3 overviews our proposed framework NEP, whose goal is to empower fake news detectors with the effective perception of news environments.",
"Given a post p , we first construct its macro and micro environment (MACROENV and MICROENV ) using recent news data.",
"Then we model the post-environment relationships to generate environment-perceived vectors v p,mac and v p,mic .",
"Finally, the two vectors are fused with post representation o derived from the fake news detector to predict if p is real or fake.",
"The environment is the objects, circumstances, or conditions by which one is surrounded (Merriam-Webster, 2021).",
"Accordingly, a news environment should contain news reports which can reflect the present distribution of mainstream focuses and audiences' attention.",
"To this end, we collect news items published by mainstream media outlets as basic environmental elements, in that their news reports generally face a large, common audience.",
"Let E be the set of all collected news items published earlier than p .",
"We construct a macro environment (MACROENV ) and a micro environment (MICROENV ), which are defined as follows: MACROENV is the set of news items in E released within T days before p is published: E mac = { e : e E , 0 < t p t e T } , (1) where t p and t e respectively denote the publication date of p and the news item e .",
"MICROENV is the set of news items in E mac that are relevant to p .",
"Here, we query E mac using p and obtain the top k as the set: E mic = { e : e Topk( p, E mac ) } , (2) 4545 where k = r |E mac | and r (0 , 1) determines the proportion.",
"Intuitively, the time-constrained environment MACROENV provides a macro perspective of what the mass audience read and focus on recently, while the further relevance-constrained one MICROENV describes the distribution of items about similar events.",
"We use a pretrained language model M (e.g., BERT (Devlin et al., 2019)) to obtain the post/news representation.",
"For p or each item in the macro/micro environment e , the initial representation is the output of M for the [ CLS ] token: p = M ( p ) , e = M ( e ) .",
"The perception of news environments of p is to capture useful signals from existing mainstream news items.",
"The signals are expected to discover unique post-environment interactive patterns of fake news.",
"Starting from the motivation of fake news creators to widely diffuse fabricated information to the whole online news ecosystem, we guide the model to perceive from two important diffusion-related perspectives, i.e., popularity and novelty, in the MACROENV and the MICROENV .",
"Popularity-Oriented MACROENV Perception.",
"A fabricated post would be more likely to go viral and thus gain more influence when it is related to trending news.",
"Thus, a fake news creator might consider how to chase clouts of hot events during writing a fake news post.",
"Here we consider how popular the main event of p is in the MACROENV .",
"We transform the perception of popularity into the similarity estimation between p and individual news items.",
"That is, if many items in the MACROENV are similar to p , then p might be also popular in such an environment.",
"Following (Reimers and Gurevych, 2019), we first calculate cosine similarity between p and each news item (say, i ) in E mac : s ( p , e i ) = p e i p e i .",
"The similarity list { cos ( p , e i ) } |E mac | i =1 of variable length |E mac | does not work well with networks mostly taking fixed-dimensional vectors as inputs.",
"Thus, the list requires a further transformation, where we expect the transformed environment-perceived vector to reflect how similar p is to the environment without much information loss.",
"Following (Xiong et al., 2017; Liu et al., 2020), we here choose to calculate a soft counting on the list to obtain a distribution that mimics a hard bin plot.",
"Specifically, we employ a Gaussian Kernel Pooling proposed in (Xiong et al., 2017) across the range of cosine similarity to get soft counting values.",
"Assuming that we use C kernels { K i } Ci =1 , the output of k -th kernel is: K ik = exp (cid:32) ( s ( p , e i ) k ) 2 2 2 k (cid:33) , (5) K k ( p , E mac ) = |E mac | (cid:88) i =1 K ik , (6) where k and k is the mean and width of the k th kernel.",
"In Eq.",
"(5), if the similarity between p and e is close to k , the exponential term will be close to 1; otherwise to 0.",
"We then sum the exponential terms with Eq.",
"(6).",
"This explains why a kernel is like a soft counting bin of similarities.",
"We here scatter the means { k } Ck =1 of the C kernels in [ 1 , 1] to completely and evenly cover the range of cosine similarity.",
"The widths are controlled by { k } Ck =1 .",
"Appendix B.1 provides the details.",
"AC -dim similarity feature in the MACROENV is obtained by concatenating all kernels' outputs and normalizing with the summation of the outputs: K ( p , E mac )=Norm (cid:32) C (cid:77) k =1 K k ( p , E mac ) (cid:33) , (7) where (cid:76) is the concatenation operator and Norm( ) denotes the normalization.",
"By calculating K ( p , E mac ) , we obtain a soft distribution of similarities between p and the MACROENV as the perception of popularity.",
"To enrich the perceived information, we generate the MACROENV -perceived vector for p by fusing the similarity and semantic information.",
"Specifically, we aggregate the post vector, the center vector of the MACROENV m ( E mac ) (by averaging all vec-tors), and the similarity feature using an MLP: v p,mac =MLP( p m ( E mac ) K ( p , E mac )) .",
"Novelty-Oriented MICROENV Perception.",
"Different from MACROENV , MICROENV contains mainstream news items close to p , which indicates that they are likely to share similar events.",
"However, even in a popular event, a post may still be not attended if it is too similar to others.",
"Vosoughi et al. (2018) found that false news was more novel than true news on Twitter with the reference to the 4546 tweets that the users were exposed to (could be regarded as a user-level news environment).",
"This might explain why fake news spread better.",
"We thus consider how novel p is in the event-similar MICROENV .",
"2 If the content of a post is novel, it is expected to be an outlier in such an event.",
"Here, we use the center vector m ( E mic ) of MICROENV as a reference.",
"Specifically, we again use Eqs.",
"(5) to (7), but here, calculate two similarity features K ( p , E mic ) and K ( m ( E mic ) , E mic ) .",
"The latter serves as a reference for the former and facilitates the model cal-ibrate its perception.",
"The generation of the MICROENV -perceived vector for p is as follows: u sem = MLP( p m ( E mic )) , (9) u sim =MLP(g( K ( p , E mic ) , K ( m ( E mic ) , E mic ))) , (10) v p,mic = MLP( u sem u sim ) , (11) where the comparison function g( x , y ) = ( x y ) ( x y ) and is the Hadamard product operator.",
"u sem and u sim respectively aggregate the semantic and similarity information.",
"The MLP s are individually parameterized.",
"We omit their index numbers in the above equations for brevity.",
"As our environment perception does not necessarily depend on a certain detection model, we expect our NEP to have a good compatibility with various fake news detectors.",
"In our NEP, we achieve this by gate fusion.",
"Take a post-only detector as an example.",
"We apply the gate mechanism for adaptively fusing v p,mac and v p,mic according to o : v p = g v p,mac + ( 1 g ) v p,mic , (12) where the gating vector g = sigmoid(Linear( o v p,mac )) , sigmoid is to constrain the value of each element in [0 , 1] , and o denotes the last-layer feature from a post-only detector.",
"3 o and v p are further fed into an MLP and a softmax layer for final prediction: y = softmax(MLP( o v p )) .",
"For example, we can concatenate v p with the post-article joint representation if the fake news detector is knowledge-based.",
"During training, we minimize the cross-entropy loss.",
"EQ1: Can NEP improve the performance of fake news detection?",
"EQ2: How effective does the NEP model the macro and micro news environments?",
"EQ3: In what scenarios do news environments help with fake news detection?",
"We integrated existing datasets in Chinese and English and then collected news items released in the corresponding time periods.",
"The reasons why we do not use a single, existing dataset include 1) no existing dataset provides the contemporary news items of verified news posts to serve as the elements in news environments; 2) most datasets were collected in a short time period and some suffer from a high class imbalance across years.",
"4 The statistics are shown in Table 1 and the details are as follows: Chinese Dataset Post: We merged the non-overlapping parts of multiple Weibo datasets from (Ma et al., 2016) (excluding those unverified), (Song et al., 2019), (Zhang et al., 2021) and (Sheng et al., 2021a) to achieve a better coverage of years and avoid spurious correlation to specific news environments (e.g., one full of COVID-19 news).",
"To balance the post amount of real/fake classes across the years, we added news posts verified by a news verification system NewsVerify 5 and resampled the merged 4 For example, Weibo-20 (Zhang et al., 2021) is roughly balanced as a whole but has a ratio of 5.2:1 between real and fake news samples in 2018.",
"News Environment: We collected the news items from the official accounts of six representative mainstream news outlets that have over 30M followers on Weibo (see sources in Appendix A).",
"The further post-processing resulted in 583,208 news items from 2010 to 2021.",
"Post: Similarly, we merged the datasets from (Kochkina et al., 2018) (excluding unverified), (Au-genstein et al., 2019) (excluding those without claim dates), and (Shaar et al., 2020).",
"For posts or claims from fact-checking websites, we used the provided claim dates instead of the publication dates of the fact-checking articles, to avoid potential data contamination where the later news environment is more likely to contain corresponding fact-checking news and support direct fact verification.",
"We obtained 6,483 posts from 2014 to 2018 after dropping the posts labeled as neutral and re-sampling.",
"News Environment: We use news headlines (plus short descriptions if any) from Huffington Post, NPR, and Daily Mail as the substitute of news tweets due to the Twitter's restriction (see sources in Appendix A).",
"The bias rates of the three outlets are respectively left, center, and right according to AllSides Media Bias Chart 6 , for enriching the diversity of news items.",
"We preserved the news headlines from 2014 to 2018 and obtained a set of 1,003,646 news items.",
"Base Models Technically, our NEP could coordinate with any fake news detectors that produce post representation.",
"Here we select four post-only methods and two zoom-in (knowledge-based) methods as our base models.",
"7 Post-Only: 1) Bi-LSTM (Graves and Schmid-huber, 2005) which is widely used to encode posts in existing works (Shu et al., 2019a; Karimi and Tang, 2019); 2) EANNT (Wang et al., 2018) which uses adversarial training to remove event-specific features obtained from TextCNN (Kim, 2014); 3) BERT (Devlin et al., 2019); 4) BERT-Emo (Zhang et al., 2021) which fuses a series of emotional features with BERT encoded features for classification (publisher emotion version).",
"8 Zoom-in: 1) DeClarE (Popat et al., 2018) which considers both the post and retrieved documents as possible evidence; 2) MAC (Vo and Lee, 2021) which build a hierarchical multi-head attention network for evidence-aware detection.",
"Implementation Details We obtained the sentence representation from SimCSE (Gao et al., 2021) based on pretrained BERT models in the Transformers package (Wolf et al., 2020) 9 and were 7 We do not select social context-based methods because it would be impractical to integrate our NEP with them at the cost of timeliness, for the model has to wait for the accumulation of user responses/reposts.",
"We suppose that an asynchronous integration at the system level (using post-only/knowledge-based methods with NEP to obtain instant predictions, and update the results later) would be an option, which is beyond our scope.",
"8 As our work is based on the post text, we use the text-only variant of the original EANN that excludes the image modality and the publisher-emotion-only variant in (Zhang et al., 2021) that excludes the social emotion features.",
"9 bert-base-chinese and bert-base-uncased 4548 Table 3: Performance comparison of the NEP and its variants without the fake news detector or without the environment perception module.",
"post-trained on collected news items.",
"We frozed SimCSE when training NEP.",
"For DeClarE and MAC, we prepared at most five articles in advance as evidence for each post by retrieving against fact-checking databases.",
"10 In environment modeling, T = 3 , r = 0 .",
"1 , and C = 22 .",
"We limit |E mac | 10 .",
"We implemented all methods using PyTorch (Paszke et al., 2019) with AdamW (Loshchilov and Hutter, 2019) as the optimizer.",
"We reported test results w.r.t. the best validation epoch.",
"Appendix B provides more implementation details.",
"Evaluation Metrics.",
"As the test sets are roughly balanced, we here report accuracy (Acc.), macro F1 score (macF1) and the F1 scores of fake and real class (F1 fake and F1 real ).",
"We will use a new metric for skewed test data (see Section 5).",
"Table 2 shows the performance of base models with and without the NEP on the two datasets.",
"We have the following observations: First, with the help of our NEP, all six base models see an performance improvement in terms of accuracy and macro F1.",
"This validates the effectiveness and compatibility of NEP.",
"Second, for post-only methods, F1 fake generally benefits more than F1 real when using NEP, which indicates that news environments might be more helpful in highlighting the characteristics of fake news.",
"This is a practical property of the NEP as we often focus more on the fake news class.",
"10 We attempted to collect webpages using our posts as queries as Popat et al. (2018) did but rare ones could serve as evidence except fact-checking articles.",
"As an alternative, we directly used articles from (Sheng et al., 2021a) for Chinese and collected ~8k articles from a well-known fact-checking website Snopes.com for English.",
"(b) Day Difference T Figure 4: Effects of",
"(a) the proportion factor r and",
"(b) the day difference T .",
"Lines show the accuracies and bars show the average numbers of news items in the micro/macro environments.",
"Third, the zoom-in knowledge-based methods outperform their corresponding post-only base model (here, Bi-LSTM) with the help of relevant articles, but the improvement is small.",
"This might be led by the difficulty of finding valuable evidence.",
"Our NEP brings additional gains, indicating that the information perceived from news environments is different from verified knowledge, and they play complementary roles.",
"Ablation Study.",
"We have two ablative groups as shown in Table 3: w/o Fake News Detector : We directly use one of the two environment-perceived vectors or both to see whether they can work when not cooperating with the fake news detector's output o .",
"The macro F1 scores on both datasets indicate their moderate effectiveness as sole inputs, and that coordinating with a post-only detector is a more practical setting.",
"w/o Environment Perception Modules : By respectively removing MACROENV and MICROENV from the best-performing models BERT-Emo+NEP and DeClarE+NEP, we see a performance drop in macro F1 when removing either of them, indicating 4549 Figure 5: Categories of MACROENVand MICROENV preferred samples.",
"Effects of the proportion factor r for the MICROENV .",
"We adjusted r from 0.05 to 0.30 with a step of 0.05 on BERT-Emo+NEP to see the impact of the scale of the MICROENV ( T = 3 ).",
"As Figure",
"4(a) shows, the change of r leads to an increase on the size of the MICROENV , but only fluctuations w.r.t. the accuracy.",
"We do not see significant improvement after r = 0 .",
"1 .",
"We speculate that a too small r may hardly cover enough event-similar items while a large r may include much irrelevant information, bringing little gains (e.g., r = 0 . 3 in Chinese) or even lowering the performance (e.g., r = 0 . 15 for both datasets).",
"Effects of the day difference T for the MACROENV .",
"We set T = 1 , 3 , 5 , 7 , 9 on BERT-Emo+NEP to see how many days of news items to be considered is proper ( T = 0 exactly corresponds to the base model).",
"Figure",
"4(b) shows a tendency similar to",
"(a).",
"We find the highest accuracy when T = 3 on both of the two datasets.",
"This is reasonable as the popularity should be considered in a moderately short time interval to allow the events to develop but not to be forgotten.",
"Categorization of macroand micro-preferred samples.",
"We selected the top 1% of Chinese fake news samples which NEP relies more on MACROENV or MICROENV according to the gate vectors.",
"Then we manually categorized these samples to probe what information the macro/micro environment might provide.",
"From Figure 5, we see that MACROENV is more useful for samples about natural disasters and accidents (e.g., earthquakes and air crashes), while MICROENV works effectively in Society & Life (e.g., robbery and education).",
"This is in line with our intuition: MACROENV -preferred fake news posts are often related to sensational events, so the popularity in MACROENV would help more; and MICROENV preferred ones are often related to common events in daily news, and thus its novelty in MICROENV would be highlighted.",
"This analysis would deepen our understanding on the applicability of different news environments.",
"Case study.",
"Figure 6 shows three fake news cases in different scenarios.",
"Case",
"(a) relies more on MICROENV than MACROENV .",
"We can see moderate popularity of its event about Huawei but the message about HarmonyOS is novel among the items on the 5G and cooperations.",
"In contrast, the admit card in case",
"(b) is moderately novel but Gaokao is the most popular event, so the NEP puts higher weight on MACROENV .",
"Case",
"(c) is a popular and novel fake news about Japan's great healthcare for citizens coming back from Wuhan which is posted during the first round of COVID-19 pandemic in China.",
"The exploitation of both-side information makes a tie between the two environments.",
"These cases intuitively show how NEP handles different scenarios.",
"We incorporate further analysis on the case that the news environment might be ineffective in Appendix D. 5 Discussion in Practical Systems Evaluation on skewed online data.",
"We tested BERT-Emo and BERT-Emo+NEP on a dump of seven-month data from a Chinese fake news detection system.",
"Different from offline datasets, this real-world set is highly skewed (30,977 real vs. 309 fake, roughly 100:1).",
"11 Under such skewed circumstance, some metrics we used in Tables 2 and 3 could hardly show the differences of performances among models (e.g., a model predicting all samples as real will have an incredible accuracy of 0.990).",
"Here, we report macro F1 and standardized partial AUC with false positive rate of at most 0.1 (spAUC FPR 0 . 1 , McClish, 1989, see Appendix C for the calculation detail) under different real/fake ratios (from 10:1 to 100:1).",
"As shown in Figure 7, NEP brings relative improvements of 16.89% and 5.20% in macF1 and spAUC FPR 0 .",
"1 , showing its effectiveness in skewed, real scenarios.",
"Friendliness to Practical Systems.",
"The NEP is not only a new direction for fake news detection but also inherently friendly to practical systems: 1) Timeliness.",
"Our NEP works instantly as it only requires the post and mainstream news published a few days before.",
"In practice, a system would not 11 The online test set and the offline sets do not intersect.",
"*Gaokao:National College Entrance Examination in China.",
"Figure 6: Three fake news cases with different preferences on environmental information.",
"Underlined regular words hit the keywords in the MACROENV and underlined italic words are related to the MICROENV .",
"Keywords are extracted using TextRank (Mihalcea and Tarau, 2004).",
"construct the required collection on demand but prepare it ahead by maintaining a queue of news items.",
"2) Compatibility.",
"Our perception module can be integrated with existing methods, which we validated on six representative ones (Table 2).",
"3) Data Accessibility.",
"The data to construct news environments is easy to access, especially compared with obtaining credible knowledge sources.",
"The advantages may encourage the deployment of NEP into practical systems.",
"We proposed the NEP to observe news environments for fake news detection on social media.",
"We designed popularityand novelty-oriented perception modules to assist fake news detectors.",
"Experiments on offline and online data show the effectiveness of NEP in boosting the performance of existing models.",
"We drew insights on how NEP help to interpret the contribution of macro and micro environment in fake news detection.",
"As this is the first work on the role of news environments for fake news detection, we believe further exploration is required for a deeper understanding of the effects of news environments and beyond.",
"In the future, we plan to explore: 1) including historical news or background to handle posts weakly related to the present environment; 2) modeling post-environment relationships with diverse similarity metrics or even from other perspectives; 3) investigating the effects of different news environments (e.g., biased vs. neutral ones) to make the environment construction more principled; 4) extending this type of methodology from the text-only detection to multi-modal and social graph-based detection.",
"The authors thank Guang Yang, Peng Qi, Zihao He, and anonymous reviewers for their insightful comments.",
"This work was supported by the Zhe-jiang Provincial Key Research and Development Program of China (No. 2021C01164).",
"Application.",
"Our framework does not present direct societal consequence and is expected to benefit the defense against the fake news issue.",
"It can serve as a detection module for fake news detection systems, especially when the given post is closely related to the events that happened recently, with no need to wait for the accumulation of user responses or query to knowledge sources.",
"Due to the requirement of real-time access to open news sources (source list can be determined as needed), 4551 it might be easier to deploy for service providers (e.g., news platforms) and media outlets.",
"Data.",
"Our data is mostly based on existing datasets, except the news items for constructing news environments.",
"All news items (or headlines) are open and accessible to readers and have no issues with user privacy.",
"The media outlets in the English dataset might be considered biased, so we carefully select a left, a center, and a right outlet (whose headlines are available) according to the AllSides Media Bias Chart.",
"In China, a media outlet might be state-run (e.g., CCTV News), local-government-run (e.g., The Paper), or business-run (e.g., Toutiao News).",
"With no widely recognized bias chart of Chinese media as a reference, we select media outlets based on their influence (e.g., number of followers) on Weibo from the three categories for the sake of representativeness."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"method",
"abstain",
"objective",
"objective",
"objective",
"objective",
"other",
"objective",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"objective",
"other",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"objective",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"result",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"other",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"objective",
"objective",
"other",
"other",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain"
] |
[
"To alleviate the data scarcity problem in training question answering systems, recent works propose additional intermediate pre-training for dense passage retrieval (DPR).",
"However, there still remains a large discrepancy between the provided upstream signals and the downstream question-passage relevance, which leads to less improvement.",
"To bridge this gap, we propose the H yper L ink-induced P re-training (HLP), a method to pre-train the dense retriever with the text relevance induced by hyperlink-based topology within Web documents.",
"We demonstrate that the hyperlink-based structures of dual-link and co-mention can provide effective relevance signals for large-scale pre-training that better facilitate downstream passage retrieval.",
"We investigate the effectiveness of our approach across a wide range of open-domain QA datasets under zero-shot, few-shot, multihop, and out-of-domain scenarios.",
"The experiments show our HLP outperforms the BM25 by up to 7 points as well as other pre-training methods by more than 10 points in terms of top-20 retrieval accuracy under the zero-shot scenario.",
"Furthermore, HLP significantly outperforms other pre-training methods under the other scenarios.",
"Open-domain question answering (OpenQA) aims to answer factual open questions with a large external corpus of passages.",
"Current approaches to OpenQA usually adopt a two-stage retriever-reader paradigm (Chen et al., 2017; Zhu et al., 2021) to fetch the final answer span.",
"The performance of OpenQA systems is largely bounded by the retriever as it determines the evidential documents for the reader to examine.",
"Traditional retrievers, such as TF-IDF and BM25 (Robertson and Zaragoza, 2009), are considered incapable of adapting to sce-Our code and trained models are available at https: //github.com/jzhoubu/HLP .",
"narios where deep semantic understanding is required.",
"Recent works (Lee et al., 2019; Karpukhin et al., 2020; Qu et al., 2021) show that by fine-tuning pre-trained language models on sufficient downstream data, dense retrievers can significantly outperform traditional term-based retrievers.",
"Considering the data-hungry nature of the neural retrieval models, extensive efforts (Lee et al., 2019; Chang et al., 2019; Sachan et al., 2021) have been made to design self-supervised tasks to pre-train the retriever.",
"However, these pre-training tasks construct relevance signals largely depending on easily attainable sentence-level or document-level contextual relationships.",
"For example, the relationship between a sentence and its originated context (shown by the ICT query in Figure 1) may not be sufficient enough to facilitate question-passage matching for the tasks of OpenQA.",
"We also find that these pre-trained retrievers still fall far behind BM25 in our pilot study on the zero-shot experiment.",
"In order to address the shortcomings of the matching-oriented pre-training tasks as mentioned above, we propose a pre-training method with better surrogates of real natural question-passage (Q-P) pairs.",
"We consider two conditions of relevance within Q-P pairs, which is similar to the process of distantly supervised retriever learning (Mintz et al., 2009; Chen et al., 2017).",
"1) Evidence Existence The evidence, such as entities and their corresponding relations, should exist across the query and the targeted passage as they both discuss similar facts or events related to the answer.",
"2) Answer Containing The golden passage should contain the answer of the query, which means that a text span within the passage can provide the information-seeking target of the query.",
"In this paper, we propose H yper L ink-induced P re-training (HLP), a pre-training method to learn effective Q-P relevance induced by the hyperlink topology within naturally-occurring Web documents.",
"Specifically, these Q-P pairs are automatically extracted from the online documents with relevance adequately designed via hyperlink-based topology to facilitate downstream retrieval for question answering.",
"Figure 1 shows an example of comparison between the human-written query and different pseudo queries.",
"By the guidance of hyperlinks, our HLP query hold the relevance of answer containing with the passage (query title occurs in the passage).",
"Meanwhile, the HLP query can introduce far more effective relevance of evidence existence than other pseudo queries by deeply mining the hyperlink topology, e.g., the dual-link structure.",
"In figure 1, both HLP query and the passage both contain information corresponding to the same fact of Mitja Okorn directed the film of Letters to Santa .",
"This makes our pseudo query low-cost and a good surrogate for the manually written query.",
"Our contributions are two-fold.",
"First, we present a hyperlink-induced relevance construction methodology that can better facilitate downstream passage retrieval for question answering, and specifically, we propose a pre-training method: Hyperlink-induced Pre-training (HLP).",
"Second, we conduct evaluations on six popular QA datasets, investigating the effectiveness of our approach under zero-shot, few-shot, multi-hop, and out-of-domain (OOD) scenarios.",
"The experiments show HLP outperforms BM25 in most of the cases under the zero-shot scenario and other pre-training methods under all scenarios.",
"Dense Retriever Pre-training Previous works have attempted to conduct additional pre-training for dense retrievers on various weakly supervised data.",
"Borisov et al. (2016) and Dehghani et al. (2017) pre-trained ranking models on click-logs and BM25-induced signals respectively for web search.",
"Lee et al. (2019) proposed the inverse cloze task (ICT) to pre-train a dense retrieval model, which randomly selects sentences as pseudo queries, and matched them to the passages that they originate from.",
"Besides, Chang et al. (2019) proposed the pre-training task of wiki link prediction (WLP) and body first selection (BFS) tasks.",
"Similar to our work, the WLP task also leveraged the hyperlinks within Wikipedia to construct relevant text pairs.",
"However, as shown in figure 1, the WLP pseudo query can only ensure the weak doc-wise contextual relationship with the passage.",
"Guu et al. (2020) proposed the masked-salient-span pre-training task which optimizes a retrieval model by the distant supervision of language model objective.",
"As a follow-up, Sachan et al. (2021) combined ICT with the masked-salient-span task and further improved the pre-training effectiveness.",
"Data Augmentation via Question Generation Ma et al. (2021), Reddy et al. (2021) and Oguz et al. (2021) all investigate training a dense retriever on questions synthesized by large question generative (QG) models.",
"Targeting on the zero-shot setting, Ma et al. (2021) trained a question generator on general-domain question passage pairs from community platforms and publicly available academic datasets.",
"Reddy et al. (2021) focused more on domain transfer and trained the QG model on QA datasets of Wikipedia articles.",
"Oguz et al. (2021) uses the synthetically generated questions from PAQ dataset (Lewis et al., 2021) and the post-comment pairs from dataset of Reddit conversations for retrieval pre-training.",
"Recently, Shinoda et al. (2021) reveals that the QG models tend to generate questions with high lexical overlap which amplify the bias of QA dataset.",
"Different to these studies, our method focuses on a more general setting where the retriever is only trained with the naturally occurring web documents, and has no access to any downstream datasets.",
"In this section, we firstly discuss the background of OpenQA retrieval, then our methodology and training framework.",
"Passage Retrieval Given a question q , passage retrieval aims to provide a set of relevant passages p from a large corpus D .",
"Our work adopts Wikipedia as source corpus and each passage is a disjoint segment within a document from D .",
"OpenQA Q-P Relevance For OpenQA, a passage p is considered relevant to the query q if p conveys similar facts and contains the answer to q .",
"These two conditions of relevance, namely evidence existence and answer containing, are properly introduced into the HLP Q-P pairs under the guidance of desired hyperlink structure.",
"We will discuss more in this section.",
"To better formulate the relevance of pseudo Q-P pairs, we denote the sequence of passages within a document as A = [ a 1 , a 2 , ..., a n A ] where A D .",
"The corresponding topical entity and the title of document A and its passage splits are denoted as e A and t A , respectively.",
"We use m A to indicate a mention of entity e A , which is a hypertext span linking to document A .",
"Note that the mention span m A is usually identical to the document title t A or a variant version of it.",
"Further, we define F ( p ) as the entity-level factual information conveyed by the passage p , which is a set consists of the topical entity e P and the entities mentioned within passage p .",
"Evidence Existence in HLP With appropriately designed hyperlink topologies, our HLP Q-P pairs guarantee the co-occurrence of entities which are presented as hypertext or topics in q and p .",
"This is considered as evidence across the Q-P pairs: F ( q ) F ( p ) = (1) Furthermore, we conjecture that HLP is more likely to achieve fact-level relevance than entity-level overlap.",
"We conduct human evaluation in Section 6.3 and case studies in Appendix G to support this conjecture.",
"Moreover, we demonstrate that any Q-P pair containing hyperlink-induced factual evidence, which can be represented as triples, is included in our proposed topologies, which are included in Appendix E. Answer Containing in HLP We consider the document title t Q as the information-seeking target of q .",
"Accordingly, the relevance of answer containing can be formulated as t Q p (2) The rationale behind this is that both the natural question and the Wikipedia document are intended to describe related facts and events regarding a targeted object, whereas the object is an answer for a question but a topical entity for a Wikipedia document.",
"This similarity leads us to take the document title as the information-seeking target of its context.",
"Based on analysis of how queries match their evidential passages in the NQ (Kwiatkowski et al.,",
"2019) dataset, we propose two kinds of hyperlink topology for relevance construction: Dual-link and Co-mention.",
"We present our exploratory data analysis on NQ dataset in Appendix C. Here we discuss the desired hyperlink topologies and the corresponding relevance of the pseudo Q-P pairs.",
"Dual-link (DL) Among all NQ training samples, 55% of questions mention the title of their corresponding golden passage.",
"This observation motivates us to leverage the topology of dual-link (DL) for relevance construction.",
"We consider a passage pair ( a i , b j ) follows the dual-link topology if they link to each other.",
"An example of a DL pair ( a i , b j ) is shown in Figure 2, in which passage b j mentions the title of document A as m A , satisfying the condition of answer containing: t A m A and m A b j (3) Further, since the passages a i and b j both mention the topical entity of the other, the entities e A and e B appear in both passages as evidence: { e A , e B } F ( a i ) F ( b j ) (4) Co-mention (CM) Among all NQ training samples, about 40% of questions fail to match the dual-link condition but mention the same third-party entity as their corresponding golden passages.",
"In light of this observation, we utilize another topology of Co-mention (CM).",
"We consider that a passage pair ( c k , d l ) follows the Co-mention topology if they both link to a third-party document E and d l links to c k .",
"Figure 2 illustrates a CM pair ( c l , d k ) where answer containing is ensured as the title of c k occurs in d l : t C m C and m C d l (5) Since both c l and d k mention a third-party entity e E , and that e C is a topical entity in c l while a mentioned entity in d k , we have entity-level evidence across c l and d k as: { e C , e E } F ( c k ) F ( d l ) (6) In practice, we use sentence-level queries which contain the corresponding evidential hypertext, and we do not prepend the title to the passage in order to reduce the superficial entity-level overlap.",
"To improve the quality of CM pairs, we filter out those with a co-mentioned entity which has a top 10% highest-ranked in-degree among the Wikipedia entity.",
"We also present pseudo code in Appendix D to illustrate how we construct our pseudo Q-P pairs.",
"Furthermore, we highlight that HLP has the following advantages: 1) it introduces more semantic variants and paraphrasing for better text matching.",
"2) The hypertext reflects potential interests or needs of users in relevant information, which is consistent to the downstream information-seeking propose.",
"We adopt a BERT-based bi-encoder to encode queries and passages separately into d-dimension vectors.",
"The output representation is derived from the last hidden state of the [CLS] token and the final matching score is measured by the inner product: h q = BERTQ ( q )([CLS]) h p = BERTP ( p )([CLS]) S( p, q ) = h T q h p Let B = { q i , p + i , p i } n i =1 be a mini-batch with n instances.",
"Each instance contains a question q i paired with a positive passage p + i and a negative passage p i .",
"With in-batch negative sampling, each question q i considers all the passages in B except its own gold p + i as negatives, resulting in 2 n 1 negatives per question in total.",
"We use the negative log likelihood of the positive passage as our loss for optimization: L ( q i , p + i , p i, 1 , ..., p i, 2 n 1 ) = log e S ( q i ,p + i ) e S ( q i ,p + i ) + (cid:80) 2 n 1 j =1 e S ( q i ,p i,j ) 4 Experimental Setup In this session, we discuss the pre-training corpus preparation, downstream datasets, the hyperparameter and the basic setup for our experiments.",
"We adopt Wikipedia as our source corpus D for pretraining as it is the largest encyclopedia covering diverse topics with good content quality and linking structures.",
"We choose the snapshot 03-01-2021 of an English Wikipedia dump, and process it with WikiExtractor 2 to obtain clean context.",
"After filtering out documents with blank text or a title less than three letters, following previous work (Karpukhin et al., 2020), we split the remaining documents into disjoint chunks of 100 words as passages, resulting in over 22 million passages in the end.",
"We evaluate our method on several open-domain question answering benchmarks which are shown below.",
"Natural Questions (NQ) (Kwiatkowski et al., 2019) is a popular QA dataset with real queries from Google Search and annotated answers from Wikipedia.",
"TriviaQA (Joshi et al., 2017) contains question-answer pairs scraped from trivia websites.",
"entity-level answers from Freebase.",
"HotpotQA (Fullwiki) (Yang et al., 2018) is a human-annotated multi-hop question answering dataset.",
"BioASQ (Tsatsaronis et al., 2015) is a competition on biomedical semantic indexing and question answering.",
"We evaluate its factoid questions from task 8B.",
"MS MARCO (Passage Ranking) (Nguyen et al., 2016) consists of real-world user queries and a large collection of Web passages extracted by Bing search engine.",
"Retrieval Corpus For downstream retrieval, we use the 21M Wikipedia passages provided by DPR (Karpukhin et al., 2020) for NQ, TriviaQA and WQ.",
"For BioASQ, we take the abstracts of PubMed articles from task 8A with the same split to Reddy et al. (2021)'s work.",
"For HotpotQA and MS MARCO, we use the official corpus.",
"During the pre-training, we train the bi-encoder for 5 epochs with parameters shared, using a batch size of 400 and an Adam optimizer (Kingma and Ba, 2014) with a learning rate 2 10 5 , linear scheduling with 10% warm-up steps.",
"Our HLP and all the reproduced baselines are trained on 20 million Q-P pairs with in-batch negative sampling, and the best checkpoints are selected based on the average rank of gold passages evaluated on the NQ dev set.",
"The pre-training takes around 3 days using eight NVIDIA V100 32GB GPUs.",
"For the downstream, we use the same hyper-parameters for all experiments.",
"Specifically, we fine-tune the pre-trained models for 40 epochs with a batch size of 256 and the same optimizer and learning rate settings to the pre-training.",
"We conduct evaluation on respective dev sets to select best checkpoints, and we use the last checkpoint if there is no dev set or test set (e.g. HotpotQA).",
"Most existing baselines have been implemented under different experimental settings, which have a substantial effect on the retrieval performance.",
"To ensure fairness, we reproduce several pre-training methods (ICT, WLP, BFS, and their combination) under the same experimental setting, such as batch size, base model, amount of pre-training data, and so on.",
"The only difference between our method and the re-implemented baselines is the self-supervision signal derived from the respective pre-training samples.",
"Our reproduced BM25 baseline is better than that reported in Karpukhin et al. (2020), and the re-implemented pre-training methods also perform better than those reported by the recent work 3 .",
"In addition, we include the work REALM (Guu et al., 2020) as a baseline which has recently been reproduced by Sachan et al. (2021) using 240 GPUs and is named masked salient spans (MSS).",
"We note that most related works gain improvements from varying downstream setting or synthetic pre-training with access to the downstream data of respective domain, which is out of the scope of our interests.",
"Table 1 shows the retrieval accuracy of different models on three popular QA datasets under zero-shot and full-set fine-tuning settings.",
"Under zero-shot setting, HLP consistently outperforms BM25 except for the top-5 retrieval accuracy of TriviaQA, while all other pre-training baselines are far behind.",
"We attribute the minor improvement over BM25 on TriviaQA to a high overlap between questions and passages, which gives term-based retriever a clear advantage.",
"We investigate the coverage of the question tokens that appear in the gold passage and find that the overlap is indeed higher in TriviaQA (62.8%) than NQ (60.7%) and WQ (57.5%).",
"After fine-tuning, all models with intermediate pre-training give better results than the vanilla DPR while our HLP achieves the best in nearly all cases.",
"3 Our reproduced ICT and BFS surpass the reproduction from recent work (Oguz et al., 2021) by 15 and 12 points, respectively, in terms of top-20 retrieval accuracy on NQ test set under zero-shot setting.",
"Among ICT, WLP and BFS, we observe that WLP is the most competitive with or without fine-tuning, and additional improvements can be achieved by combining three of them.",
"This observation indicates that pre-training with diverse relevance leads to better generalization to downstream tasks, while document-wise relevance is more adaptable for the OpenQA retrieval.",
"The advantage of document-wise relevance may come from the fact that texts in different documents are likely written by different parties, providing less superficial cues for text matching, which is beneficial for the downstream retrieval.",
"Our HLP learns both coarse-grained document-wise relationships as well as the fine-grained entity-level evidence, which results in a significant improvement.",
"To investigate the retrieval effectiveness in a more realistic scenario, we conduct experiments for few-shot learning.",
"Specifically, we fine-tune the pre-trained models on large datasets (NQ, TriviaQA) with m ( m { 16 , 256 , 1024 } ) samples and present the few-shot retrieval results in Table",
"2. With only a few hundred labeled data for fine-tuning, all the models with intermediate pretraining perform better than those without, and HLP outperforms the others by a larger margin when m is smaller.",
"Moreover, among three reimplemented baselines, WLP gains the largest improvement with increasing number of samples, outperforming ICT and BFS when a thousand labelled samples are provided for fine-tuning.",
"While HLP is pre-trained on Wikipedia pages, we conduct additional experiments on BioASQ and MS MARCO datasets with non-Wikipedia corpus to further verify its out-of-domain (OOD) generalization.",
"Following Gururangan et al. (2020), we measure the similarity between corpus by computing the vocabulary overlap of the top 10K frequent words (excluding stopwords).",
"We observe a vocabulary overlap of 36.2% between BioASQ and Wikipedia while 61.4% between MS MARCO and Wikipedia, indicating that these two domains differ considerably from our pre-training corpus.",
"and MS MARCO datasets are presented in Table 4.",
"For BioASQ, HLP is competitive with both BM25 and AugDPR(Reddy et al., 2021) while significantly outperforming ICT, WLP, and BFS.",
"Note that AugDPR is a baseline that has access to NQ labeled data whereas our HLP is trained in an unsupervised way.",
"For MS MARCO, HLP consistently outperforms other pre-training methods but falls behind BM25 under zero-shot setting.",
"We conjecture the performance degradation on MS MARCO is attributed to two factors: 1) the Q-P lexical overlap of MS MARCO (65.7%) is higher than that in BioASQ (48.7%) as well as other datasets; 2) the information-seeking target of the MS MARCO query is the entire passage rather than a short answer span, which is biased towards our proposed answer containing.",
"we also observe that pre-training exclusively with DL pairs achieves better results in MS MARCO, indicating the generality of relevance induced by DL topology.",
"We evaluate our methods on HotpotQA in a single-hop manner.",
"Specifically, for each query, we randomly selects one golden passage from the two as a positive passage and one additional passage with high TF-IDF scores as a negative passage.",
"Our models are further fine-tuned on the HotpotQA training set and evaluated on the bridge and the comparison type questions from the development set, respectively.",
"The results of our study are shown in Table 5 which reveals that HLP consistently outperforms others methods, with up to a 11-point improvement on top-5 retrieval accuracy of bridge questions.",
"Furthermore, WLP yields a 4-point advantages in average over ICT and BFS on bridge questions, showing that document-wise relevance contributes to better associative abilities.",
"We include a case study in Appendix F. Bridge Comparison top5 top20 top100 top5 top20 top100 No Pre-train 25.0 40.5 58.0 83.0 94.2 97.4 ICT 28.1 43.8 61.8 84.8 94.4 98.3 WLP 32.1 49.1 66.0 89.7 97.3 99.2 BFS 29.0 44.7 62.1 87.4 95.8 98.7 HLP 36.9 53.0 68.5 94.4 98.5 99.5 Table 5: Retrieval accuracy on questions from HotpotQA dev set, measured as the percentage of top-k retrieved passages which include both golds.",
"To better understand how different key factors affect the results, we conduct ablation experiments with results shown in Table 3.",
"Hyperlink-based Topologies Our proposed dual-link (DL) and co-mention (CM) Q-P pairs, provide evidence induced by different hyperlink-based topologies.",
"To examine their respective effectiveness, we pre-train retrievers on Q-P pairs derived from each topology and their combinations.",
"We present zero-shot retrieval results in Table 3, which show that retrievers pre-trained on DL pairs has a distinct advantage over that on CM pairs, while combining both gives extra improvement.",
"is essential for learning a high-quality encoder.",
"Besides in-batch negative, our reported HLP employs one additional negative for each query.",
"We further explore the impact of the additional negatives during pre-training.",
"In our ablation study, pre-training with additional negatives improves the results significantly, which may be attributed to using more in-batch pairs for text matching.",
"More details on implementation and negative sampling strategies can be found in Appendix B. 6.2 Analysis on Q-P Overlap We carry out extensive analysis on the Q-P lexical overlap in the task of retrieval.",
"Specifically, we tokenize q , p using the BERT tokenizer and measure the Q-P overlap as the proportion of the question tokens that appear in the corresponding passage.",
"Based on the degree of Q-P overlap, we divided the NQ dev set into five categories for further analysis.",
"Distribution of Q-P Overlap Figure 3 shows both the pre-training and the retrieved pairs of HLP have a more similar overlap distribution with the downstream NQ dataset than the other methods, which implies the consistency between the relevance provided by HLP and that in real information-seeking scenario.",
"Retrieval Performance vs. Q-P Overlap Figure 4 shows the top-20 retrieval accuracy on the samples with varying degrees of Q-P overlap.",
"Both figures show that the retrievers are more likely to return answer-containing passages when there is higher Q-P overlap, suggesting that all these models can exploit lexical overlap for passage retrieval.",
"Under the zero-shot setting, HLP outperforms all the methods except BM25 when r is larger than 0.8, which reflects the strong reasoning ability of HLP and the overlap-dependent nature of the term-based retrievers.",
"After fine-tuning, models with additional pre-training perform better than the vanilla DPR while HLP outperforms all other methods in most of the cases.",
"It is important to note that HLP is pre-trained on more high-overlap text pairs while it performs better than all the other methods when fewer overlaps are provided.",
"We speculate that this is because the overlapping in HLP Q-P pairs mostly comes from the factual information, such as entity, which introduces fewer superficial cues, allowing for better adaptation to the downstream cases.",
"We conduct human evaluation to investigate the proportion of Q-P pairs that convey the similar fact-level information.",
"Specially, we randomly selected one hundred examples from our constructed Q-P pairs and asked annotators to identify whether the query and the corresponding passage convey similar facts.",
"Each case is evaluated by three annotators and the result is determined by their votes.",
"Our results are shown in Table 6, and we further present case studies in Appendix G. DL CM WLP Votes 61% 40% 15% Table 6: Human evaluation on pseudo Q-P pairs constructed by different methods.",
"This paper proposes Hyperlink-induced Pretraining (HLP), a pre-training method for OpenQA passage retrieval by leveraging the online textual relevance induced by hyperlink-based topology.",
"Our experiments show that HLP gains significant improvements across multiple QA datasets under different scenarios, consistently outperforming other pre-training methods.",
"Our method provides insights into OpenQA passage retrieval by analyzing the underlying bi-text relevance.",
"Future work involves addressing tasks like MS MARCO where the granularity of the information-seeking target is at the passage level.",
"This work is partially supported by the Hong Kong RGC GRF Project 16202218, CRF Project C6030-18G, C1031-18G, C5026-18G, AOE Project AoE/E-603/18, China NSFC No. 61729201.",
"We thank all the reviewers for their insightful comments."
] | [
"abstain",
"abstain",
"objective",
"objective",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"method",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"abstain",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"other",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"objective",
"result",
"method",
"abstain",
"other",
"other"
] |
[
"Tencent AI Lab, Shenzhen, China { pijili,hansonzhang,kieranliu,shumingshi } @tencent.com",
"Abstract",
"Neural text generation has made tremendous progress in various tasks.",
"One common characteristic of most of the tasks is that the texts are not restricted to some rigid formats when generating.",
"However, we may confront some special text paradigms such as Lyrics (assume the music score is given), Sonnet, SongCi (classical Chinese poetry of the Song dynasty), etc.",
"The typical characteristics of these texts are in three folds: (1) They must comply fully with the rigid predefined formats.",
"(2) They must obey some rhyming schemes.",
"(3) Although they are restricted to some formats, the sentence integrity must be guaranteed.",
"To the best of our knowledge, text generation based on the predefined rigid formats has not been well investigated.",
"Therefore, we propose a simple and elegant framework named SongNet to tackle this problem.",
"The backbone of the framework is a Transformer-based auto-regressive language model.",
"Sets of symbols are tailor-designed to improve the modeling performance especially on format, rhyme, and sentence integrity.",
"We improve the attention mechanism to impel the model to capture some future information on the format.",
"A pre-training and fine-tuning framework is designed to further improve the generation quality.",
"Extensive experiments conducted on two collected corpora demonstrate that our proposed framework generates significantly better results in terms of both automatic metrics and the human evaluation.",
"1 1 Introduction Recent years have seen the tremendous progress in the area of natural language generation especially benefiting by the neural network models such as Recurrent Neural Networks (RNN) or Convolutional Neural Networks (CNN) based sequence-to-sequence (seq2seq) frameworks (Bahdanau et al., 1 Code: http://github.com/lipiji/SongNet (cid:27220)(cid:1497)(cid:1933)(cid:2057)(cid:2822)(cid:875)(cid:12627)(cid:12651)(cid:1680)(cid:1943)(cid:1055)(cid:28574)(cid:1629)(cid:1961)(cid:3056)(cid:14881)(cid:2270)(cid:2584)(cid:13784)(cid:875)(cid:1982)(cid:2635)(cid:3022)(cid:1401)(cid:2983)(cid:28574) (cid:1943)(cid:1758)(cid:2612)(cid:1078)(cid:1958)(cid:875)(cid:9650)(cid:1120)(cid:2537)(cid:21559)(cid:12116)(cid:28574)(cid:2985)(cid:2634)(cid:1768)(cid:16434)(cid:19238)(cid:1153)(cid:1592)(cid:875)(cid:1361)(cid:1988)(cid:3034)(cid:1522)(cid:1915)(cid:28574) Let me not to the marriage of true minds Admit impediments, love is not love Which alters when it alteration finds Or bends with the remover to remove .",
"2014; Gehring et al., 2017), Transformer and its variants (Vaswani et al., 2017; Dai et al., 2019), pre-trained auto-regressive language models such as XLNet (Yang et al., 2019) and GPT2 (Radford et al., 2019), etc.",
"Performance has been improved significantly in lots of tasks such as machine translation (Bahdanau et al., 2014; Vaswani et al., 2017), dialogue systems (Vinyals and Le, 2015; Shang et al., 2015; Li, 2020), text summarization (Rush et al., 2015; Li et al., 2017; See et al., 2017), story telling (Fan et al., 2018; See et al., 2019), poetry writing (Zhang and Lapata, 2014; Lau et al., 2018; Liao et al., 2019), etc.",
"Generally, most of the above mentioned tasks can be regarded as free text generation, which means that no constraints on the format and structure, say the number of words and rhyming rules.",
"Note that tasks of dialogue generation and story telling are almost in an open-ending generation style as long as the generated content is relevant with the conditional input text.",
"Although there are formats constraints on the poetry text, the proposed models just treat the formats as kind of latent information and let the model capture this feature implicitly during training (Liao et al., 2019).",
"The model trained on the five-character quatrain corpus cannot generate seven-character verses.",
"Moreover, it is impossible to trigger these models to generate satisfying results according to arbitrary new defined formats.",
"In practice we will confront some special text paradigms such as Lyrics (assume the music score is given), Sonnet (say Shakespeare's Sonnets (Shakespeare, 2000)), SongCi (a kind of Ci. Ci is a type of lyric poetry in the tradition of Classical Chinese poetry. 2 , SongCi is the Ci created during Song dynasty), etc., and some examples are illustrated in Figure 1.",
"The typical characteristics of these text can be categorized into three folds: (1) The assembling of text must comply fully with the predefined rigid formats .",
"Assume that the music score is composed, then the lyricist must fill the lyric content strictly tally with the schemes lie in the notation.",
"Take partial of song Edelweiss as shown in the first row of Figure 1 as example, the syllables of the lyric words must align with the tones of the notation.",
"The second row of Figure 1 depicts the content of a SongCi created based on the CiPai of Bu Suan Zi.",
"Given the CiPai, the number of characters and the syntactical structure of the content are also defined (e.g., the number of characters of each clause: 5, 5. 7, 5. 5, 5. 7, 5.).",
"(2) The arrangement of the content must obey the defined rhyming schemes.",
"For example, all the final words (words in red color and italic font) of the SongCi content in Figure1 are rhyming (the spelling of each word is: zhu, yu, du, and gu.).",
"The example in the third row of Figure 1 comes from Shakespeare's Sonnet 116",
"(Shake-speare, 2000), the first four sentences.",
"Usually, the rhyming schemes of Shakespeare's Sonnets is ABAB CDCD EFEF GG 3 .",
"In the example, the rhyming words in scheme ABAB are minds, love, finds, and remove.",
"(3)",
"Even though the format is rigid, the sentence integrity must always be guaranteed.",
"Incomplete sentence such as love is not the is inappropriate.",
"To the best of our knowledge, text generation based on the predefined rigid formats constraints has not been well investigated yet.",
"In this work, 2 http://en.wikipedia.org/wiki/Ci",
"we propose a simple and elegant framework named SongNet to address this challenging problem.",
"The backbone of the framework is a Transformer-based auto-regressive language model.",
"Considering the three folds characteristics mentioned above, we introduce sets of tailor-designed indicating symbols to improve the modeling performance, especially for the robustness of the format, rhyme, as well as sentence integrity.",
"We improve the attention mechanism to impel the model to capture the future information on the format to further enhance sentence integrity.",
"Inspired by BERT",
"(Devlin et al., 2019)",
"and GPT",
"(Radford et al., 2018, 2019), a pretraining and fine-tuning framework is designed to further improve the generation quality.",
"To verify the performance of our framework, we collect two corpora, SongCi and Sonnet, in Chinese and English respectively.",
"Extensive experiments on the collected datasets demonstrate that our proposed framework can generate satisfying results in terms of both the tailor-designed automatic metrics including format accuracy, rhyming accuracy, sentence integrity, as well as the human evaluation results on relevance, fluency, and style.",
"In summary, our contributions are as follows: We propose to tackle a new challenging task: rigid formats controlled text generation.",
"A pre-training and fine-tuning framework named SongNet is designed to address the problem.",
"Sets of symbols are tailor-designed to improve the modeling performance.",
"We improve the attention mechanism to impel the model to capture the future information to further enhance the sentence integrity.",
"To verify the performance of our framework SongNet, we collect two corpora, SongCi and Sonnet, in Chinese and English respectively.",
"We design several automatic evaluation metrics and human evaluation metrics to conduct the performance evaluation.",
"Extensive experiments conducted on two collected corpora demonstrate that our proposed framework generates significantly better results given arbitrary formats, including the cold-start formats or even the formats newly defined by ourselves.",
"The task of rigid formats controlled text generation is defined as follows:",
"where C is the set of all possible formats.",
"Note that we can define arbitrary new formats not restricted to the ones pre-defined in the corpus, thus |C| .",
"Format token c i denotes a place-holder symbol of C which need to be translated into a real word token.",
"Format C contains 10 words plus two extra punctuation characters , and .",
"where the example sentences are extracted from the Shakespeare's Sonnets",
"(Shakespeare, 2000).",
"From the result Y we can observe that the count of words is 10 which is consistent with the format C .",
"The punctuation characters , and . are also correct.",
"Thus, we claim that it is a 100% format accuracy result.",
"Also, since the two clause sentences are complete, we can get a good sentence integrity score.",
"If C is defined on the literary genres of SongCi or Sonnet which have rhyming constraints, the rhyming performance should be evaluated as well.",
"Recall that C can be arbitrary and flexible, thus we can rebuild a new format C",
"(cid:48)",
"based on the generated result Y by masking partial content, say C",
"(cid:48)",
"= { c 0 c 1 c 2 love, c 0 c 1 c 2 c 3 c 4 remove.",
"} , then we may obtain better results by re-generating based on C",
"(cid:48)",
".",
"We name this operation as polishing .",
"Finally, the target of this problem is to find a mapping function G to conduct the rigid formats controlled text generation: Y = G",
"As shown in Figure 2, the backbone of our framework is a Transformer-based auto-regressive language model.",
"The input can be the whole token sequences of samples from SongCi or Sonnet.",
"We tailor-design several sets of indicating symbols to enhance the performance in terms of accuracy on format, rhyme, and sentence integrity.",
"Specifi-cally, symbols C = { c i } are introduced for format and rhyming modeling; Intra-position symbols P = { p i } are designed to represent the local positions of the tokens within each sentence aiming to improve the rhyming performance and the sentence integrity.",
"Segment symbols S = { s i } are employed to identify the sentence border to further improve the sentence quality.",
"Attention mechanism is improved to impel the model to capture the future format information such as the sentence ending markers.",
"Similar to BERT",
"(Devlin et al., 2019)",
"and GPT",
"(Radford et al., 2018, 2019), pre-training and fine-tuning paradigm is utilized to boost the performance of the original models.",
"We use two sentences",
"(as shown in Figure 1)",
"love is not love, ..., bends with the remover to remove extracted from the Shakespeare's Sonnets (Shake-speare, 2000) as examples to describe the details of our framework SongNet.",
"Since our basic model is a Transformer-based auto-regressive language model, during training, the input is (cid:104) bos (cid:105) love is not love, (cid:104) /s (cid:105) ..., bends with the remover to remove. (cid:104) /s (cid:105) , and the corresponding output is a left-shifting version of the input (tokenized, and we ignore ... for convenience and clarity): love is not love , (cid:104) /s (cid:105) bends with the remover to remove .",
"where (cid:104) /s (cid:105) denotes the clause or sentence separator, and (cid:104) eos (cid:105) is the ending marker of the whole sequence.",
"The target of our framework is to conduct the formats controlled text generation.",
"Therefore, the indicating symbols for format and rhyme as well as the sentence integrity are designed based on the target output sequence.",
"Format and Rhyme Symbols : C = { c 0 , c 0 , c 0 , c 2 , c 1 , (cid:104) /s (cid:105) c 0 , c 0 , c 0 , c 0 , c 0 , c 2 , c 1 , (cid:104) /s (cid:105) , (cid:104) eos (cid:105)} (3) where we use { c 0 } to represent the general tokens; { c 1 } depict the punctuation characters; { c 2 } represent the rhyming tokens love and remove.",
"(cid:104) /s (cid:105) and (cid:104) eos (cid:105) are kept.",
"Intra-Position Symbols : P = { p 4 , p 3 , p 2 , p 1 , p 0 , (cid:104) /s (cid:105) p 6 , p 5 , p 4 , p 3 , p 2 , p 1 , p 0 , (cid:104) /s (cid:105) , (cid:104) eos (cid:105)} (4) { p i } denote the local positions of tokens within the same clause or sentence.",
"Note that we align the position symbol indices in a descending order .",
"The aim is to improve the sentence integrity by impelling the symbols capture the sentence dynamic information, precisely, the sense to end a sequence.",
"For example, { p 0 } usually denote punctuation characters, thus { p 1 } should be the ending words of sentences.",
"Segment Symbols : S = { s 0 , s 0 , s 0 , s 0 , s 0 , (cid:104) /s (cid:105) s 1 , s 1 , s 1 , s 1 , s 1 , s 1 , s 1 , (cid:104) /s (cid:105) , (cid:104) eos (cid:105)} (5) where s i is the symbol index for sentence i .",
"The purpose is to enhance the interactions between different sentences in different positions by defining the sentence index features.",
"During training, all the symbols as well as the input tokens are fed into the transformer-based language model.",
"Contrast to Transformer (Vaswani et al., 2017), BERT (Devlin et al., 2019), and GPT2 (Radford et al., 2019), we modify the traditional attention strategies slightly to fit our problem.",
"Specifically, for the input, we first obtain the representations by summing all the embeddings of the input tokens and symbols, as shown in the red solid box of Figure 2: H 0 t = E w t + E c t + E p t + E s t + E g t (6) where 0 is the layer index and t is the state index.",
"E is the embedding vector for input .",
"w t is the real token at position t .",
"c , p , and s are three pre-defined symbols.",
"g is the global position index same as position symbols used in Transformer (Vaswani et al., 2017).",
"Moreover, the state at time t need to know some future information to grasp the global sequence dynamic information.",
"For example, the model may want to know if it should close the decoding progress by generating the last word and a punctuation character to end the sentence.",
"To represent the global dynamic information, we introduce another variable F 0 by only summing the pre-defined symbols as shown in the blue dash box of Figure 2: F 0 t = E c t + E p t + E s t (7) After processing the input, two blocks of attention mechanisms are introduced to conduct the feature learning procedure.",
"The first block is a masking multi-head self-attention component, and the second block is named global multi-head attention.",
"Masking Multi-Head Self-Attention : C 1 t = LN (cid:0) FFN ( C 1 t ) + C 1 t (cid:1) C 1 t = LN (cid:0) SLF-ATT ( Q 0 t , K 0 t , V 0 t ) + H 0 t (cid:1) Q 0 = H 0 WQK 0 , V 0 = H 0 WK , H 0 WV (8) where SLF-ATT ( ), LN ( ), and FFN ( ) represent self-attention mechanism, layer normalization, and feed-forward network respectively.",
"Note that we only use the states whose indices t as the attention context.",
"After obtaining C 1t from Equation (8), we feed it into the second attention block to capture the global dynamic information from F 0 .",
"Global Multi-Head Attention : H 1 t = LN (cid:0) FFN ( H 1 t ) + H 1 t (cid:1) H 1 t = LN (cid:0) GLOBAL-ATT ( Q 1 t , K 1 , V 1 ) + C 1 t (cid:1) Q 1 = C 1 WQK 1 , V 1 = F 0 WK , F 0 WV (9) We can observe that all the context information from F 0 are considered.",
"This is the reason why we name it as global attention and why the input real token information E w t is NOT considered.",
"Then the calculation of the unified first model layer is fin-ished.",
"We can iteratively apply these two attention blocks on the whole L model layers until obtain the final representations HL .",
"Note that H is renewed layerly, however the global variable F 0 is fixed.",
"Finally, the training objective is to minimize the negative log-likelihood over the whole sequence: L nll = n (cid:88) t =1 log P ( y t | y <t ) (10) 3.3 Pre-training and Fine-tuning Although our framework can be trained purely on the training dataset of the target corpus, usually the scale of the corpus is limited.",
"For example, there are only about 150 samples in the corpus of Shakespeare's Sonnets (Shakespeare, 2000).",
"Therefore, we also design a pre-training and fine-tuning framework to further improve the generation quality.",
"Recall that in the task definition in Section 2, we claim that our model owns the ability of refining and polishing.",
"To achieve this goal, we adjust the masking strategy used in BERT (Devlin et al., 2019) to our framework according to our defini-tions.",
"Specifically, we randomly (say 20%) select partial of the original content and keep them not changed when building the format symbols C .",
"For example, we will get a new symbol set C (cid:48) for the example sentences: C (cid:48) = { c 0 , c 0 , c 0 , love, c 1 , (cid:104) /s (cid:105) bends, c 0 , c 0 , c 0 , c 0 , remove, c 1 , (cid:104) /s (cid:105) , (cid:104) eos (cid:105)} where love, bends and remove are kept in the format C (cid:48) .",
"After the pre-training stage, we can conduct the fine-tuning procedure directly on the target corpus without adjusting any model structure.",
"We can assign any format and rhyming symbols C to control the generation.",
"Given C , we will obtain P and S automatically.",
"And the model can conduct generation starting from the special token (cid:104) bos (cid:105) iteratively until meet the ending marker (cid:104) eos (cid:105) .",
"Both beam-search algorithm (Koehn, 2004) and truncated top-k sampling (Fan et al., 2018; Radford et al., 2019) method are utilized to conduct the decoding.",
"The parameter size of our model are fixed in both the pre-training stage and the fine-tuning stage.",
"The number of layers L = 12 , and hidden size is 768.",
"We employ 12 heads in both the masking multihead self-attention block and the global attention block.",
"Adam (Kingma and Ba, 2014) optimization method with Noam learning-rate decay strategy and 10,000 warmup steps is employed to conduct the pre-training.",
"We conduct all the experiments on two collected corpus with different literary genres: SongCi and Sonnet, in Chinese and English respectively.",
"The statistic number are shown in Table",
"3. We can see that Sonnet is in small size since we only utilize the samples from the Shakespeare's Sonnets (Shakespeare, 2000).",
"Since SongCi and Sonnet are in different languages, thus we conduct the pre-training procedure on two large scale corpus in the corresponding languages respectively.",
"For Chinese, we collect Chinese Wikipedia (1700M Characters) and a merged Chinese News (9200M Characters) corpus from the Internet.",
"We did not conduct the word segmenting operations on the Chinese datasets, which means that we just use the characters to build the vocabulary, and the size is 27681.",
"For English, same as BERT, we employ English Wikipedia (2400M words) and BooksCor-pus (980M words) (Zhu et al., 2015) to conduct the pre-training.",
"We did not use BPE operation (Sennrich et al., 2015) on this corpus considering the format controlling purpose.",
"We keep the most frequent 50,000 words to build the vocabulary.",
"Besides PPL and Distinct (Li et al., 2016), we also tailor-design several metrics for our task to conduct the evaluation for format, rhyme, and sentence integrity.",
"Format Assume that there are m sentences defined in the format C = { C s 1 , C s 2 , ..., C sm } , and the generated results Y contains n sentences Y = { Y s 1 , Y s 2 , ..., Y sn } .",
"Without loss of generality, we align C and Y from the beginning, and calculate the format quality according to the following rules: (1) the length difference || C si | | Y si || ; (2) the punctuation characters must be same.",
"For SongCi, we let = 0 and rule (2) must be conforming.",
"For Sonnet, we relax the condition where we let = 1 and ignore rule (2).",
"Assume that the number of format-correct sentences is n (cid:48) , then we can obtain Precision p = n (cid:48) /n , Recall r = n (cid:48) /m , and F1-measure.",
"We report both the Macro-F1 and Micro-F1 in the results tables.",
"Rhyme For SongCi, usually, there is only one group of rhyming words in one sample.",
"As the example shown in Table 1, the pronunciation of the red rhyming words are zhu, yu, du, and gu respectively, and the rhyming phoneme is u.",
"For the generated samples, we first use the tool pinyin 4 to get the pronunciations (PinYin) of the words in the rhyming positions, and then conduct the evaluation.",
"For Shakespeare's Sonnets corpus, the rhyming rule is clear ABAB CDCD EFEF GG and there are 7 groups of rhyming tokens.",
"For the generated samples, we employ the CMU Pronouncing Dictionary 5 (Speech@CMU, 1998) to obtain the phonemes of the words in the rhyming positions.",
"For example, the phonemes for word asleep and steep are ['AH0', 'S', 'L', 'IY1', 'P'] and ['S', 'T', 'IY1', 'P'] respectively.",
"And then we can conduct the evaluation by counting the overlapping units from both the original words and the extracted phonemes group by group.",
"We report the Macro-F1 and Micro-F1 numbers in the results tables as well.",
"Integrity Since the format in our task is strict and 4 http://github.com/mozillazg/python-pinyin 5 http://www.speech.cs.cmu.edu/cgi-bin/cmudict Model PPL Diversity (Distinct) VALTESTMA -D-1 MI -D-1 MA -D-2 MI -D-2 SongNet 12.75 14.73 75.96 2.69 97.59 37.26 SongNet-GRU 16.52 20.49 74.73 1.77 98.30 28.98 SongNet w/o C 13.51 15.38 75.42 2.48 97.36 34.85 SongNet w/o P 14.16 17.16 73.73 2.56 97.52 34.82 SongNet w/ inverse-P 13.40 15.13 74.95 2.54 97.76 35.65 SongNet w/o S 13.23 15.44 75.38 2.74 97.31 37.50 Model Format Rhyme Integrity MA -F1 MI -F1 MA -F1 MI -F1 SongNet 99.81 99.83 79.23 78.63 2.14 0.10 SongNet-GRU 98.99 98.99 52.13 50.93 3.28 1.67 SongNet w/o C 84.73 85.39 78.59 78.24 1.77 0.53 SongNet w/o P 99.61 99.59 67.85 67.29 3.33 0.18 SongNet w/ inverse-P 99.68 99.69 65.89 65.43 2.24 0.21 SongNet w/o S 99.84 99.86 80.43 80.13 1.99 0.10 Table 4: Ablation analysis on SongCi rigid, thus the number of words to be predicted is also pre-defined.",
"Our model must organize the language using the limited positions, thus sentence integrity may become a serious issue.",
"For example, the integrity of love is not love . (cid:104) /s (cid:105) is much better thanlove is not the . (cid:104) /s (cid:105) .",
"To conduct the evaluation of sentence integrity, we design a straightforward method by calculating the prediction probability of the punctuation characters before (cid:104) /s (cid:105) given the prefix tokens: Integrity = 2 1 | Y | | Y | (cid:80) i =1 log( P ( y ipunc | y i 0 ,y i 1 ,...,y i<punc )) (11) where Y is the generated sequence of sentences.",
"Smaller integrity metric value indicates higher sentence quality.",
"To achieve this goal, we conduct pre-trainings for two GPT2 (Radford et al., 2019) models on the large scale Chinese corpus and English corpus respectively.",
"Then we utilize the GPT2 models to conduct the evaluation for sentence integrity.",
"Human Evaluations For SongCi, we sampled 50 samples for 25 CiPais.",
"For Sonnet, the whole 27 samples in the test set are selected for human evaluation.",
"We recruit three helpers to score the Relevance , Fluency , and Style .",
"The rating criteria are as follows: Relevance : +2 : all the sentences are relevant to the same topic; +1 : partial sentences are relevant; 0 : not relevant at all.",
"Fluency : +2 : flu-ent; +1 : readable but with some grammar mistakes; 0 : unreadable.",
"Style : +2 : match with SongCi or Sonnet genres; +1 : partially match; 0 : mismatch.",
"S2S Sequence-to-sequence framework with attention mechanism (Bahdanau et al., 2014).",
"We regard the format and rhyme symbols C as the input sequence, and the target as the output sequence.",
"GPT2 We fine-tune the GPT2 models (the pretraining versions are used for sentence integrity evaluation) on SongCi and Sonnet respectively.",
"SongNet Out proposed framework with both the per-training and fine-tuning stages.",
"We also conduct ablation analysis to verify the performance of the defined symbols as well as the variants of model structures.",
"SongNet (only pre-tuning) Without the fine-tuning stage.",
"SongNet (only fine-tuning) Without the pretraining stage.",
"SongNet-GRU Employ GRU (Cho et al., 2014) to replace Transformer as the core structure.",
"SongNet w/o C Remove the format and rhyme symbols C .",
"SongNet w/o P Remove the intra-position symbols P .",
"SongNet w/o S Remove the sentence segment symbols S .",
"SongNet w/ inverse-P Arrange the intra-position indices in ascending order instead of the descending order.",
"Please note that we mainly employ topk sampling method (Fan et al., 2018; Radford et al., 2019) to conduct the generation, and we let k = 32 here.",
"The parameter tuning of k is described in Section 5.3.",
"Table 1 and Table 2 depict the experimental results of SongNet as well as the baseline methods S2S and GPT2 on corpus SongCi and Sonnet respectively.",
"It is obvious that our pre-training and fine-tuning framework SongNet obtain the best performance on most of the automatic metrics.",
"Especially on the metric of Format accuracy, SongNet can even obtain a 98%+ value which means that our framework can conduct the generation rigidly matching with the pre-defined formats.",
"On the metric of PPL, Rhyme accuracy, and sentence integrity, SongNet also performs significantly better in a large gap than the baseline methods such as S2S and GPT2 as well as the model variants only with the pre-training or fine-tuning stage.",
"Another observation is that some of the results on corpus Sonnet are not as good as the results Model Relevance Fluency Style SongNet-SongCi 1.36 1.45 2.00 SongNet-Sonnet 0.58 0.42 0.83 Table 7: Human evaluation results.",
"on SongCi.",
"The main reason is that Sonnet only contains 100 samples in the training set as shown in Table",
"3. Therefore, the model cannot capture sufficient useful features especially for the rhyming issue.",
"We conduct ablation study on corpus SongCi and the experimental results are depicted in Table",
"4. It should note that all the models are purely trained on SongCi corpus without any pre-training stages.",
"From the results we can conclude that the introduced symbols C , P , and S indeed play crucial roles in improving the overall performance especially on the metrics of format, rhyme, and sentence integrity.",
"Even though some of the components can not improve the performance simultaneously on all the metrics, the combination of them can obtain the best performance.",
"Since we employ topk sampling as our main decoding strategy, thus we design several experiments to conduct the parameter tuning on k .",
"We let k to be 1, 5, 10, 20, 50, 500 respectively.",
"We also provide the beam-search (beam=5) results for comparing and reference.",
"The parameter tuning results are depicted in Figure",
"3. From the results we can observe that large k can increase the diversity of the results significantly.",
"But the Rhyme accuracy and the sentence integrity will drop simultaneously.",
"Therefore, in the experiments we let k = 32 to obtain a trade-off between the diversity and the general quality.",
"For human evaluation, we just conduct the judging on the results generated by our final model SongNet.",
"From the result we can observe that the results on corpus SongCi is much better than the ones on corpus Sonnet, which is because the corpus scale is different.",
"And the the small scale also lead to dramatically dropping on all the metrics.",
"Table 5 depicts several generated cases for SongCi and Sonnet respectively.",
"For SongCi, the formats (CiPai) are all cold-start samples which are not in the training set or even newly defined.",
"Our model can still generate high quality results on the aspects of format , rhyme as well as integrity .",
"However, for corpus Sonnet, even though the model can generate 14 lines text, the quality is not as good as SongCi due to the insufficient training-set (only 100 samples).",
"We will address this interesting and challenging few-shot issue in the future .",
"In addition, we mentioned that our model has the ability of refining and polishing given the format C which contains some fixed text information.",
"The examples of the generated results under this setting are shown in Table 6, which show that our model SongNet can generate satisfying results especially on SongCi.",
"We propose to tackle a challenging task called rigid formats controlled text generation.",
"A pre-training and fine-tuning framework SongNet is designed to address the problem.",
"Sets of symbols are tailor-designed to improve the modeling performance for format, rhyme, and sentence integrity.",
"Extensive experiments conducted on two collected corpora demonstrate that our framework generates significantly better results in terms of both automatic metrics and human evaluations given arbitrary cold start formats."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"result",
"abstain",
"objective",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"other",
"objective",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"objective",
"objective",
"objective",
"result",
"result",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"objective"
] |
[
"The problem of detecting psychological stress in online posts, and more broadly, of detecting people in distress or in need of help, is a sensitive application for which the ability to interpret models is vital.",
"Here, we present work exploring the use of a semantically related task, emotion detection, for equally competent but more explainable and human-like psychological stress detection as compared to a black-box model.",
"In particular, we explore the use of multi-task learning as well as emotion-based language model fine-tuning.",
"With our emotion-infused models, we see comparable results to state-of-the-art BERT.",
"Our analysis of the words used for prediction show that our emotion-infused models mirror psychological components of stress.",
"As crises have begun to multiply worldwide, including the COVID-19 pandemic and the resulting economic downturn, psychological stress has risen dramatically 1 .",
"The problem of detecting psychological stress, and more broadly, of detecting people in distress and in need of help, is a sensitive application; therefore, the ability to interpret the results, in order to understand why, is vital.",
"The consequences of blindly trusting a black-box model and mislabeling users' stress levels could be serious in a deployed application such as a therapeutic chatbot, where some users may not receive the immediate help they need.",
"Furthermore, models that make decisions based on psychology theory about factors that impact stress will be easier for humans to understand, and their mistakes will be more obvious.",
"Researchers have recently begun to study psychological stress, but in this work, we propose a new focus on examining the information our models use to make decisions and finding ways to incorporate psychological factors, like emotion, into them.",
"To approach the problem of stress detection, which has much less labeled data than many popular classification tasks, we first note that stress has been shown to interact with emotion (Lazarus, 2006; Thoern et al., 2016; Levenson, 2019), a task that has far more publicly available labeled data.",
"For example, individuals who are stressed are likely to express emotions such as fear, sadness, or anger and unlikely to express emotions such as happiness.",
"Traditional multi-task learning would normally be helpful in this situation, but there are no currently available datasets labeled with both stress and emotion.",
"Even if there were, it would be bene-ficial to incorporate external information without re-labeling new datasets for each new combination of useful tasks.",
"Here, we present work exploring how to use semantically related taskshere, emotion detectionto create emotion-infused models capable of equally competent, but explainable, psychological stress detection as compared to a black-box model.",
"In particular, we explore the use of multi-task learning as well as emotion-infused language model fine-tuning, two existing frameworks which we examine through the lens of interpetability.",
"Our code for this work is available at github.",
"com/eturcan/emotion-infused .",
"Our contributions in this work are as follows:",
"(i) consideration of factors suggested by psychological theory in deep learning methods for predicting stress, with a focus on emotion;",
"(ii) an exploration of three different approaches to emotion-infused models, with experimental results showing comparable results to the state-of-the-art in all cases; and",
"(iii) a framework for interpreting our models to show the impact of emotion and other factors in our models.",
"Researchers who use natural language approaches for stress detection often rely on external resources such as diagnostic questionnaires (e.g., Guntuku",
"et al. (2018)) or techniques like pattern matching (patterns such as I am stressed, e.g., Winata et al. (2018); Lin et al. (2017)) to assign labels.",
"Much of the work that has been done on psychological stress detection focuses either on establishing baseline models with little advancement in computational modeling, or on using external information about the text (e.g., author, time of posting, number of replies), which is usually, but not always available and may differ in meaning or importance across platforms and domains.",
"There has also been a substantial amount of work on detecting related mental health concerns such as anxiety (e.g., Shen and Rudzicz (2017); Gruda and Hasan (2019); Jiang et al. (2020)), but these are distinct from the generalized experience of stress.",
"The most similar work to ours is Turcan and McKeown (2019), our prior work publishing a dataset of psychological stress collected from the social media website Reddit and labeled by crowd workers, and presenting baselines with several basic non-neural and BERT-based models on this data.",
"We use this dataset in our current work; however, we focus on exploring interpretable frameworks for this sensitive task and connecting the stress detection task concretely with emotion detection.",
"The models we propose in this work rely on two types of enhancements to the neural representation learned by models like BERT: multi-task learning and pre-training or fine-tuning.",
"Multi-task learning is an increasingly popular framework in which some parameters in a model are shared between or used to inform multiple different tasks.",
"Hard parameter sharing (Caruana, 1993), the variant we employ, uses some set of parameters as a shared base representation and then allows each task to have some private parameters on top and perform their own separate predictions.",
"Multi-task learning has been successfully applied to many domains across NLP (Sun et al., 2019; Kiperwasser and Ballesteros, 2018; Liu et al., 2019); we are especially interested in instances where it has improved semantic and emotion-related tasks, such as Xu et al. (2018), who perform emotion detection with a suite of secondary semantic tasks including personality classification.",
"Pre-training and fine-tuning are another type of transfer learning where multiple tasks are trained in sequence rather than at the same time.",
"Pre-trained language models are perhaps the most widely used example, where a large neural language model can Dataset Size Dreaddit 3,553 GoEmotions A,E,S 58K GoEmotions FSJ 4,136 Vent 1.6M Table 1: The datasets we use in this work and their relative sizes (in terms of total number of data points).",
"be fine-tuned for many different tasks (Devlin et al., 2019).",
"Additionally, continuing to pre-train the language model itself on language from the target domain has been shown to improve performance (Howard and Ruder, 2018; Chakrabarty et al., 2019; Gururangan et al., 2020) (also note Chronopoulou et al. (2019), who perform this task at the same time as the target task, in a form of multi-task learning).",
"This methodology has been successfully extended to other domains, in which a model is first fine-tuned on some large, broadly useful task and then further fine-tuned for a smaller target task (e.g., Felbo et al. (2017), who first fine-tuned on emoji detection and then fine-tuned on target semantic tasks including emotion and sentiment detection).",
"It should be noted that the psychological stress is much better studied in settings where researchers have access to some physiological signals (e.g., Zuo et al. (2012); Allen et al. (2014); Al-Shargie et al. (2016); Kumar et al. (2020); Jaiswal et al. (2020)).",
"This work is not as relevant to our task, since we have only text data available when detecting stress from online posts.",
"A comparison of all the datasets we use in this work can be seen in Table 1.",
"The primary dataset we use for this work is Dreaddit (Turcan and McKeown, 2019), a dataset of 3,553 segments of Reddit posts from various support communities where the authors believe posters are likely to express stress.",
"The stress detection problem as expressed in this dataset is a binary classification problem, with crowdsourced annotations aggregated as the majority vote from five annotators for each data point.",
"We note that this paper frames the stress classification problem in terms of the author and the timei.e., a post is labeled stressful only if the poster themselves is currently expressing stress.",
"Because this dataset is small for training a deep learning model, we also experiment with larger datasets to provide auxiliary information.",
"We select the GoEmotions dataset (Demszky et al., 2020), which consists of 58,009 Reddit comments labeled by crowd workers with one or more of 27 emotions (or Neutral), for its larger size and genre similarity to Dreaddit.",
"In this paper, we refer to the dataset in this form as GoEmotions all or GoEmotions A .",
"The authors also published two relabelings of this dataset, achieved by agglomerative clustering: one where labels are clustered together into the Ekman 6 basic emotions (anger, disgust, fear, joy, sadness, surprise, neutral) (Ek-man, 1992) (GoEmotions Ekman/E ), and one into simple polarity (positive, negative, ambiguous, neutral) (GoEmotions sentiment/S ).",
"We run our experiments with each version of this dataset.",
"We also explore the use of another social media website, Vent.",
"Vent is a platform more similar to Twitter or Tumblr than Reddit, where users post vents of any length and tag them as they like, and other users react to them or post comments.",
"The benefit of Vent for this purpose is that posters self-identify some emotion they are feeling from a large list of pre-made emotions.",
"The data we use is collected by Malko et al. (2021) 2 .",
"We select Vent data that has been labeled with fear or sadness, which we hypothesize to be related to stress, as well as joy, for a contrast.",
"We note that this dataset is strictly single-class, whereas GoEmotions may have more than one emotion label per data point.",
"In all, there are 1.6M vents in our dataset, much larger than Dreaddit or GoEmotions; we randomly sample this data in a stratified manner to create a training, development, and test set with an 80/10/10 ratio.",
"To examine the effects of domain similarity, we also select a subset of GoEmotions with the corresponding emotion labels we subsample the existing all dataset to select only data points originally labeled with fear, joy, or sadness, for a final set of 4,136 data points (3,342 of which are the train set).",
"We call this subset GoEmotions FSJ , and we compare it against Vent to see whether genre similarity or data size is more important in this multitask setting.",
"We experiment with three types of emotion-infused models; that is, we present three different ways to incorporate emotion information into our stress detection models, divided into multi-task learning and fine-tuning.",
"2 Due to license and ethics policy restrictions, we currently do not make this data publicly available.",
"Our first multi-task models, which we refer to as Multi Alt , are simply two single-task models sharing the same base BERT representation layers.",
"The models are alternating in that we train them with two datasets with two different sets of labelsi.e., we train the stress task with the Dreaddit data and the emotion task with the GoEmotions or Vent data.",
"We refer to the variants with a subscript, i.e., Multi AltGoEmotions A (i.e., GoEmotions with all emo-tions), Multi AltGoEmotions E (i.e., the Ekman GoEmotions relabeling), Multi AltVent (i.e., the Vent data), etc.",
"The Multi Alt models can be seen in Figure 1a.",
"One loss step for these models consists of only one dataset and task, so they are trained with the negative log-likelihood (NLL) loss for single-label tasks (Dreaddit, Vent, GoEmotions FSJ ) and the binary cross-entropy (BCE) loss for multi-label tasks (GoEmotions A,E,S ).",
"We also experiment with a multi-task learning setup where we perform the two tasks at the same time on the same input data .",
"We call this architecture Multi.",
"However, because the Dreaddit data is labeled only with stress, we first separately train BERT models on the various versions of GoEmotions and use them to predict emotion labels for Dreaddit.",
"We then take these emotion labels to be silver data and train on them alongside stress.",
"The Multi model can be seen in Figure 1b.",
"Since stress detection is our main task in this work, we focus on this task where we have gold labels for stress, but note that it will be interesting in future work to experiment with other task settings, such as whether stress detection can improve emotion classification.",
"In these models, the losses of the stress task and the emotion task are summed together for each batch using a tunable weight parameter, i.e., L = L stress + (1 ) L emotion .",
"We experiment with models in which we first endow the BERT representation with knowledge of the emotion task by fine-tuning and then apply it to stress detection (as in Phang et al. (2018)).",
"We perform a sequential version of the Multi Alt models, in which we fine-tune a pre-trained BERT language model on another task, and then extract the language model parameters to initialize a BERT model that we continue to fine-tune",
"on Dreaddit.",
"We denote these models as, e.g., Fine-Tune GoEmotions A (cid:1) Dreaddit for a model that was first trained on GoEmotions all and then on Dreaddit (for space, we will abbreviate Fine-Tune as FT).",
"These fine-tuning models can be seen in Figure 1c.",
"These models are trained with the NLL and BCE losses as in the Multi Alt models.",
"We present a re-implementation of the same BERT-based fine-tuning model used in Turcan and McKeown (2019), where this model performed best on Dreaddit.",
"We report this as an average of 3 runs with distinct random seeds, and our results are, on average, lower than the single model reported, but with high variance.",
"Because of this, we assume that the previously reported performance is from the high end of this variance and use our average score as our baseline in this work.",
"This model is a pre-trained BERT language model (released as bert-base-uncased by Wolf et al. (2019); we use this same pre-trained language model as the basis for all our models) followed by a dropout layer and a dense classification layer.",
"We also report a recurrent neural network (RNN) model, which uses either a long short-term memory network (LSTM) (Hochreiter and Schmidhuber, 1997) or a gated recurrent unit (GRU) (Cho et al., 2014) in place of the transformer from BERT and is otherwise the same.",
"These models are trained with the NLL and BCE losses as with the Multi Alt models.",
"We train all of our models with minibatch gradient descent using the Adam optimizer (Kingma and Ba, 2015) with a batch size of 16, given GPU space constraints.",
"We perform gradient clipping to 1.0 to prevent exploding gradients.",
"When training any model, we perform early stopping based on the F1 score on the Dreaddit development set and select the model parameters from the epoch that achieved the best development score for our final evaluated model.",
"We tune hyperparameters for all our models using Bayesian Optimization from the Python library ax 3 .",
"All models train the initial learning rate of the Adam optimizer and the dropout probability before the final classification layer; the Multi models also tune the loss weight parameter , and we also note that the RNN model tunes additional parameters such as the type of RNN, hidden dimension, etc.",
"For all models, we tune parameters based on the F1 score on the Dreaddit development set; we train an ensemble of three models with three different, fixed random seeds and average their performance for a given parameter setting.",
"We report the mean and standard deviation of three models, with three 3 https://github.com/facebook/Ax Model Binary F1 Accuracy RNN 67.58 1.22 68.86 1.10 BERT 78.88 1.09 79.11 1.32 Multi AltGE A 79.02 0.35 79.72 0.69 Multi AltGE E 80.24 1.39 81.07 1.13 Multi AltGE S 79.46 1.05 79.86 0.50 Multi AltGE FSJ 79.17 0.61 78.69 1.86 Multi AltVent 80.34 1.39 79.67 2.03 Multi Dr S 78.97 0.24 78.55 0.07 Multi Dr FSJ 78.90 0.59 78.55 0.07 Table 2: Results of our multitask models.",
"different random seeds, trained with the best hyperparameters.",
"More details about hyperparameter tuning can be found in the appendix.",
"We report the results of our multi-task models in Table 2 4 .",
"In general, our Multi Alt models perform similarly, and outperform the Multi models; we assume this is due to the introduction of noise in labeling the silver emotion data.",
"Of these models, Multi AltVent performs best.",
"With regards to GoEmotions, the 28-way classification of GoEmotions A naturally leads to lower numerical performance than the tasks with smaller numbers of classes, and we expect that GoEmotions S may group too many distinctly labeled emotions together under the same emotion labels; it seems GoEmotions E is the happy medium for this model.",
"We also note that the Multi AltVent and Multi AltGoEmotions E models perform equally well, which indicates that the genre mismatch is not an issue for this problem, or that Vent has a similar enough genre to Reddit that it does not affect the results.",
"Somewhat surprisingly, Multi AltGoEmotions FSJ does not do as well as 4 We did compute statistical significance by calculating the majority vote of each of the models' 3 runs and using the approximate randomization test, but no model is significantly different from BERT.",
"Multi AltVent ; however, the GoEmotions data is much smaller than Vent, especially when subsampled to select specific emotions.",
"We further report the results of our fine-tuning models in Table 3.",
"Because we expect that genre similarity should play a larger role when the secondary task can offer no direct training signal during the primary task fine-tuning, we evaluate on GoEmotions here and not Vent.",
"Here, we observe that our best model, Fine-Tune GoEmotions FSJ (cid:1) Dreaddit , scores at least one standard deviation above BERT.",
"We see higher increases in performance for the simpler classification problems in GoEmotions S and GoEmotions FSJ and worsened performance for GoEmotions A , suggesting that in the sequential paradigm, more complex tasks are not able to interact appropriately with the main task and instead interfere.",
"We also report the performance of the fine-tuning BERT models we trained on GoEmotions in order to label Dreaddit with emotion in Table 4; these results track well with the fine-tuning results reported by Demszky et al. (2020).",
"Because these models are intermediates used for labeling, we report the F1 scores of the single model we actually used for labeling, although we tuned their parameters with an average of 3 different instances as with all other models.",
"Many-way classification problems have much more opportunity for error and noise in an already-noisy process of labeling unlabeled data, so we use only the two best-performing GoEmotions models, which are those trained on the fewest-label datasets, GoEmotions S and GoEmotions FSJ , for our Multi models.",
"Overall, the inclusion of emotion information results in modest improvements, even though not statistically significant, as compared to BERT.",
"However, our true goal in this work is to analyze the explainability of all of these models, to which we turn next.",
"We perform three different analyses to probe our trained models and discover what information they learn to use.",
"For our Multi Alt models, we investigate the usefulness of the emotion prediction layers in explaining stress classifications, and for all models, we use Local Interpretable Model-agnostic Explanations (LIME) (Ribeiro et al., 2016) to show that our emotion-infused models rely on meaningfully different types of words than BERT in order to make their predictions.",
"We perform an analysis of our Multi Alt models to see what information they learn about emotion 5",
"We take the development sets of each of the datasets (Dreaddit and GoEmotions) and predict their labels under the other task (i.e., emotion for Dreaddit and vice-versa).",
"We report the correlation of these predicted labels with the gold labels in Table 5 6 .",
"In this case, the GoEmotions FSJ variant is a single-label three-way classification problem, so we report the correlation ratio (Fisher, 1938).",
"The other GoEmotions variants are multi-label, so we report the coefficient of determination R 2 (Cohen et al., 2015).",
"We further present breakdowns of the correlations per emotion category for the polarity and 5 We did perform an equivalent analysis on the Multi models, which shows similar trends, but as Multi Alt shows better performance, we omit it for space.",
"6 We also note the possibility that different combinations of emotions are relevant to stress; however, not enough of our data is labeled with multiple emotion labels (4% of Dreaddit's silver labels from GoEmotions S , 9% of GoEmotions E ) to test this hypothesis in this work.",
"FSJ subsets of GoEmotions in Table 6 and include the All and Ekman sets as well as the Vent data in the appendix.",
"We observe that our multi-task models generally learn a moderate correlation between the stress labels and the emotion labels; they learn that negative emotions like fear and sadness are linked to stress and neutral or positive emotions are linked to non-stress, which makes intuitive sense.",
"These emotion predictions can help explain the stress clas-sifier's predictions; imagine, for example, showing a patient or clinician that the patient's social media shows a strong pattern of fear and anger as a more detailed explanation for places a stress classifier detects stress.",
"From a machine learning perspective, this correlation also suggests the potential for using emotion data as distantly-labeled stress data to supplement the small extant stress datasets.",
"We also investigate the types of information each model is using to make its decisions.",
"In this section, we use the Linguistic Inquiry and Word Count (LIWC) (Pennebaker et al., 2015), a hand-crafted lexicon which collects words belonging to psychologically meaningful categories like positive emotion and cognitive processes, to categorize the information our different models use to predict stress.",
"We first analyze the unigrams our various models use to perform stress classification using LIME.",
"LIME accepts an input from our development set, perturbs it in the bag-of-unigrams space, and runs one of our classifiers on each perturbation to calculate the importance of various unigrams; through LIWC BERT Multi AltGE E Multi AltVent Multi Dr FSJFTGEFSJ (cid:1) Dr Affective Processes 19% 22% 19% 16% 22% Positive Emotion 8% 10% 9% 9% 12% Anger 31% 40% 30% 25% 31% Cognitive Processes 16% 17% 17% 17% 17% Certainty 8% 13% 12% 16% 11% Perceptual Processes 17% 15% 14% 14% 15% Biological Processes 15% 19% 17% 16% 17% Achievement 17% 19% 19% 13% 17% Table 7: A comparison of how often several of our models rely on words from several LIWC categories to make their decisions, according to LIME.",
"this process, we acquire the 10 unigrams with the highest magnitude output by LIME for each development example and consider them explanations.",
"We thus have 2,760 individual unigram explanations for the entire development set to analyze.",
"We then use the word lists from LIWC 2015's 72 psychological categories to see what types of words each classifier tends to use to make decisions of stress vs. non-stress.",
"An abbreviated list of results showing our best models from each category is shown in Table 7 7 .",
"We observe small but consistent effects suggesting that, in comparison to the basic BERT model, our emotion-enhanced models broadly learn to use the following information: Affective information .",
"Most emotion-infused models except for Multi learn to use affective information, which includes both positive and negative emotion words, more often.",
"We see the largest increase in anger, one of the emotions we had identified as relevant to stress, for Multi AltGoEmotions E , which makes intuitive sense because anger is one of the Ekman six basic emotions and thus, is explicitly predicted by this model.",
"Cognitive processes .",
"All models show some increase in using words related to cognitive processes as compared to BERT; however, its subcategory Certainty, which includes words about absoluteness such as never , obvious , and clearly , shows larger changes.",
"For example, Multi Dreaddit FSJ uses Certainty twice as often as BERT.",
"These cognitive words seem to target the mental aspects of stress.",
"Rumination and a focus on absoluteness are known signs of anxiety disorders, an extreme form of chronic stress (Nolen-Hoeksema et al., 2008; Miranda and Mennin, 2007).",
"Additional differences .",
"We observe other, 7 More detail on the full table is available in the appendix.",
"smaller patterns among LIWC usage for these models.",
"For example, the Multi Alt models use the most achievement-oriented words (although most models show modest increases), suggesting that this information, which includes words about success and failure, is relevant to emotion and to stress.",
"This makes sense, since failing to achieve (e.g., failing a class) can be a major stressor.",
"We also see larger proportions of biological process words used by all emotion-infused models.",
"We suggest this is because Dreaddit includes posts taken from Reddit communities about anxiety and PTSD, where posters are likely to describe their physical and mental symptoms while seeking help.",
"We then investigate the data itself for highly significant words using the measure of relative salience proposed by Mohammad (2012), RelativeSalience ( w | T 1 , T 2 ) = f 1 N 1 f 2 N 2 .",
"That is, it measures the importance of a token w in two different corpora T 1 , T 2 by subtracting their two relative frequencies (where f 1 , f 2 are the counts of token w in each corpus and N 1 , N 2 are the total tokens in each corpus).",
"We compute this measure for all words in the Dreaddit training data, taking our two corpora to be the subsets labeled stress and not-stress.",
"We take the top 200 unigrams for each label (stress as opposed to non-stress and vice-versa) and provide some examples in Table 8 with the full list of words available in the appendix.",
"We examine the words and divide them into related groups in order to understand what types of information should theoretically be most important to classifying the data.",
"For example, we see that different sets of function words are actually among the most important for both classes, with words like conjunctions typically appearing more indicative of stress Category Example Words Stress Function Words and, but, how, like, no, not, or, where, why Negative Sentiment awful, bad, cry, fear, hate, stress, stupid Helplessness alone, can't, nothing, nowhere, trying Non-Stress Function Words a, for, if, some, the, was, who, will, would Positive Sentiment amazing, best, good, great, hope, nice Support email, helped, support, thank, together, we Table 8: Some examples of words identified by relative salience on the Dreaddit training data as indicative of stress or non-stress.",
"(which echoes Turcan and McKeown (2019)'s finding that stressful data is typically longer with more clauses), while non-stress includes words expressing future-thinking like if , will , and would .",
"We also naturally find negative words for stress and positive words for non-stress, as well as a dichotomy of isolation and helplessness for stress vs. support and community for non-stress which is supported by psychological literature (Grant et al., 2009).",
"We then look at the intersection between relative salience and LIME explanations, counting how many LIME explanations are highly salient words for stress or non-stress; abbreviated results are shown in Table 9 and the full table is available in the appendix.",
"We see that our emotion-infused models learn to rely more often on words identified as indicative of non-stress, the minority class, instead of stress, the majority class.",
"We note that the presented models do sometimes make some new errors when incorporating emotional information, and that while these methods successfully incorporate such information with no feature crafting, some further innovation may be needed in order to use this information optimally.",
"For example, we reproduce an example from our development set, with profanity censored: And everyone was passive aggressive.",
"The manager tried to peg down my salary multiple times like a f**king haggler at a market.",
"Anyway, I decided to go get some antidepressants and the bottle fell out of my pocket, a coworker noticed and reported it to my boss.",
"Who smiled and asked if there was anything I'd like to tell her.",
"The passive aggressive s**t really got to me, and then I realized that I was being illegally paid.",
"The annotators for Dreaddit label this post not stress, presumably because there is not enough context for how the poster feels about this story presently, and the poster conveys more anger than anything else.",
"The LIME explanations for the BERT model, which labels this correctly, include some profanity, but largely focus on function words.",
"However, all four of our Multi AltGoEmotions models misclassify this example as stressed and rely on words like aggressive (from passive aggressive ) and the profanity to do so.",
"Meanwhile, the emotion classifiers of our Multi AltGoEmotions models are misled by words like smiled and label this example joy or positive .",
"This is a difficult example; without noticing that the event happened in the past, it is easy to assume the poster is presently stressed.",
"We believe examples like this require some grounding for example, an understanding of what passive aggressive means and some representation of the time-line involved, that language models simply cannot express in the traditional classification setup.",
"We also reproduce an anonymized example where our emotion-infused models improve upon BERT: She comes crying to me and formulates a plan to break up.",
"She talks to <name> about their issues and her will to leave him wilts.",
"She stays with him.",
"Rinse and repeat, except it gets worse over time.",
"How can I break the cycle, or help her break the cycle?",
"BERT misclassifies this example, where the author is stressed about a friend's situation, as non-stressful, relying on words like break and help , while our Multi AltGoEmotions models successfully use the word crying to predict stress.",
"We notice that crying or worse is the highest-ranked explanation for most of our emotion-infused models.",
"These results are promising for the development of models that focus on information that humans consider intuitive.",
"In this work, we present a suite of emotion-enhanced models that incorporate emotional information in various ways to enhance the task of binary stress prediction.",
"All three types of our models achieve comparable performance to a state-of-the-art fine-tuning BERT baseline, and, more importantly, we show that they result in more explainable models.",
"We also introduce a new framework for model interpretation using LIME and show that our emotion-enhanced multi-task models offer a new dimension of interpetability by using the predictions of auxiliary tasks to explain the primary task.",
"In our future work, we hope to expand these analyses to tasks in other domains and devise model architectures that can make more direct use of multitask learning to make and explain their predictions.",
"Our intended use of stress detection is to help those in distress.",
"We envision systems such as therapeutic chatbots or assistants that can understand users' emotions and identify those in need so that a person can intervene.",
"We would urge any user of stress detection technology to carefully control who may use the system.",
"Currently, the presented models may fail in two ways: they may either misclassify stress, or they may use the wrong information to make their predictions.",
"Obviously, there is some potential harm to a person who is truly in need if a system based on this work fails to detect them, and it is possible that a person who is not truly in need may be irritated or offended if someone reaches out to them because of a mistake.",
"In terms of explanations, we note that previous work has shown that focusing on incorrect rationales can unfairly target some groups of people (Zhong et al., 2019), although in this work we see that function words truly differ across the stressed and non-stressed populations and we do not observe any language that we know to be representative of minority groups in our explanations.",
"We emphasize our intention that emotional systems such as this be used responsibly, with a human in the loopfor example, a guidance counselor who can look at the predicted labels and offered explanations for their students' stress levels and decide whether or not they seem sensible.",
"We note that because most of our data was collected from Reddit, a website with a known overall demographic skew (towards young, white, American men 8 ), our conclusions about what stress looks like and how to detect it cannot necessarily be applied to broader groups of people.",
"We also note that we have no way to determine the demographic information of the specific posters in any of our datasets and whether they differ from the overall Reddit statistics.",
"We hope that we, and other researchers, can find ways to consider the specific ways in which minority groups express stress as well.",
"We thank our reviewers, as well as the members of our Natural Language Processing research group at Columbia University, for their insightful and constructive comments."
] | [
"abstain",
"objective",
"objective",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"result",
"other",
"other",
"other",
"other",
"method",
"objective",
"objective",
"other",
"method",
"abstain",
"other",
"method",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"other",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"result",
"objective",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other"
] |
[
"Evidence retrieval is a critical stage of question answering (QA), necessary not only to improve performance, but also to explain the decisions of the corresponding QA method.",
"We introduce a simple, fast, and unsupervised iterative evidence retrieval method, which relies on three ideas:",
"(a) an unsupervised alignment approach to soft-align questions and answers with justification sentences using only GloVe embeddings,",
"(b) an iterative process that reformulates queries focusing on terms that are not covered by existing justifications, which",
"(c) a stopping criterion that terminates retrieval when the terms in the given question and candidate answers are covered by the retrieved justifications.",
"Despite its simplicity, our approach outperforms all the previous methods (includ-ing supervised methods) on the evidence selection task on two datasets: MultiRC and QASC.",
"When these evidence sentences are fed into a RoBERTa answer classification component, we achieve state-of-the-art QA performance on these two datasets.",
"Explainability in machine learning (ML) remains a critical unsolved challenge that slows the adoption of ML in real-world applications (Biran and Cotton, 2017; Gilpin et al., 2018; Alvarez-Melis and Jaakkola, 2017; Arras et al., 2017).",
"Question answering (QA) is one of the challenging natural language processing (NLP) tasks that benefits from explainability.",
"In particular, multihop QA requires the aggregation of multiple evidence facts in order to answer complex natural language questions (Yang et al., 2018).",
"Several multi-hop QA datasets have been proposed recently (Yang et al., 2018; Khashabi et al., 2018a; Welbl et al., 2018; Dua et al., 2019; Chen and Durrett, 2019; Khot et al., 2019a; Sun et al., 2019b; Jansen and Ustalov, 2019; Rajpurkar et al., 2018).",
"While several neural methods have achieved state-of-the-art results on these datasets (Devlin et al., 2019; Liu et al., 2019; Yang et al., 2019), we argue that many of these directions lack a human-understandable explanation of their inference process, which is necessary to transition these approaches into real-world applications.",
"This is especially critical for multi-hop, multiple choice QA (MCQA) where:",
"(a) the answer text may not come from an actual knowledge base passage, and",
"(b) reasoning is required to link the candidate answers to the given question (Yadav et al., 2019b).",
"Figure 1 shows one such multi-hop example from a MCQA dataset.",
"In this paper we introduce a simple a lignment-based i terative r etriever ( AIR ) 1 , which retrieves high-quality evidence sentences from unstructured knowledge bases.",
"We demonstrate that these evidence sentences are useful not only to explain the required reasoning steps that answer a question, but they also considerably improve the performance of the QA system itself.",
"Unlike several previous works that depend on supervised methods for the retrieval of justification sentences (deployed mostly in settings that rely on small sets of candidate texts, e.g., HotPotQA, MultiRC), AIR is completely unsupervised and scales easily from QA tasks that use small sets of candidate evidence texts to ones that rely on large knowledge bases (e.g., QASC (Khot et al., 2019a)).",
"AIR retrieves justification sentences through a simple iterative process.",
"In each iteration, AIR uses an alignment model to find justification sentences that are closest in embedding space to the current query (Kim et al., 2017; Yadav et al., 2018), which is initialized with the question and candidate answer text.",
"After each iteration, AIR adjusts its query to focus on the missing information (Khot et al., 2019b) in the current set of justifications.",
"AIR also conditionally expands the query using the justifications retrieved in the previous steps.",
"In particular, our key contributions are: (1) We develop a simple, fast, and unsupervised iterative evidence retrieval method, which achieves state-of-the-art results on justification selection on two multi-hop QA datasets: MultiRC (Khashabi et al., 2018a) and QASC (Khot et al., 2019a).",
"Notably, our simple unsupervised approach that relies solely on GloVe embeddings (Pennington et al., 2014) outperforms three transformer-based supervised state-of-the-art methods: BERT (Devlin et al., 2019), XLnet (Yang et al., 2019) and RoBERTa (Liu et al., 2019) on the justification selection task.",
"Further, when the retrieved justifications are fed into a QA component based on RoBERTa (Liu et al., 2019), we obtain the best QA performance on the development sets of both MultiRC and QASC.",
"2 (2) AIR can be trivially extended to capture parallel evidence chains by running multiple instances of AIR in parallel starting from different initial evidence sentences.",
"We show that aggregating multiple parallel evidences further improves the QA performance over the vanilla AIR by 3.7% EM0 on the MultiRC and 5.2% accuracy on QASC datasets (both absolute percentages on development sets).",
"Thus, with 5 parallel evidences from AIR we obtain 36.3% EM0 on MultiRC and 81.0% accuracy on QASC hidden test sets (on their respective leaderboards).",
"To our knowledge from published works, these results are the new state-of-the-art QA results on these two datasets.",
"These scores are also accompanied by new state-of-the-art performance on evidence retrieval on both the datasets, which emphasizes the interpretability of AIR .",
"(3) We demonstrate that AIR 's iterative process that focuses on missing information is more robust to semantic drift.",
"We show that even the supervised RoBERTa-based retriever trained to retrieve evidences iteratively, suffers substantial drops in performance with retrieval from consecutive hops.",
"Our work falls under the revitalized direction that focuses on the interpretability of QA systems, where the machine's inference process is explained to the end user in natural language evidence text (Qi et al., 2019; Yang et al., 2018; Wang et al., 2019b; Yadav et al., 2019b; Bauer et al., 2018).",
"Several 2 In settings where external labeled resources are not used.",
"Question: Exposure to oxygen and water can cause iron to (A) decrease strength (B) melt (C) uncontrollable burning (D) thermal expansion (E) turn orange on the surface (F) vibrate (G) extremes of temperature (H) levitate Gold justification sentences: 1. when a metal rusts , that metal becomes orange on the surface 2. Iron rusts in the presence of oxygen and water.",
"Parallel evidence chain 1: 1. Dissolved oxygen in water usually causes the oxidation of iron.",
"2. When iron combines with oxygen it turns orange.",
"Parallel evidence chain 2: 1. By preventing the exposure of the metal surface to oxygen, oxidation is prevented.",
"2. When iron oxidizes, it rusts.",
"datasets in support of interpretable QA have been proposed recently.",
"For example, datasets such as HotPotQA, MultiRC, QASC, Worldtree Corpus, etc., (Yang et al., 2018; Khashabi et al., 2018a; Khot et al., 2019a; Jansen and Ustalov, 2019) provide annotated evidence sentences enabling the automated evaluation of interpretability via evidence text selection.",
"QA approaches that focus on interpretability can be broadly classified into three main categories: supervised , which require annotated justifications at training time, latent , which extract justification sentences through latent variable methods driven by answer quality, and, lastly, unsupervised ones, which use unsupervised algorithms for evidence extraction.",
"In the first class of supervised approaches, a supervised classifier is normally trained to identify correct justification sentences driven by a query (Nie et al., 2019; Tu et al., 2019; Banerjee, 2019).",
"Many systems tend to utilize a multi-task learning setting to learn both answer extraction and justification selection with the same network (Min et al., 2018; Gravina et al., 2018).",
"Although these approaches have achieved impressive performance, they rely on annotated justification sentences, which may not be always available.",
"Few approaches have used distant supervision methods (Lin et al., 2018; Wang et al., 2019b) to create noisy training data for evidence retrieval but these usually underperform due to noisy labels.",
"In the latent approaches for selecting justifications, reinforcement learning (Geva and Berant, 2018; Choi et al., 2017) and PageRank (Surdeanu et al., 2008) have been widely used to select justification sentences without explicit training data.",
"While these directions do not require annotated justifications, they tend to need large amounts of question/correct answer pairs to facilitate the iden-tification of latent justifications.",
"In unsupervised approaches, many QA systems have relied on structured knowledge base (KB) QA.",
"For example, several previous works have used ConceptNet (Speer et al., 2017) to keep the QA process interpretable (Khashabi et al., 2018b; Sydorova et al., 2019).",
"However, the construction of such structured knowledge bases is expensive, and may need frequent updates.",
"Instead, in this work we focus on justification selection from textual (or unstructured) KBs, which are inexpensive to build and can be applied in several domains.",
"In the same category of unsupervised approaches, conventional information retrieval (IR) methods such as BM25 (Chen et al., 2017) have also been widely used to retrieve independent individual sentences.",
"As shown by (Khot et al., 2019a; Qi et al., 2019), and our table 2, these techniques do not work well for complex multi-hop questions, which require knowledge aggregation from multiple related justifications.",
"Some unsupervised methods extract groups of justification sentences (Chen et al., 2019; Yadav et al., 2019b) but these methods are exponentially expensive in the retrieval step.",
"Contrary to all of these, AIR proposes a simpler and more efficient method for chaining justification sentences.",
"Recently, many supervised iterative justification retrieval approaches for QA have been proposed (Qi et al., 2019; Feldman and El-Yaniv, 2019; Banerjee, 2019; Das et al., 2018).",
"While these were shown to achieve good evidence selection performance for complex questions when compared to earlier approaches that relied on just the original query (Chen et al., 2017; Yang et al., 2018), they all require supervision.",
"As opposed to all these iterative-retrieval methods and previously discussed directions, our proposed approach AIR is completely unsupervised, i.e., it does not require annotated justifications.",
"Further, unlike many of the supervised iterative approaches (Feldman and El-Yaniv, 2019; Sun et al., 2019a) that perform query reformulation in a continuous representation space, AIR employs a simpler and more interpretable query reformulation strategy that relies on explicit terms from the query and the previously retrieved justification.",
"Lastly, none of the previous iterative retrieval approaches address the problem of semantic drift, whereas AIR accounts for drift by controlling the query reformulation as explained in section 3.1.",
"As shown in fig.",
"2, the proposed QA approach consists of two components:",
"(a) an unsupervised, iterative component that retrieves chains of justification sentences given a query; and",
"(b) an answer classification component that classifies a candidate answer as correct or not, given the original question and the previously retrieved justifications.",
"We detail these components in the next two sub-sections.",
"AIR iteratively builds justification chains given a query.",
"AIR starts by initializing the query with the concatenated question and candidate answer text 3 .",
"Then, AIR iteratively repeats the following two steps:",
"(a) It retrieves the most salient justification sentence given the current query using an alignment-IR approach(Yadav et al., 2019a).",
"The candidate justification sentences come from dataset-specific KBs.",
"For example, in MultiRC, we use as candidates all the sentences from the paragraph associated with the given question.",
"In QASC, which has a large KB 4 of 17.4 million sentences), similar to Khot et al. (2019a) candidates are retrieved using the Heuristic+IR method which returns 80 candidate sentences for each candidate answer from the provided QASC KB.",
"(b) it adjusts the query to focus on the missing information, i.e., the keywords that are not covered by the current evidence chain.",
"AIR also dynamically adds new terms to the query from the previously retrieved justifications to nudge multi-hop retrieval.",
"These two iterative steps repeat until a parameter-free termination condition is reached.",
"Alignment: To compute the similarity score between a given query and a sentence from KB, AIR 3 Note that this work can be trivially adapted to reading comprehension tasks.",
"In such tasks (e.g., SQuAD (Rajpurkar et al., 2018)), the initial query would contain just the question text.",
"4 In large KB-based QA, AIR first uses an off-the-shelf Lucene BM25(Robertson et al., 2009) to retrieve a pool of candidate justification sentences from which the evidence chains are constructed.",
"uses a vanilla unsupervised alignment method of Yadav et al. (2019a) which uses only GloVe embeddings (Pennington et al., 2014).",
"5 The alignment method computes the cosine similarity between the word embeddings of each token in the query and each token in the given KB sentence, resulting in a matrix of cosine similarity scores.",
"For each query token, the algorithm select the most similar token in the evidence text using max-pooling.",
"At the end, the element-wise dot product between this max-pooled vector of cosine-similarity scores and the vector containing the IDF values of the query tokens is calculated to produce the overall alignment score s for the given query Q and the supporting paragraph P j : 5 Alignment based on BERT embeddings marginally outperformed the one based on GloVe embeddings, but BERT embeddings were much more expensive to generate.",
"s ( Q, P j ) = | Q | (cid:88) i =1 idf ( q i ) align ( q i , P j ) (1) align ( q i , P j ) = | P j | max k =1 cosSim ( q i , p k ) (2) where q i and p k are the i th and k th terms of the query ( Q ) and evidence sentence ( P j ) respectively.",
"Remainder terms ( Q r ): Query reformulation in AIR is driven by the remainder terms, which are the set of query terms not yet covered in the justification set of i sentences (retrieved from the first i iterations of the retrieval process): Q r ( i ) = t ( Q ) (cid:91) s k S i t ( s k ) (3) where t ( Q ) represents the unique set of query terms, t ( s k ) represents the unique terms of the k th justification, and S i represents the set of i justification sentences.",
"Note that we use soft matching of alignment for the inclusion operation: we consider a query term to be included in the set of terms in the justifications if its cosine similarity with a justification term is larger than a similarity threshold M (we use M =0.95 for all our experiments see section 5.2), thus ensuring that the two terms are similar in the embedding space.",
"Coverage ( Q c ): measures the coverage of the query keywords by the retrieved chain of justifications S : Q c ( i ) = | (cid:83) s k S i t ( Q ) t ( s k ) | | t ( Q ) | (4) where | t ( Q ) | denotes the size of unique query terms.",
"Query reformulation: In each iteration j , AIR reformulates the query Q ( j ) to include only the terms not yet covered by the current justification chain, Q r ( j 1) .",
"See, for example, the second hop in fig.",
"2. To mitigate ambiguous queries, the query is expanded with the terms from all the previously retrieved justification sentences only if the number of uncovered terms is less than T (we used T = 2 for MultiRC and T = 4 for QASC (see section 5.2).",
"See, for example, the third hop in fig.",
"2, in which the query is expanded with the terms of all the previously retrieved justification sentences.",
"Formally: Q ( j ) = (cid:110) Q r ( j 1) , if | Q r ( j 1) | > TQ r ( j 1) + ( t ( s j 1 ) t ( Q )) , otherwise (5) where j is the current iteration index.",
"Stopping criteria: AIR stops its iterative evidence retrieval process when either of the following conditions is true:",
"(a) no new query terms are discovered in the last justification retrieved, i.e., Q r ( i 1) == Q r ( i ) , or",
"(b) all query terms are covered by justifications, i.e., Q c = 1 .",
"AIR 's justification chains can be fed into any supervised answer classification method.",
"For all experiments in this paper, we used RoBERTa (Liu et al., 2019), a state-of-the-art transformer-based method.",
"In particular, for MultiRC, we concatenate the query (composed from question and candidate answer text) with the evidence text, with the [SEP] token between the two texts.",
"A sigmoid is used over the [CLS] representation to train a binary classification task 6 (correct answer or not).",
"For QASC, we fine-tune RoBERTa as a multiple-choice QA 7 (MCQA) (Wolf et al., 2019) classifier with 8 choices using a softmax layer(similar to (Khot et al., 2019a)) instead of the sigmoid.",
"The input text consists of eight queries (from eight candidate answers) and their corresponding eight evidence texts.",
"Unlike the case of MultiRC, it is possible to train a MCQA classifier for QASC because every question has only 1 correct answer.",
"We had also tried the binary classification approach for QASC but it resulted in nearly 5% lower performance for majority of the experiments in table 2. In QA tasks that rely on large KBs there may exist multiple chains of evidence that support a correct answer.",
"This is particularly relevant in QASC, whose KB contains 17.2M facts.",
"8 Figure 1 shows an example of this situation.",
"To utilize this type of redundancy in answer classification, we extend AIR to extract parallel evidence chains .",
"That is, to extract N parallel chains, we run AIR N times, ensuring that the first justification sentences in each chain are different (in practice, we start a new chain for each justification in the top N retrieved sentences in the first hop).",
"After retrieving N parallel evidence chains, we take the union of all the individual justification sentences to create the supporting evidence text for that candidate answer.",
"Multi-sentence reading comprehension (Mul-tiRC) , which is a reading comprehension dataset provided in the form of multiple-choice QA task (Khashabi et al., 2018a).",
"Every question is based on a paragraph, which contains the gold justification sentences for each question.",
"We use every sentence of the paragraph as candidate justifications for a given question.",
"Here we use the original 6 We used RoBERTa base with maximum sequence length of 512, batch size = 8, learning rate of 1e-5, and 5 number of epochs.",
"RoBERTa-base always returned consistent performance on MultiRC experiments; many runs from RoBERTalarge failed to train (as explained by (Wolf et al., 2019)), and generated near random performance.",
"7 We used similar hyperparameters as in the MultiRC experiments, but instead used RoBERTa-large, with maximum sequence length of 128.",
"8 The dataset creators make a similar observation (Khot et al., 2019a).",
"F1 m F1 a EM0 Evidence selection",
"MultiRC dataset, 9 which includes the gold annotations for evidence text, unlike the version available on SuperGlue (Wang et al., 2019a).",
"Question Answering using Sentence Composition (QASC) , a large KB-based multiple-choice QA dataset (Khot et al., 2019a).",
"Each question is provided with 8 answer candidates, out of which 4 candidates are hard adversarial choices.",
"Every question is annotated with a fixed set of two justification sentences for answering the question.",
"The 9 https://cogcomp.seas.upenn.edu/ multirc/ justification sentences are to be retrieved from a KB having 17.2 million facts.",
"As shown in the example of fig.",
"1 and also highlighted by (Khot et al., 2019a), multiple evidence text are possible for a given question in QASC where the annotated gold justification sentences explain it more precisely.",
"We report overall question answering performance as well as evidence selection performance in table 1 for MultiRC, and table 2 for QASC 10 .",
"For MultiRC, we considered three baselines.",
"The first baseline is where we feed all passage sentences to the RoBERTa classifier (row 11 in table 1).",
"The second baseline uses the alignment method of (Kim et al., 2017) to retrieve the top k sentences ( k = 2 , 5 ).",
"Since AIR uses the same alignment approach for retrieving justifications in each iteration, the comparison to this second baseline highlights the gains from our iterative process with query reformulation.",
"The third baseline uses a supervised RoBERTa classifier trained to select the gold justifications for every query (rows 1621 in table 1).",
"Lastly, we also developed a RoBERTa-based iterative retriever by concatenating the query with the retrieved justification in the previous step.",
"We retrain the RoBERTa iterative retriever in every step, using the new query in each step.",
"We considered two baselines for QASC.",
"The first baseline does not include any justifications (row 7 in table 2).",
"The second baseline uses the top k sentences retrieved by the alignment method (row (812 in table 2).",
"For evidence selection, we report precision, recall, and F1 scores on MultiRC (similar to (Wang et al., 2019b; Yadav et al., 2019b)).",
"For QASC, we report Recall@10, similar to the dataset authors (Khot et al., 2019a).",
"We draw several observation from the evidence selection results: (1) AIR vs. unsupervised methods AIR outperforms all the unsupervised baselines and previous works in both MultiRC (row 9-15 vs. row 23 in table 1) and QASC(rows 0-6 vs. row 18).",
"Thus, highlighting strengths of AIR over the standard IR baselines.",
"AIR achieves 5.4% better F1 score compared to the best parametric alignment baseline (row 12 in table 1), which highlights the importance of the iterative approach over the vanilla alignment in AIR .",
"Similarly, rows (4 and 5) of table 2 also highlight this importance in QASC.",
"(2) AIR vs. supervised methods Surprisingly, AIR also outperforms the supervised RoBERTa-retriver in every setting(rows 1621 in table 1).",
"Note that the performance of this supervised retrieval method drops considerably when trained on passages from a specific domain (row 19 in table 1), which highlights the domain sensitivity of supervised retrieval methods.",
"In contrast, AIR is unsupervised and generalize better as it is not tuned to any specific domain.",
"AIR also achieves better performance than supervised RoBERTa-iterative-retriever (row 21 in table 1) which simply concatenates the retrieved justification to the query after every iteration and further trains to retrieve the next justification.",
"The RoBERTa-iterative-retriever achieves similar performance as that of the simple RoBERTa-retriever (row 16 vs. 21) which suggests that supervised iterative retrievers marginally exploit the information from query expansion.",
"On the other hand, controlled query reformulation of AIR leads to 5.4% improvement as explained in the previous point.",
"All in all, AIR achieves state-of-the-art results for evidence retrieval on both MultiRC (row 23 in table 1) and QASC (row 18 of table 2).",
"(3) Soft-matching of AIR the alignment-based AIR is 10.7% F1 better than AIR that relies on lexical matching (rather than the soft matching) on MultiRC (row 22 vs. 23), which emphasizes the advantage of alignment methods over conventional lexical match approaches.",
"For overall QA performance, we report the standard performance measures ( F 1 a , F 1 m , and EM 0 ) in MultiRC (Khashabi et al., 2018a), and accuracy for QASC (Khot et al., 2019a).",
"The results in tables 1 and 2 highlight: (1) State-of-the-art performance: Development set On both MultiRC and QASC, RoBERTa fine-tuned using the AIR retrieved evidence chains (row 23 in table 1 and row 14 in table 2) outperforms all the previous approaches and the baseline methods.",
"This indicates that the evidence texts retrieved by AIR not only provide better explanations, but also contribute considerably in achieving the best QA performance.",
"Test set On the official hidden test set, RoBERTa fine-tuned on 5 parallel evidences from AIR achieves new state-of-the-art QA results, outperforming previous state-of-the-art methods by 7.8% accuracy on QASC (row 21 vs. 20), and 10.2% EM0 on MultiRC (row 35 vs. 34).",
"leads to substantial improvements, particularly in QASC (single justification (row 4 and 8) vs. evidence chains (row 5 and row 9) in table table 2).",
"Overall, the chain of evidence text retrieved by AIR enables knowledge aggregation resulting in the improvement of QA performances.",
"(3) Gains from parallel evidences Further, knowledge aggregation from parallel evidence chains lead to another 3.7% EM0 improvement on MultiRC (row 27), and 5.6% on QASC over the single AIR evidence chain (row 18).",
"To our knowledge, these are new state-of-the-art results in both the datasets.",
"To further understand the retrieval process of AIR we implemented several analyses.",
"To understand the importance of modeling missing information in query reformulation, we analyzed a simple variant of AIR in which, rather the focusing on missing information, we simply concatenate the complete justification sentence to the query after each hop.",
"To expose semantic drift, we retrieve a specified number of justification sentences.",
"As seen in table 3, now the AIR (lexical)uncontrolled and AIR uncontrolled perform worse than both BM25 and the alignment method.",
"This highlights that the focus on missing information during query reformulation is an important deterrent of semantic drift.",
"We repeated the same experiment with the supervised RoBERTa retriever (trained iteratively for 2 steps) and the original parameter-free AIR , which decides its number of hops using the stopping conditions.",
"Again, we observe similar performance drops in both: the RoBERTa retriever drops from 62.3% to 57.6% and AIR drops to 55.4%.",
"We evaluate the sensitivity of AIR to the 2 hyper parameters: the threshold ( Q r ) for query expansion, and the cosine similarity threshold M in computation of alignment.",
"As shown in table 5, evidence selection performance of AIR drops with the lower values of M but the drops are small, suggesting that AIR is robust to different M values.",
"Similarly, there is a drop in performance for MultiRC with the increase in the Q r threshold used for query expansion, hinting to the occurrence of semantic drift for higher values of Q r (table 4).",
"This is because the candidate justifications are coming from a relatively small numbers of paragraphs in MultiRC; thus even shorter queries ( = 2 words) can retrieve relevant justifications.",
"On the other hand, the number of candidate justifications in QASC is much higher, which requires longer queries for disambiguation ( > = 4 words).",
"To verify if the MultiRC training data is sufficient to train a supervised justification retrieval method, we trained justification selection classifiers based on BERT, XLNet, and RoBERTa on increasing proportions of the MultiRC training data (table 6).",
"This analysis indicates that all three classifiers approach their best performance at around 5% of the training data.",
"This indicates that, while these supervised methods converge quickly, they are unlikely to outperform AIR , an unsupervised method, even if more training data were available.",
"We introduced a simple, unsupervised approach for evidence retrieval for question answering.",
"Our approach combines three ideas:",
"(a) an unsupervised alignment approach to soft-align questions and answers with justification sentences using GloVe embeddings,",
"(b) an iterative process that reformulates queries focusing on terms that are not covered by existing justifications, and",
"(c) a simple stopping condition that concludes the iterative process when all terms in the given question and candidate answers are covered by the retrieved justifications.",
"Overall, despite its simplicity, unsupervised nature, and its sole reliance on GloVe embeddings, our approach outperforms all previous methods (includ-ing supervised ones) on the evidence selection task on two datasets: MultiRC and QASC.",
"When these evidence sentences are fed into a RoBERTa answer classification component, we achieve the best QA performance on these two datasets.",
"Further, we show that considerable improvements can be obtained by aggregating knowledge from parallel evidence chains retrieved by our method.",
"In addition of improving QA, we hypothesize that these simple unsupervised components of AIR will benefit future work on supervised neural iterative retrieval approaches by improving their query reformulation algorithms and termination criteria.",
"We thank Tushar Khot (AI2) and Daniel Khashabhi (AI2) for helping us with the dataset and evaluation resources.",
"This work was supported by the Defense Advanced Research Projects Agency (DARPA) under the World Modelers program, grant number W911NF1810014.",
"Mihai Surdeanu declares a fi-nancial interest in lum.ai.",
"This interest has been properly disclosed to the University of Arizona Institutional Review Committee and is managed in accordance with its conflict of interest policies."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"result",
"objective",
"abstain",
"objective",
"result",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"abstain",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"objective",
"other",
"other",
"other",
"other"
] |
[
"In this paper, we propose SkipBERT to accelerate BERT inference by skipping the computation of shallow layers.",
"To achieve this, our approach encodes small text chunks into independent representations, which are then materialized to approximate the shallow representation of BERT.",
"Since the use of such approximation is inexpensive compared with transformer calculations, we leverage it to replace the shallow layers of BERT to skip their runtime overhead.",
"With off-the-shelf early exit mechanisms, we also skip redundant computation from the highest few layers to further improve inference efficiency.",
"Results on GLUE show that our approach can reduce latency by 65% without sacrificing performance.",
"By using only two-layer transformer calculations, we can still maintain 95% accuracy of BERT.",
"1 1 Introduction Pre-trained language models, such as ELMo (Pe-ters et al., 2018), GPT (Radford et al., 2018), BERT (Devlin et al., 2019), XLNet (Yang et al., 2019), and RoBERTa (Liu et al., 2019), have yielded significant improvements to NLP tasks.",
"Despite the gain in accuracy, these models have significant demands in computation and inference time, limiting their use in resource-constrained or latency-sensitive applications.",
"Therefore, it is desirable to reduce the computational overhead of these models while retaining acceptable accuracy.",
"Knowledge distillation (KD, Hinton et al. 2015) facilitates the transfer of knowledge embedded in pre-trained language models into small student models (Sanh et al., 2019; Sun et al., 2019; Jiao et al., 2020), which usually reduces the redundant parameters of BERT in a uniform manner.",
"Early exit mechanisms (Xin et al., 2020; Zhou et al., Corresponding author 1 Source code is available at https://github.com/ LorrinWWW/SkipBERT .",
"2020; Liu et al., 2020) then use an adaptive number of transformer layers during inference, aiming to reduce redundant calculations from the highest few layers.",
"However, since they build the sequence representation from scratch for each forward pass, they require a certain number of lower layers to capture basic syntactic and semantic information, making it difficult to further reduce inference costs.",
"This naturally raises a question: Can we reduce the computation at the lower transformer layers?",
"In this paper, we propose SkipBERT, a novel scheme that skips the computation at the shallow transformer layers of BERT.",
"As revealed by Jawa-har et al. (2019); Rogers et al. (2020), the lower layers of BERT mainly focus on short-distance context, while the higher layers are able to capture long-range dependencies.",
"Therefore, it is reasonable to assume that, at lower layers, even if distant tokens are masked, the representation for each token will not vary dramatically.",
"Here, by sweeping over the input text, we get short chunks (n-grams) and use their representations to approximate the hidden states of BERT's lower layers.",
"We then precompute and store representations of text chunks in a precomputed lookup table (PLOT).",
"Thus, during inference we only need to access PLOT to get the representations of short chunks, which is inexpen-7287 sive compared with transformer computation.",
"Fig. 1 compares the inference procedure between vanilla BERT and our proposed SkipBERT.",
"In BERT, the input text needs to be processed by a large number of transformer layers in turn, leading to high latency in inference.",
"In comparison, SkipBERT precomputes the hidden states of lower transformer layers, which are accessed via table lookups, rather than computed in inference-time.",
"Moreover, SkipBERT exhibits effective compatibility with early exit mechanisms: Since the initial sequence representation in our work is partially contextualized (thanks to PLOT) rather than individual word embeddings, SkipBERT allows exiting from a relatively earlier layer than typical BERT variants, while maintaining good accuracy.",
"We empirically verify this in Section 4.5.",
"Therefore, our approach can skip the calculations of lower and higher layers for the same input, thereby further improving the inference speed.",
"We present SkipBERT to avoid computation at BERT's lower layers during inference.",
"Instead, we construct PLOT and use it to approximate their hidden states.",
"We incorporate early exit mechanisms as an enhancement to skip redundant computation, leading to further network acceleration.",
"We conduct extensive experiments on GLUE.",
"Compared with BERT, SkipBERT is capable of accelerating inference by up to 65% without compromising GLUE score, or accelerating by 82% while retaining 95% accuracy.",
"Knowledge Distillation (Hinton et al., 2015) provides an effective way to transfer the knowledge embedded in a teacher network to a student network.",
"The student network is usually more lightweight than the teacher network and thus more computationally efficient.",
"The student network can be structurally identical to the teacher but contains fewer layers or hidden units, e.g. BERT-PKD (Sun et al., 2019), DistilBERT (Sanh et al., 2019), TinyBERT (Jiao et al., 2020), MiniLM (Wang et al., 2020), and BERT-EMD (Li et al., 2020).",
"Meanwhile, some work adopts specifically designed networks, e.g. SqueezeBERT (Iandola et al., 2020) and MobileBERT (Sun et al., 2020), to reduce the computation per layer.",
"Input-adaptive inference allows models to choose different computational paths according to the input during inference.",
"In this way, simpler input samples usually require less calculation to make predictions.",
"Recently, DeeBERT (Xin et al., 2020) adapts confidence-based BranchyNet (Teer-apittayanon et al., 2016), which uses entropy as an early-exit criterion.",
"FastBERT (Liu et al., 2020) uses self-distillation to train the branch classifiers.",
"RightTool (Schwartz et al., 2020) leverages the same early-exit criterion as in the Shallow-Deep Network (Kaya et al., 2019), i.e., softmax scores of predictions.",
"PABEE (Zhou et al., 2020) stops inference when the intermediate predictions of the internal classifiers remain unchanged consecutively.",
"Precomputation has also been studied in information retrieval, where documents are assumed to be stored at local database so their representation can be precomputed (Gao et al., 2020).",
"However, this method may not be suitable for other tasks where the input text is unknown before inference.",
"During training, SkipBERT consists of two groups of transformer layers, local transformer layers for encoding short-distance context, and global transformer layers for leveraging the full context.",
"Once pre-training finishes, our approach will replace local transformer layers with PLOT, which stores the hidden states of local transformer layers; we also enhance global transformer layers with early exit mechanisms to further accelerate inference speed.",
"Fig. 2 presents the overview of our system.",
"As shown in Fig. 3, we sweep over the input text to get three-token chunks (tri-grams, X i = [ x i 1 , x i , x i +1 ] ), which will also be taken as the index entries of PLOT later.",
"We let cross-border tokens be padding tokens, i.e. x 1 = x n = x [PAD] .",
"We show in Section 4.8.2 that using longer text chunks (e.g. 5-grams) will improve accuracy since they can bring more context information than trigrams.",
"However, the number of 5-grams is too large to be enumerated and stored, and thus it is hard to use them in actual applications.",
"Fig. 3 illustrates our procedure to leverage local context.",
"By mapping each word to a d -dimensional embedding, we denote the chunk embeddings by X i R 3 d .",
"For local transformer layers, we inject position embeddings P R 3 d , and define the initial chunk representations as follows: H (0) i = LN ( X i + P ) (1) where LN ( ) is layer normalization.",
"We use L loc transformer layers to leverage the local context of each text chunk.",
"For layer 0 m < L loc , we have: H ( m +1) i = Transformer ( m ) ( H ( m ) i ) .",
"Note that since each chunk is short enough, it would be possible to precompute these representations before inference.",
"More importantly, these representations are good approximations of those produced by the respective shallow layers of BERT.",
"Thus, given a tri-gram, the embedding produced from ( L loc 1) -th layer is taken as its respective data entry stored in PLOT.",
"We also precompute bi-grams and uni-grams following the same procedure of tri-grams.",
"When a lookup of tri-gram fails (out-of-vocabulary, OOV), the system will resort to bi-grams or uni-grams as an alternative.",
"Specifically, we randomly replace % of trigrams by bi-grams during training.",
"On the one hand, such random replacement allows the model to encounter bi-grams during training, so as to better handle OOVs in inference; on the other hand, it can also be considered a variant of Dropout (Srivas-tava et al., 2014), which drops tokens rather than Transformer Layer Transformer Layer Transformer Layer Transformer Layer Transformer Layer Transformer Layer Transformer Layer Transformer Layer Transformer Layer ... ...",
"hidden units, thereby improving the robustness of our approach.",
"Section 4.7 shows = 10% works well with different OOV rates.",
"We also show in Section 4.8.2 that even bi-grams have a clear advantage over the baseline, which can be seen as an extreme case when all tri-gram lookups fail.",
"Now we get a list of contextualized chunk embeddings.",
"Here we aggregate them to form a feature sequence corresponding to the original input text.",
"Each token occurs at three consecutive tri-grams, as shown in Fig. 3.",
"By calculating a weighted sum of embeddings that correspond to the same token, we can leverage its context of five tokens: h i = (cid:88) j = 1 , 0 , 1 H ( L loc ) i + j, 1 j Gate ( H ( L loc ) i + j, 1 j ) , (3) where Gate ( ) is a sigmoid-based gating mechanism such that Gate ( x ) = ( v G x + b G ) , where v G is a learnable vector and b G is a learnable scalar.",
"Note that these embeddings do not have a sense of the order of the sequence.",
"So we need to inject position and segment embeddings before sending them to the subsequent transformer layers: h (0) i = (cid:40) LN ( h i + p i + s A ) , if x i A, LN ( h i + p i + s B ) , if x i B, (4) where p i and s A/B are position and segment embeddings respectively as in Devlin et al. (2019).",
"We denote by h (0) = [ h (0) i ] 0 i<n the aggregated sequence representation.",
"We use L glo transformer layers to further contextualize it.",
"For layer 0 m < L glo , we have: h ( m +1) = Transformer ( m + L loc ) ( h ( m ) ) .",
"Since we focus on text classification and regression tasks, we use the representation corresponding to token x [CLS] to compute logit scores:",
"z = Classifier ( h ( L glo ) [CLS] ) (6) where Classifier ( ) is a two-layer feedforward neural network.",
"When an early exit mechanism is activated, we compute logit scores for each global transformer layer as follows: z ( m ) = Classifier ( m ) ( h ( m ) [CLS] ) (7) We adopt a simple confidence-based early exit mechanism, i.e., once the prediction's maximum logit score is higher than a pre-defined threshold, the result will be returned without passing through the next transformer layers.",
"We mainly adopt the two-stage learning procedure proposed in TinyBERT (Jiao et al., 2020).",
"It includes general distillation (GD) conducted on large-scale unlabeled corpora, and task-specific distillation (TD) to learn from fine-tuned BERT.",
"General Distillation We perform distillation on the hidden states and attention scores.",
"We compute loss on the chunk aggregation layer and global transformer layer.",
"The local transformer layers are trained with supervision signals from upper layers.",
"The loss is defined as follows: LGD = L att + L hid (8) and we define L att and L hid as the mean-squared error (MSE) of attention scores and hidden states between the teacher (T) and student (S): L att = L glo (cid:88) m =1 MSE ( a ( m ) S , a ( g att ( m )) T ) (9) L hid = L glo (cid:88) m =0 MSE ( h ( m ) SW hid , h ( g hid ( m )) T ) (10) where a ( m +1) and h ( m +1) represent the attention score matrix and hidden states of the m -th transformer layer; h (0) is the outputs of chunk aggregation layer; W hid is a learnable matrix to transform the hidden states of the student into the same space as the teacher; g att ( ) and g hid ( ) define the layer mapping function between the student and teacher.",
"For attention-based distillation, we use the uniform mapping strategy to leverage the heterogeneous attention patterns across different layers.",
"For hidden states-based distillation, we use top mapping strategy since the initial sequence representation (outputs of chunk aggregation) are already partially contextualized.",
"The detailed illustration of layer mapping is presented at Appendix E. Task-Specific Distillation We start from the generally distilled SkipBERT, and use fine-tuned BERT as the teacher for task-specific distillation.",
"The loss is defined as follows: LTD = ( L att + L hid ) + L pred (11) where is a factor to control the loss weight; L pred is the prediction loss that will be defined below.",
"For classification, the loss function L pred is calculated via cross entropy: L pred = CE ( z S /, z T / ) (12) where z S are the logits predicted by the student; z T are the logits predicted by the teacher; is the temperature to smooth the probability distribution to facilitate distillation training.",
"For regression, the loss is instead calculated by MSE, i.e., L pred = MSE ( z S , z T ) .",
"Early Exit Specifically, when SkipBERT enables early exit mechanisms, we need to train internal classifiers to predict based on the hidden states of their respective layers.",
"Overall, we train the model to minimize a weighted average loss as follows: L pred = (cid:80) L glo m =1 m L ( m ) pred (cid:80) L glo m =1 m (13) where L ( m ) pred is the loss between the predictions of the teacher and the m -th intermediate classifier of the student.",
"Considering that the local transformer layers mostly capture generalized knowledge, which do not vary significantly across different tasks, we do not update the local transformer layers during fine-tuning.",
"Therefore, once general distillation is finished, we can compute their hidden states to construct PLOT.",
"To ensure fast response, PLOT should ideally be loaded in the server's RAM during inference.",
"However, such a table could be too large to fully fit into 7290 Model MACs Latency GLUE CoLA SST-2 MRPC STS-B QQP MNLI QNLI RTE WNLI Score (8.5k) (67k) (3.5k) (5.7k) (364k) (393k) (108k) (2.5k) (0.6k) BERT 12 10.9G 100% 78.3 52.1 93.5 88.9/84.8 87.1/85.8 71.2/89.2 84.6/83.4 90.5 66.4 65.1 BERT 6 -PKD 5.4G -49% -43.5 92.0 85.0 -/81.6 70.7/-81.5/81.0 89.0 65.5 DistilBERT 6 5.4G -49% -49.0 92.5 86.9 -/81.3 70.1/-82.6/81.3 88.9 58.4 BERT-of-Theseus 6 5.4G -49% 77.1 47.8 92.2 87.6/83.2 85.6/84.1 71.6/89.3 82.4/82.1 89.6 66.2 65.1 TinyBERT 6 v2 5.4G -49% -46.1 92.6 88.0/--/83.9 71.3/-84.4/83.1 89.8 69.7 BERT-EMD 6 5.4G -49% 78.7 47.5 93.3 89.8/86.4 87.6/86.8 72.0/89.3 84.7 /83.5 90.7 71.7 65.1 SkipBERT 6+6 5.4G -49% 78.9 52.7 93.3 88.9/85.0 87.0/85.8 71.9/89.2 84.3/ 84.2 90.6 70.6 65.1 w/ exit 3.7G -65% 78.3 50.8 91.9 88.6/84.8 86.7/85.4 71.8/89.0 83.8/83.8 90.2 69.8 65.1 BERT mini4 0.4G -66% 65.8 0.0 85.9 81.1/71.8 75.4/73.3 66.4/86.2 74.8/74.3 84.1 57.9 62.3 BERT small4 1.6G -66% 71.2 27.8 89.7 83.4/76.2 78.8/77.0 68.1/87.0 77.6/77.0 86.4 61.8 62.3 DistilBERT 4 3.6G -66% -32.8 91.4 82.4/--/76.1 68.5/-78.9/78.0 85.2 54.1 BERT 4 -PKD 3.6G -66% -24.8 89.4 82.6/--/79.8 70.2/-79.9/79.3 85.1 62.3 TinyBERT 4 v2 0.6G -66% -25.3 90.0 85.4/--/80.4 68.9/-81.2 80.3 86.2 63.9 BERT-EMD 4 0.6G -66% 73.6 25.6 91.0 87.6/82.4 83.6/82.3 69.3/87.9 82.1 /80.6 87.2 66.2 65.1 SkipBERT 6+4 0.6G -66% 75.6 39.8 91.3 87.7 /82.7 84.1/82.8 70.4/88.3 82.0/ 81.6 88.5 66.1 65.1 w/ exit 0.5G -74% 75.1 36.0 91.2 87.6/ 83.5 84.1/82.8 70.4/88.3 81.9/81.3 88.5 64.8 65.1 SkipBERT 6+2 1.8G -82% 74.0 36.0 90.9 85.9/80.5 82.0/80.6 70.2/88.6 80.2/79.9 86.6 63.6 65.1 Table 1: Results on the GLUE benchmark.",
"RAM.",
"Hence we propose to adopt memory-mapped files ( mmap ), which allows for file access via the virtual memory mechanism.",
"By using mmap , the frequently used chunk embeddings reside in RAM for fast lookup, while the rare chunks can be stored on SSD, and will be loaded to RAM only when the system demand-pages them.",
"Appendix D presents a simple implementation of PLOT.",
"We use the corpora of Wikipedia 2 and BooksCor-pus 3 (Zhu et al., 2015) to perform general distillation.",
"For task-specific distillation, we mainly evaluate SkipBERT and compare it with other baselines on the GLUE benchmark (Wang et al., 2018).",
"Appendix F provides some details.",
"We denote by SkipBERT 6+6 the scheme with 6 local transformer layers (converted to PLOT) and 6 global transformer layers, each having a hidden size of 768 and intermediate size of 3072.",
"For direct comparisons with 4-layer baselines, we instantiate SkipBERT 6+4 with 4 thin global transformer layers (hidden size of 312 and intermediate size of 1200).",
"We also instantiate SkipBERT 6+2 with only 2 https://dumps.wikimedia.org/ 3 https://yknzhu.wixsite.com/mbweb 2 global transformer layers to further reduce the latency.",
"Training For general distillation, we randomly initialize SkipBERT, and pre-train it with Lamb optimizer (You et al., 2019).",
"We use linear learning rate decay with the peak learning rate of 1e-3 and a batch size of 2048 for around 80k steps, including 4000 warm-up steps.",
"For task-specific distillation, under the supervision of a fine-tuned BERT, we use AdamW (Kingma and Ba, 2015) to train 20 epochs with a learning rate of 2e-5.",
"We slightly tune the hyper-parameters across different tasks, and the details can be found in Appendix B. We do not use any data augmentation strategies.",
"Inference Following prior work, we evaluate latency by performing inference on a per-instance basis, i.e. the batch size for inference is set to 1.",
"This is a common latency-sensitive scenario when processing individual requests from different users.",
"We note that latency on modern GPUs is not sensitive to the hidden size, but mainly depends on the number of sequential operations, i.e. the number of network layers.",
"We report the median performance over 5 runs with different random seeds.",
"We submitted our model predictions to the official GLUE evaluation server 4 to obtain results on the",
"test set, as summarized in Table 1.",
"We present the results of TinyBERT v2 reported by Li et al. (2020) as the v2 model employs more training corpora than v1, and they eliminate the data augmentation strategy for a fair comparison.",
"By comparing with baselines (we compare with 6-layer models and 4-layer models separately), we can see that SkipBERT outperforms all compared approaches in terms of GLUE score.",
"Compared with TinyBERT, as we mainly follow their distillation process, our approach shows clear advantages on all tasks.",
"BERT-EMD employs a more sophisticated task-specific distillation process based on general-distilled TinyBERT, and further improves the overall performance.",
"Nevertheless, SkipBERT still maintains an advantage in the overall score.",
"Specifically, SkipBERT 6+4 has a similar inference speed to the 4-layer baselines, but achieves higher accuracy on most tasks.",
"We consider that a 4-layer model is somewhat too shallow to capture complex dependencies from scratch.",
"In contrast, SkipBERT effectively compensates by adding more layers in effect, even though their computation is skipped by PLOT search during inference.",
"These layers are useful to capture the basic linguistic information, thereby reducing the burden on subsequent layers.",
"Moreover, our method can further reduce the latency with only a slight loss in accuracy.",
"SkipBERT 6+2 which performs only two-layer transformer calculations maintains accuracy comparable to 4-layer models.",
"For the 6-layer track, TinyBERT and BERT-EMD both achieve performance comparable to the teacher model.",
"However, SkipBERT 6+6 also shows competitive results, especially for the challenging CoLA task (predicting linguistic acceptability judg-ments), on which previous methods do not work well.",
"The local transformer layers of SkipBERT can effectively capture the short-distance grammatical knowledge, e.g. subject-verb-object word order and verbal argument structure, etc., which is considered crucial to CoLA (Warstadt et al., 2019).",
"The early exit mechanism, tagged by w/ exit in Table 1, provides a flexible way to tune the speed-accuracy tradeoff.",
"With early exit enabled, both SkipBERT 6+6 and SkipBERT 6+4 achieve further improvements on inference speed with only a mi-nor decrease in accuracy.",
"More exploration will be done in Section 4.5.",
"We also investigate the effectiveness of SkipBERT on the reading comprehension task, SQuAD v1.1 (Rajpurkar et al., 2016a).",
"Following previous work, we treat this task as sequence labeling and predict the possibility of each token as the start or end of answer span.",
"Table 2 shows that SkipBERT outperforms all the baselines with large margins.",
"This experiment shows that our approach also works well for relatively complicated task forms.",
"Here we investigate the compatibility between early exit mechanisms and SkipBERT.",
"We draw the accuracy-latency curve by tuning the early exit threshold.",
"The goal is to enlarge the area under the curve so the model can maintain good accuracy when the average exit layer is small.",
"Fig. 4 compares the results of TinyBERT 4 v2 and SkipBERT 6+4 , both using the same early exit mech-7292 Operation MACs Latency d = 768 / 312 d = 768 / 312 Retrieve Text Chunks 155 s / 114 s Chunks to IDs -31.4 s Retrieve from mmap * -22.8 s / 18.4 s Send to GPU 101 s / 64.1 s Aggregate Text Chunks -48.4 s / 42.7 s Each Transformer Layer 906M / 146M 1.22 ms / 1.18 ms Each Classifier 591K / 98K 116 s / 112 s Table 3: Breakdown of computation and average latency of text with 128 tokens (including padding to-kens).",
"anism.",
"We observe that SkipBERT consistently outperforms TinyBERT on both MRPC and SST-2.",
"Specifically, the curve of SkipBERT is flatter than that of TinyBERT, which indicates that even if SkipBERT is forced to exit at a relatively shallow layer, it can still maintain a desirable accuracy.",
"Compared with baselines, our approach starts inference based on PLOT search results rather than from scratch, so even at a lower layer, the representation is well-learned for making predictions.",
"Table 3 presents the breakdown of computation and average latency of SkipBERT.",
"Detailed hardware information can be found at Appendix A. We can observe that the transformer layers account for the majority of inference time.",
"We note that there may be some variation in the latency of retrieving data from mmap , depending on the cache memory managed by the operating system.",
"Fig. 5 presents the latency distribution of retrieving chunks contained in a text sequence with OOV: 0% 11.2% 14.5% 24.7% PLOT size: 168G 59.2G 12.1G = 0% 88.0/91.4 87.0/90.6 86.3/90.1 85.8/89.6 = 5% 88.2/91.4 88.2/91.7 88.0/91.3 87.3/91.0 = 10% 88.2/91.6 88.0/91.4 88.0/91.4 88.0/91.2 = 20% 88.0/91.4 87.7/91.2 88.0/91.3 88.0/91.2 Table 4: Influence of space costs on the model accuracy.",
"128 tokens under different RAM sizes.",
"We perform experiments in Docker containers to limit the RAM size; more results can be found in Appendix G. The upper half of Fig. 5 shows that with enough RAM, the system can directly collect chunk embeddings from RAM, yielding latencies clustered around 20 s.",
"Meanwhile, with a smaller RAM as shown in the lower half of Fig. 5, most of the latency is still around 20 s but a small portion of items take several hundred s due to cache misses.",
"We also observe that the long tail of latency is distributed in several clusters, mainly due to I/O queuing.",
"However, even under heavy I/O load, retrieving data from mmap takes less time than the computation of a single transformer layer.",
"The previous sections prioritize accuracy and ef-ficiency by sacrificing space.",
"Reducing the space costs (by dropping less frequent chunks) allows users to use more economical hardware, but it will lead to OOV issues which may compromise accuracy.",
"Here we only count OOV for tri-grams, since OOVs for bi-grams rarely occur ( < 0.5%) and have little impact on the final performance.",
"We collect tri-grams on news corpora 5 and training sets of GLUE to construct PLOT.",
"Table 4 shows results by reducing the space costs.",
"= 0% means that the model does not see any bi-grams during training.",
"In this case, if the model encounters a tri-gram lookup failure and reverts to bi-grams, the performance will suffer to some extent.",
"When we randomly replace % of tri-grams with bi-grams during training, the model becomes more robust to OOVs, and can even slightly improve accuracy.",
"We find = 10% works well for all cases, and thus we use it as the default value.",
"tage even if the OOV rate is at a moderate level.",
"As we will see later at Section 4.8.2, if we only use bi-gram embeddings, i.e. the OOV rate is 100%, our approach is still better than the baseline that does not apply PLOT.",
"To understand why the backoff strategy, namely to replace tri-grams with bi-grams for OOVs, does not hurt accuracy, we investigate the similarity between them.",
"As shown in Fig. 6, most of them are similar, confirming the feasibility of our backoff strategy; but there is also a long tail where bi-grams cannot well compensate for the missing tri-grams.",
"Fig. 6 also shows some examples with different similarities.",
"Generally, auxiliary tokens that do not contain much meaning by themselves tend to rely more on context.",
"Meanwhile, tokens rich in semantics, e.g. noun phrases, do not vary much in embedding under different ranges of context.",
"We conduct an ablation study in this subsection.",
"We only pre-train SkipBERT on the Wikipedia corpus for 1 epoch for fast validation.",
"We also prepare a generally distilled small BERT 4 (the model architecture is identical to TinyBERT 4 ) with the same setup and corpus as a baseline.",
"We report the results on the development set.",
"Table 5 compares the results with different num-bers of local transformer layers.",
"BERT 4 is a base-Model r.f. CoLA MRPC MNLI BERT 4 -23.7 85.0/89.6 79.8/79.6 SkipBERT 6+4 w/ 1-gram 1 23.5 85.3/89.5 79.8/79.5 w/ 2-gram 3 29.1 86.3/90.1 80.8/81.0 w/ 3-gram ctr.",
"line that does not employ any skipping mechanism.",
"We can see that all settings that use additional local transformer layers have better performance than BERT 4 , indicating the effectiveness of our approach.",
"In general, the performance increases when we gradually enlarge the number of local transformer layers.",
"CoLA benefits most from the local transformer layers due to better modeling of short-distance context.",
"However, when it reaches a certain number of local transformer layers, the improvement becomes minimal.",
"We believe that since each token only has the context of five tokens, too many layers may increase the risk of overfitting, which harms the performance.",
"Thus we adopt 6 layers as our default setting.",
"We also construct variants that replace local transformer layers with a single-layer FFN or CNN, which are computationally lightweight and thus may not need precomputation.",
"However, their accuracy improvement against BERT 4 is very limited, which shows that even for short-distance context, using a relatively complex and deep network is beneficial to the final performance.",
"We investigate the effect of short-distance context leveraged in the local transformer layers of SkipBERT.",
"Table 6 presents the comparisons of using different ranges of short-context in local transformer layers.",
"1-grams are equivalent to conventional word embeddings, and the performance is similar to the baseline.",
"When using 2-grams, SkipBERT obtains notable improvements since each token can now access its direct neighbors in local transformer layers.",
"3-grams and 5-grams bring consistent improvements to all tasks.",
"Generally, the results are improved when we broaden the receptive field of local transformer layers, showing that 7294 more contexts are always beneficial.",
"However, due to its large number, it would be hard to enumerate n-grams with n > 3 .",
"It might require certain pruning or compression strategies, which we leave as future work.",
"In addition, we also study the effect of the weighted sum used in chunk aggregation, Eq.",
"(3).",
"We add comparison against a variant that only uses the embedding of the central token of each chunk, denoted ctr. only.",
"Table 6 shows that the weighted sum brings improvements over all tasks for the 3-gram setting.",
"However, it is not as effective for the 5-gram setting on MRPC and MNLI.",
"We believe using a weighted sum for five chunks may confuse important semantics and thus affect the accuracy; while for 3-gram setting, this problem is not as serious, and using a weighted sum for neighbor chunks can bring more context information to improve the accuracy.",
"4.8.3 Effect of Distillation Objective Model CoLA MRPC MNLI SkipBERT 6+4 32.9 86.3/90.4 80.9/81.1 w/o GD-att 29.9 81.9/87.2 81.3/81.6 w/o GD-hid 28.7 85.5/90.0 80.9/81.1 w/o TD-att 31.3 85.8/89.6 80.1/80.1 w/o TD-hid 30.5 85.8/89.9 80.4/80.1 Table 7: Effect of different distillation objectives.",
"We here show the effects of different distillation objectives.",
"We try to eliminate attention-based or hidden state-based distillation.",
"Results in Table 7 indicate that all distillation objectives are helpful both in the general distillation and task-specific distillation process.",
"In general distillation, both attention and hidden states-based distillation are critical to the final performance of relatively small datasets, e.g. CoLA and MRPC.",
"But for large-scale datasets, e.g. MNLI, removing attention based distillation even improves the performance, which may imply that the student model can benefit more from a fine-tuned teacher model as long as the downstream task has enough data.",
"In the task-specific distillation, the two distillation objectives are marginally helpful for CoLA and MRPC, while acting more importantly for MNLI.",
"The original TinyBERT uses a data augmentation strategy for all tasks during fine-tuning, which significantly enlarges the training set and makes the effect of task-specific distillation more significant.",
"In this paper, we proposed SkipBERT, a straightforward yet effective approach to skip the computation of BERT's shallow layers.",
"We used representations of short text chunks to approximate BERT's shallow representation, and stored them in PLOT for fast retrieval during inference.",
"Empirical results showed that SkipBERT could achieve performance comparable to BERT while significantly reducing inference time.",
"In the future, we would like to leverage discontinuous text chunks to further improve the accuracy and inference speed.",
"We will also try to reduce storage requirements with appropriate pruning and compression strategies.",
"This work was supported by the Key Research and Development Program of Zhejiang Province of China (No. 2021C01009), NSF of China Grant No. 62050099, and the Fundamental Research Funds for the Central Universities."
] | [
"objective",
"result",
"method",
"result",
"result",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"method",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"result",
"result",
"method",
"method",
"abstain",
"method",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"other",
"abstain",
"method",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"other",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"result",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"result",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"result",
"method",
"other"
] |
[
"The connection between the maximum spanning tree in a directed graph and the best dependency tree of a sentence has been exploited by the NLP community.",
"However, for many dependency parsing schemes, an important detail of this approach is that the spanning tree must have exactly one edge emanating from the root.",
"While work has been done to efficiently solve this problem for finding the one-best dependency tree, no research has attempted to extend this solution to finding the K -best dependency trees.",
"This is arguably a more important extension as a larger proportion of decoded trees will not be subject to the root constraint of dependency trees.",
"Indeed, we show that the rate of root constraint violations increases by an average of 13 times when decoding with K =50 as opposed to K =1 .",
"In this paper, we provide a simplification of the K -best spanning tree algorithm of Camerini et al. (1980).",
"Our simplification allows us to obtain a constant time speed-up over the original algorithm.",
"Furthermore, we present a novel extension of the algorithm for decoding the K -best dependency trees of a graph which are subject to a root constraint.",
"1 1 Introduction Non-projective, graph-based dependency parsers are widespread in the NLP literature.",
"(McDonald et al., 2005; Dozat and Manning, 2017; Qi et al., 2020).",
"However, despite the prevalence of K -best dependency parsing for other parsing formalisms often in the context of re-ranking (Collins and Koo, 2005; Sangati et al., 2009; Zhu et al., 2015; Do and Rehbein, 2020) and other areas of NLP (Shen et al., 2004; Huang and Chiang, 2005; Pauls and Klein, 2009; Zhang et al., 2009), we have only found three works that consider K -best non-projective 1 Our implementation is available at https://github.",
"dependency parsing (Hall, 2007; Hall et al., 2007; Agic, 2012).",
"All three papers utilize the K -best spanning tree algorithm of Camerini et al. (1980).",
"Despite the general utility of K -best methods in NLP, we suspect that the relative lack of interest in K -best non-projective dependency parsing is due to the implementation complexity and nuances of Camerini et al. (1980)'s algorithm.",
"2 We make a few changes to Camerini et al. (1980)'s algorithm, which result in both a simpler algorithm and simpler proof of correctness.",
"3 Firstly, both algorithms follow the key property that we can find the second-best tree of a graph by removing a single edge from the graph (The-orem 1); this property is used iteratively to enumerate the K -best trees in order.",
"Our approach to finding the second-best tree (see 3) is faster because of it performs half as many of the expensive cycle-contraction operations (see 2).",
"Overall, this change is responsible for our 1 .",
"39 x speed-up 2 In fact, an anonymous reviewer called it one of the most feared' algorithms in dependency parsing. 3 While our algorithm is by no means simple , an anonymous reviewer called it a big step in that direction. (see 4).",
"Secondly, their proof of correctness is based on reasoning about a complicated ordering on the edges in the K th tree (Camerini et al., 1980, Section 4); our proof side-steps the complicated ordering by directly reasoning over the ancestry relations of the K th tree.",
"Consequently, our proofs of correctness are considerably simpler and shorter.",
"Throughout the paper, we provide the statements of all lemmas and theorems in the main text, but defer all proofs to the appendix.",
"In addition to simplifying Camerini et al. (1980)'s algorithm, we offer a novel extension.",
"For many dependency parsing schemes such as the Universal Dependency (UD) scheme (Nivre et al., 2018), there is a restriction on dependency trees to only have one edge emanate from the root.",
"4 Finding the maximally weighted spanning tree that obeys this constraint was considered by Gabow and Tarjan (1984) who extended the O ( N 2 ) maximum spanning tree algorithm of Tarjan (1977); Camerini et al. (1979).",
"However, no algorithm exists for K best decoding of dependency trees subject to a root constraint.",
"As such, we provide the first K -best algorithm that returns dependency trees that obey the root constraint.",
"To motivate the practical necessity of our extension, consider Fig. 1. Fig. 1 shows the percentage of trees that violate the root constraint when doing one-best and 50 -best decoding for 63 languages from the UD treebank (Nivre et al., 2018) using the pre-trained model of Qi et al. (2020).",
"5 , 6 We find that decoding without the root constraint has a much more extreme effect when decoding the 50 -best than the one-best.",
"Specifically, we observe that on average, the number of violations of the root constraint increased by 13 times, with the worst increase being 44 times.",
"The results thus suggest that finding K -best trees that obey the root constraint from a non-projective dependency parser requires a specialist algorithm.",
"We provide a more detailed results table in App.",
"A, including root constraint violation rates for K =5 , K =10 , and K =20 .",
"Furthermore, we note that the K -best algorithm may also be used for marginalization of latent variables (Correia et al., 2020) and for constructing parsers with global scoring functions (Lee et al., 2016).",
"4 There are certain exceptions to this such as the Prague Treebank (Bejcek et al., 2013).",
"5 Zmigrod et al. (2020) conduct a similar experiment for only the one-best tree.",
"6 We note that Qi et al. (2020) do apply the root constraint for one-best decoding, albeit with a sub-optimal algorithm.",
"We consider the study of rooted directed weighted graphs , which we will abbreviate to simply graphs .",
"7 A graph is given by G = ( , N , E ) where N is a set of N + 1 nodes with a designated root node N and E is a set of directed weighted edges.",
"Each edge e = ( i (cid:65) j ) E has a weight w ( e ) R + .",
"We assume that self-loops are not allowed in the graph (i.e., ( i (cid:65) i ) (cid:54) E ).",
"Additionally, we assume our graph is not a multi-graph, therefore, there can exist at most one edge from node i to node j .",
"8 When it is clear from context, we abuse notation and use j G and e G for j N and e E respectively.",
"When discussing runtimes, we will assume a fully connected graph ( |E| = N 2 ).",
"9 An arborescence (henceforth called a tree ) of G is a subgraph d = ( , N , E (cid:48) ) such that E (cid:48) E and the following is true: 1. For all j N (cid:114) { } , |{ ( (cid:65) j ) E (cid:48) }| = 1 .",
"Other definitions of trees can also include that there is at least one edge emanating from the root.",
"However, this condition is immediately satisfied by the above two conditions.",
"A dependency tree 7 As we use the algorithm in Zmigrod et al. (2020) as our base algorithm, we borrow their notation wherever convenient.",
"8 We make this assumption for simplicity, the algorithms presented here will also work with multi-graphs.",
"This might be desirable for decoding labeled dependency trees.",
"However, we note that in most graph-based parsers such as Qi et al. (2020) and Ma and Hovy (2017), dependency labels are extracted after the unlabeled tree has been decoded.",
"9 We make this assumption as in the context of dependency parsing, we generate scores for each possible edge.",
"Furthermore, (Tarjan, 1977) prove that the runtime of finding the best tree for dense graphs is O ( N 2 ) .",
"This is O ( |E| log N ) in the non-dense case.",
"The set of all trees and dependency trees in a graph are given by A ( G ) and D ( G ) respectively.",
"The weight of a tree is given by the sum of its edge weights 10 w ( d ) = (cid:88) e d w ( e ) (1) This paper concerns finding the K highest-weighted (henceforce called K -best ) tree or dependency tree, these are denoted by G ( K ) and G [ K ] respectively.",
"Tarjan (1977); Camerini et al. (1979) provided the details for an O ( N 2 ) algorithm for decoding the one-best tree.",
"This algorithm was extended by Gabow and Tarjan (1984) to find the best dependency tree in O ( N 2 ) time.",
"We borrow the algorithm (and notation) of Zmigrod et al. (2020), who provide an exposition and proofs of these algorithms in the context of non-projective dependency parsing.",
"The pseudocode for finding G (1) and G [1] is given in Fig. 3. We briefly describe the key components of the algorithm.",
"11 The greedy graph of G is denoted by (cid:65) G = ( , N , E (cid:48) ) where E (cid:48) contains the highest weighted incoming edge to each non-root node.",
"Therefore, if (cid:65) G has no cycles, then (cid:65) G = G (1) .",
"A cycle C in (cid:65) G is called a critical cycle .",
"If we encounter a critical cycle in the algorithm, we contract the graph by the critical cycle.",
"A graph contraction , G /C , by a cycle C replaces the nodes in C by a mega-node c such that the nodes of G /C are N (cid:114) C { c } .",
"Furthermore, for each edge e = ( i (cid:65) j ) G : 1. If i (cid:54) C and j C , then e (cid:48) = ( i (cid:65) c ) G /C such that w ( e (cid:48) ) = w ( e ) + w (cid:16) (cid:65) C j (cid:17) where C j is the subgraph of C rooted at j .",
"2. If i C and j (cid:54) C , then e (cid:48) = ( c (cid:65) j ) G /C such that w ( e (cid:48) ) = w ( e ) .",
"3. If i (cid:54) C and j (cid:54) C , then e G /C .",
"4. If i C and j C , then there is no edge related to ( i (cid:65) j ) in G /C .",
"There also exists a bookkeeping function such 10 For inference, the weight of a trees often decomposes multiplicatively rather than additively over the edges.",
"One can take the exponent (or logarithm) of the original edge weights to make the weights distribute additively (or multiplicative).",
"11 For a more complete and detailed description as well as a proof of correctness, please refer to the original manuscripts.",
"that for all e (cid:48) G /C , ( e (cid:48) ) G .",
"This bookkeeping function returns the edge in the original graph that led to the creation of the edge in the contracted graph using one of the constructions above.",
"Finding G (1) is then the task of finding a contracted graph G (cid:48) such that (cid:65) G (cid:48) = G (cid:48) (1) .",
"Once this is done, we can stitch back the cycles we contracted.",
"If G (cid:48) = G /C , for any d A ( G /C ) , d (cid:35) C A ( G ) is the tree made with edges ( d ) ( applied to each edge d ) and (cid:65) C j where C j is the subgraph of the nodes in C rooted at node j and ( e ) = ( i (cid:65) j ) for e = ( i (cid:65) c ) d .",
"The contraction weighting scheme means that w ( d ) = w ( d (cid:35) C ) (Geor-giadis, 2003).",
"Therefore, G (1) = ( G (cid:48) (1) (cid:35) C ) (1) .",
"The strategy for finding G [1] is to find the contracted graph for G (1) and attempt to remove edges emanating from the root.",
"This was first proposed by Gabow and Tarjan (1984).",
"When we consider removing an edge emanating from the root, we are doing this in a possibly contracted graph, and so an edge ( (cid:65) j ) may exist multiple times in the graph.",
"We denote G \\\\ e to be the graph G with all edges with the same end-points as e removed.",
"Fig. 2 gives an example of a graph G , its best tree G (1) , and its best dependency tree G [1] .",
"The runtime complexity of finding G (1) or G [1] is O ( N 2 ) for dense graphs by using efficient priority queues and sorting algorithms (Tarjan, 1977; Gabow and Tarjan, 1984).",
"We assume this runtime 2 3 4 1",
"In the following two sections, we provide a simpli-fied reformulation of Camerini et al. (1980) to find the K -best trees.",
"The simplifications additionally provide a constant time speed-up over Camerini et al. (1980)'s algorithm.",
"We discuss the differences throughout our exposition.",
"The underlying concept behind finding the K best tree, is that G ( K ) is the second best tree G (cid:48) (2) of some subgraph G (cid:48) G .",
"In order to explore the space of subgraphs, we introduce the concept of edge inclusion and exclusion graphs.",
"Definition 1 (Edge inclusion and exclusion) .",
"For any graph G and edge e G , the edge-inclusion graph G + e G is the graph such that for any d A ( G + e ) , e d .",
"Similarly, the edge-exclusion graph G e G is the graph such that for any d A ( G e ) , e (cid:54) d .",
"When we discuss finding the K -best dependency trees in 5, we implicitly change the above definition to use D ( G + e ) and D ( G e ) instead of A ( G + e ) and A ( G e ) respectively.",
"In this section, we will specifically focus on finding G (2) , we extend this to finding the G ( k ) in 4.",
"Finding G (2) relies on the following fundamental theorem.",
"Theorem 1 states that we can find G (2) by identifying an edge e G (1) such that G (2) = ( G e ) (1) .",
"We next show an efficient method for identifying this edge, as well as the weight of G (2) without actually having to find G (2) .",
"Definition 2 (Blue and red edges) .",
"For any graph 1: def next ( G ) : 2: if (cid:65) G has a cycle C : (cid:46) Recursive case 3: d, (cid:104) w, e (cid:105) next (cid:0) G /C (cid:1) 4: d (cid:48) d (cid:35) C 5: e (cid:48) argmin e (cid:48)(cid:48) C d (cid:48) w G,d (cid:48) ( e (cid:48)(cid:48) ) 6: w (cid:48) w ( d (cid:48) ) w G,d (cid:48) ( e ) 7: return d (cid:48) , max( (cid:104) w, ( e ) (cid:105) , (cid:104) w (cid:48) , e (cid:48) (cid:105) ) 8: else (cid:46) Base case 9: e argmin e (cid:48) (cid:65) G w G ( e (cid:48) ) 10: w w (cid:16) (cid:65) G (cid:17) w G ( e ) 11: return (cid:65) G, (cid:104) w, e (cid:105) Figure 5: Algorithm for finding G (1) , the best edge e to delete to find G (2) , and w (cid:0) G (2) (cid:1) .",
"G , tree d A ( G ) , and edge e = ( i (cid:65) j ) d , the set of blue edges b ( G, e, d ) and red edges r ( G, e, d ) are defined by 12 b ( G, e, d ) def = { e (cid:48) =( i (cid:48) (cid:65) j ) | w (cid:0) e (cid:48) (cid:1) w ( e ) , d (cid:114) { e } { e (cid:48) } A ( G ) } (2) r ( G, e, d ) def = { e (cid:48) =( i (cid:48) (cid:65) j ) | e (cid:48) (cid:54) b ( G, e, d ) } (3) An example of blue and red edges are given in Fig. 4. Lemma 1. For any graph G , if G (1) = (cid:65) G , then for some e G (1) and e (cid:48) b ( G, e, G (1) ) G (2) = G (1) (cid:114) { e } { e (cid:48) } (8) Lemma 1 can be understood more clearly by following the worked example in Fig. 4. The moral of Lemma 1 is that in the base case where there are no critical cycles, we only need to examine the blue edges of the greedy graph to find the second best tree.",
"Furthermore, our second best tree will only differ from our best tree by exactly one blue edge.",
"Camerini et al. (1980) make use of the concepts of the blue and red edge sets, but rather than consider a base case as Lemma 1, they propose an ordering in which to visit the edges of the graph.",
"This results in several properties about the possible orderings, 12 We can also define b ( G, e, d ) as ( i (cid:48) (cid:65) j ) b ( G, e, d ) i (cid:48) is an ancestor of j in d and r ( G, e, d ) as ( i (cid:48) (cid:65) j ) r ( G, e, d ) i (cid:48) is a descendant of j in d .",
"This equivalence exists as we can only swap an incoming edge to j in d without introducing a cycle if the new edge emanates from an ancestor of j .",
"The exposition using ancestors and descendants is more similar to the exposition originally presented by Camerini et al. (1980).",
"Definition 3 (Swap cost) .",
"For any graph G , tree d A ( G ) , and edge e d , the swap cost denotes the minimum change to a tree weight to replace e by a single edge in d .",
"It is given by w G,d ( e ) = min e (cid:48) b ( G,e,d ) (cid:0) w ( e ) w (cid:0) e (cid:48) (cid:1)(cid:1) (4) We will shorthand w G ( e ) to mean w G,G (1) ( e ) .",
"Corollary 1. For any graph G , if G (1) = (cid:65) G , then G (2) = ( G e ) (1) where e is given by e = argmin e (cid:48) G (1) w G (cid:0) e (cid:48) (cid:1) (5) Furthermore, w (cid:0) G (2) (cid:1) = w (cid:0) G (1) (cid:1) w G ( e ) .",
"Corollary 1 provides us a procedure for finding the best edge to remove to find G (2) as well as its weight in the base case of G having no critical cycles.",
"We next illustrate what must be done in the recursive case when a critical cycle exists.",
"Lemma 2. For any G with a critical cycle C , either G (2) = ( G /C ) (2) (cid:35) C (with w (cid:0) G (2) (cid:1) = w (cid:0) ( G /C ) (2) (cid:1) ) or G (2) = ( G e ) (1) (with w (cid:0) G (2) (cid:1) = w (cid:0) G (1) (cid:1) w G ( e ) ) for some e C G (1) .",
"Combining Corollary 1 and Lemma 2, we can directly modify opt to find the weight of G (2) and the edge we must remove to obtain it.",
"We detail this algorithm as next in Fig. 5. Theorem 2. For any graph G , executing next ( G ) returns G (1) and (cid:104) w, e (cid:105) such that G (2) = ( G e ) (1) and w (cid:0) G (2) (cid:1) = w .",
"Runtime analysis.",
"We know that without lines 5, 6, 9 and 10, next is identical to opt and so will run in O ( N 2 ) .",
"We call w at most N + 2 times during a full call of next : N times from lines 5 and 9 combined, once from Line 6, and once from Line 10. To find w , we first need to find the set of blue edges, which can be done in O ( N ) by computing the reachability graph.",
"Then, we need another O ( N ) to find the minimising value.",
"Therefore, next does O ( N 2 ) extra work than opt and so retains the runtime of O ( N 2 ) .",
"Camerini et al. (1980) require G (1) to be known ahead of time.",
"This results in having to run the original algorithm in O ( N 2 ) time and then having to do the same amount of work as next because they must still contract the graph.",
"Therefore, next has a constant-time speed-up over its counterpart in Camerini et al. (1979).",
"In the previous section, we found an efficient method for finding G (2) .",
"We now utilize this method to efficiently find the K -best trees.",
"Lemma 3. For any graph G and K > 1 , there exists a subgraph G (cid:48) G and 1 l < K such that G ( l ) = G (cid:48) (1) and G ( K ) = G (cid:48) (2) .",
"Lemma 3 suggests that we can find the K -best trees by only examining the second best trees of subgraphs of G .",
"This idea is formalized as algorithm kbest in Fig. 7.",
"A walk-through of the exploration space using kbest for our example graph in Fig. 2 is shown in Fig. 6.",
"start of the algorithm, then every subsequent iteration we make two calls to next .",
"As we have K 1 iterations , the runtime of kbest is O ( KN 2 ) .",
"The first call to next in each iteration finds the K th best tree as well as an edge to remove.",
"Camerini et al. (1980) make one call to of opt and two calls to next which only finds the weight-edge pair of our algorithm.",
"Therefore, kbest has a constant time speed-up on the original algorithm.",
"13 A short experiment.",
"We empirically measure the constant time speed-up between kbest and the original algorithm of Camerini et al. (1980).",
"We take the English UD test set (as used for Fig. 1) and find the 10 , 20 , and 50 best spanning trees using both algorithms.",
"14 We give the results of the experiment in Tab.",
"1. 15 We note that on average kbest leads to a 1 .",
"39 times speed-up.",
"This is 13 In practice, we maintain a set of edges to include and exclude to save space.",
"14 Implementations for both versions can be found in our code release (see footnote 1) 15 The experiment was conducted using an Intel(R) Core(TM) i7-7500U processor with 16GB RAM.",
"lower than we anticipated as we have to make half as many calls to next than the original algorithm.",
"However, in the original next of Camerini et al. (1980), we do not require to stitch together the tree, which may explain the slightly smaller speed-up.",
"In this section, we present a novel extension to the algorithm presented thus far, that allows us to efficiently find the K -best dependency trees.",
"Recall that we consider dependency trees to be spanning trees with a root constraint such that only one edge may emanate from .",
"Navely, we can use kbest where we initialize the queue with ( G + e ) (1) for each e = ( (cid:65) j ) G .",
"However, this adds a O ( N 3 ) component to our runtime as we have to call opt N times.",
"Instead, our algorithm maintains the O ( KN 2 ) runtime as the regular K -best algorithm.",
"We begin by noting that we can find second best dependency tree, by finding either the best dependency tree with a different root edge or the second best tree with the same root edge.",
"Lemma 4. For any graph G and edge e = ( (cid:65) j ) G [1] , G [2] = ( G e ) [1] or G [2] = ( G + e ) [2] .",
"Lemma 5. For any graph G and K > 1 , if e = ( (cid:65) j ) G [ K ] , then either e is not in any of the K 1 -best trees or there exists a subgraph G (cid:48) G and 1 l < K such that G [ l ] = G (cid:48) [1] , e G (cid:48) [1] and G [ K ] = G (cid:48) [2] .",
"Lemma 5 suggests that we can find the K -best dependency trees, by examining the second best dependency trees of subgraphs of G or finding the best dependency tree with a unique root edge.",
"This 1: def kbest dep ( G, K ) : 2: G [1] opt ( G ) 3: yield G [1] 4: e outgoing edge from in G [1] 5: (cid:104) , (cid:104) w, e (cid:105)(cid:105) next ( G + e ) 6: d opt ( G e ) 7: Q priority queue([ (cid:104) w ( d ) , e , G (cid:105) ]) 8: Q. push( (cid:104) w, e, G + e (cid:105) ) 9: for k = 2 , . . . , K : 10: if Q. empty() : return 11: (cid:104) w, e, G (cid:48) (cid:105) Q. pop() 12: if e does not emanate from : 13: G [ k ] , (cid:104) w (cid:48) , e (cid:48) (cid:105) next ( G (cid:48) e ) 14: Q. push( (cid:104) w (cid:48) , e (cid:48) , G (cid:48) e (cid:105) ) 15: (cid:104) , (cid:104) w (cid:48)(cid:48) , e (cid:48)(cid:48) (cid:105)(cid:105) next ( G (cid:48) + e ) 16: Q. push( (cid:104) w (cid:48)(cid:48) , e (cid:48)(cid:48) , G (cid:48) + e (cid:105) ) 17: else 18: G [ k ] opt ( G (cid:48) ) 19: e outgoing edge from in G [ k ] 20: d opt ( G (cid:48) e ) 21: Q. push( (cid:104) w ( d ) , e , G (cid:48) e (cid:105) ) 22: (cid:104) , (cid:104) w (cid:48) , e (cid:48) (cid:105)(cid:105) next ( G (cid:48) + e ) 23: Q. push( (cid:104) w (cid:48) , e (cid:48) , G + e (cid:105) ) 24: yield G ( k ) Figure 9: K -best dependency tree enumeration algorithm.",
"idea is formalized as algorithm kbest dep in Fig. 9. A walk-through of the exploration space using kbest dep for our example graph in Fig. 2 is shown in Fig. 8.",
"Runtime analysis.",
"At the start of the algorithm, we call opt twice and next once.",
"Then, at each iteration we either make two calls two next , or two calls to opt and one call to next .",
"As both algorithms have a runtime of O ( N 2 ) , each iteration has a runtime of O ( N 2 ) .",
"Therefore, running K iterations gives a runtime of O ( KN 2 ) .",
"In this paper, we provided a simplification to Camerini et al. (1980)'s O ( KN 2 ) K -best spanning trees algorithm.",
"Furthermore, we provided a novel extension to the algorithm that decodes the K -best dependency trees in O ( KN 2 ) .",
"We motivated the need for this new algorithm as using regular K -best decoding yields up to 36% trees which violation the root constraint.",
"This is a substantial (up to 44 times) increase in the violation rate from decoding the one-best tree, and thus such an algorithm is even more important than in the one-best case.",
"We hope that this paper encourages future research in K -best dependency parsing.",
"We would like to thank the reviewers for their valuable feedback and suggestions to improve this work.",
"The first author is supported by the University of Cambridge School of Technology Vice-Chancellor's Scholarship as well as by the University of Cambridge Department of Computer Science and Technology's EPSRC.",
"We do not foresee how the more efficient algorithms presented this work exacerbate any existing ethical concerns with NLP systems."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"method",
"result",
"objective",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"result",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"result",
"result",
"abstain",
"result",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"objective",
"abstain",
"method",
"other",
"other",
"method"
] |
[
"For over a decade, machine learning has been used to extract opinion-holder-target structures from text to answer the question Who expressed what kind of sentiment towards what?",
".",
"Recent neural approaches do not outperform the state-of-the-art feature-based models for Opinion Role Labeling (ORL).",
"We suspect this is due to the scarcity of labeled training data and address this issue using different multi-task learning (MTL) techniques with a related task which has substantially more data, i.e. Semantic Role Labeling (SRL).",
"We show that two MTL models improve significantly over the single-task model for labeling of both holders and targets, on the development and the test sets.",
"We found that the vanilla MTL model which makes predictions using only shared ORL and SRL features, performs the best.",
"With deeper analysis we determine what works and what might be done to make further improvements for ORL.",
"Fine-Grained Opinion Analysis (FGOA) aims to:",
"(i) detect opinion expressions (O) that convey attitudes such as sentiments, agreements, beliefs or intentions (like feared in example (1)),",
"(ii) measure their intensity (e.g. strong),",
"(iii) identify their holders (H), i.e. entities that express an attitude (e.g. it ),",
"(iv) identify their targets (T), i.e. entities or propositions at which the attitude is directed (e.g. violence ) and",
"(v) classify their target-dependent attitude (e.g. negative sentiment) 1 .",
"1 Examples are drawn from MPQA (Wiebe et al., 2005).",
"holders and targets), the task is usually approached with sequence labeling techniques and the BIO encoding scheme (Choi et al., 2006; Yang and Cardie, 2013; Katiyar and Cardie, 2016).",
"Initially pipeline models were proposed which first predict opinion expressions and then, given an opinion, label its opinion roles , i.e. holders and targets (Kim and Hovy, 2006; Johansson and Mos-chitti, 2013).",
"Pipeline models have been substituted with so-called joint models that simultaneously identify all opinion entities, and predict which opinion role is related to which opinion (Choi et al., 2006; Yang and Cardie, 2013; Katiyar and Cardie, 2016).",
"Recently an LSTM-based joint model was proposed (Katiyar and Cardie, 2016) that unlike the prior work (Choi et al., 2006; Yang and Cardie, 2013) does not depend on external resources (such as syntactic parsers or named entity recognizers).",
"The neural variant does not outperform the feature-based CRF model (Yang and Cardie, 2013) in Opinion Role Labeling (ORL).",
"Both the neural and the CRF joint models achieve about 55% F1 score for predicting which targets relate to which opinions in MPQA.",
"Thus, these models are not yet ready to answer the question this line of research is usually motivated with: Who expressed what kind of sentiment towards what?",
".",
"Our goal is to investigate the limitations of neural models in solving different subtasks of FGOA on MPQA and to gain a better understanding of what is solved and what is next.",
"We suspect that one of the fundamental obstacles for neural models trained on MPQA is its small size.",
"One way to address scarcity of labeled data is to use multi-task learning (MTL) with appropriate auxiliary tasks.",
"A promising auxiliary task candidate for ORL is Semantic Role Labeling (SRL), the task of predicting predicate-argument structure of a sentence, which answers the question Who did what to whom, where and when?",
".",
"Table 1 illustrates the output of the SRL demo 2 for example (1), following the PropBank SRL scheme (Palmer et al., 2005) 3 .",
"SRL4ORL.",
"The semantic roles of the predicate fear (marked blue bold) correspond to the opinion roles H and T, according to MPQA.",
"For this reason, the output of SRL systems has been commonly used for feature-based FGOA models (Kim and Hovy, 2006; Johansson and Moschitti, 2013; Choi et al., 2006; Yang and Cardie, 2013).",
"Additionally, a considerable amount of training data is available for training SRL models (Table 2 in Sec. 3), which made neural SRL models successful (Zhou and Xu, 2015; Yang and Mitchell, 2017).",
"Obstacles.",
"Although SRL is similar in nature to ORL, it cannot solve ORL for all cases (Rup-penhofer et al., 2008).",
"In example (2) holder and target of the predicate please correspond to A1, A0 semantic roles respectively, wheres for the predicate fear in (1) holder and target correspond to A0, A1 respectively.",
"We took into account this observation when deciding on an appropriate MTL model by splitting its parameters into shared and task-specific ones (i.e. hard-parameter sharing).",
"(2) [I] A 1 H am very [ pleased ] O pos that [the Council has now approved the Kyoto Protocol thus enabling the EU to proceed with its ratification] A 0 T .",
"A further obstacle for properly exploiting SRL training data with MTL could be specificities, inconsistency and incompleteness of the MPQA annotations.",
"In example (3), Rice expressed his negative sentiment towards the three countries in question by setting the criteria which states something negative about those countries: they are repressive and grave human rights violators [...] .",
"In this case, the model should not pick any local semantic role for the target.",
"In examples (45), the same opinion expression concerned realizes different scopes for the target.",
"A model which exploits SRL knowledge could be biased to always label targets as complete SRL role constituents, as in example (5).",
"(4) Rice told us [the administration] H was [ concerned ] O neg that [Iraq] T would take advantage of the 9/11 attacks.",
"(5) [The Chinese government] H is deeply [ concerned ] O neg about [the sudden deterioration in the Middle East situation] T , Tang said.",
"Regarding incompleteness, prior work (Kati-yar and Cardie, 2016) has shown that their model makes reasonable predictions in sentences which do not have annotations at all, e.g. [mothers] H [ care ] O for [their young] T , in: From the fact that mothers care for their young, we can not deduce that they ought to do so, Hume argued.",
"The examples above show that incorporating SRL knowledge via multi-task learning is a reasonable way to improve ORL, but at the same time they alert us that given the specificities of MPQA and ORL annotations in general, it is not obvious whether MTL can overcome divergences in the annotation schemes of opinion and semantic role labeling.",
"We investigate this research question by adopting one of the recent successful architectures for SRL (Zhou and Xu, 2015) and experiment with different multi-task learning frameworks.",
"Our contributions are:",
"(i) we adapt a recently proposed neural SRL model for ORL,",
"(ii) we enhance the model using different MTL techniques with SRL to tackle the problem of scarcity of labeled data for ORL,",
"(iii) we show that most of the MTL models improve the single-task model for labeling of both holders and targets on development and test sets, and two of them make yield significant improvements,",
"(iv) by deeper analysis we provide a better understanding of what is solved and where to head next for neural ORL.",
"Neural multi-task learning (MTL) receives a lot of attention and new MTL architectures emerge regularly.",
"Yet there is no clear consensus which MTL architecture to use in which conditions.",
"We experiment with well-received architectures that could adapt to different cases of ORL from Section 1.",
"As a general neural architecture for singleand multi-task learning we use the recently proposed SRL model (Zhou and Xu, 2015) (Z&X-STL) which successfully labels semantic roles without any syntactic guidance.",
"This model consists of a stack of bi-directional LSTMs and a CRF which makes the final prediction.",
"The inputs to the first LSTM are not only token embeddings but three additional features: embedding of the predicate, embedding of the context of the predicate and an indicator feature (1 if the current token is in the predicate context, 0 otherwise).",
"Thus, every sentence is processed as many times as there are predicates in it.",
"Adapting this model for labeling of opinion roles is straightforward, the only difference being that opinion expressions can be multiwords and only two opinion roles are assigned.",
"MTL techniques aim to learn several tasks jointly by leveraging knowledge from all tasks.",
"In the context of neural networks, MTL is commonly used in such a way that it is predefined which layers have tied parameters and which are task-specific (i.e. hard-parameter sharing).",
"There are various ways of defining which parameters should be shared and how to train them.",
"Fully-shared (FS) MTL model.",
"A fully-shared model (Fig. 1) shares all parameters of the general model except the output layer.",
"Each task has a task-specific output layer which makes the prediction based on the representation produced by the final LSTM.",
"When training on a mini-batch of a certain task, parameters of the output layer of the Figure 3: (Adversarial) state-private ((A)SP) MTL.",
"other tasks are not updated.",
"This model should be effective for constructions with a clear mapping between opinion and semantic roles such as { H 7 A0, T 7 A1 } as in example (1) (Sec. 1).",
"Hierarchical MTL (H-MTL) model.",
"For NLP applications, often some given (high-level) task is supposed to benefit from another (low-level) task more than the other way around, e.g. parsing from POS tagging.",
"This intuition lead to designing hierarchical MTL models (Sgaard and Goldberg, 2016; Hashimoto et al., 2017) in which predictions for low-level tasks are not made on the basis of the representation produced at the final LSTM, but on the representation produced by a lower-layer LSTM (Fig. 2).",
"Task-specific layers atop shared layers could potentially give the model more power to distinguish or ignore certain semantic roles.",
"If so, this MTL model is more suitable for examples like (2) and (3) (Sec. 1).",
"Shared-private (SP) MTL model.",
"In the state-private model, in addition to the stack of shared LSTMs, each task has a stack of task-specific LSTMs (Liu et al., 2017) (Fig. 3).",
"Representations at the outermost shared LSTM and the task-specific LSTM are concatenated and passed to the task-specific output layer.",
"The ORL representation produced independently from SRL gives the model the ability to utilize the shared and entirely task-specific information.",
"For labeling of targets, it is expected that for examples (1) & (5) the model relies mostly on the shared representation, for examples (2) & (4) on both shared and ORL-specific representations, and for example (3) solely on the ORL-specific representation.",
"Adversarial shared-private (ASP) model.",
"The limitation of the SP model is that it does not prevent the shared layers from capturing task-specific features.",
"To ensure this, ASP extends the 585 task train size dev size test size |Y| CoNLL'05 SRL 90750 3248 6071 106 MPQA (4-CV) ORL 3141.25 1055 1036.75 7 MPQA (10-CV) ORL 3516.3 1326 349.3 7 Table 2: Datasets w/ nb.",
"SP model with a task discriminator (Liu et al., 2017).",
"The task discriminator (Fig. 3, marked red) predicts to which task the current batch of data belongs, based on the representation produced by the shared LSTMs.",
"If the shared LSTMs are task-invariant, the discriminator should perform badly.",
"Thus, we update the shared parameters to maximize the discriminator's cross-entropy loss.",
"At the same time we want the discriminator to challenge the shared LSTMs, so we update the discriminator's parameters to minimize its cross-entropy loss.",
"This minmax optimization is known as adversarial training and recently it gained a lot of attention for NLP applications (Liu et al., 2017; Chen et al., 2017; Kim et al., 2017; Qin et al., 2017; Wu et al., 2017; Gui et al., 2017; Li et al., 2017; Zhang et al., 2017; Joty et al., 2017).",
"For SRL we use the newswire CoNLL-2005 shared task dataset (Carreras and M`arquez, 2005), annotated with PropBank predicate-argument structures.",
"Sections 2-21 of the WSJ corpus (Char-niak et al., 2000) are used for training and section 24 as dev set.",
"The test set consists of section 23 of WSJ and 3 sections of the Brown corpus.",
"For ORL we use the manually annotated MPQA 2.0.",
"corpus (Wiebe et al., 2005; Wilson, 2008).",
"It mostly contains news documents, but also travel guides, transcripts of spoken conversations, e-mails, fundraising letters, textbook chapters and translations of Arabic source texts.",
"For both tasks we adopt evaluation metrics from prior work.",
"For SRL, precision is defined as the proportion of semantic roles predicted by a system 4 Examples how to use our scripts can be found at https://github.com/amarasovic/ naacl-mpqa-srl4orl/blob/master/generate_mpqa_jsons.py .",
"which are correct, recall is the proportion of gold roles which are predicted by a system, F1 score is the harmonic mean of precision and recall.",
"In case of ORL, we report 10-fold CV 5 and repeated 4-fold CV with binary F1 score and proportional F1 score , for holders and targets separately.",
"Binary precision is defined as the proportion of predicted holders (targets) that overlap with the gold holder (target), binary recall is the proportion of gold holders (targets) for which the model predicts an overlapping holder (target).",
"Proportional recall measures the proportion of the overlap between a gold holder (target) and an overlapping predicted holder (target), proportional precision measures the proportion of the overlap between a predicted holder (target) and an overlapping gold holder (target).",
"F1 scores are the harmonic means of the corresponding precision and recall.",
"We evaluate our models using two evaluation settings.",
"First, we follow Katiyar and Cardie (2016) which set aside 132 documents for development and used the remaining 350 documents for 10-fold CV.",
"However, in the 10-fold CV setting, the size of the tests sets is 3 times smaller than the dev set size (Table 2, row 3), and, consequently, results in high-variance estimates on the test sets.",
"Therefore we additionally evaluate our models with 4-fold CV.",
"We set aside 100 documents for development and use 25% of the remaining documents for testing.",
"The resulting test sets are comparable in size to the dev set (Table 2, row 2).",
"We run 4-fold CV twice with two different random seeds.",
"We do not tune hyperparameters (HPs), but follow suggestions proposed in the thorough HP study for sequence labeling tasks (Reimers and Gurevych, 2017).",
"HPs can be found in the Supplementary Material.",
"We evaluate all models after every (cid:6) train size batch size (cid:7) iteration on the ORL dev set and save them if they achieve a higher arithmetic mean of proportional F1 scores of holders and targets on the ORL dev set.",
"The saved models are used for testing.",
"We report the mean of F1 scores over 10 folds and the standard deviation (appears as a subscript) of all models in Table",
"3. We report the mean of F1 scores over 4 folds and 2 different seed (8 evalua-5 We used the same splits as the prior work (Katiyar and Cardie, 2016).",
"We thank the authors for providing the splits.",
"We mark significant difference between MTL models and the single-task (Z&X-STL) model, observed using a Kolmogorov-Smirnov signifi-cance test ( p < 0 . 05 ) (Massey Jr, 1951), with in superscript and between the FS-MTL model and other MTL models with 3 .",
"STL vs. MTL.",
"In the 10-fold CV evaluation setting (Table 3), the FS-MTL and the H-MTL models improve over the Z&X-STL model in all evaluation measures, for both holders and targets.",
"When evaluated in the repeated 4-fold CV setting (Table 4), all MTL models improve over the Z&X-STL model in all evaluation measures, for both holders and targets.",
"The FS-MTL and the H-MTL models improve significantly in all evaluation measures, for both holders and targets, on both dev and test sets, when evaluated with repeated 4-fold CV.",
"With 10-fold CV the improvements are also significant, except for targets on the test set.",
"This is probably due to the small size of the test sets (Table 2, row 3), which results in a high-variance estimate.",
"Indeed, standard deviations on the 10-fold CV test sets are always much higher compared to the dev set or to the test sets of 4-fold CV.",
"It is not surprising that larger improvements are visible in the labeling of holders.",
"They are usually short, less ambiguous and often presented with the A0 semantic role, whereas annotating targets is a challenging task even for humans.",
"6 Larger improvements are visible for proportional F1 score than for binary F1 score.",
"That is, more data and SRL knowledge helps the model to better annotate the scope of opinion roles.",
"Comparing MTL models.",
"In Section 2 we introduced MTL models with task-specific LSTM layers hypothesizing that these layers should give MTL models more power to adapt to a variety of potentially problematic cases that we illustrated in the Introduction.",
"However, our results show that the FS-MTL model performs significantly better or comparable to MTL models that include task-specific layers.",
"Reimers and Gurevych (2017) show that MTL is especially sensitive to the selection of HPs.",
"Thus, a firm and solid comparison of the different MTL models requires thorough HP optimization, to properly control the number of parameters and regularization of the models.",
"We leave HP optimization for future work.",
"Our aim in this section is to analyze what the proposed models are good at, in which ways MTL improves over the single-task ORL model and what could be done to achieve further progress.",
"models on the ORL dev set using 4-fold CV repeated twice with different seeds (8 evaluation tri-als).",
"We say that a model predicts a role of a given opinion expression correctly if the model predicts a role that overlaps with the correct role in at least 6 out of 8 evaluation trials.",
"If a model predicts a role that overlaps with the correct role in at most 2 out of 8 trials, we say that the model predicts the role incorrectly .",
"The requirement on 6-8 (in)correct predictions reduces the risk of analyzing inconsistent predictions and enables us to draw firmer conclusions.",
"We analyze the following scenarios:",
"(i) both the FS-MTL model and the Z&X-STL model make correct predictions (Tables 56)",
"(ii) the FS-MTL model makes a correct prediction, while the Z&X-STL makes an incorrect prediction (Tables 1112)",
"(iii) both models make wrong predictions (Tables 910) In the following, we categorize predictions in case",
"as hard cases .",
"In Tables 56 and 912, the opinion expression is bolded, the correct role is italicized, predictions of the FS-MTL model are colored blue (subscript FS ), predictions of the Z&X-STL model are colored yellow (subscript ZX ) and green marks predictions where both models agree.",
"For simplicity, we show only holders or targets, although the models predict both roles jointly.",
"What works well?",
"There are 668/1055 instances in the dev set for which both models predict holders correctly, and 663/1055 for targets.",
"Examples 15 in Table 5 suggest that holders that can be properly labeled by both models ( easy 588 1 It would be entirely improper if , in its defense of Israel FS , the United States continues to exert pressure on [...] .",
"cases) are subjects of their governing heads or A0 roles.",
"The statistics in Table 7 (col. 1, rows 2 3) supports this observation.",
"7 In contrast, holders that both models predict incorrectly ( hard cases) are less frequently subjects or A0 roles (col. 2, rows 23).",
"Also, easy holders are close to the corresponding opinion expression: the average distance is 1 .",
"54 tokens (Table 7, row 4), contrary to the hard holders with the average distance of 7 .",
"56 .",
"Examples 15 in Table 6 suggest that targets that can be properly labeled by both models are objects of their governing heads or A1 roles.",
"Table 8, row 3, shows that the majority of the easy targets are indeed A1 roles, in contrast to the hard targets.",
"Similar to holders, the easy targets are in average 7 tokens closer to the opinion expression.",
"What to do for further improvement?",
"There are 165/1055 instances in the dev set for which 7 The statistics is calculated using the output of mate-tools (Bjorkelund et al., 2010).",
"both models predict holders incorrectly, and 176 for targets.",
"As we have seen so far, many holders that are subjects or A0 roles, and targets that are A1 roles, are properly labeled by both models.",
"However, a considerable amount of such holders and targets are not correctly predicted (Table 78, col. 2, rows 23).",
"Thus our models do not work flawlessly for all such cases.",
"A distinguishing property of the hard cases is the distance of the role from the opinion.",
"Thus, future work should advance the model' s ability to capture long-range dependencies.",
"Examples in Table 9 demonstrate that holders, harder to label with our models, occur with the corresponding opinions in more complicated syntactic constructions.",
"In the first example, the FS-MTL model does not recognize the possessive and is possibly biased towards picking the country ( Is-real ), which occurs immediately after the opinion.",
"In the second example, the opinion expression is 589 1 Yoshihisa Murasawa , a management consultant for Booz-Allen & Hamilton Japan Inc.",
"Z&X-STL model predicts incorrectly in 6/8 trials.",
"a nominal predicate and the holder is its object.",
"The sentence is in passive voice but the models probably interpret it in the active voice and thus make the wrong prediction.",
"In the third example, the opinion expression is the head of the relative clause that modifies the holder.",
"These examples raise the following questions: would improved consistency with syntax lead to improvements for ORL and could we train a dependency parsing model with SRL and ORL to help the models handle syntactically harder cases?",
"Example 4 shows that holders specific to the MPQA annotation schema are hard to label as they require inference skills: from the department said , we can defeasibly infer that it is the department who expects [this cost] to stand at just $ 400/year [...] .",
"To handle such cases, it would be worth trying training our models jointly with models for recognizing textual entailment.",
"Examples 67 illustrate that some gap in performance stems from difficulties in processing MPQA.",
"Example 5 has no gold holder, but the models make plausible predictions.",
"For example 6, FS-MTL predicts the discontinuous holder they ... all , while MPQA allows only contiguous entities.",
"Therefor our evaluation scripts interpret they and all as two separate holders and deem all as incorrect, resulting with lower precision.",
"Finally, for example 7 our models make plausible predictions.",
"However, the gold holder is always the entity from the coreference cluster that is the closest to the opinion.",
"8 The evaluation scripts needs to be extended such that predicting any entity from the coreference cluster is considered to be correct.",
"To conclude, to better evaluate future developments, it would be worth curating MPQA instances with missing roles and extending evaluation to account for coreferent holders and discontinuous roles.",
"dif-8 We followed the prior work (Katiyar and Cardie, 2016).",
"ficulties in labeling targets originate from similar reasons as for holders.",
"Examples 13 demonstrate complex syntactic constructions, examples 46 MPQA-specific annotations that require inference and example 7 exemplifies a missing target.",
"How does MTL help?",
"There are 18/1055 instances in the dev set for which the FS model predicts the holder correctly and the Z&X-STL model does not, and 19/1055 for targets.",
"For holders, for 9 out 18 of such examples, the Z&X-STL model does not predict anything (as in examples 25 in Table 11).",
"From examples 1 5 we notice that SRL data helps to handle more complex syntactic constructions.",
"From examples 57 we observed that using MTL with SRL helps to handle cases when more than one person or organization is present in the close neighborhood of the opinion.",
"For targets, for 11 out of 18 cases the Z&X-STL model does not predict anything as in examples 12 in Table 11.",
"We conclude that the greatest improvements from the FS-MTL model comes from having far fewer missing roles.",
"FGOA.",
"Closest to our work are Yang and Cardie (2013) (Y&C) and Katiyar and Cardie (2016) (K&C).",
"They as well label both holders and targets in MPQA.",
"By contrast, our focus is on the task of ORL.",
"We thus refrain from predicting opinion expressions first, to ensure a reproducible evaluation setup on a fixed set of gold opinion expressions.",
"The MTL models we develop in this work will, however, be the basis for the full task in a later stage.",
"Because of these differences, direct comparison to Y&C and K&C is not possible.",
"However, if we compare our results we notice a big gap that demonstrates that opinion expression extraction is the import step in FGOA.",
"Similar to K&C, Liu et al. (2015) jointly labels opinion expressions and their targets in reviews.",
"Some work focuses entirely on labeling of opinion expressions (Yang and Cardie, 2014; Irsoy and Cardie, 2014).",
"Other work looks into specific subcategories of ORL: opinion role induction for verbal predicates (Wiegand and Ruppenhofer, 2015), categorization of opinion words into actor and speaker view (Wiegand et al., 2016b), opinion roles extraction on opinion compounds (Wiegand et al., 2016a).",
"Wiegand and Ruppenhofer (2015) report 72 .",
"54 binary F1 score for labeling of holders in MPQA (results for targets are not reported).",
"Neural SRL.",
"New neural SRL models have emerged (He et al., 2017; Yang and Mitchell, 2017; Marcheggiani and Titov, 2017) since we started this work.",
"In future work we can improve our models with such new proposals.",
"Auxiliary tasks for MTL.",
"Other work investigates under which conditions MTL is effective.",
"Martnez Alonso and Plank (2017) show that the best auxiliary tasks have low kurtosis of labels (usually a small label set) and high entropy (la-bels occur uniformly).",
"We show that the best MTL model for ORL is the model which uses shared layers only.",
"Thus it seems reasonable to consider only a small and uniform SRL label set { A0, A1 } .",
"Bingel and Sgaard (2017) show that MTL works when the main task has a flattening learning curve, but the auxiliary task curve is still steep.",
"We notice such behavior in our learning curves.",
"We address the problem of scarcity of annotated training data for labeling of opinion holders and targets (ORL) using multi-task learning (MTL) with Semantic Role Labeling (SRL).",
"We adapted a recently proposed neural SRL model for ORL and enhanced it with different MTL techniques.",
"Two MTL models achieve significant improvements with all evaluation measures, for both holders and targets, on both dev and test set, when evaluated with repeated 4-fold CV.",
"We recommend evaluation with comparable dev and test set sizes for future work, as this enables more reliable evaluation.",
"With deeper analysis we show that future developments should improve the ability of the models to capture long-range dependencies, investigate if consistency with syntax can improve ORL, and consider other auxiliary tasks such as dependency parsing or recognizing textual entailment.",
"We emphasize that future improvements can be measured more reliably if opinion expressions with missing roles are curated and if the evaluation considers all mentions in opinion role coreference chains as well as discontinuous roles.",
"This work has been supported by the German Research Foundation as part of the Research Training Group Adaptive Preparation of Information from Heterogeneous Sources (AIPHES) under grant No.",
"GRK 1994/1."
] | [
"abstain",
"abstain",
"abstain",
"method",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"result",
"result",
"abstain",
"objective",
"objective",
"method",
"result",
"objective",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"other",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"other",
"objective",
"objective",
"objective",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"objective",
"other",
"other",
"other",
"abstain",
"other",
"other",
"abstain",
"method",
"objective",
"abstain",
"method",
"objective",
"objective",
"other",
"other"
] |
[
"In lexicalist linguistic theories, argument structure is assumed to be predictable from the meaning of verbs.",
"As a result, the verb is the primary determinant of the meaning of a clause.",
"In contrast, construction grammarians propose that argument structure is encoded in constructions (or form-meaning pairs) that are distinct from verbs.",
"Decades of psycholinguistic research have produced substantial empirical evidence in favor of the construction view.",
"Here we adapt several psycholinguistic studies to probe for the existence of argument structure constructions (ASCs) in Transformer-based language models (LMs).",
"First, using a sentence sorting experiment, we find that sentences sharing the same construction are closer in embedding space than sentences sharing the same verb.",
"Furthermore, LMs increasingly prefer grouping by construction with more input data, mirroring the behaviour of non-native language learners.",
"Second, in a Jabberwocky priming-based experiment, we find that LMs associate ASCs with meaning, even in semantically nonsensical sentences.",
"Our work offers the first evidence for ASCs in LMs and highlights the potential to devise novel probing methods grounded in psycholinguistic research.",
"Pretrained Transformer-based language models (LMs) such as BERT (Devlin et al., 2019) and RoBERTa (Liu et al., 2019b) have recently achieved impressive results on many natural language tasks, spawning a new interdisciplinary field of aligning LMs with linguistic theory and probing the linguistic capabilities of LMs (Linzen and Baroni, 2021).",
"Most probing work so far has investigated the linguistic knowledge of LMs on phenomena such as agreement, binding, licensing, and movement (Warstadt et al., 2020a; Hu et al., 2020) Transitive Bob cut the bread S V OS acts on O Ditransitive Bob cut Joe the bread S V O1 O2 S transfers O2 to O1 Caused motion Bob cut the bread into the pan S V O Path S causes O to move via Path Resultative Bob cut the bread apart S V O State S causes O to become State Figure 1: Four argument structure constructions (ASCs) used by Bencini and Goldberg (2000), with example sentences (top right).",
"with a particular focus on determining whether a sentence is linguistically acceptable (Schtze, 1996).",
"Relatively little work has attempted to determine whether the linguistic knowledge induced by LMs is more similar to a formal grammar of the sort postulated by mainstream generative linguistics (Chomsky, 1965, 1981, 1995), or to a network of form-meaning pairs as advocated by construction grammar (Goldberg, 1995, 2006).",
"One area where construction grammar disagrees with many generative theories of language is in the analysis of the argument structure of verbs, that is, the specification of the number of arguments that a verb takes, their semantic relation to the verb, and their syntactic form (Levin and Rappaport Hovav, 2005).",
"Lexicalist theories were long dominant in generative grammar (Chomsky, 1981; Kaplan and Bresnan, 1982; Pollard and Sag, 1987).",
"In lexicalist theories, argument structure is assumed to be encoded in the lexical entry of the verb: for 7410 example, the verb visit is lexically specified as being transitive and as requiring a noun phrase object (Chomsky, 1986).",
"In contrast, construction grammar suggests that argument structure is encoded in form-meaning pairs known as argument structure constructions (ASCs, Figure 1), which are distinct from verbs.",
"The argument structure of a verb is determined by pairing it with an ASC (Goldberg, 1995).",
"To date, a substantial body of psycholinguistic work has provided evidence for the psychological reality of ASCs in sentence sorting (Bencini and Goldberg, 2000; Gries and Wulff, 2005), priming (Ziegler et al., 2019), and novel verb experiments (Kaschak and Glenberg, 2000; Johnson and Goldberg, 2013).",
"Here we connect basic research in ASCs with neural probing by adapting several psycholinguistic studies to Transformer-based LMs and show evidence for the neural reality of ASCs.",
"Our first case study is based on sentence sorting (Bencini and Goldberg, 2000); we discover that in English, German, Italian, and Spanish, LMs consider sentences that share the same construction to be more semantically similar than sentences sharing the main verb.",
"Furthermore, this preference for constructional meaning only manifests in larger LMs (trained with more data), whereas smaller LMs rely on the main verb, an easily accessible surface feature.",
"Human experiments with non-native speakers found a similarly increased preference for constructional meaning in more proficient speakers (Liang, 2002; Baicchi and Della Putta, 2019), suggesting commonalities in language acquisition between LMs and humans.",
"Our second case study is based on nonsense Jabberwocky sentences that nevertheless convey meaning when they are arranged in constructional templates (Johnson and Goldberg, 2013).",
"We adapt the original priming experiment to LMs and show that RoBERTa is able to derive meaning from ASCs, even without any lexical cues.",
"This finding offers counter-evidence to earlier claims that LMs are relatively insensitive to word order when constructing sentence meaning (Yu and Ettinger, 2020; Sinha et al., 2021).",
"Our source code and data are available at: https://github.com/SPOClab-ca/ neural-reality-constructions .",
"Construction grammar is a family of linguistic theories proposing that all linguistic knowledge consists of constructions : pairings between form and meaning where some aspects of form or meaning are not predictable from their parts (Fillmore et al., 1988; Kay and Fillmore, 1999; Goldberg, 1995, 2006).",
"Common examples include idiomatic expressions such as under the weather (meaning to feel un-well), but many linguistic patterns are constructions, including morphemes (e.g., -ify ), words (e.g., apple ), and abstract patterns like the ditransitive and passive.",
"In contrast to lexicalist theories of argument structure, construction grammar rejects the dichotomy between syntax and lexicon.",
"In contrast to transformational grammar, it rejects any distinction between surface and underlying structure.",
"We focus on a specific family of constructions for which there is an ample body of psycholinguistic evidence: argument structure constructions (ASCs).",
"ASCs are constructions that specify the argument structure of a verb (Goldberg, 1995).",
"In the lexicalist, verb-centered view, argument structure is a lexical property of the verb, and the main verb of a sentence determines the form and meaning of the sentence (Chomsky, 1981; Kaplan and Bresnan, 1982; Pollard and Sag, 1987; Levin and Rappaport Hovav, 1995).",
"For example, sneeze is intransitive (allowing no direct object) and hit is transitive (requiring one direct object).",
"However, lexicalist theories encounter difficulties with sentences like he sneezed the napkin off the table since intransitive verbs are not permitted to have object arguments.",
"Rather than assuming multiple implausible senses for the verb sneeze with different argument structures, Goldberg (1995) proposed that ASCs operate on an arbitrary verb, altering its argument structure while at the same time modifying its meaning.",
"For example, the caused-motion ASC adds a direct object and a path argument to the verb sneeze , with the semantics of causing the object to move along the path.",
"Other ASCs include the transitive, ditransitive, and resultative (Figure 1), which specify the argument structure of a verb and interact with its meaning in different ways.",
"Sentence sorting.",
"Several psycholinguistic studies have found evidence for argument structure con-7411 Transitive Ditransitive Caused-motion Resultative Throw Anita threw the hammer.",
"structions using experimental methods.",
"Among these, Bencini and Goldberg (2000) used a sentence sorting task to determine whether the verb or construction in a sentence was the main determinant of sentence meaning.",
"17 participants were given 16 index cards with sentences containing 4 verbs ( throw, get, slice , and take ) and 4 constructions ( transitive, ditransitive, caused-motion , and resultative ) and were instructed to sort them into 4 piles by overall sentence meaning (Table 1).",
"The experimenters measured the deviation to a purely verb-based or construction-based sort, and found that on average, the piles were closer to a construction sort.",
"Non-native sentence sorting.",
"The same set of experimental stimuli was used with L2 (non-native) English speakers.",
"Gries and Wulff (2005) ran the experiment with 22 German native speakers, who preferred the construction-based sort over the verb-based sort, showing that constructional knowledge is not limited to native speakers.",
"Liang (2002) ran the experiment on Chinese native speakers of 3 different English levels (46 beginner, 31 intermediate, and 33 advanced), and found that beginners preferred a verb-based sort, while advanced learners produced construction-based sorts similar to native speakers (Figure 2).",
"Likewise, Baicchi and Della Putta (2019) found the same result in Italian native speakers with B1 and B2 English proficiency levels.",
"Overall, these studies show evidence for ASCs in the mental representations of native and L2 English speakers alike, and furthermore, preference for constructional over verb sorting increases with increasing English proficiency.",
"Multilingual sentence sorting.",
"Similar sentence sorting experiments have been conducted in other languages, with varying results.",
"Kirsch (2019) ran a sentence sorting experiment in German with 40 participants and found that they mainly sorted by verb but rarely by construction.",
"Baicchi and Della Putta (2019) ran an experiment with non-native learners of Italian (15 participants of B1 level and 10 participants of B2 level): both groups preferred the constructional sort, and similar to Liang (2002), the B2 learners sorted more by construction than the B1 learners.",
"Vzquez (2004) ran an experiment in Spanish with 16 participants, and found approximately equal proportions of constructions and verb sort.",
"In Italian and Spanish, some different constructions were substituted as not all of the English constructions had an equivalent in these languages; see the appendix for the complete set of stimuli in each language.",
"Priming.",
"Another line of psycholinguistic evidence comes from priming studies.",
"Priming refers to the condition where exposure to a (prior) stimulus influences the response to a later stimulus (Pickering and Ferreira, 2008).",
"Bock and Loebell (1990) found that participants were more likely to produce sentences of a given syntactic structure when primed with a sentence of the same structure; Ziegler et al. (2019) argued that Bock and Loebell (1990) did not adequately control for lexical overlap, and instead, they showed that the construction must be shared for the priming effect to occur, not just shared abstract syntax.",
"Novel verbs.",
"Even with unfamiliar words, there is evidence that constructions are associated with meaning.",
"Kaschak and Glenberg (2000) constructed sentences with novel denominal verbs and found that participants were more likely to interpret a transfer event when the denominal verb was used in a ditransitive sentence ( Tom crutched Lyn an apple ) than a transitive one ( Tom crutched an apple ).",
"Johnson and Goldberg (2013) used a Jabberwocky priming task to show that abstract constructional templates are associated with meaning.",
"Participants were primed with a nonsense sentence of a given construction (e.g., He daxed her the norp for the ditransitive construction), followed by a lexical decision task of quickly deciding if a string of 7412 characters was a real English word or a non-word.",
"The word in the decision task was semantically congruent with the construction ( gave ) or incongruent ( made ); furthermore, they experimented with target words that were high-frequency ( gave ), low-frequency ( handed ), or semantically related but not associated with the construction ( transferred ).",
"They found priming effects (faster lexical decision times) in all three conditions, with the strongest effect for the high-frequency condition, followed by the low-frequency and the semantically nonas-sociate conditions.",
"We adapt several of these psycholinguistic studies to LMs: the sentence sorting experiments in Case study 1, and the Jabberwocky priming experiment in Case study 2.",
"We choose these studies because their designs allow for thousands of stimuli sentences to be generated automatically using templates, avoiding issues caused by small sample sizes from manually constructed sentences.",
"Many studies have probed for various aspects of syntax in LSTMs and Transformer-based LMs.",
"Linzen et al. (2016) tested LSTMs on their ability to capture subject-verb agreement, using templates to generate test data.",
"This idea was extended by BLiMP (Warstadt et al., 2020a), a suite encompassing 67 linguistic phenomena, including filler-gap effects, NPI licensing, and ellipsis; Hu et al. (2020) released a similar test suite.",
"Template generation is a convenient method to construct stimuli exhibiting specific linguistic properties, but alternative approaches include CoLA (Warstadt et al., 2019), which compiled an acceptability benchmark of sentences drawn from linguistic publications, and Gulordava et al. (2018), who perturbed natural sentences to study LMs' knowledge of agreement on nonsense sentences.",
"We refer to Linzen and Baroni (2021) for a comprehensive review of the linguistic probing literature.",
"So far, relatively few papers approached LM probing from a construction grammar perspective.",
"Madabushi et al. (2020) probed for BERT's knowledge of constructions via a sentence pair classi-fication task of predicting whether two sentences share the same construction.",
"Their probe was based on data from Dunn (2017), who used an unsupervised algorithm to extract plausible constructions from corpora based on association strength.",
"However, the linguistic validity of these automatically induced constructions is uncertain, and there is currently no human-labelled wide-coverage construction grammar dataset in any language suitable for probing.",
"Other computational work focused on a few specific constructions, such as identifying caused-motion constructions in corpora (Hwang and Palmer, 2015) and annotating constructions related to causal language (Dunietz et al., 2015).",
"Lebani and Lenci (2016) is the most similar to our work: they probed distributional vector space models for ASCs based on the Jabberwocky priming experiment by Johnson and Goldberg (2013).",
"Some recent probing studies adapted methods and data from psycholinguistic research, treating LMs as psycholinguistic participants.",
"Using a cloze completion task, Ettinger (2020) found that BERT was less sensitive than humans at commonsense inferences and detecting role reversals, and fails completely at understanding negation.",
"Michaelov and Bergen (2020) compared LM surprisals with the N400 (a measure of human language processing difficulty) across a wide range of conditions; Li et al. (2021) used psycholinguistic stimuli and found that LMs exhibit different layerwise surprisal patterns for morphosyntactic, semantic, and commonsense anomalies.",
"Wilcox et al. (2021) compared LM and human sensitivities to syntactic violations using a maze task to collect human reaction times.",
"Prasad et al. (2019); Misra et al. (2020) investigated whether LMs are sensitive to priming effects like humans.",
"The advantage of psycholinguistic data is that they are carefully constructed by expert linguists to test theories of language processing in humans; however, their small sample size makes it challenging to make statistically meaningful conclusions when the (oft-sparse) experimental stimuli are used to probe a language model.",
"This section describes our adaptation of the sentence sorting experiments to Transformer LMs.",
"Models.",
"To simulate varying non-native English proficiency levels, we use MiniBERTa models (Warstadt et al., 2020b), trained with 1M, 10M, 1 Bencini and Goldberg (2000) ran the sentence sorting experiment twice, so we take the average of the two runs.",
"100M, and 1B tokens.",
"We also use the base RoBERTa model (Liu et al., 2019b), trained with 30B tokens.",
"In other languages, there are no available pretrained checkpoints with varying amounts of pretraining data, so we use the mBERT model (Devlin et al., 2019) and a monolingual Transformer LM in each language.",
"2 We obtain sentence embeddings for our models by taking the average of their contextual token embeddings at the second-to-last layer (i.e., layer 11 for base RoBERTa).",
"We use the second-to-last because the last layer is more specialized for the LM pretraining objective and less suitable for sentence embeddings (Liu et al., 2019a).",
"Template generation.",
"We use templates to generate stimuli similar to the 4x4 design in the Bencini and Goldberg (2000) experiment.",
"To ensure an adequate sample size, we run multiple empirical trials.",
"In each trial, we sample 4 random distinct verbs from a pool of 10 verbs that are compatible with all 4 constructions ( cut, hit, get, kick, pull, punch, push, slice, tear, throw ).",
"We then randomly fill in the slots for proper names, objects, and complements for each sentence according to its verb, such that the sentence is semantically coherent, and there is no lexical overlap among the sentences of any construction.",
"Table 3 in the ap-2 We use monolingual German and Italian models from https://github.com/dbmdz/berts , and the monolingual Spanish model from Caete et al. (2020).",
"pendix shows a set of template-generated sentences.",
"In English, we generate 1000 sets of stimuli using this procedure; for other languages, we use the original stimuli from their respective publications.",
"Evaluation.",
"Similar to the human experiments, we group the sentence embeddings into 4 clusters (not necessarily of the same size) using agglomerative clustering by Euclidean distance (Pedregosa et al., 2011).",
"We then compute the deviation to a pure construction and pure verb sort using the Hungarian algorithm for optimal bipartite matching.",
"This measures the minimal number of cluster assignment changes necessary to reach a pure construction or verb sort, ranging from 0 to 12.",
"Thus, lower construction deviation indicates that constructional information is more salient in the LM's embeddings.",
"Figure 2 shows the LM sentence sorting results for English.",
"All differences are statistically significant ( p < . 001 ).",
"The smallest 1M MiniBERTa model is the only LM to prefer verb over construction sorting, and as the amount of pretraining data grows, the LMs increasingly prefer sorting by construction instead of by verb.",
"This closely mirrors the trend observed in the human experiments.",
"The results for multilingual sorting are shown in Figure 3.",
"Both mBERT and the monolingual LMs consistently prefer constructional sorting over verb 7414 0 2 4 6 8 10 12 mBERT mono mBERT mono mBERT mono CDev VDev German Italian Spanish Human data Language models 0 2 4 6 8 10 12 German Italian B1 Italian B2 Spanish V a l ue Figure 3: Multilingual sentence sorting results for German (Kirsch, 2019), Italian (Baicchi and Della Putta, 2019), and Spanish (Vzquez, 2004).",
"Our results show that RoBERTa can generalize meaning from abstract constructions without lexical overlap.",
"Only larger LMs and English speakers of more advanced proficiency are able to make this generalization, while smaller LMs and less proficient speakers derive meaning more from surface features like lexical content.",
"This finding agrees with Warstadt et al. (2020b), who found that larger LMs have an inductive bias towards linguistic generalizations, while smaller LMs have an inductive bias towards surface generalizations; this may explain the success of large LMs on downstream tasks.",
"A small quantity of data (10M tokens) is sufficient for LMs to prefer the constructional sort, indicating that ASCs are relatively easy to learn: roughly on par with other types of linguistic knowledge, and requiring less data than commonsense knowledge (Zhang et al., 2021; Liu et al., 2021).",
"We note some limitations in these results, and reasons to avoid drawing unreasonably strong conclusions from them.",
"Human sentence sorting experiments can be influenced by minor differences in the experimental setup: Bencini and Goldberg (2000) obtained significantly different results in two runs that only differed on the precise wording of instructions.",
"In the German experiment (Kirsch, 2019), the author hypothesized that the participants were influenced by a different experiment that they had completed before the sentence sorting one.",
"Given this experimental variation, we cannot attribute differences across languages to differences in their linguistic typology.",
"Although LMs do not suffer from the same experimental variation, we cannot conclude statistical significance from the multilingual experiments, where only one set of stimuli is available in each language.",
"We next adapt the Jabberwocky priming experiment from Johnson and Goldberg (2013) to LMs, and make several changes to the original setup to better assess the capabilities of LMs.",
"Priming is a standard experimental paradigm in psycholinguistic research, but it is not directly applicable to LMs: existing methods simulate priming either by applying additional fine-tuning (Prasad et al., 2019), or by concatenating sentences that typically do not co-occur in natural text (Misra et al., 2020).",
"Therefore, we instead propose a method to probe LMs for the same linguistic information using only distance measurements on their contextual embeddings.",
"Template generation.",
"We generate sentences for the four constructions randomly using the templates in Table 2.",
"Instead of filling nonce words like norp into the templates as in the original study, we take an approach similar to Gulordava et al. (2018) and generate 5000 sentences for each construction 7415 She traded her the epicenter gave made put took Figure 4: In our adapted Jabberwocky experiment, we measure the Euclidean distance from the Jabberwocky verb ( traded ) to the 4 prototype verbs, of which 1 is congruent ( (cid:51) ) with the construction of the sentence, and 3 are incongruent ( (cid:55) ).",
"by randomly filling real words of the appropriate part-of-speech into construction templates (Table 2).",
"This gives nonsense sentences like She traded her the epicenter ; we refer to these random words as Jabberwocky words .",
"By using real words, we avoid any potential instability from feeding tokens into the model that it has never seen during pretraining.",
"We obtain a set of singular nouns, past tense verbs, and adjectives from the Penn Treebank (Marcus et al., 1993), excluding words with fewer than 10 occurrences.",
"Verb embeddings.",
"Our probing strategy is based on the assumption that the contextual embedding for a verb captures its meaning in context.",
"Therefore, if LMs associate ASCs with meaning, we should expect the contextual embedding for the Jabberwocky verb to contain the meaning of the construction.",
"Specifically, we measure the Euclidean distance to a prototype verb for each construction (Figure 4).",
"These are verbs that Johnson and Goldberg (2013) selected whose mean-Congruent Incongruent High frequency Low frequency 11.0 11.5 12.0 12.5 13.0 Congruent Incongruent E u c li dean d i s t an c e Figure 5: Euclidean distance between Jabberwocky and prototype verbs for congruent and incongruent conditions.",
"ing closely resembles the construction's meaning: gave , made , put , and took for the ditransitive, resultative, caused-motion, and removal constructions, respectively.",
"3 We also run the same setup using lower frequency prototype verbs from the same study: handed , turned , placed , and removed .",
"4 As a control, we measure the Euclidean distance to the prototype verbs of the other three unrelated constructions.",
"The prototype verb embeddings are generated by taking the average across their contextual embeddings across a 4M-word subset of the British National Corpus (BNC; Leech (1992)).",
"We use the second-to-last layer of RoBERTa-base, and in cases where a verb is split into multiple subwords, we take the embedding of the first subword token as the verb embedding.",
"We find that the Euclidean distance between the prototype and Jabberwocky verb embeddings is significantly lower ( p < . 001 ) when the verb is congruent with the construction than when they are incongruent, and this is observed for both high and low-frequency prototype verbs (Figure 5).",
"Examining the individual constructions and verbs (Figure 6), we note that in the high-frequency scenario, the lowest distance prototype verb is always the congruent one, for all four constructions.",
"In the low-frequency scenario, the result is less consis-3 The reader may notice that the four constructions here are slightly different from Bencini and Goldberg (2000): the transitive construction is replaced with the removal construction in Johnson and Goldberg (2013).",
"4 Johnson and Goldberg (2013) also included a third experimental condition using four verbs that are semantically related but not associated with the construction, but one of the verbs is very low-frequency ( ousted ), so we exclude this condition in our experiment.",
"tent: the congruent verb is not always the lowest distance one, although it is always still at most the second-lowest distance out of the four.",
"The main result holds for both high and low-frequency scenarios, but the correct prototype verb is associated more consistently in the high-frequency case.",
"This agrees with Wei et al. (2021), who found that LMs have greater difficulty learning the linguistic properties of less frequent words.",
"We also note that the Euclidean distances are higher overall in the low-frequency scenario, which is consistent with previous work that found lower frequency words to occupy a peripheral region of the embedding space (Li et al., 2021).",
"In any experiment, one must be careful to ensure that the observed patterns are due to the phenomenon under investigation rather than confounding factors.",
"We discuss potential confounds arising from lexical overlap, anisotropy of contextual embeddings, and neighboring words.",
"Lexical overlap .",
"The randomized experiment design ensures that the Jabberwocky words cannot be lexically biased towards any construction, since each verb is equally likely to occur in every construction.",
"Technically, the lexical content in the four constructions are not identical: i.e., words such as from (occurring only in the removal construction) or on (in the caused-motion construction) may provide hints to the sentence meaning.",
"However, the ditransitive and resultative constructions do not contain any such informative words, yet RoBERTa still associates the correct prototype verb for these constructions, so we consider it unlikely to be relying solely on lexical overlap.",
"There is substantial evidence that RoBERTa is able to associate abstract constructional templates with their meaning without lexical cues.",
"This result is perhaps surprising, given that previous work found that LMs are relatively insensitive to word order in compositional phrases (Yu and Ettinger, 2020) and downstream inference tasks (Sinha et al., 2021; Pham et al., 2021), where their performance can be largely attributed to lexical overlap.",
"Anisotropy .",
"Recent probing work have found that contextual embeddings suffer from anisotropy, where embeddings lie in a narrow cone and have much higher cosine similarity than expected if they were directionally uniform (Ethayarajh, 2019).",
"Furthermore, a small number of dimensions dominate geometric measures such as Euclidean and cosine distance, resulting in a degradation of representation quality (Kovaleva et al., 2021; Timkey and van Schijndel, 2021).",
"Since our experiments rely heavily on Euclidean distance, anisotropy is a significant concern.",
"Following Timkey and van Schijndel (2021), we perform standardization by subtracting the mean vector and dividing each dimension by its standard deviation, where the mean and standard deviation for each dimension is computed from a sample of the BNC.",
"We observe little difference after standardization: in both the high and low frequency scenarios, the Euclidean distances are lower for the congruent than the incongruent conditions, by a similar margin compared to the original experiment without standardization.",
"We also run standardization on the first case study, and find that the 7417 results remain essentially unchanged: smaller LMs still prefer verb sorting while larger LMs prefer construction sorting.",
"Thus, neither of our experiments appear to be affected by anisotropy.",
"Neighboring words.",
"A final confounding fac-tor is our assumption that RoBERTa's contextual embeddings represent word meaning, when in reality, they contain a mixture of syntactic and semantic information.",
"Contextual embeddings are known to contain syntax trees (Hewitt and Manning, 2019) and linguistic information about neighboring words in a sentence (Klafka and Ettinger, 2020); although previous work did not consider ASCs, it is plausible that our verb embeddings leak information about the sentence's construction in a similar manner.",
"If this were the case, the prototype verb embedding for gave would contain not only the semantics of transfer that we intended, but also information about its usual syntactic form 5 of S gave NP1 NP2 , and both would be captured by our Euclidean distance measurement.",
"Controlling for this syntactic confound is difficult one could alternatively probe for transfer semantics without syntactic confounds using a natural language inference setup (e.g., whether the sentence entails the statement NP1 received NP2 ), but we leave further exploration of this idea to future work.",
"We find evidence for argument structure constructions in Transformer language models from two separate angles: sentence sorting and Jabberwocky construction experiments.",
"Our work extends the existing body of literature on LM probing by taking a constructionist instead of generative approach to linguistic probing.",
"Our sentence sorting experiments identified a striking resemblance between hu-mans' and LMs' internal language representations as LMs are exposed to increasing quantities of data, despite the differences between neural language models and the human brain.",
"Our two studies suggest that LMs are able to derive meaning from abstract constructional templates with minimal lexical overlap.",
"Both sets of experiments were inspired by psycholinguistic studies, which we adapted to fit the capabilities of LMs this illustrates the potential for future work on grounding LM probing methodologies in psycholinguistic research.",
"YX is funded through an NSERC Discovery Grant, a SSHRC Insight Grant, and an Ontario ERA award."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"result",
"objective",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"other",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"result",
"objective",
"method",
"abstain",
"abstain",
"other"
] |
[
"Morphological analysis (MA) and lexical normalization (LN) are both important tasks for Japanese user-generated text (UGT).",
"To evaluate and compare different MA/LN systems, we have constructed a publicly available Japanese UGT corpus.",
"Our corpus comprises 929 sentences annotated with morphological and normalization information, along with category information we classified for frequent UGT-specific phenomena.",
"Experiments on the corpus demonstrated the low performance of existing MA/LN methods for non-general words and non-standard forms, indicating that the corpus would be a challenging benchmark for further research on UGT.",
"Japanese morphological analysis (MA) is a fundamental and important task that involves word segmentation, part-of-speech (POS) tagging and lemmatization because the Japanese language has no explicit word delimiters.",
"Although MA methods for well-formed text (Kudo et al., 2004; Neu-big et al., 2011) have been actively developed taking advantage of the existing annotated corpora of newswire domains, they perform poorly on user-generated text (UGT), such as social media posts and blogs.",
"Additionally, because of the frequent occurrence of informal words, lexical normalization (LN), which identifies standard word forms, is another important task in UGT.",
"Several studies have been devoted to both tasks in Japanese UGT (Sasano et al., 2013; Kaji and Kitsuregawa, 2014; Saito et al., 2014, 2017) to achieve the robust performance for noisy text.",
"Previous researchers have evaluated their own systems using in-house data created by individual researchers, and thus it is difficult to compare the performance of different systems and discuss what issues remain in these two tasks.",
"Therefore, publicly available data is necessary for a fair evaluation of MA and LN performance on Japanese UGT.",
"In this paper, we present the blog and Q&A site normalization corpus (BQNC), 1 which is a public Japanese UGT corpus annotated with morphological and normalization information.",
"We have constructed the corpus under the following policies: (1) available and restorable; (2) compatible with the segmentation standard and POS tags used in the existing representative corpora; and (3) enabling a detailed evaluation of UGT-specific problems.",
"For the first requirement, we extracted and used the raw sentences in the blog and Q&A site registers compiled by (the non-core data of) the Balanced Corpus of Contemporary Written Japanese (BCCWJ) (Maekawa et al., 2014), in which the original sentences are preserved.",
"2 For the second requirement, we followed the short unit word (SUW) criterion of the National Institute for Japanese Language and Linguistics (NINJAL), which is used in various NINJAL's corpora, including manually annotated sentences in the BCCWJ.",
"For the third requirement, we organized linguistic phenomena frequently observed in the two registers as word categories, and annotated each word with a category.",
"We expect that this will contribute to future research to develop systems that manage UGT-specific problems.",
"The BQNC comprises sentence IDs and annotation information, including word boundaries, POS, lemmas, standard forms of non-standard word tokens, and word categories.",
"We will release the annotation information that enables BCCWJ applicants to replicate the full BQNC data from the original BCCWJ data.",
"3 Using the BQNC, we evaluated two existing 1 Our corpus will be available at https://github.",
"2 Twitter could be a candidate for a data source.",
"However, redistributing original tweets collected via the Twitter Streaming APIs is not permitted by Twitter, Inc., and an alternative approach to distributing tweet URLs has the disadvantage that the original tweets can be removed in the future.",
"3 https://pj.ninjal.ac.jp/corpus_ center/bccwj/en/subscription.html Category Example Reading Translation Standard forms Type of vocabulary (General words) Neologisms/Slang copipe copy and paste Proper names dorakue Dragon Quest Onomatopoeia kirakira glitter Interjections o oops Dialect words homma truly Foreign words easy Emoticons/AA Type of variant form (Standard forms) Character type variants kawa cute , Alternative representations oki big Sound change variants oishi tasty , Typographical errors tsutai tough , Table 1: Word categories in the BQNC methods: a popular Japanese MA toolkit called MeCab (Kudo et al., 2004) and a joint MA and LN method (Sasano et al., 2013).",
"Our experiments and error analysis showed that these systems did not achieve satisfactory performance for non-general words.",
"This indicates that our corpus would be a challenging benchmark for further research on UGT.",
"Based on our observations and the existing studies (Ikeda et al., 2010; Kaji et al., 2015), we organized word tokens that may often cause segmentation errors into two major types with several categories, as shown in Table 1.",
"We classified each word token from two perspectives: the type of vocabulary to which it belongs and the type of variant form to which it corresponds.",
"For example, nihon Japan' written in katakana corresponds to a proper name and a character type variant of its standard form written in kanji .",
"Specifically, we classified vocabulary types into neologisms/slang , proper names , onomatopoeia , 4 interjections , (Japanese) dialect words , foreign words , and emoticons/ASCII art (AA) , in addition to general words.",
"5 A common characteristic of these vocabularies, except for general words, is that a new word can be indefinitely invented or imported.",
"We annotated word tokens with vocabulary type information, except for general words.",
"From another perspective, any word can have multiple variant forms.",
"Because the Japanese writ-4 Onomatopoeia typically refers to both the phonomime and phenomime in Japanese linguistics literature, similar to ideophones.",
"We follow this convention in this paper.",
"ing system comprises multiple script types including kanji and two types of kana , that is, hiragana and katakana , 6 words have orthographic variants written in different scripts.",
"Among them, nonstandard character type variants that rarely occur in well-formed text but occur in UGT can be problematic, for example, a non-standard form for a standard form kawa cute'.",
"Additionally, ill-spelled words are frequently produced in UGT.",
"We further divided them into two categories.",
"The first is sound change variants that have a phonetic difference from the original form and are typically derived by deletions, insertions, or substitutions of vowels, long sound symbols ( choon ), long consonants ( sokuon ), and moraic nasals ( hatsuon ), for example, oishi for oish tasty'.",
"The second category is alternative representations that do not have a phonetic difference and are typically achieved by substitution among uppercase or lowercase kana characters, or among vowel characters and long sound symbols, for example, for ok big'.",
"Moreover, typographical errors can be seen as another type of variant form.",
"We targeted these four types of non-standard forms for normalization to standard forms.",
"The BQNC was constructed using the following steps.",
"The annotation process was performed by the first author.",
"6 Morphographic kanji and syllabographic hiragana are primarily used for Japanese native words ( wago ) and Japanese words of Chinese origin (Sino-Japanese words or kango ), whereas syllabographic katakana is primarily used, for example, for loanwords, onomatopoeia, and scientific names.",
"Additionally, Arabic numerals, Latin letters ( romaji ), and other auxiliary symbols are used in Japanese sentences.",
"(1)",
"Sentence Selection We manually selected sentences to include in our corpus from the blog and Q&A site registers in the BCCWJ non-core data.",
"We preferentially extracted sentences that contained candidates of UGT-specific words, that is, word tokens that may belong to non-general vocabularies or correspond to non-standard forms.",
"As a result, we collected more than 900 sentences.",
"(2)",
"First Annotation Sentences in the non-core data have been automatically annotated with word boundaries and word attributes, such as POS and lemma.",
"Following the BCCWJ annotation guidelines",
"(Ogura et al., 2011a,b)",
"and UniDic",
"(Den et al., 2007), which is an electronic dictionary database designed for the construction of NINJAL's corpora, we refined the original annotations of the selected sentences by manually checking them.",
"The refined attributes were token, POS, conjugation type, conjugation form, pronunciation, lemma, and lemma ID.",
"Additionally, we annotated each token with a word category shown in Table 1 and a standard form ID if the token corresponded to a nonstandard form.",
"Table 2 shows two examples of annotated sentences.",
"We annotated each non-standard token with a standard form ID denoted as [lemma ID]:[lemma](_[pronunciation]), which is associated with the set of acceptable standard forms shown in Table 3.",
"all tokens in the sentences that we finished the first annotation and fixed the annotation criteria, that is, the",
"definitions of vocabulary types and variant form types, and standard forms for each word.",
"Through these steps, we obtained 929 annotated sentences.",
"Neologisms/Slang: a newly invented or imported word that has come to be used collectively.",
"Specifically, we used a corpus reference application called Chunagon 7 and regarded a word as a neologism/slang if its frequency in the BCCWJ was less than five before the year 2000 and increased to more than ten in 2000 or later.",
"8 Proper names: following the BCCWJ guidelines, we regarded a single word that corresponded to a proper name, such as person name, organization name, location name, and product name, as a proper name .",
"In contrast to the BCCWJ guidelines, we also regarded an abbreviation of a proper name as a proper name , for example, in Table 1.",
"Onomatopoeia: a word corresponds to onomatopoeia.",
"We referred to a Japanese onomatopoeia dictionary",
"(Yamaguchi, 2002)",
"to assess whether a word is onomatopoeic.",
"We followed the criteria in the BCCWJ guidelines on what forms of words are onomatopoeic and what words are associated with the same or different lemmas.",
"Interjections: a word whose POS corresponds to an interjection.",
"Although we defined standard forms for idiomatic greeting expressions registered as single words in UniDic, 9 we did not define standard and non-standard forms for other interjections that express feelings or reactions, for example, e uh-huh' and uwa wow'.",
"Foreign words: a word from non-Japanese languages.",
"We regarded a word written in scripts in the original language as a foreign word , for example, English words written in the Latin alphabet such as plastic.",
"Conversely, we regarded loanwords written in Japanese scripts",
"(hiragana, katakana, or kanji)",
"as general words, for example, 7 https://chunagon.ninjal.ac.jp 8 The original sentences were from posts published between 2004 and 2009.",
"plastic'.",
"Moreover, we did not regard English acronyms and abbreviations written in uppercase letters as foreign words because such words are typically also written in the Latin alphabet in Japanese sentences, for example, .",
"Dialect words: a word from a Japanese dialect.",
"We referred to a Japanese dialect dictionary",
"(Sato, 2009)",
"and regarded a word as a dialect word if it corresponded to an entry or occurred in an example sentence.",
"We did not consider normalization from a dialect word to a corresponding word in the standard Japanese dialect.",
"Emoticons/AA: nonverbal expressions that comprise characters to express feelings or attitudes.",
"Because the BCCWJ guidelines does not explicitly describe criteria on how to segment emoticon/AA expressions as words, we defined criteria to follow emoticon/AA entries in UniDic.",
"10 4.2 Type of Variant Form There are no trivial criteria to determine which variant forms of a word are standard forms because most Japanese words can be written in multiple ways.",
"Therefore, we defined standard forms of a word as all forms whose occurrence rates were approximately equal to 10% or more in the BCCWJ among forms that were associated with the same lemma.",
"For example, among variant forms of the lemma omoshiroi interesting' or funny' that occurred 7.9K times, major forms and accounted for 72% and 27%, respectively, and other forms, such as and , were very rare.",
"In this case, the standard forms of this word are the two former variants.",
"We annotated tokens corresponding to the two latter non-standard forms with the standard form IDs and the types of variant forms.",
"We defined criteria for types of variant forms as follows.",
"Character type variants: among the variants written in different scripts, we regarded variants whose occurrence rates were approximately equal to 5% or less in the BCCWJ as non-standard forms of character type variants .",
"Specifically, variants written in kanji, hiragana, or katakana for native words and Sino-Japanese words, variants written in katakana or hiragana for loanwords, variants 10 For example, if characters expressing body parts were outside of punctuation expressing the outline of a face, the face and body parts were segmented, but both were annotated with emoticons/AA , for example, | | .",
"written in uppercase or lowercase Latin letters for English abbreviations are candidates for character type variants.",
"We assessed whether these candidates were non-standard forms based on the occurrence rates.",
"Alternative representations: a form whose internal characters are",
"(partially)",
"replaced by special characters without phonetic differences.",
"Specifi-cally, non-standard forms of alternative representations include native words and Sino-Japanese words written in historical kana orthography",
"(e.g., for omo / omou think'), and loanwords written as an unusual 11 katakana sequence",
"(e.g., for orchestra').",
"Additionally, alternative representations include substitution with respect to kana: substitution of the long vowel kana by the long sound symbol",
"(e.g., for oish tasty'), substitution of upper/lowercase kana by the other case",
"(e.g., for watashi me'), and phonetic or visual substitution of kana characters by Latin letters and symbols",
"(e.g., for kawa cute' and for konnichiwa hello').",
"Sound change variants: a form whose pronunciation is changed from the original form.",
"Specifically, sound change variants include the insertion of special moras",
"(e.g., tsuyoi for tusyoi strong'), deletion of moras",
"(e.g., kusa for kusai stinking'), and substitution of characters/moras",
"(e.g., ssu for desu polite copula and suge for sugoi awesome').",
"Typographical errors: a form with typographical errors derived from character input errors, kana-kanji conversion errors, or the user's incorrect understanding.",
"For example, tsutai for turai tough' and for sore it'.",
"We present the statistics of the BQNC in Table 4.",
"It comprises 929 sentences, 12.6K word tokens, and 767 non-standard word tokens.",
"As shown in Table 6, the corpus contains tokens of seven types of vocabulary and four types of variant form.",
"Whereas there exist fewer than 40 instances of ne-ologisms/slang, dialect words, foreign words, and 11 We assessed whether a form is unusual if its occurrence rate was approximately equal to 5% or less in the BCCWJ similar to the case of character type variants.",
"typographical errors, each of the other category has more than 100 instances.",
"Our corpus contains a similar number of non-standard tokens to Kaji and Kitsuregawa",
"(2014)'s Twitter corpus",
"(1,831 sentences, 14.3K tokens, and 793 non-standard tokens)",
"and Osaki et al.",
"(2017)'s Twitter corpus",
"(1,405 sentences, 19.2K tokens, and 768 non-standard to-kens).",
"The former follows the POS tags for the Japanese MA toolkit JUMAN and the latter follows the authors own POS tags that extend NINJAL's SUW.",
"In the following subsections, we evaluate the existing methods for MA and LN on the BQNC and discuss correctly or incorrectly analyzed results.",
"We evaluated two existing methods.",
"First, we used MeCab 0.996",
"(Kudo et al., 2004), 12 which is a popular Japanese MA toolkit based on conditional random fields.",
"We used UniDicMA",
"(unidic-cwj-2.3.0)",
"13 as the analysis dictionary, which contains attribute information of 873K words and MeCab's parameters",
"(word occurrence costs and transition costs)",
"learned from annotated corpora, including the BCCWJ",
"(Den, 2009).",
"Second, we used our implementation of Sasano et al.",
"(2013)'s joint MA and LN method.",
"They defined derivation rules to add new nodes in the word lattice of an input sentence built by their baseline system, JUMAN.",
"Specifically, they used the following rules:",
"(i)",
"sequential voicing",
"( ren-daku ),",
"(ii)",
"substitution with long sound symbols and lowercase kana,",
"(iii)",
"insertion of long sound symbols and lowercase kana,",
"(iv)",
"repetitive onomatopoeia",
"(XYXY-form 14 )",
"and",
"(v)",
"non-repetitive onomatopoeia",
"(XQY ri -form and XXQ to -form).",
"For example, rule",
"(iii)",
"adds a node of tsumetai as a variant form of tsumetai cold' 12 https://taku910.github.io/mecab/ 13 https://unidic.ninjal.ac.jp/ 14 X and Y represent the same kana character(s)",
"corresponding to one mora, Q represents a long consonant character / , ri represents a character / , and to represents a character / .",
"The original implementation by Sasano et al.",
"(2013)",
"was an extension of JUMAN and followed JUMAN's POS tags.",
"To adapt their approach to the SUW, we implemented their rules and used them to extend the first method of MeCab using UniDicMA.",
"We set the costs of the new nodes by copying the costs of their standard forms or the most frequent costs of the same-form onomatopoeia, whereas Sasano et al.",
"(2013)",
"manually defined the costs of each type of new word.",
"We denote this method by MeCab+ER",
"(Extension Rules).",
"Notably, we did not conduct any additional training to update the models' parameters for either methods.",
"Table 5 shows the overall performance, that is, P recision, R ecall, and F 1 score, of both methods for SEG mentation, POS tagging 15 and NOR malization.",
"16 Compared with well-formed text domains, 17 the relatively lower performance",
"(F 1 of 9095%)",
"of both methods for segmentation and POS tagging indicates the difficulty of accurate segmentation and tagging in UGT.",
"However, MeCab+ER outperformed MeCab by 2.52.9 F 1 points because of the derivation rules.",
"Regarding the normalization performance of MeCab+ER, the method achieved moderate precision but low recall, which indicates its limited coverage for various variant forms in the dataset.",
"Table 6 shows the segmentation and POS tagging recall for both methods for each category.",
"In contrast to the sufficiently high performance for general words, both methods performed worse for words of characteristic categories in UGT; micro average recall was at most 79.6% for segmentation 15 We only evaluated top-level POS.",
"and 70.4% for POS tagging",
"(non-gen/std total column).",
"MeCab+ER outperformed MeCab particularly for onomatopoeia, character type variants, alternative representations, and sound change variants.",
"The high scores for dialect words were probably because UniDicMA contains a large portion of",
"(19 out of 23)",
"dialect word tokens.",
"Interjection was a particularly difficult vocabulary type, for which both methods recognized only approximately 50% of the gold POS tags.",
"We guess that this is because the lexical variations of interjections are diverse; for example, there are many user-generated expressions that imitate various human voices, such as laughing, crying, and screaming.",
"Table 7 shows the recall of MeCab+ER's normalization for each category.",
"The method correctly normalized tokens of alternative representations and sound change variants with 3040% recall.",
"However, it completely failed to normalize character type variants not covered by the derivation rules and more irregular typographical errors.",
"We performed error analysis of the segmentation results for the two methods.",
"Table 8 shows a matrix of the number of correct or incorrect segmentations for the methods for gold words.",
"There existed 32 tokens that only MeCab correctly segmented",
"(T-F), 200 tokens that only MeCab+ER correctly segmented",
"(F-T), and 413 tokens that both methods MeCab \\ MeCab+ER T F T 11955 32 F 200 413 Table 8: Number of correct",
"In Table 9, we show the actual segmenta-tion/normalization examples using the methods for the three cases; the first, second, and third blocks show examples of T-F, F-T, and F-F cases, respectively.",
"First, out of 32 T-F cases, MeCab+ER incorrectly segmented tokens as onomatopoeia in 18 cases.",
"For example,",
"(a)",
"and",
"(b)",
"correspond to new nodes added by the rules for the XQY ri -form and XYXY-form onomatopoeia, respectively, even though",
"(a)",
"is a verb phrase and",
"(b)",
"is a repetition of interjections.",
"Second, out of 200 F-T cases that only MeCab+ER correctly segmented, the method correctly normalized 119 cases, such as",
"(c),",
"(d), and the first word in",
"(g), and incorrectly normalized 42 cases, such as",
"(e)",
"and the second word in",
"(f).",
"The remaining 39 cases were tokens that required no normalization, such as the first word in",
"(f), the second word in",
"(g), and",
"(h).",
"The method correctly normalized simple examples of sound change variants",
"(c: for )",
"and alternative representations",
"(d: for )",
"because of the substitution and insertion rules, but failed to normalize character type variants",
"(f: for )",
"and complicated sound change variants",
"(e: for ).",
"Third, out of 413 F-F cases, 148 tokens were complicated variant forms, including a combination of historical kana orthography and the insertion of the long sound symbol",
"(i), a combination of the character type variant and sound change variant",
"(j), a variant written in romaji",
"(k).",
"The remaining 265 tokens were other unknown words, including emoticons",
"(l), neologisms/slang",
"(m), and proper names",
"(n).",
"18 5.5 Analysis of the Normalization Results Table 10 shows the detailed normalization results for MeCab+ER.",
"Among 767 non-standard words",
"(Gold), the method correctly normalized 198 true positives",
"(TP)",
"and missed 569",
"(58+511)",
"false nega-18 shawari is an abbreviation of shain waribiki employee discount'.",
"tives",
"(FN).",
"Similarly, among 354 predictions",
"(Pred), the methods incorrectly normalized 156",
"(99+57)",
"false positives",
"(FP).",
"We further divided FN and FP according to whether they were correctly segmented",
"(T-SEG)",
"or not",
"(F-SEG).",
"We do not show TP and FN examples here because we already introduced some examples in 5.4.",
"Among the FP examples, some of them were not necessarily inappropriate results; normalization between similar interjections and onomatopoeia was intuitively acceptable",
"(e.g., was normalized to o oh' and was normalized to sarasara smoothly').",
"However, we assessed these as errors based on our criterion that interjections have no",
"(non-)standard forms and the BCCWJ guidelines that regards onomatopoeia with and without long sound insertion as different lemmas.",
"The derivation rules used in MeCab+ER improved segmentation and POS tagging performance and contributed to the correct normalization of parts of variant forms, but the overall normalization performance was limited to F 1 of 35.3%.",
"normalization errors into two types: complicated variant forms and unknown words of specific vocabulary types such as emoticons and neologisms/slang.",
"The effective use of linguistic resources may be required to build more accurate systems, for example, discovering variant form candidates from large raw text similar to",
"(Saito et al., 2017), and construct-ing/using term dictionaries of specific vocabulary types.",
"UGT Corpus for MA and LN Hashimoto et al.",
"(2011)",
"developed a Japanese blog corpus with morphological, grammatical, and sentiment information, but it contains only 38 non-standard forms and 102 misspellings as UGT-specific examples.",
"Osaki et al.",
"(2017)",
"constructed a Japanese Twitter corpus annotated with morphological information and standard word forms.",
"Although they published tweet URLs along with annotation information, we could only restore parts of sentences because of the deletion of the original tweets.",
"Sasano et al.",
"(2013); Kaji and Kitsuregawa",
"(2014); Saito et al.",
"(2014, 2017)",
"developed Japanese MA and LN methods for UGT, but most of their in-house data are not publicly available.",
"For English LN, Han and Baldwin",
"(2011)",
"constructed an English Twitter corpus and Yang and Eisenstein",
"(2013)",
"revised it as LexNorm 1.2.",
"Baldwin et al.",
"(2015)",
"constructed an English Twitter corpus",
"(LexNorm2015)",
"for the W-NUT 2015 text normalization shared task.",
"Both LexNorm 1.2 and LexNorm2015 have been used as benchmark datasets for LN systems",
"(Jin, 2015; van der Goot, 2019; Dekker and van der Goot, 2020).",
"For Chinese, Li and Yarowsky",
"(2008)",
"published a dataset of formal-informal word pairs collected from Chinese webpages.",
"Wang et al.",
"(2013)",
"released a crowdsourced corpus constructed from microblog posts on Sina Weibo.",
"Classification of Linguistic Phenomena in UGT To construct an MA dictionary, Nakamoto et al.",
"(2000)",
"classified unknown words occurring in Japanese chat text into contraction",
"(e.g., for sugoi awesome'), exceptional kana variant",
"(e.g., for com-puter'), abbreviation, typographical errors, filler, phonomime and phenomime, proper nouns, and other types.",
"Ikeda et al.",
"(2010)",
"classified peculiar expressions in Japanese blogs into visual substitution",
"(e.g., for watashi me'), sound change",
"(e.g., for dekai big'), kana substitution",
"(e.g., for vitamin'), and other unknown words into similar categories to Nakamoto et al.",
"(2000).",
"Kaji et al.",
"(2015)",
"performed error analysis of Japanese MA methods on Twitter text.",
"They classified mis-segmented words into a dozen categories, including spoken or dialect words, onomatopoeia, interjections, emoticons/AA, proper nouns, foreign words, misspelled words, and other non-standard word variants.",
"Ikeda et al.",
"(2010)'s classification of peculiar expressions is most similar to our types of variant forms and Kaji et al.",
"(2015)'s classification is most similar to our types of vocabulary",
"(shown in Table 2), whereas we provide more detailed definitions of categories and criteria for standard and non-standard forms.",
"Other work on Japanese MA and LN did not consider diverse phenomena in UGT",
"(Sasano et al., 2013; Saito et al., 2014).",
"For English, Han and Baldwin",
"(2011)",
"classified ill-formed English words on Twitter into ex-tra/missing letters and/or number substitution",
"(e.g., b4 for before), slang",
"(e.g., lol for laugh out loud ), and others.",
"van der Goot et al.",
"(2018)",
"defined a more comprehensive taxonomy with 14 categories for a detailed evaluation of English LN systems.",
"It includes phrasal abbreviation",
"(e.g., idk for I don't know), repetition",
"(e.g., soooo for so), and phonetic transformation",
"(e.g., hackd for hacked).",
"For Chinese, Li and Yarowsky",
"(2008)",
"classified informal words in Chinese webpages into four types: homophone",
"(informal words with similar pronunciation to formal words, e.g.,",
"(cid:104)",
"xfn",
"(cid:105)",
"19 rice gruel for",
"(cid:104)",
"xhuan",
"(cid:105)",
"like), abbreviation and acronym",
"(e.g., GG for",
"(cid:104)",
"gege",
"(cid:105)",
"elder brother), transliteration",
"(informal words are transliteration of English translation of formal words, e.g., 3Q",
"(cid:104)",
"sanqiu",
"(cid:105)",
"for",
"(cid:104)",
"xixie",
"(cid:105)",
"thank you), and others.",
"Wang et al. (2013) also classified informal words in Chinese microblog posts similar to Li and Yarowsky (2008).",
"Methods for MA and LN In the last two decades, previous work has explored various rules and extraction methods for formal-informal word pairs to enhance Japanese MA and LN models for UGT.",
"Nakamoto et al. (2000) proposed an alignment method based on string similarity between original and variant forms.",
"Ikeda et al. (2010) automatically constructed normalization rules of peculiar expressions in blogs, based on frequency, edit distance, and estimated accuracy improvements.",
"Sasano et al. (2013) defined derivation rules to recognize unknown onomatopoeia and variant forms of known words that frequently occur in webpages.",
"Their rules were also implemented in a recent MA toolkit Juman++ (Tolmachev et al., 2020) to handle unknown words.",
"Saito et al. (2014) estimated character-level alignment from manually annotated pairs of formal and informal words on Twitter.",
"Saito et al. (2017) extracted formal-informal word pairs from unlabeled Twitter data based on semantic and phonetic similarity.",
"For English and Chinese, various classification methods for normalization of informal words (Li and Yarowsky, 2008; Wang et al., 2013; Han and Baldwin, 2011; Jin, 2015; van der Goot, 2019) have been developed based on, for example, string, phonetic, semantic similarity, or co-occurrence frequency.",
"Qian et al. (2015) proposed a transition-based method with append( x ), separate( x ), and separate_and_substitute( x , y ) operations for the joint word segmentation, POS tagging, and normalization of Chinese microblog text.",
"Dekker and van der Goot (2020) automatically generated pseudo training data from English raw tweets using noise insertion operations to achieve comparable performance without manually annotated data to an existing LN system.",
"We presented a publicly available Japanese UGT corpus annotated with morphological and normalization information.",
"Our corpus enables the performance comparison of existing and future systems and identifies the main remaining issues of MA and LN of UGT.",
"Experiments on our corpus demonstrated the limited performance of the existing systems for non-general words and non-standard forms mainly caused by two types of difficult examples: complicated variant forms and unknown words of non-general vocabulary types.",
"In the future, we plan to (1) expand the corpus by further annotating of 510 times more sentences for a more precise evaluation and (2) develop a joint MA and LN method with high coverage.",
"We would like to thank the anonymous reviewers and area chairs for their constructive comments.",
"We used the Balanced Corpus of Contemporary Written Japanese to construct our corpus."
] | [
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"result",
"objective",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"other",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"method",
"objective",
"objective",
"other",
"other"
] |
[
"Learning high-quality sentence representations is a fundamental problem of natural language processing which could benefit a wide range of downstream tasks.",
"Though the BERT-like pre-trained language models have achieved great success, using their sentence representations directly often results in poor performance on the semantic textual similarity task.",
"Recently, several contrastive learning methods have been proposed for learning sentence representations and have shown promising results.",
"However, most of them focus on the constitution of positive and negative representation pairs and pay little attention to the training objective like NT-Xent, which is not sufficient enough to acquire the discriminating power and is unable to model the partial order of semantics between sentences.",
"So in this paper, we propose a new method ArcCSE, with training objectives designed to enhance the pairwise discriminative power and model the entailment relation of triplet sentences.",
"We conduct extensive experiments which demonstrate that our approach outperforms the previous state-of-the-art on diverse sentence related tasks, including STS and SentEval.",
"Learning sentence representations, which encodes sentences into fixed-sized dense vectors such that semantically similar ones stay close, is a fundamental problem of natural language processing.",
"It could benefit a wide range of downstream applications such as information retrieval, semantic similarity comparison, question answering, and so on.",
"Recently, with the great success of pre-trained Transformer-based language models (Devlin et al., 2019; Liu et al., 2019; Raffel et al., 2020; Brown et al., 2020; Liu et al., 2019) like BERT, they have been widely adopted for generating sentence representations.",
"A straightforward way is by leveraging the [CLS] embedding (Devlin et al., 2019) 1 0 1",
"or applying mean pooling on the last layers of a BERT-like pre-trained language model (Reimers and Gurevych, 2019).",
"However, the sentence embeddings coming from a pre-trained language model without further fine-tuning could not capture the semantic meaning of sentences very well as shown in Figure",
"1(a), and sometimes even un-derperform non-contextualized embeddings like GloVe (Pennington et al., 2014).",
"To make pre-trained language models more suitable for generating sentence embeddings, supervised methods like SBERT (Reimers and Gurevych, 2019) are proposed, which improve the performance by fine-tuning on a labeled dataset.",
"As labeled data is not available or expensive to annotate in many tasks or domains, it is of great value for developing unsupervised/self-supervised approaches for learning sentence representations.",
"So recent works like BERT-Flow (Li et al., 2020) and BERT-Whitening (Su et al., 2021) propose postprocessing methods to improve the BERT-based sentence representation.",
"They address that the non-smooth anisotropic semantic space of BERT is a bottleneck and alleviate the problem through normalizing flows and whitening operation.",
"To further improve the quality of sentence representations, several works (Kim et al., 2021; Yan et al., 2021; Giorgi et al., 2021; Carlsson et al., 2021; Gao et al., 2021) adopt self-supervised contrastive learning approach, which learns sentence representations by minimizing the distance of positive sentence representation pairs and maximizing the distance of negative pairs.",
"In these works, positive pairs are often constituted through data augmentation or encoders with different structure or parameters, while negative pairs are derived from different sentences within the same batch.",
"Then contrastive learning objective like normalized temperature-scaled cross-entropy loss (NT-Xent) (Chen et al., 2020; Gao et al., 2021) is used for optimizing.",
"A typical example unsup-SimCSE (Gao et al., 2021) achieves state-of-the-art performance with a simple and effective idea of using standard dropout for data augmentation.",
"Though existing contrastive methods for learning sentence representation have shown promising results, most of them focus on the positive and negative pairs constitution, and the optimization objective itself is not fully exploited.",
"The contrastive learning objective NT-Xent loss used in recent works (Yan et al., 2021; Giorgi et al., 2021; Gao et al., 2021) is a variation of cross-entropy loss with softmax function.",
"Recent studies (Wang et al., 2018; Deng et al., 2019) have shown that the traditional softmax-based loss is insufficient to acquire the discriminating power, as shown in Figure",
"1(b) in which SimCSE-BERT base adopts the NT-Xent loss and could not separate s b and s c completely.",
"In addition, the current optimization objectives only models sentence relations in a pairwise perspective, which tries to pull sentences with similar semantics closer and push dissimilar ones away from each other.",
"However, there are different degrees of semantic similarity among related sentences.",
"For example in Figure",
"1(d), s b is more similar to s a than s c is.",
"The current optimization objectives lack the ability to model the partial order of semantics between sentences.",
"To alleviate these problems, in this paper, we propose a new approach ArcCSE for sentence representation learning.",
"For pairwise sentence relation modeling, we propose Additive Angular Margin Contrastive Loss (ArcCon Loss), which enhances the pairwise discriminative power by maximizing the decision margin in the angular space.",
"Besides, in order to model the partial order of semantics between sentences, we propose a new self-supervised task that captures the entailment relation among triplet sentences.",
"The task is implemented through automatically constituted triplet sentences with entailment relation among them.",
"A visualization example of the generated representations through ArcCSE is shown in Figure",
"1(c).",
"We evaluate our method on standard semantic textual similarity (STS) tasks and SentEval transfer tasks, and it outperforms the previous state-of-the-art approaches.",
"Early works usually learn sentence representations by augmenting the idea of word2vec (Mikolov et al., 2013), such as predicting surrounding sentences (Kiros et al., 2015; Hill et al., 2016; Logeswaran and Lee, 2018) or summing up n-gram embeddings (Pagliardini et al., 2018).",
"With the rise of pre-trained language models, many works try to generate sentence representations through BERT-like models.",
"A common way is leveraging the [CLS] embedding or applying mean pooling on the last layers of BERT (Reimers and Gurevych, 2019; Li et al., 2020).",
"Instead of using BERT embeddings directly, BERT-Flow (Li et al., 2020) and BERT-Whitening (Su et al., 2021) further improve sentence representation through post-processing.",
"Recently, several works adopt the contrastive learning framework for sentence representation learning.",
"They propose different strategies to constitute contrastive pairs, either through different data transforming methods (Zhang et al., 2020; Yan et al., 2021; Giorgi et al., 2021), or through encoders with different structures or parameters (Carlsson et al., 2021; Kim et al., 2021; Gao et al., 2021).",
"A typical example SimCSE (Gao et al., 2021) uses dropout as data augmentation strategy and achieves state-of-the-art performance.",
"However, most existing works pay little attention to the training objective and use the traditional contrastive BERT-like Encoder (Dropout Turn on) BERT-like Encoder (Dropout Turn off) Parameter Sharing Al Jaber's first long distance travel was of 800km which he covered by circling Qatar.",
"loss directly, which is insufficient in discrimination and unable to model the partial order of semantics between sentences.",
"So, in our work, we propose a new approach that jointly models the pairwise and triple-wise sentence relations and further improves the sentence representations' quality.",
"The goal of Deep Metric Learning (DML) is to learn a function that maps objects into an embedded space, in which similar objects stay close and dissimilar ones are far away.",
"In order to achieve this goal, many approaches have been proposed, and designing appropriate loss functions plays a key role in it.",
"Contrastive training objectives like Contrastive Loss (Chopra et al., 2005), N-Pair Loss (Sohn, 2016), Structured Loss (Song et al., 2016) and Triplet Margin Loss (Ma et al., 2021) apply the definition of metric learning directly.",
"These objectives are among the earliest training objectives used for deep metric learning.",
"Later, softmax-based losses which learn a center for each class and penalize the distances between deep features and their corresponding class centers achieve more promising results in supervised metric learning.",
"Typical examples like Center Loss (Wen et al., 2016), SphereFace (Liu et al., 2017), CosFace (Wang et al., 2018) and ArcFace (Deng et al., 2019) are widely adopted in deep learning applications such as face recognition and sentence classification (Coria et al., 2020).",
"However, these losses need class labels and are not suitable for learning sentence representations.",
"So inspired by ArcFace, we propose a new training objective ArcCon that does not need class labels and can model pairwise sentence relations with more discriminative power than traditional contrastive training objectives.",
"In this section, we present ArcCSE, an angular based contrastive sentence representation learning framework, which could generate superior sentence embeddings from unlabeled data.",
"Given a pre-trained language model M and an unlabeled text dataset D , the task is fine-tuning M on D so that the sentence representations generated through M could be more semantic discriminative.",
"model pairwise and triple-wise sentence relations simultaneously, as shown in Figure",
"2. We start with angular margin based contrastive learning in Section 3.1, which models pairwise relations between sentences by pulling semantic similar ones closer while pushing dissimilar ones away.",
"Then we introduce the method which models the partial order of semantics between automatically constituted triplet sentences in Section 3.2.",
"To model the positive/negative pairwise relations between sentences, we first need to generate sentence representations and group them into positive and negative pairs.",
"Then we feed these pairs to a training objective for optimizing.",
"Given a collection of sentences D = { s i } Ni =1 , we generate the sentence representations through a BERT-like pre-trained language model M .",
"Following SimCSE, we use dropout as the data augmentation method.",
"For each sentence s i , we generate two different representations h i and h i from s i by passing s i to M twice with independently sampled dropout masks.",
"These two representations with the same semantics constitute a positive pair, while the negative pairs are derived from the representations of different sentences within the same batch.",
"After getting the positive and negative sentence pairs, we put them into a training objective for model fine-tune.",
"The most widely adopted training objective is NT-Xent loss (Chen et al., 2020; Gao et al., 2021), which has been used in previous sentence and image representation learning methods and can be formulated as follows: L NT-Xent = log e sim ( h i ,h i ) / (cid:80) n j =1 e sim ( h i ,h j ) / (1) where sim ( h i , h j ) is the cosine similarity h T i h j || h i |||| h j || , is a temperature hyperparameter and n is the number of sentences within a batch.",
"Though the training objective tries to pull representations with similar semantics closer and push dissimilar ones away from each other, these representations may still not be sufficiently discriminative and not very robust to noise.",
"Let us denote angular i,j as follows: i,j = arccos (cid:18) h T i h j || h i || || h j || (cid:19) (2) NT-Xent B e tt e r A li gn m e n t ArcCon D ec i s i on M a r g i n OptimizationDirection Better Uniformity B e tt e r A li gn m e n t Better Uniformity Figure 3: Comparison of NT-Xent loss and ArcCon loss.",
"The decision boundary for h i in NT-Xent is i,i = i,j , as show in Figure",
"3. Due to lack of decision margin, a small perturbation around the decision boundary may lead to an incorrect decision.",
"To overcome the problem, we propose a new training objective for sentence representation learning by adding an additive angular margin m between positive pair h i and h i .",
"We named it Additive Angular Margin Contrastive Loss (ArcCon Loss), which can be formulated as follows: L arc = log e cos ( i,i + m ) / e cos ( i,i + m ) / + (cid:80) j = i e cos( j,i ) / (3) In this loss, the decision boundary for h i is i,i + m = i,j , as show in Figure",
"3. Compared with NT-Xent, it further pushed h i towards to the area where i,i get smaller and i,j get larger, by increasing the compactness of sentence representations with the same semantics and enlarging the discrepancy of different semantic representations.",
"This help enhance the alignment and uniformity properties (Wang and Isola, 2020), which are two key measures of representation quality related to contrastive learning, indicating how close between positive pair embeddings and how well the embeddings are uniformly distributed.",
"The quantitative analysis is illustrated in Section 4.5.",
"Besides, the decision boundary leaves an extra margin m to boundary i,i = i,j which is often used during inference, making it more tolerant to noise and more robust.",
"All these properties make ArcCon loss more discriminative than traditional training objectives like NT-Xent.",
"Compared with Arcface (Deng et al., 2019) which is often used in large-scale fine-grained categorization in computer vision community, ArcCon loss does not need classification labels, and could handle contrastive task properly.",
"Previously the training objectives for sentence representation learning like NT-Xent loss only considered pairwise sentence relations, in which sentences are either similar or dissimilar in semantics.",
"But in fact, there are varying degrees of semantic similarity.",
"For example, sentence s 2 could be more similar to sentence s 1 than sentence s 3 to s 1 .",
"Existing methods lack the ability to model such partial order of semantics between sentences.",
"In order to distinguish the slight differences in semantics between different sentences, we propose a new self-supervised task which models the entailment relation of automatically generated triplet sentences.",
"For each sentence s i in the text dataset D , we first generate an external sentence s i by masking contiguous segments of s i with a masking rate of 20%.",
"Then we enlarge the masking area and get a new sentence s i with a masking rate of 40% to s i .",
"The masking rates are set up experimentally, and an ablation study about the effect of masking rates is illustrated in Section 4.4.",
"An example of the masking procedure is shown as follows: s i Al Jaber's first long distance travel was of 800km which he covered by circling Qatar.",
"s i Al Jaber's first long distance travel was of 800km which he covered by circling Qatar.",
"s i Al Jaber's first long distance travel was of 800km which he covered by circling Qatar.",
"We can constitute a triplet ( s i , s i , s i ) with entailment relation among them.",
"Though in rare cases, the strategy may generate sentences that do not exhibit the desired relationship and introduce some noise, the entailment relation holds true most of the time.",
"We expect encountering enough data will reinforce the correct ones whereas the impact of incorrect ones will diminish.",
"Since the s i , s i and s i are similar literally and semantically, generating their representations with dropout noise may obscure their entailment relation and add inaccurate signals to the representation learning process.",
"So we turn off the dropout of the encoder when modeling the triplet relation.",
"As s i is more similar to s i in semantics than s i is, we could model such relation with a triplet objective: L tri = max (cid:0) 0 , sim ( h i , h i ) sim ( h i , h i ) + m (cid:1) (4) in which h i is the sentence representation of s i generated without dropout noise and sim ( i, j ) is the cosine similarity between i and j .",
"As the semantic difference between s i and s i may be subtle depending on the original sentence s i and the masked words, here we set m to zero.",
"Combine formula (3) and formula (4), the final form of our training objective is: L = L arc + L tri (5) in which is a coefficient.",
"Evaluation Tasks We evaluate our method on two kinds of sentence related tasks:",
"Unsupervised Semantic Textual Similarity (STS): These tasks measure the model's ability to estimate the semantic similarities between sentences.",
"SentEval Transfer Tasks: These tasks measure the effectiveness of sentence embeddings used in downstream transfer tasks.",
"Baselines We compare ArcCSE to several representative methods on STS and SentEval tasks, such as average GloVe embeddings (Pennington et al., 2014), Skip-thought (Kiros et al., 2015), average BERT embeddings from the last layer (De-vlin et al., 2019), BERT-Flow (Li et al., 2020), and BERT-Whitening (Su et al., 2021).",
"We also include the recently proposed contrastive learning methods, such as ISBERT (Zhang et al., 2020), CT-BERT (Carlsson et al., 2021), ConSERT (Yan et al., 2021), and the current state-of-the-art method SimCSE (Gao et al., 2021).",
"Implementation Details We train ArcCSE with the pre-trained checkpoints of BERT base and BERT large (Devlin et al., 2019).",
"We also employ our method to SBERT (Reimers and Gurevych, 2019), which has been trained on NLI datasets, to verify the generalizability of our method.",
"Following SimCSE (Gao et al., 2021), we use the output of the MLP layer on top of the [CLS] as Method STS12 STS13 STS14 STS15 STS16 STS-B SICK-R Avg.",
"the sentence representation during training, and use the [CLS] output without MLP layer for evaluation.",
"The dropout rate is set to 0.1.",
"For ArcCon loss, we set the angular margin m to 10 degrees and the temperature to 0.05.",
"When modeling the entailment relation of triplet sentences, we set the masking ratios as 20% and 40% respectively.",
"Since the semantic difference between triplet sentences is more obvious for long sentences, we filter out sentences with less than 25 words and use the left ones for the triplet loss.",
"The loss coefficient is set to 0.1 experimentally.",
"We use one million random sampled sentences from English Wikipedia for training, which has been used in previous work (Gao et al., 2021) 1 .",
"During training, the sentences are sampled by length.",
"We set different maximum sentence lengths for ArcCon loss and triplet loss to save memory.",
"The length is set to 32 for the ArcCon loss in large models, and to the maximum length within a batch for all other cases.",
"We train our model for one epoch and the learning rate is set to 3e-5 for base 1 https://huggingface.co/datasets/princeton-nlp/datasets-for-simcse/resolve/main/wiki1m_for_simcse.txt models and 1e-5 for large models.",
"We search the batch size within {8, 16, 32} and always update the parameters every 64 steps.",
"The model is optimized by the AdamW with Sharpness-Aware Minimization (Foret et al., 2021) and default configurations.",
"We evaluate our model every 125 training steps on the development set of STS-B, and the best checkpoint is used for the final evaluation on test sets.",
"Our implementation is based on Hugging-Face's Transformers (Wolf et al., 2020).",
"We conduct experiments on 7 semantic textual similarity (STS) tasks, including STS tasks 2012-2016 (Agirre et al., 2012, 2013, 2014, 2015, 2016), STS Benchmark (Cer et al., 2017), and SICK-Relatedness (Marelli et al., 2014).",
"Within these datasets, each sample contains two sentences and a gold score between 0 and 5 which indicates their semantic similarity.",
"We use SentEval toolkit (Con-neau and Kiela, 2018) for evaluation and report the Spearman's correlation following previous works (Reimers and Gurevych, 2019; Gao et al., 2021).",
"from which we can see that ArcCSE outperforms the previous approaches.",
"Compared with the previous state-of-the-art method SimCSE, ArcCSE-BERT base raises the average Spearman's correlation from 76.25% to 78.11%, and ArcCSE-BERT large further pushes the results to 79.37%.",
"The performance is even better than strong supervised method SBERT, which has already been trained on NLI datasets.",
"Furthermore, we can also employ our method to SBERT and improve its performance to 79.06% and 80.69% for the base and large models respectively, which is more effective than SimCSE.",
"We also explore the improvements made by the ArcCon loss and triplet loss independently based on BERT base .",
"From Table 1 we can see that with ArcCon loss alone, the average Spearman's correlation is 77.25%.",
"When combining the traditional NT-Xent loss with our proposed triplet loss, the average Spearman's correlation is 77.02%.",
"Both of them outperform the previous state-of-the-art method SimCSE, whose average Spearman's correlation is 76.25%.",
"This demonstrates the effectiveness of ArcCon and triplet loss we proposed.",
"We evaluate our model with SentEval toolkit on several supervised transfer tasks, including: MR (Pang and Lee, 2005), CR (Hu and Liu, 2004), SUBJ (Pang and Lee, 2004), MPQA (Wiebe et al., 2005), SST-2 (Socher et al., 2013), TREC (Voorhees and Tice, 2000) and MRPC (Dolan and Brockett, 2005).",
"For each task, SentEval trains a logistic regression classifier on top of the sentence embeddings and tests the performance on the downstream task.",
"For a fair comparison, we do not include models with auxiliary tasks like masked language modeling.",
"The results are shown in Table",
"2. We can see that ArcCSE performs on par or better than baseline methods in both BERT base and BERT large level.",
"This demonstrates the effectiveness of our method in learning domain-specific sentence embeddings.",
"Effect of Angular Margin The angular margin m in ArcCon loss affects the discriminative power directly.",
"To investigate the effect of m , we conduct an experiment by varying m from 0 degrees to 20 degrees, increased by 2 degrees at each step.",
"We tune the hyper-parameter based on Spearman's correlation on the development set of STS-B following previous works (Kim et al., 2021; Gao et al., 2021).",
"The results are shown in Figure",
"4. Figure 4: Effect of the angular margin m in ArcCon loss.",
"We can see that the best performance is achieved when m = 10 , either larger or smaller margin degrade the performance.",
"This matches our intuition since small m may have little effect, and large m may negatively influence the positive pair relation modeling.",
"Effect of Temperature The temperature in ArcCon Loss affects its effectiveness, so we carry out an experiment with varying from 0.01 to 0.1, increased by 0.01 at each step.",
"The results are shown in Figure",
"5. We can see that the model ArcCSE-BERT base STS12 STS13 STS14 STS15 STS16 STS-B SICK-R Avg.",
"Effect of Masking Ratios The masking ratios determine the sentences generated for the entailment relation modeling and their differences in semantics, so we conduct an experiment to explore the effect of different masking ratios.",
"The first masking ratio r 1 is varied from 10% to 25%, increased by 5% for each step.",
"The second masking ratio r 2 is derived by adding an extra value r d to r 1 .",
"r d is varied from 10% to 35%, increased by 5% for each step.",
"The results are shown in Figure",
"6. Figure 6: Effect of the masking ratios.",
"We can see that large differences between the two masking ratios tend to lead lower Spearman's correlation compared to the smaller ones.",
"The reason may be that the larger the semantic difference is, the easier for the model to estimate the entailment relations among the triplet sentences, which makes the triplet loss less helpful.",
"The best performance is achieved when r 1 is 20% and r 2 is 40%, and the corresponding Spearman's correlation is 0.847.",
"We use them as our hyper-parameters.",
"Effect of on-off Switching of Dropout The on-off switching of dropout in the BERT-like sentence encoder affects the generated sentence representations directly.",
"Since dropout performs a kind of averaging over the ensemble of possible subnetworks, an embedding generated with dropout turned off can be seen as a kind of \"averaging\" representation, while an embedding generated with dropout turned on can be seen as generated through a subnetwork.",
"In ArcCSE, we use the embeddings generated with the encoder dropout turned on as input for ArcCon loss, which regularizes the network by making representations generated through different subnetworks similar.",
"When modeling the entailment relation, we generate \"averaging\" representations with dropout turn-off to avoid inaccurate signals.",
"In order to verify our intuition, we conduct two experiments with different dropout settings.",
"In the first experiment, we feed ArcCon two sentence representations generated with dropout turns on and off respectively.",
"We carry out this experiment with angular margins ranging between 2 degrees to 12 degrees and report the best result.",
"In the second one, we feed the triplet loss representations that are generated with dropout turns on and maintain the other settings.",
"The results are shown in Table",
"3. We can see that the original settings that turn dropout on for ArcCon and turn dropout off for triplet loss achieve the best performance, which confirms our intuition.",
"Effect of Coefficient in the Training Objective The coefficient in the final optimization objective adjusts the relative weights between ArcCon and the triplet loss, as shown in formula (5).",
"To find the most suitable , we conduct an experiment by varying from 0 to 1.2 and increased by 0.1 at each step.",
"The results are shown in Figure",
"7. We can see that the best performance is achieved Figure 7: Effect of the coefficient in the training objective.",
"when = 0 .",
"1 , and the corresponding Spearman's correlation is 0.847.",
"This demonstrates that we can get the best performance by combining ArcCon and the triplet loss with proper .",
"Alignment and uniformity are two properties closely related to contrastive learning and could be used to measure the quality of representa-tions(Wang and Isola, 2020).",
"Alignment favors encoders that generate similar representations for similar instances.",
"It could be defined with the expected distance between embeddings of the positive paired instances: align = E ( x,x + ) p pos (cid:13)(cid:13) f ( x ) f (cid:0) x + (cid:1)(cid:13)(cid:13) 2 (6) where p pos denotes the distribution of positive paired instances.",
"Uniformity prefers uniformly distributed representations, which helps preserve maximal information.",
"It could be defined as: uniform = log E x,y i.i.d p data e 2 f ( x ) f ( y ) 2 (7) where p data denotes whole data distribution.",
"To justify the inner workings of our approach, we calculate the alignment and uniformity metrics every 10 steps during training on the STS-B development set.",
"We compare our approach with SimCSE and visualize the results in Figure",
"8. We can see that compared to the original BERT checkpoint, both ArcCSE and SimCSE improve the alignment and uniformity measures during training.",
"ArcCSE performs better on the alignment measure and on par with SimCSE on the uniformity measure.",
"This verifies the intuition of our approach and demonstrates that ArcCSE could help improve the quality of sentence representations.",
"In this work, we propose ArcCSE, a self-supervised contrastive learning framework for learning",
"sen-(a) align",
"tence representation.",
"We propose a new optimizing objective ArcCon loss to model pairwise sentence relations with enhanced discriminating power, and a new self-supervised task to model the partial order of semantics between sentences.",
"Experimental results on semantic textual similarity tasks (STS) and SentEval tasks demonstrate that both techniques bring substantial improvements and our method outperforms previous state-of-the-art method for sentence representation learning."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"other",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"other",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"objective"
] |
[
"We address the task of explaining relationships between two scientific documents using natural language text.",
"This task requires modeling the complex content of long technical documents, deducing a relationship between these documents, and expressing that relationship in text.",
"Successful solutions can help improve researcher efficiency in search and review.",
"In this paper, we operationalize this task by using citing sentences as a proxy.",
"We establish a large dataset for our task.",
"We pretrain a large language model to serve as the foundation for autoregressive approaches to the task.",
"We explore the impact of taking different views on the two documents, including the use of dense representations extracted with scientific information extraction systems.",
"We provide extensive automatic and human evaluations which show the promise of such models, and make clear the challenges for future work.",
"The output of the world's scientists doubles roughly every nine years (Bornmann and Mutz, 2015).",
"Consequently, researchers must devote significant energy to quickly understand how a new piece of research fits with a rapidly changing research landscape.",
"Several lines of research seek to reduce this burden on scientists.",
"Citation recommendation systems suggest references to relevant published work (McNee et al., 2002; Bhagavatula et al., 2018).",
"Intent classification systems help determine the type and importance of a citation in a work (Valenzuela et al., 2015; Cohan et al., 2019).",
"Summarization systems aim to help researchers more quickly understand the basic ideas in a piece of research (Co-han and Goharian, 2015; Yasunaga et al., 2019).",
"We draw inspiration from these works as well as Equal contribution.",
"broader challenges like explaining the connection between concurrent works or relating a new paper to those a reader is already familiar with.",
"Automatically describing inter-document relationships could decrease the time researchers devote to literature review.",
"For instance, explanations for a new paper can be personalized to a particular reader by relating the new work to ones they have read before.",
"Further, such technology could be incorporated into writing assistance systems to help less experienced or non-native writers better articulate the connection between their work and prior art.",
"Additionally, users of citation recommendation systems can benefit from natural language explanations of recommendation system choices.",
"In addition to the utility of this task to scientists, it presents several interesting technical challenges.",
"These include effectively representing the important information in a document, generating from a long-tailed technical vocabulary, and expressing the variety of connections between related scientific papers.",
"Figure 1 illustrates how the same document is described differently in relation to different documents.",
"In this paper we use citing sentences to operationalize the problem of generating natural language explanations of the relationships between two scientific papers.",
"Authors, when citing other work, oftentimes describe how their work relates to the cited work.",
"To this end, we use in-text citation sentences as a naturally occurring proxy explanations for how two documents relate to each other.",
"However, we generate such sentences from general representations of document content rather than the specific in-text locations where these sentences occur, as this task formulation can better facilitate the applications described above.",
"We approximate the explanation objective by having a GPT2 language model generate sentences containing citations given a pair of documents.",
"This approach relies on providing dense but informative representations of documents to use as conditioning context for the generation model.",
"We explore the use of sentence-based contexts as input including document abstracts, introductions, and sampled sentences from the full document; we find that using introductions and abstracts works well.",
"Finally, we improve our model's performance on automated metrics by using informative entities and terms to both construct dense input and rank the output relationship explanations.",
"In addition to standard automatic metrics, we perform human evaluations of technical outputs with a pool of annotators.",
"In this work, we describe a series of stages of model development, each with its own experiments that, together, informed the task and our series of solutions.",
"Our contributions include: a novel dataset for the relationship explanation task; a domain-adapted GPT2 we release for left-to-right language modeling of scientific text; the SCIGEN model for describing document relationships; and an extensive expert evaluation and analysis of machine generated technical text.",
"1 2 Related Work The current work builds on recent research in scientific document understanding, including citation recommendation, intent categorization, and scientific document summarization.",
"Citation recommendation systems suggest related works given a 1 https://github.com/Kel-Lu/SciGen document or a span of text (McNee et al., 2002; Nallapati et al., 2008; Bhagavatula et al., 2018).",
"Recently, researchers have sought to categorize citations using various ontologies of citation intents.",
"Teufel et al. (2006) develop an annotation scheme and corresponding classification model for citation functions.",
"Valenzuela et al. (2015) seek to discern highly influential citations from others.",
"Jurgens et al. (2018) use six categories including moti-vation, uses, and future work among others.",
"Cohan et al. (2019) condense this ontology to just three: background, method, and result com-parison.",
"Intent classification can identify relationships between documents; our relationship explanation task extends this in two ways.",
"First, data-driven freeform generation can express a wider array of relationships compared to a manually-defined label set.",
"Further, our task framework could be used to describe relationships between works which do not actually cite each other, such as contemporaneous works.",
"Unlike categorization techniques, we require no task-specific annotated data as we supervise with citing sentences that are readily available in scientific documents.",
"In practice, citation classification is used to assist in suggesting relevant works to researchers; our work complements this goal by providing rationales for the recommendation and furthering progress toward explainable AI.",
"Our work is also connected to a long history of research on summarizing scientific documents (Luhn, 1958; Paice, 1980).",
"Work in this area has mostly used used abstracts or peer reviews as targets (Cachola et al., 2020; Cohan et al., 2018; Jaidka et al., 2017).",
"In particular, Pilault et al. (2020) show that using a simple extractive summary as input for abstractive summarization of scholarly texts work well.",
"Researchers have also used citing sentences as part of the input for summarization, recognizing the explanatory power of these texts (Nakov et al., 2004; Cohan and Gohar-ian, 2017; Yasunaga et al., 2019).",
"Ours is the first work to focus on learning to express the specific relationship between two documents from such sentences.",
"The closest work to our own is Xing et al. (2020), who pilot a task of in-line citation generation.",
"Their goal is a model which can insert a citing sentence into a particular context within a document.",
"Our work, on the other hand, aims to learn from citing sentences how to describe general relationships between documents independent of particular in-document contexts.",
"While the Xing et al. (2020) method may facilitate writing assistance, our task has applications in search and summarization.",
"Because our task does not rely on a specific location in a document where the citation will go, solutions can be used at scale to provide users with general explanations of document relationships.",
"Our models rely heavily on recent advances in transfer learning in NLP.",
"Large pretrained models such as BERT (Devlin et al., 2018) and GPT2 (Radford et al., 2019) have made strong advances on a number of tasks (Wang et al., 2019).",
"It has also been shown that pretraining these models on domain-specific data further improves results on domain-specific tasks (Beltagy et al., 2019; Lee et al., 2019).",
"In this work, we apply that methodology by adding a pretraining phase on in-domain data before finetuning a GPT2 model toward the explanation generation task.",
"A key challenge when using pretrained language models for document-level tasks is how to select document content to fit within the limited context window of the model, which is a major focus of our work.",
"We aim to generate an explanation: a natural language sentence which expresses how one document relates to another.",
"Explicit examples of such sentences are nontrivial to find in corpora, especially when annotation for a highly technical task is expensive.",
"To this end, we use in-text citations in a scientific document to prior work as proxies for relationship explanations.",
"We use these citing sentences as partial supervision for our task, and refer to them as explanations. 2 We distinguish one document as the principal document, from which we will draw explanations that reference the cited document.",
"Let t denote an explanation drawn from principal document S , and S (cid:48) denote S without t .",
"Then let P ( t | S (cid:48) , C ) (1) be the probability of t given S (cid:48) and the cited document C .",
"A good generation technique should maximize this probability across a large number of (cid:104) t, S, C (cid:105) triples, so that at inference time the model is able to generate a sentence t which accurately 2 Future work might seek to filter or systematically alter intext citations to be more explanation-like, without otherwise changing our approach.",
"describes the relationship between new documents S and C .",
"Optimizing Equation 1 is made easier by modern representation learning.",
"Pretrained neural language models like GPT2 have shown strong performance when generating sentences conditioned on a context.",
"However, existing implementations of GPT2 limit the context window to 512 or 1024 tokens, far smaller than scientific documents.",
"In this work, we explore ways to represent the documents' content for use with language models.",
"Data We use English-language computer science articles and annotation from S2ORC dataset (Lo et al., 2020).",
"S2ORC is a large citation graph which includes full texts of 8.1 million scientific documents.",
"We use 154K connected computer science articles, from which we extract 622K explanations with a single reference that link back to other documents in our corpus.",
"We omit any sentences that cite more than one reference.",
"We hold 5000 sentences for each of the validation and test sets.",
"Detailed statistics can be found in Table 1.",
"Information on dataset construction can be found in Appendix B. Evaluation The most appropriate evaluation metric for this and many text generation tasks is human judgment by potential users of the system.",
"Evaluating explanations of the relationships between scientific documents requires human judges with scientific expertise whose time and effort can be costly.",
"While collecting human judgments in technical domains is relatively rare, we believe it to be an important step in evaluating our systems for this task.",
"Thus, we conduct thorough human evaluations and analyses with expert judges.",
"We make use of both larger scale expert evaluations yielding hundreds of judgements as well as smaller scale, deeper evaluations where we can effect a higher degree of quality control over fewer datapoints.",
"Further, we make use of intermediate human evaluations in the development of our models, and supplement these evaluations with automatic metrics BLEU (Papineni et al., 2002) and ROUGE (Lin, 2004) that are established in other generation tasks.",
"We develop several models for explaining document relationships.",
"Following current work in neural text generation, we finetune the predictions of a large pretrained language model to our task (Sec-tion 4.1).",
"In order to bring the language model into the scientific text domain, we do additional language model pretraining over full scientific texts.",
"We also investigate approximate nearest neighbor methods to retrieve plausible human-authored explanations from the training data as a baseline (Sec-tion 4.2).",
"Recent work has shown that finetuning large pretrained language models to text generation tasks yields strong results (Zellers et al., 2019).",
"To this end, we construct SCIGEN , a model based on GPT2 (Radford et al., 2019), a transformer model trained on 40GB of internet text with a left-to-right language modeling objective (Vaswani et al., 2017).",
"We do so by finetuning the predictions of the language model to generate explanations using different expressions of the principal and cited document as context.",
"To finetune GPT2 architectures for text generation, it is typical to concatenate the conditioning context X = x 1 . . . x n and target sentence Y = y 1 . . . y m with a special separator token y .",
"To adapt this technique to our task, we construct the conditioning context X from the principal and cited documents and use the explanation as Y .",
"We take j tokens from principal document s 1 , . . . , s j along with k tokens from the cited document c 1 , . . . , c k (which tokens to draw from the two documents is an independent variable that we explore exper-imentally).",
"We then condition the generation of explanation Y on X = s 1 , . . . , s j , x , c 1 , . . . , c k , where x is a token used to indicate the end of the principal document.",
"SCIGEN is trained to predict the explanation one token at a time as described above.",
"More details on training can be found in Appendix A. At inference time, the model is provided with an unseen principal/cited document pair.",
"An explanation of their relationship is generated one token at a time using nucleus sampling (Holtz-man et al., 2020).",
"At timestep t , output token y t Web Text GPT2 Pretrain Scientific Text SciGPT2 Cont.",
"is sampled from the top 90% of the distribution P ( y t | X, y , y 1 , . . . , y t 1 ) (renormalized).",
"The selected y t is used to condition the prediction of subsequent tokens.",
"Context The primary question we investigate with the SCIGEN model is what kind of input is best for describing the relationship between the principal and cited documents accurately and informatively.",
"Since models based on GPT2 have a small context window relative to the length of scientific documents, we investigate the use of abstracts, introductions, or non-citing sentences sampled from throughout the document as conditioning context.",
"The effectiveness and description of these approaches is described in Section",
"5. Based on our findings with sentence-based contexts and information retrieval systems, we then explore the possibility of representing the cited document text as a list of important concepts rather than fluent text, in Section",
"6. Language Model Pretraining Prior work has shown that pretraining on in-domain data improves the performance of large language models on domain-specific tasks (Beltagy et al., 2019; Gu-rurangan et al., 2020).",
"Inspired by this, we continue pretraining the GPT2 model in the science domain to produce SCI GPT2, which we use as the underlying language model for SCIGEN described above.",
"SCI GPT2 starts from the standard pretrained GPT2-base model and is trained for an additional 75k gradient updates at batch size of 64 (effectively a single epoch over 4.8 million abstracts and body paragraphs) with a language modeling objective.",
"Figure 2 illustrates the process.",
"We observed significant improvements in the quality of SCIGEN outputs after replacing the underlying GPT2 language model with the domain-specific SCI GPT2 model.",
"We saw a perplexity improvement in a held-out set and, in informal inspections, qualitative improvements as well.",
"When using pretrained language models, text from task-specific test data cannot be guaranteed to be absent from the large task-independent corpora upon which these models are trained, which may improve model performance compared to models without this exposure.",
"For the experiments described in this work, we train a version of SCI GPT2 only on documents appearing in the training data, so that the principal documents and target sentences in the test data are unseen by the language model.",
"We provide this and a full-corpus version of SCI GPT2 as resources for future research.",
"3 4.2 Retrieval with Approximate Nearest Neighbors While neural text generation techniques have advanced significantly in recent years, their outputs are still inferior to human authored texts.",
"For some tasks, it is better to retrieve a relevant human-authored text than to generate novel text automatically (Fan et al., 2018).",
"Is this also the case when generating explanations?",
"To answer this question, we use an information retrieval (IR) baseline.",
"We adapt an approximate nearest neighbor search algorithm to find similar pairs of documents.",
"The basic search procedure is as follows: Given a test instance input ( S, C ) for principal S and cited document C , we find the set NC , the nearest neighbors to C in the training data.",
"For each document NC from NC , let NS be the set of documents that cite NC .",
"This means that each NS NS contains at least one citing sentence t (cid:48) which cites NC .",
"We use the t (cid:48) associated with the ( NS , NC ) pair from the training which is closest to ( S, C ) as the explanation of their relationship.",
"We measure the closeness of two pairs of documents using the cosine distances between vector representations of their abstracts.",
"The abstract of each document is encoded as a single dense vector by averaging the contextualized embeddings provided by the SciBERT model of Beltagy et al. (2019) and normalizing.",
"The distance between ( S, C ) and neighbors ( NS , NC ) is computed as: cos( S, NS ) + cos( C, NC ) (2) where and control the relative contribution of the two document similarities.",
"We explore setting both and to 1, or tuning them to optimize BLEU on the validation data using MERT (Och, 2003).",
"3 https://github.com/Kel-Lu/SciGen 5 Representing Documents with Sentence Selection Methods for the related task of citation recommendation have made use of abstracts, which perhaps act as sufficient summaries of document content.",
"Building on this, we represent the principal and cited documents with the first 450 tokens of either their abstracts, introductions, or sentences randomly sampled from throughout the full document.",
"4 In this section, we answer two questions: 1) do neural generation models with sentence-based context outperform the IR baseline and 2) does the type of sentence-based context (abstract, introduction, sampled) matter?",
"We answer these questions by performing both automatic and human evaluations.",
"We compare the SCIGEN and IR systems using BLEU (Papineni et al., 2002) and ROUGE (specif-ically L; Lin, 2004).",
"The Sentence-based rows of Table 3 show the test set performance of the IR system and the best SCIGEN models when provided with the different sentence-based input context combinations.",
"5 We assesss statistical signifi-cance as well by bootstrapping with 1000 samples in each of 100 iterations.",
"We find that context does make a difference for SCIGEN , and that a slight but statistically significant performance improvement comes from using the introduction of the principal document rather than the abstract.",
"6 We do not, however, find enough evidence to reject the null hypothesis that any particular representation of the cited document's content (abstract, intro, or random sample) is sufficient.",
"We find that using the introduction of the principal document paired with the abstract of the cited document performs best, and so we select these for human evaluation.",
"The IR systems perform well, obtaining slightly better scores in some settings.",
"We choose the MERT-optimized version for human evaluation.",
"4 We exclude any sentence with a citation from being sampled in all conditions.",
"This context type is also only used for the cited document and not the principal document.",
"5 The performance of our best SCIGEN models can be found in Table 3 and the automatic test set evaluations of all systems can be found in Appendix F. 6 p < 0 .",
"01 after Bonferroni correction.",
"We conduct a human evaluation to determine, given a particular pair of principal and cited abstracts, how correct and specific the generated explanation of their relationship is.",
"By correct we mean: does the explanation correctly express the factual relationship between the principal and cited documents?",
"Because generic explanations such as This work extends the ideas of Chomsky and Halle (1968), while possibly factual, do not express a detailed understanding of the documents' relationship, we ask judges whether the explanation describes a specific relationship between the two works.",
"An explanation can be specific even it is incorrect.",
"We compare the principal intro cited abs SCIGEN setting against the tuned IR system.",
"For calibration, we also elicit judgments for the gold explanations extracted from principal documents along with the correct principal and cited abstracts.",
"In all three cases, we ensure that the principal document appeared in the ACL anthology to ensure annotator expertise.",
"In total we solicit 37 NLP researchers and collect over 800 judgments, with over 100 for each system/quality dimension combination.",
"Further details of our evaluation can be found in Appendix D. We perform error analysis on these judgments as well as an additional study to validate human judgments; these are detailed in Appendix E and Appendix G. Table 2 shows the percentage of yes judgments versus the total of yes and no judgements for each system/quality combination, along with pairwise agreement rates.",
"7 Gold texts received the highest scores for all dimensions of text quality from the evaluators as well as the high-7 That gold texts do not achieve perfect scores demonstrates a limitation of our evaluation setup, due in part to the fact that judgments are based on document abstracts rather than their full texts.",
"We take steps to resolve this limitation in our subsequent analysis in Section 6.2.",
"est agreement rate.",
"We can also see that IR systems tend to produce incorrect explanations more often than not.",
"The SCIGEN system performs quite well in this analysis, with a majority of outputs deemed correct.",
"We observe a larger difference in specificity between SCIGEN and gold texts, indicating that SCIGEN , like many neural text generation systems, often generates vague and generic sentences.",
"These generations tended to be vacuous such as (C ITED ) This work is an extension of the paper.",
"Specificity is key for future downstream applications such as automated literature review and will need to be improved for those tasks.",
"Compared to the gold explanations, we found that our generated explanations miss important phrases such as unique model or dataset names and other lower-frequency terms; generally, they lacked specificity.",
"The missing phrases typically appear in the cited document after the abstract and introduction.",
"8 Navely sampling from the full text does not capture them due to sparsity.",
"To address this issue, we explore more sophisticated information extraction (IE) techniques for constructing the conditioning context for SCIGEN .",
"Recent work has shown that pretrained language models can adapt to disfluent inputs such as linearized trees and graphs (Ribeiro et al., 2020).",
"Inspired by this, we investigate whether we can use lists of salient words and phrases to effect a dense representation of the cited document in the conditioning context.",
"Specifically, we construct a list of document-specific terms using tf-idf to score unigrams and entities extracted with a state-of-the-art scientific NER system.",
"The paradigm is illustrated in Figure",
"3. Tf-idf Tf-idf is a measure of the frequency of a term in a document, normalized by the document frequency of that term.",
"In our use, we calculate the tf-idf score for each unigram in the cited document.",
"We keep the 100 highest scoring terms w i sorted in descending order of scores.",
"The terms of this list are concatenated with a special token tf to signal that this part of the input is structured as a list rather than conventional text.",
"The resulting context X tf = w 1 , tf , w 2 , tf , ..., tf , w 100 is used to represent the cited document to the SCIGEN model.",
"Entities We extract entities from abstracts with the DyGIE++ information extraction framework (Wadden et al., 2019) using the model trained on SciERC (Luan et al., 2018), a dataset of scientific document abstracts with entity and relation annotations.",
"9 The extracted entities e i from the cited document are sorted by their tf-idf scores compared to all entities in the corpus.",
"As above, a special token e is used to concatenate entities and help the language model distinguish this list from conventional text.",
"If there is additional room in the context window we append the unigrams with the highest tfidf to the end of the listed entities until the window is full.",
"In that case, the cited document context X e is e 1 , e , e 2 , ..., e , e n , tf , w 1 tf , ..., w m , where n is the number of entities and m is 100 n .",
"Maynez et al. (2020) point out that summarization systems frequently struggle with factuality and generate hallucinations unfaithful to input documents.",
"We observe this problem with some generated explanations as well: popular, topical terms like CNN' would appear in explanations of papers using LSTM models, for example.",
"To combat hallucinations and promote factual accuracy we include a ranking mechanism that rewards generated explanations with higher coverage of important entities from the conditioning context.",
"10 The process we use is as follows: first, we generate a large space of candidate explanations for a given input document pair from SCIGEN via nucleus sampling.",
"We then extract the entities from each candidate using the DyGIE++ IE system.",
"Where possible, we match entities from the candidates with the entities extracted from the cited document.",
"To account for textual variation between the explanations and the input documents, we use a similarity threshold to make soft alignments.",
"11 9 We found relation annotations to be noisy on inspection.",
"We then select the candidate that has the highest mean reciprocal rank of matched entities against the input as the explanation for this document pair.",
"We conducted a manual correctness analysis of the generated explanations from a sentence-based (in-tro abs) and IE-based (intro tfidf generate and rank) model.",
"Two of the authors judged 50 datapoints from each system using a similar setup to that described in Section 5.2, but with the single objective of judging correctness on a 3-way scale: Correct; Too Vague (but not incorrect); and Incorrect.",
"Additionally, the authors made use of the full text of the input documents to make decisions for cases where not enough information is available in the abstract.",
"This resulted in a more accurate though much more time-consuming evaluation process compared to the previous evaluation.",
"After judging all datapoints independently, the two authors discussed disagreements until a consensus was reached.",
"The results of this analysis are shown in Table",
"4. We see a slight increase in correctness with the IE-based model compared to the sentence-based model, though the difference is small.",
"The IE-based rows of Table 3 show the results of automatic metrics for the systems described in this Section.",
"We find that these metrics improve significantly in the settings where the principal document is represented by its introduction and the cited document is represented either as a list of terms or entities, with a slight advantage for entities.",
"The models conditioned on intro tfidf context outperform all other sentence-based, retrieval, and IE-based models.",
"Example system outputs for selected test datapoints are shown in Table",
"5. The first example illustrates Method Context BLEU ACL-BLEU Rouge-L Sentence-based SCIGEN principal abs cited abs 9.82 10.40 8.4 principal intro cited abs 9.92 11.22 8.7 principal intro cited intro 9.80 10.54 8.8 principal intro cited sampled 9.81 10.31 8.7 IR source abs cited abs 9.93 10.50 9.7 +MERT 10.23 10.29 9.8 IE-based SCIGEN principal intro cited tfidf 13.17 16.75 12.0 principal intro cited entities 13.41 13.42 11.8 +Ranking principal intro cited tfidf 13.50 15.10 12.3 principal intro cited entities 13.16 14.47 11.8 Table 3: Automatic test set evaluation of generated texts for a subset of our systems.",
"In this instance, they both use the pinyin representation for Chinese characters in their transliteration models.",
"Output 2 demonstrates a failure of the explanation generation system.",
"The principal document deals with the topic of discourse relations, the automatic identification of which is a long-standing machine learning task.",
"However, this particular document is an analysis paper, and does not involve any training.",
"Output 3 is an example of a Too Vague (but not incorrect) case from the analysis in Section 6.2.",
"Here again the explanation generated by SCIGEN is topical, dealing with the concept of distant super-vision that is key to both input documents.",
"However, this sentence fails to capture the specific use that the principal makes of the research described in cited document.",
"The final example, output 4, showcases potential for our system to explain concurrent work.",
"The generated text summarizes the cited and implies that principal will build on that work.",
"However, selected papers are both concurrent generation papers published in the same venue and do not cite each other.",
"This appears to be a weakness in using citation sentences as proxies for relationship explanations.",
"Citations of contemporaneous work occur less frequently, so these types of sentences appear less often in training.",
"Similarly, relationship explanations between papers with more distant connections (e.g., multi-hop in the citation graph) Principal: A Syllable-based Name Transliteration System 1 Cited: A Joint Source-Channel Model for Machine Transliteration SCIGEN : Following Cited , Chinese characters are considered as Pinyin sequence.",
"In addition to missing some relationships, not all citation sentences are useful as explanations.",
"As pointed out by other work, citation sentences can often be simple summaries of the cited work (Qazvinian and Radev, 2008; Cohan and Goharian, 2017).",
"Alternatively, they can be too specific to be useful, as seen in Output 1, where a higher-level summary might be more useful.",
"Future work could focus on curating better training sets for our task.",
"It is notable that the SCIGEN model usually outputs syntactically correct and topical explanations, even given the difficulty of the vocabulary in this domain.",
"This is consistent with many recent findings using domain-specific language models.",
"The fluency and appropriateness of SCIGEN 's generations shows the promise of generating explanations which accurately capture the relationship between two documents.",
"Based on the results obtained here, we expect pretrained scientific language models to persist as a foundation.",
"Future work should focus on two complementary goals: ensuring the factual accuracy of the generated text and improved modeling of the cited document.",
"Factual accuracy is difficult to enforce in language model-based text generation systems, especially where inference includes sampling procedures.",
"The use of information extraction for contexts showed promise in Section 6; other methods of incorporating information like grounding to knowledge bases could help prune false or irrelevant statements.",
"Combining knowledge graphs with language models and generation is an active research area that has shown promise in other domains (Bosselut et al., 2019; Koncel-Kedziorski et al., 2019; Peters et al., 2019).",
"Applying this line of work to scientific text by modeling input documents as knowledge graphs of their content may help algorithms better understand the cited document, provide distant supervision for concurrent work, and result in better outputs.",
"We have described a task of explaining the relationship between two scientific texts and its connections to facilitating researcher productivity.",
"We employ a large, publicly available dataset of scientific documents to train a domain-adapted left-to-right language model for use in text generation applications and beyond.",
"We explore a collection of techniques for representing document content including using abstracts, introductions, sampled sentences, and lists of informative terms and entities.",
"We conduct thorough human and automatic evaluations to determine the relative strengths of each representation for expressing document relationships in natural language text.",
"Acknowledgements This research was supported in part by the Office of Naval Research under the MURI grant N00014-18-1-2670.",
"We thank the members of Noah Smith's (ARK), Hanna Hajishirzi's (H2Lab), and Luke Zettlemoyer's research groups for their participation in our study.",
"We thank members of Noah's ARK for their helpful comments and the anonymous reviewers for their feedback.",
"Ethical considerations The authors received an IRB approval for this human annotation project.",
"The project was classified as no-risk.",
"All participants in the study were volunteers and gave explicit, informed consent for the study."
] | [
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"objective",
"abstain",
"objective",
"result",
"method",
"abstain",
"objective",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain"
] |
[
"Fact Verification requires fine-grained natural language inference capability that finds subtle clues to identify the syntactical and semantically correct but not well-supported claims.",
"This paper presents Kernel Graph Attention Network (KGAT), which conducts more fine-grained fact verification with kernel-based attentions.",
"Given a claim and a set of potential evidence sentences that form an evidence graph, KGAT introduces node kernels, which better measure the importance of the evidence node, and edge kernels, which conduct fine-grained evidence propagation in the graph, into Graph Attention Networks for more accurate fact verification.",
"KGAT achieves a 70.38% FEVER score and significantly outperforms existing fact verification models on FEVER, a large-scale benchmark for fact verification.",
"Our analyses illustrate that, compared to dot-product attentions, the kernel-based attention concentrates more on relevant evidence sentences and meaningful clues in the evidence graph, which is the main source of KGAT's effectiveness.",
"All source codes of this work are available at https://github.",
"com/thunlp/KernelGAT .",
"Online contents with false information, such as fake news, political deception, and online rumors, have been growing significantly and spread widely over the past several years.",
"How to automatically fact check the integrity of textual contents, to prevent the spread of fake news, and to avoid the undesired social influences of maliciously fabricated statements, is urgently needed for our society.",
"Recent research formulates this problem as the fact verification task, which targets to automatically verify the integrity of statements using trustworthy corpora, e.g., Wikipedia (Thorne et al., 2018a).",
"For example, as shown in Figure 1, a system could first Al Jardine is an American rhythm guitarist Claim Verification SUPPORTS REFUTES NOT ENOUGH INFO Evidence Reasoning Alan Charles Jardine (born September 3, 1942) is an American musician , singer and songwriter who cofounded the Beach Boys.",
"retrieve related evidence sentences from the background corpus, conduct joint reasoning over these sentences, and aggregate the signals to verify the claim integrity (Nie et al., 2019a; Zhou et al., 2019; Yoneda et al., 2018; Hanselowski et al., 2018).",
"There are two challenges for evidence reasoning and aggregation in fact verification.",
"One is that no ground truth evidence is given; the evidence sentences are retrieved from background corpora, which inevitably contain noise.",
"The other is that the false claims are often deliberately fabricated; they may be semantically correct but are not supported.",
"This makes fact verification a rather challenging task, as it requires the fine-grained reasoning ability to distinguish the subtle differences between truth and false statements (Zhou et al., 2019).",
"This paper presents a new neural structural reasoning model, Kernel Graph Attention Network (KGAT), that provides more fine-grained evidence selection and reasoning capability for fact verification using neural matching kernels (Xiong et al., 2017; Dai et al., 2018).",
"Given retrieved evidence pieces, KGAT first constructs an evidence graph, using claim and evidence as graph nodes and fully-connected edges.",
"It then utilizes two sets of kernels, one on the edges, which selectively summarize clues for a more fine-grained node representation and propagate clues among neighbor nodes through a multi-layer graph attention; and the other on the nodes, which performs more accurate evidence selection by better matching evidence with the claim.",
"These signals are combined by KGAT, to jointly learn and reason on the evidence graph for more accurate fact verification.",
"In our experiments on FEVER (Thorne et al., 2018a), a large-scale fact verification benchmark, KGAT achieves a 70.38% FEVER score, significantly outperforming previous BERT and Graph Neural Network (GNN) based approaches (Zhou et al., 2019).",
"Our experiments demonstrate KGAT's strong effectiveness especially on facts that require multiple evidence reasoning: our kernel-based attentions provide more sparse and focused attention patterns, which are the main source of KGAT's effectiveness.",
"The FEVER shared task (Thorne et al., 2018a) aims to develop automatic fact verification systems to check the veracity of human-generated claims by extracting evidence from Wikipedia.",
"The recently launched FEVER shared task 1.0 is hosted as a competition on Codalab 1 with a blind test set and has drawn lots of attention from NLP community.",
"Existing fact verification models usually employ FEVER's official baseline (Thorne et al., 2018a) with a three-step pipeline system (Chen et al., 2017a): document retrieval, sentence retrieval and claim verification.",
"Many of them mainly focus on the claim verification step.",
"Nie et al. (2019a) concatenates all evidence together to verify the claim.",
"One can also conduct reasoning for each claim evidence pair and aggregate them to the claim label (Luken et al., 2018; Yoneda et al., 2018; Hanselowski et al., 2018).",
"TwoWingOS (Yin and Roth, 2018) further incorporates evidence identifi-cation to improve claim verification.",
"GEAR (Zhou et al., 2019) formulates claim verification as a graph reasoning task and provides two kinds of attentions.",
"It conducts reasoning and aggregation over claim evidence pairs with 1 https://competitions.codalab.org/ competitions/18814 a graph model (Velickovic et al., 2017; Scarselli et al., 2008; Kipf and Welling, 2017).",
"Zhong et al. (2019) further employs XLNet (Yang et al., 2019) and establishes a semantic-level graph for reasoning for a better performance.",
"These graph based models establish node interactions for joint reasoning over several evidence pieces.",
"Many fact verification systems leverage Natural Language Inference (NLI) techniques (Chen et al., 2017b; Ghaeini et al., 2018; Parikh et al., 2016; Radford et al., 2018; Peters et al., 2018; Li et al., 2019) to verify the claim.",
"The NLI task aims to classify the relationship between a pair of premise and hypothesis as either entailment, contradiction or neutral, similar to the FEVER task, though the later requires systems to find the evidence pieces themselves and there are often multiple evidence pieces.",
"One of the most widely used NLI models in FEVER is Enhanced Sequential Inference Model (ESIM) (Chen et al., 2017b), which employs some forms of hard or soft alignment to associate the relevant sub-components between premise and hypothesis.",
"BERT, the pre-trained deep bidirectional Transformer, has also been used for better text representation in FEVER and achieved better performance (Devlin et al., 2019; Li et al., 2019; Zhou et al., 2019; Soleimani et al., 2019).",
"The recent development of neural information retrieval models, especially the interaction based ones, have shown promising effectiveness in extracting soft match patterns from query-document interactions (Hu et al., 2014; Pang et al., 2016; Guo et al., 2016; Xiong et al., 2017; Dai et al., 2018).",
"One of the effective ways to model text matches is to leverage matching kernels (Xiong et al., 2017; Dai et al., 2018), which summarize word or phrase interactions in the learned embedding space between query and documents.",
"The kernel extracts matching patterns which provide a variety of relevance match signals and shows strong performance in various ad-hoc retrieval dataset (Dai and Callan, 2019).",
"Recent research also has shown kernels can be integrated with contextualized representations, i.e., BERT, to better model the relevance between query and documents (MacAvaney et al., 2019).",
"This section describes our Kernel Graph Attention Network (KGAT) and its application in Fact Verification.",
"Following previous research, KGAT first constructs an evidence graph using retrieved evidence sentences D = { e 1 , . . . , e p , . . . , e l } for claim c , and then uses the evidence graph to predict the claim label y (Sec. 3.1 and 3.2).",
"As shown in Figure 2, the reasoning model includes two main components: Evidence Propagation with Edge Kernels (Sec. 3.3) and Evidence Selection with Node Kernels (Sec. 3.4).",
"Similar to previous research (Zhou et al., 2019), KGAT constructs the evidence graph G by using each claim-evidence pair as a node and connects all node pairs with edges, making it a fully-connected evidence graph with l nodes: N = { n 1 , . . . , n p , . . . , n l } .",
"KGAT unifies both multiple and single evidence reasoning scenarios and produces a probability P ( y | c, D ) to predict claim label y .",
"Different from previous work (Zhou et al., 2019), we follow the standard graph label prediction setting in graph neural network (Velickovic et al., 2017) and split the prediction into two components: 1) the label prediction in each node conditioned on the whole graph P ( y | n p , G ) ; 2) the evidence selection probability P ( n p | G ) : P ( y | c, D ) = l (cid:88) p =1 P ( y | c, e p , D ) P ( e p | c, D ) , (1) or in the graph notation: P ( y | G ) = l (cid:88) p =1 P ( y | n p , G ) P ( n p | G ) .",
"The joint reasoning probability P ( y | n p , G ) calculates node label prediction with multiple evidence.",
"The readout module (Knyazev et al., 2019) calculates the probability P ( n p | G ) and attentively combines per-node signals for prediction.",
"The rest of this section describes the initialization of node representations ( n p ) in Sec. 3.2, the calculation of per-node predictions P ( y | n p , G ) with Edge Kernels (Sec. 3.3), and the readout module P ( n p | G ) with Node Kernels (Sec. 3.4).",
"The node representations are initialized by feeding the concatenated sequence of claim, document (Wiki) title, and evidence sentence, to pre-trained BERT model (Devlin et al., 2019).",
"Specifically, in the node n p , the claim and evidence correspond to m tokens (with [SEP]) and n tokens (with Joint Evidence Reasoning MLP MLP Claim Label Node Kernel \" #(%|' 1 , *) #(%|' , , *) #(%|' , *) Evidence Reasoning Evidence Selection Edge Kernel ' 1 ' -' , #(%|G) / 0 #(' |*) 1(' 0 ) 2 3 2 4 2 0 / 4 #(' , |*) 1(' 4 ) / 3 #(' 1 |*) 1(' 3 ) Figure 2: KGAT Architecture.",
"Wikipedia title and [SEP]) .",
"Using the BERT encoder, we get the token hidden states H p with the given node n p : H p = BERT ( n p ) .",
"The rest of the sequences H p 1: m + n are also used to represent the claim and evidence tokens: H p 1: m for the claim tokens and H pm +1: m + n for the evidence tokens.",
"The evidence propagation and per-node label prediction in KGAT are conducted by Edge Kernels, which attentively propagate information among nodes in the graph G along the edges with the kernel attention mechanism.",
"Specifically, KGAT calculates the node n p 's representation v p with the kernel attention mechanism, and uses it to produce the per-node claim prediction y : v p = Edge-Kernel ( n p , G ) , P ( y | n p , G ) = softmax y ( Linear ( v p )) .",
"The edge kernel of KGAT conducts a hierarchical attention mechanism to propagate information between nodes.",
"It uses token level attentions to produce node representations and sentence level attentions to propagate information along edges.",
"Token Level Attention.",
"The token level attention uses kernels to get the fine-grained representation z q p of neighbor node n q , according to node n p .",
"The content propagation and the attention are controlled by kernels.",
"To get the attention weight q p i for i -th token in n q , we first conduct a translation matrix M q p between q -th node and p -th node.",
"Each element of the translation matrix M q p ij in M q p is the cosine similarity of their corresponding tokens' BERT representations: M q p ij = cos( H qi , H pj ) .",
"Then we use K kernels to extract the matching feature (cid:126)K ( M q p i ) from the translation matrix M q p (Xiong et al., 2017; Dai et al., 2018; Qiao et al., 2019; MacAvaney et al., 2019):",
"Each kernel K k utilizes a Gaussian kernel to extract features and summarizes the translation score to support multi-level interactions:",
"The attention weights are used to combine the token representations ( z q p ):",
"Sentence Level Attention.",
"The sentence level attention combines neighbor node information to node representation v p .",
"The aggregation is done by a graph attention mechanism, the same with previous work (Zhou et al., 2019).",
"It first calculate the attention weight q p of n q node according to the p -th node n p : q p = softmax q ( MLP ( z p z q p )) , (11) where denotes the concatenate operator and z p is the initial representation of n p .",
"It updates the node representation with its neighbors, and the updated information are selected first by the token level attention (Eq. 9) and then the sentence level attention (Eq. 11).",
"Sentence Level Claim Label Prediction.",
"The updated p -th node representation v p is used to calculate the claim label probability P ( y | n p ) : P ( y | n p , G ) = softmax y ( Linear ( v p )) .",
"The prediction of the label probability for each node is also conditioned on the entire graph G , as the node representation is updated by gather information from its graph neighbors.",
"The per-node predictions are combined by the readout function in graph neural networks (Zhou et al., 2019), where KGAT uses node kernels to learn the importance of each evidence.",
"It first uses node kernels to calculate the readout representation ( n p ) for each node n p : ( n p ) = Node-Kernel ( n p ) .",
"Similar to the edge kernels, we first conduct a translation matrix M c e p between the p -th claim and evidence, using their hidden state set H p 1: m and H pm +1: m + n .",
"The kernel match features (cid:126)K ( M c e p i ) on the translation matrix are combined to produce the node selection representation ( n p ) : ( n p ) = 1 m m (cid:88) i =1 (cid:126)K ( M c e p i ) .",
"KGAT leverages the kernels multi-level soft matching capability (Xiong et al., 2017) to weight the node-level predictions in the evidence graph based on their relevance with the claim: P ( y | G ) = l (cid:88) p =1 P ( y | n p , G ) P ( n p | G ) .",
"using the ground truth verification label y .",
"This section describes the dataset, evaluation metrics, baselines, and implementation details in our experiments.",
"Dataset.",
"A large scale public fact verification dataset FEVER (Thorne et al., 2018a) is used in our experiments.",
"The FEVER consists of 185,455 annotated claims with 5,416,537 Wikipedia documents from the June 2017 Wikipedia dump.",
"All claims are classified as SUPPORTS, REFUTES or NOT ENOUGH INFO by annotators.",
"The dataset partition is kept the same with the FEVER Shared Task (Thorne et al., 2018b) as shown in Table 1.",
"Evaluation Metrics.",
"The official evaluation metrics 2 for claim verification include Label Accuracy (LA) and FEVER score.",
"LA is a general evaluation metric, which calculates claim classification accuracy rate without considering retrieved evidence.",
"The FEVER score considers whether one complete set of golden evidence is provided and better reflects the inference ability.",
"We also evaluate Golden FEVER (GFEVER) scores, which is the FEVER score but with golden evidence provided to the system, an easier setting.",
"Precision, Recall and F1 are used to evaluate evidence sentence retrieval accuracy using the provided sentence level labels (whether the sentence is evidence or not to verify the claim).",
"Baselines.",
"The baselines include top models during FEVER 1.0 task and BERT based models.",
"Three top models in FEVER 1.0 shared task are compared.",
"Athene (Hanselowski et al., 2018) and UNC NLP (Nie et al., 2019a) utilize ESIM to encode claim evidence pairs.",
"UCL MRG (Yoneda et al., 2018) leverages Convolutional Neural Network (CNN) to encode claim and evidence.",
"These three models aggregate evidence by attention mechanism or label aggregation component.",
"The BERT based models are our main baselines, they significantly outperform previous methods without pre-training.",
"BERT-pair, BERT-concat and GEAR are three baselines from the previous 2 https://github.com/sheffieldnlp/ fever-scorer Split SUPPORTED REFUTED NOT ENOUGH INFO Train 80,035 29,775 35,639 Dev 6,666 6,666 6,666 Test 6,666 6,666 6,666 Table 1: Statistics of FEVER Dataset.",
"work (Zhou et al., 2019).",
"BERT-pair and BERT-concat regard claim-evidence pair individually or concatenate all evidence together to predict claim label.",
"GEAR utilizes a graph attention network to extract supplement information from other evidence and aggregate all evidence through an attention layer.",
"Soleimani et al. (2019); Nie et al. (2019b) are also compared in our experiments.",
"They implement BERT sentence retrieval for a better performance.",
"In addition, we replace kernel with dot product to implement our GAT version, which is similar to GEAR, to evaluate kernel's effectiveness.",
"Document retrieval.",
"The document retrieval step retrieves related Wikipedia pages and is kept the same with previous work (Hanselowski et al., 2018; Zhou et al., 2019; Soleimani et al., 2019).",
"For a given claim, it first utilizes the constituency parser in AllenNLP (Gardner et al., 2018) to extract all phrases which potentially indicate entities.",
"Then it uses these phrases as queries to find relevant Wikipedia pages through the online Me-diaWiki API 3 .",
"Then the convinced article are reserved (Hanselowski et al., 2018).",
"Sentence retrieval.",
"The sentence retrieval part focuses on selecting related sentences from retrieved pages.",
"There are two sentence retrieval models in our experiments: ESIM based sentence retrieval and BERT based sentence retrieval.",
"The ESIM based sentence retrieval keeps the same as the previous work (Hanselowski et al., 2018; Zhou et al., 2019).",
"The base version of BERT is used to implement our BERT based sentence retrieval model.",
"We use the [CLS] hidden state to represent claim and evidence sentence pair.",
"Then a learning to rank layer is leveraged to project [CLS] hidden state to ranking score.",
"Pairwise loss is used to optimize the ranking model.",
"Some work (Zhao et al., 2020; Ye et al., 2020) also employs our BERT based sentence retrieval in their experiments.",
"batch size to 4 and accumulate step to 8.",
"All models are evaluated with LA on the development set and trained for two epochs.",
"The training and development sets are built with golden evidence and higher ranked evidence with sentence retrieval.",
"All claims are assigned with five pieces of evidence.",
"The BERT (Base), BERT (Large) and RoBERTa (Liu et al., 2019) are evaluated in claim verification.",
"In our experiments, the max length is set to 130.",
"All models are implemented with PyTorch.",
"BERT inherits huggingface's implementation 4 .",
"Adam optimizer is used with learning rate = 5e-5 and warm up proportion = 0.1.",
"The kernel size is set to 21, the same as previous work (Qiao et al., 2019).",
"The experiments are conducted to study the performance of KGAT, its advantages on different reasoning scenarios, and the effectiveness of kernels.",
"The fact verification performances are shown in Table 2. Several testing scenarios are conducted to compare KGAT effectiveness to BERT based baselines: BERT (Base) Encoder with ESIM retrieved sentences, with BERT retrieved sentences, and BERT (Large) Encoder with BERT retrieved sentences.",
"Compared with baseline models, KGAT is the best on all testing scenarios.",
"With ESIM sentence retrieval, same as the previous work (Zhou et al., 2019; Hanselowski et al., 2018), KGAT outperforms the graph attention models GEAR and our GAT on both development and testing sets.",
"It illustrates the effectiveness of KGAT among graph based reasoning models.",
"With BERT based sentence retrieval, our KGAT also outperforms BERT (Base) (Soleimani et al., 2019) by almost 1% FEVER score, showing consistent effectiveness with different sentence retrieval models.",
"When using BERT (Large) as the encoder, KGAT also outperforms the corresponding version of Soleimani et al. (2019).",
"KGAT with RoBERTa performs the best compared with all previously published research on all evaluation metrics.",
"CorefBERT (Ye et al., 2020) extends our KGAT architecture and explicitly models co-referring relationship in context for better performance.",
"The sentence retrieval performances of ESIM and BERT are compared in Table 3. The BERT sentence retrieval outperforms ESIM sentence retrieval significantly, thus also helps improve KGAT's reasoning accuracy.",
"Nevertheless, for more fair comparisons, our following experiments are all based on ESIM sentence retrieval, which is the one used by GEAR, our main baseline (Zhou et al., 2019).",
"The verifiable instances are separated (except instances with NOT ENOUGH INFO label ) into two groups according to the golden evidence labels.",
"If more than one evidence pieces are required, the claim is considered as requiring multi-evidence reasoning.",
"The single evidence reasoning set and the multiple evidence reasoning set contain 11,372 (85.3%) and 1,960 (14.7%) instances, respectively.",
"We also evaluate two additional KGAT variations: KGAT-Node which only uses kernels on the node, with the edge kernels replaced by standard dot-production attention, and KGAT-Edge which only uses kernels on the edge.",
"The results of these systems on the two scenarios are shown in Table 4. KGAT-Node outperforms GAT by more than 0.3% on both single and multiple reasoning sce-Reasoning Model LA GFEVER FEVER Multiple GEAR 66.38 n.a. 37.96 -0.25% GAT 66.12 84.39 38.21 KGAT-Node 65.51 83.88 38.52 0.31% KGAT-Edge 65.87 84.90 39.08 0.87% KGAT-Full 65.92 85.15 39.23 1.02% Single GEAR 78.14 n.a. 75.73 -1.69% GAT 79.79 81.96 77.42 KGAT-Node 79.92 82.29 77.73 0.31% KGAT-Edge 79.90 82.41 77.58 0.16% KGAT-Full 80.33 82.62 78.07 0.65% Table 4: Claim Verification Accuracy on Claims that requires Multiple and Single evidence Pieces.",
"narios.",
"As expected, it does not help much on GFEVER, because the golden evidence is given and node selection is not required.",
"It illustrates KGAT-Node mainly focuses on choosing appropriate evidence and assigning accurate combining weights in the readout.",
"KGAT-Edge outperforms GAT by more than 0.8% and 0.1% on multiple and single evidence reasoning scenarios, respectively.",
"Its effectiveness is mostly on combining the information from multiple evidence pieces.",
"The multiple and single evidence reasoning scenarios evaluate the reasoning ability from different aspects.",
"The single evidence reasoning mainly focuses on selecting the most relevant evidence and inference with single evidence.",
"It mainly evaluates model de-noising ability with the retrieved evidence.",
"The multiple evidence reasoning is a harder and more complex scenario, requiring models to summarize necessary clues and reason over multiple evidence.",
"It emphasizes to evaluate the evidence interactions for the joint reasoning.",
"KGAT-Node shows consistent improvement on both two reasoning scenarios, which demonstrates the important role of evidence selection.",
"KGAT-Edge, on the other hand, is more effective on multiple reasoning scenarios as the Edge Kernels help better propagate information along the edges.",
"More Concentrated Attention.",
"This experiment studies kernel attentions by their entropy, which reflects whether the learned attention weights are focused or scattered.",
"The entropy of the kernel attentions in KGAT, the dot-product",
"at-(a) Edge Attention.",
"tentions in GAT, and the uniform attentions are shown in Figure 3. The entropy of Edge attention is shown in Figure",
"3(a).",
"Both GAT and KGAT show a smaller entropy of the token attention than the uniform distribution.",
"It illustrates that GAT and KGAT have the ability to assign more weight to some important tokens with both dot product based and kernel based attentions.",
"Compared to the dot-product attentions in GAT, KGAT's Edge attention focuses on fewer tokens and has a smaller entropy.",
"The entropy of Node attentions are plotted in Figure",
"3(b).",
"GAT's attentions distribute almost the same with the uniform distribution, while KGAT has concentrated Node attentions on a few evidence sentences.",
"As shown in the next experiment, the kernel based node attentions focus on the correct evidence pieces and de-noises the retrieved sentences, which are useful for claim verification.",
"More Accurate Evidence Selection.",
"This experiment evaluates the effectiveness of KGAT-Node through attention distribution and evidence recall.",
"The results are shown in Figure 4. We first obtain the node attention score in the evidence graph from KGAT or GAT, and",
"calcu-(a) GAT.",
"late the statistics of the maximum one for each claim, as most of which only require single evidence to verify.",
"The attention score of the highest attended evidence node for each claim is plotted in Figure",
"4(a).",
"As expected, KGAT concentrates its weight to select evidence nodes and provides a focused attention.",
"Then the evidence selection accuracy is evaluated by their evidence recall.",
"We first rank all evidence pieces for each claim.",
"Then the evidence recall with different ranking depths is plotted in Figure",
"4(b).",
"KGAT achieves a much higher recall on top ranking positionsonly the first ranked sentence covers nearly 80% of ground truth evidence, showing the node kernels' ability to select correct evidence.",
"This also indicates the potential of the node kernels in the sentence retrieval stage, which we reserve for future work as this paper focuses on the reasoning stage.",
"Fine-Grained Evidence Propagation.",
"The third analysis studies the distribution of KGAT-Edge's attention which is used to propagate the evidence clues in the evidence graph.",
"Figure 5 plots the attention weight distribution of the edge attention scores in KGAT and GAT, one from kernels and one from dot-products.",
"The kernel attentions again are more concentrated: KGAT focuses fewer words while GAT's dot-product attentions are almost equally distributed among all words.",
"This observation of the scattered dot-product attention is consistent with previous research (Clark et al., 2019).",
"As shown in the next case study, the edge kernels provide a fine-grained and intuitive attention pattern when combining evidence clues from multiple pieces.",
"Table 5 shows the example claim used in GEAR (Zhou et al., 2019) and the evidence sentences retrieved by ESIM, among which the first two are required evidence pieces.",
"Figure 6 presents the distribution of attentions from the first evidence to the tokens in the second evidence ( 2 1 i ) in KGAT (Edge Kernel) and GAT (dot-product).",
"The first evidence verifies that Al Jardine is an American musician but does not enough information about whether Al Jardine is a rhythm guitarist.",
"The edge kernels from KGAT accurately pick up the additional information evidence (1) required from evidence (2): rhythm guitarist.",
"It effectively fills the missing information and completes the reasoning chain.",
"Interesting, Al Jardine also receives more attention, which helps to verify if the information in the second evidence is about the correct person.",
"This kernel attention pattern is more intuitive and effective than the dot-product attention in GAT.",
"The later one scatters almost uniformly across all tokens and hard to explain how the joint reasoning is conducted.",
"This seems to be a common challenge of the dot-product attention in Transformers (Clark et al., 2019).",
"This paper presents KGAT, which uses kernels in Graph Neural Networks to conduct more accurate evidence selection and fine-grained joint reasoning.",
"Our experiments show that kernels lead to the more accurate fact verification.",
"Our studies illustrate the two kernels play different roles and contribute to different aspects crucial for fact verification.",
"While the dot-product attentions are rather scattered and hard to explain, the kernel-based attentions show intuitive and effective attention patterns: the node kernels focus more on the correct evidence pieces; the edge kernels accurately gather the necessary information from one node to the other to complete the reasoning chain.",
"In the future, we will further study this properties of kernel-based attentions in neural networks, both in the effectiveness front and also the explainability front.",
"This research is jointly supported by the NSFC project under the grant no. 61661146007, the funds of Beijing Advanced Innovation Center for Language Resources (No. TYZ19005), and the NExT++ project, the National Research Foundation, Prime Ministers Office, Singapore under its IRC@Singapore Funding Initiative."
] | [
"abstain",
"method",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"objective",
"abstain",
"method",
"other"
] |
[
"We propose VALSE ( V ision A nd L anguage S tructured E valuation), a novel benchmark designed for testing general-purpose pretrained vision and language (V&L) models for their visio-linguistic grounding capabilities on specific linguistic phenomena .",
"VALSE offers a suite of six tests covering various linguistic constructs.",
"Solving these requires models to ground linguistic phenomena in the visual modality, allowing more fine-grained evaluations than hitherto possible.",
"We build VALSE using methods that support the construction of valid foils, and report results from evaluating five widely-used V&L models.",
"Our experiments suggest that current models have considerable difficulty addressing most phenomena.",
"Hence, we expect VALSE to serve as an important benchmark to measure future progress of pretrained V&L models from a linguistic perspective , complementing the canonical task-centred V&L evaluations.",
"General-purpose pretrained vision and language (V&L) models have gained notable performance on many V&L tasks (Lu et al., 2019; Tan and Bansal, 2019; Li et al., 2019; Chen et al., 2020; Li et al., 2020a; Su et al., 2020).",
"As a result, V&L research has changed its focus from task-specific architectures to fine-tuning large V&L models.",
"Current benchmarks give a good perspective on model performance on a wide range of V&L tasks (Cao et al., 2020; Lourie et al., 2021; Li et al., 2021), but the field is only starting to assess why models perform so well and whether models learn specific capabilities that span multiple V&L tasks .",
"Specifically, we lack detailed understanding of the extent to which such models are able to ground linguistic phenomenafrom morphosyn-tax to semanticsin the visual modality (Bernardi Corresponding author parcalabescu@cl. uni-heidelberg.de . and Pezzelle, 2021).",
"For example, recent evidence suggests that models are insensitive to linguistic distinctions of verb-argument structure (Hendricks and Nematzadeh, 2021) and word order (Cirik et al., 2018; Akula et al., 2020).",
"Our work addresses this gap with VALSE (Vi-sion And Language Structured Evaluation), a benchmark for V&L model evaluation comprising six tasks, or pieces', where each piece has the same structure: given a visual input, a model is asked to distinguish real captions from foils , where a foil is constructed from a caption by altering a word or phrase that realizes a specific linguistic phenomenon , e.g., semantic number of nouns, verb argument structure, or coreference.",
"VALSE uses a resource-lean diagnostic setup that dispenses with large-scale annotation (e.g., of bounding boxes), and builds on existing high-quality image captioning and VQA data.",
"VALSE is designed to leverage the existing prediction heads in pretrained (or finetuned) V&L models; for that reason, our benchmark does not include any re-training and can be interpreted as a zero-shot evaluation.",
"We build test data for each piece so as to safeguard against the possibility of models exploiting artefacts or statistical biases in the data, a well-known issue with highly parameterised neural models pretrained on large amounts of data (Goyal et al., 2017; Madhyastha et al., 2018; Kafle et al., 2019).",
"With this in view, we propose novel methods to guard against the emergence of artefacts during foiling.",
"Our main contributions are:",
"i) We introduce VALSE, a novel benchmark aimed at gauging the sensitivity of pre-trained V&L models to foiled instances.",
"ii) We cover a wide spectrum of basic linguistic phenomena affecting the linguistic and visual modalities: existence, plurality, counting, spatial relations, actions, and entity coreference.",
"iii) We investigate novel strategies to build valid foils that include automatic and human valida-8253 tion.",
"We balance word frequency distributions between captions and foils, and test against pretrained models solving the benchmark uni-modally by relying only on text.",
"We employ masked language modeling (MLM) in foil creation and semantic inference for validating foils, and finally collect human annotations for the entire benchmark.",
"iv) We establish initial experimental results for pretrained V&L models of diverse architectures on VALSE.",
"The overall weak performance of these models indicates that the time is ripe for a novel, reliable foiling dataset targeting the visual grounding capabilities of V&L models through the lens of linguistic constructs.",
"1 2 Background and Related work Pretrained V&L models learn to combine vision and language through self-supervised multitask learning.",
"Tasks include multimodal masked modeling where words in the text and object labels or regions in the image are masked out, then predicted and image-sentence alignment , whereby a model learns to predict whether an image and a text correspond.",
"Major architectures are singleand dual-stream multimodal transformers: single-stream models concatenate word and image features, and encode the resulting sequence with a single transformer stack; dual-stream models use distinct transformer stacks to handle visual and textual inputs, and additional layers (e.g. co-attention) to fuse these into multimodal features.",
"Benchmarking V&L models V&L models (Li et al., 2019; Lu et al., 2019; Tan and Bansal, 2019; Lu et al., 2020; Li et al., 2020b; Kim et al., 2021) are commonly evaluated on V&L tasks such as VQA (Goyal et al., 2017), visual reasoning (Suhr et al., 2019), or image retrieval (Lin et al., 2014; Plummer et al., 2015).",
"Given how well transformer-based models perform across unimodal and multimodal tasks, research efforts have recently started to address what makes them so effective, and to what extent they learn generalisable representations.",
"Techniques to address these questions in unimodal and multimodal V&L contexts include: adversarial examples (Jia and Liang, 2017; Jia et al., 2019); investigation 1 We release our dataset containing all annotators' votes (Prabhakaran et al., 2021) at https://github.com/ Heidelberg-NLP/VALSE .",
"of the impact of bias, be it linguistic (Gururan-gan et al., 2018), visual semantic (Agarwal et al., 2020), or socio-economic (Garg et al., 2019); and the use of linguistically-informed counterfactual and minimally-edited examples (Levesque et al., 2012; Gardner et al., 2020).",
"A trend within the latter research line that is specific to V&L models is vision-and-language foiling (Shekhar et al., 2017b; Gokhale et al., 2020; Bitton et al., 2021; Parcalabescu et al., 2021; Rosenberg et al., 2021), where the idea is to create counterfactual (i.e., foiled ) and/or minimally edited examples by performing data augmentation on captions (Shekhar et al., 2017b,a) or images (Rosenberg et al., 2021).",
"Since most V&L models are pretrained on some version of the image-text alignment task, it is possible to test their ability to distinguish correct from foiled captions (in relation to an image) in a zero-shot setting.",
"The construction of foils can serve many investigation purposes.",
"With VALSE, we target the linguistic grounding capabilities of V&L models , focusing on pervasive linguistic phenomena that span multiple tokens , described in 3.1 3.6.",
"At the same time, we ensure that our data is robust to perturbations and artefacts by",
"i) controlling for word frequency biases between captions and foils, and",
"ii) testing against unimodal collapse , a known issue of V&L models (Goyal et al., 2017; Madhyastha et al., 2018), thereby preventing models from solving the task using a single input modality.",
"The issue of neural models exploiting data artefacts is well-known (Gururangan et al., 2018; Jia et al., 2019; Wang et al., 2020b; He et al., 2021) and methods have been proposed to uncover such effects, including gradient-based, adversarial perturbations or input reduction techniques (cf. Wallace et al., 2020).",
"Yet, these methods are still not fully understood (He et al., 2021) and can be unreliable (Wang et al., 2020b).",
"Our work is related to Gardner et al. (2020), who construct task-specific contrast sets for NLU.",
"However, our focus is on modelling linguistic phenomena instead of tasks, and we construct carefully curated, balanced, single foils from valid instances that we select from multiple multimodal datasets.",
"We resort to a musical analogy to describe VALSE: Vision And Language Structured Evaluation is composed of 6 pieces , each corresponding to a specific linguistic phenomenon (see Table 1 for an",
"overview).",
"Each piece consists of one or more instruments designed to evaluate a model's ability to ground that specific linguistic phenomenon.",
"All instruments are built by applying foiling functions (FFs) specific to the linguistic phenomenon under study.",
"FFs take a correct caption as input and change a specific part to produce a foiled caption (or foil ).",
"We design FFs such that the sentences they produce fail to describe the image, while still being grammatical and otherwise valid sentences.",
"Of course, a foiled caption may be less likely than the original caption from which it was produced, and such unwarranted biases can be easily picked up by overparameterised V&L models.",
"Moreover, an automatic FF may fail to produce a foil that contradicts the image, for example by altering the original caption to yield a near-synonymous one, or one that is entailed by the original caption.",
"For phenomena that make it difficult to control these crucial properties of foils, we apply additional filters:",
"i) some FFs make use of strong LMs to propose changes to captions, so that the generated foils are still high-probability sentences;",
"ii) we use state-of-the-art natural language inference (NLI) methods to detect cases where there is an entailment between caption and foil, and filter out such foils from the dataset (see 4 for discussion).",
"As a final measure, we employ human annotators to validate all generated testing data in VALSE.",
"datasets.",
"Below, we describe each piece and its instruments, and the corresponding task setup in VALSE.",
"For each instrument, we follow the same procedure:",
"i) we identify captions that contain instances of the targeted linguistic phenomenon;",
"ii) we apply a FF that automatically replaces the expression with a variant that contradicts the original expression's visual content, thereby constructing one or more foils from each target instance in the original caption, as discussed in 4; we then",
"iii) subject the obtained foils to various filters, with the aim of distilling a subset of valid and reliable foils that cannot be easily tricked by a new generation of highly parameterised pretrained V&L models.",
"The existence piece has a single instrument and targets instances with existential quantifiers .",
"Models need to differentiate between examples",
"i) where there is no entity of a certain type or",
"ii) where one or more of these entities are visible in an image.",
"We use the Visual7W visual question answering dataset (Zhu et al., 2016) and source its how many' examples, building a pool of those whose answers are numerals (0, 1, 2, etc.).",
"We use templates to transform question and answer fields into a declarative statement that correctly describes what can be seen in the image, e.g. Q: How many animals are shown? A: 0' There are 0 animals shown'.",
"We then transform these statements into an existential 8255 statement.",
"In the example above, we replace the numeral by the word no' to create a correct caption (There are no animals shown') and remove the numeral altogether to create a foil (There are animals shown').",
"The existence piece has 505 image captionfoil tuples after manual validation, out of 534 candidates (cf. 4), and captions/foils are balanced: 50% of the (correct) captions originally have answer 0, and the remaining have answer 1 or greater.",
"Full details are provided in A.1.",
"The plurality piece has a single instrument, concerned with semantic number .",
"It is intended to test whether a model is able to distinguish between noun phrases denoting a single entity in an image (exactly one flower'), versus multiple entities (some flowers').",
"The dataset consists of 851 validated instances out of 1000 generated candidates (cf. 4), evenly divided between cases where the caption contains a plural NP, foiled by replacing it with a singular ( pl2sg : some flowers' exactly one flower'), or conversely, the caption contains a singular which is foiled by replacing it with a plural ( sg2pl ).",
"Foil candidates were generated from the COCO 2017 validation set (Chen et al., 2015).",
"Full details about the foil construction and our measures against introducing biases with quantifiers such as exactly one', are provided in A.2.",
"The counting piece has three instruments: balanced , adversarial and small numbers .",
"All instances are statements about the number of entities visible in an image .",
"The model needs to differentiate between examples where the specific number of entities in the associated image is correct or incorrect, given the statement.",
"Similarly to the existence piece, we use the Visual7W VQA dataset (Zhu et al., 2016) and source its how many' examples whose answers are numerals (0, 1, 2, etc.).",
"We use templates to transform question and answer fields into a declarative statement describing the image and create foils by replacing the numeral in the correct statement by another numeral.",
"All three instruments are designed to show whether models learn strategies that generalize beyond the training distribution, and to what extent a model exploits class frequency bias.",
"2 In counting balanced we cap the number of examples to 2 We take the original answer in Visual7W as the example class: e.g., in There are 0 animals shown', the class is 0.",
"a maximum per class and make sure correct and foil classes are balanced, so that models that exploit class frequency bias are penalized.",
"In counting adversarial we ensure that all foils take class n { 0 , 1 , 2 , 3 } , whereas all correct captions take class m { m | m 4 } .",
"Biased models are expected to favour more frequent classes.",
"Since small numbers are naturally the most frequent, models that resort to such biases should perform poorly on this adversarial test set.",
"Counting small numbers is a sanity check where all correct captions and foils have class n { 0 , 1 , 2 , 3 } , and caption/foil classes are balanced.",
"Since models likely have been exposed to many examples in this class set and all such classes are high-frequency, with this instrument we disentangle model performance from class exposure.",
"Counting balanced, adversarial, and small numbers have 868 (1000), 691 (756), and 900 (1000) instances after (before) manual validation, respectively (cf. 4).",
"For details, see A.3.",
"The relations piece has a single instrument and focuses on the ability of models to distinguish between different spatial relations.",
"Foils differ from the original caption only by the replacement of a spatial preposition.",
"As with plurals, the data was sourced from the COCO 2017 validation split.",
"To create foils, we first identified all preposition sequences in captions (e.g., in', out of').",
"Foils were created by masking the prepositions and using SpanBERT (Joshi et al., 2020) to generate candidates of between 13 words in length.",
"We keep SpanBERT candidates, which are spans whose lengths vary from 1 to 3, if they differ from the original preposition sequence, but exist in the dataset.",
"There are 535 instances after manual validation out of 614 proposed instances (cf. 4), and we ensure that prepositions are similarly distributed among captions and foils.",
"Full details are provided in A.4.",
"The actions piece has two instruments:",
"i) action replacement and",
"ii) actant swap .",
"They test a V&L model's capability to",
"i) identify whether an action mentioned in the text matches the action seen in the image (e.g., a man shouts / smiles at a woman'), and",
"ii) correctly identify the participants of an action and the roles they play (e.g., is it the man who is shouting or is it the woman, given the picture in Table 1?).",
"504 action verbs, and we generate captions and foils from SWiG annotations of semantic roles and their fillers.",
"For the action replacement piece, we exchange action verbs with other verbs from SWiG that fit the linguistic context as suggested by BERT.",
"For the actant swap, we swap role fillers in the role annotations, hence generating action descriptions with inverted roles.",
"Action replacement and actant swap have 648 (779) and 949 (1042) instances after (before) manual validation, respectively (cf. 4).",
"See A.5 for full details.",
"The coreference piece aims to uncover whether V&L models are able to perform pronominal coreference resolution.",
"It encompasses cases where",
"i) the pronoun has a noun (phrase) antecedent and pronoun and (noun) phrase are both grounded in the visual modality (A woman is driving a motorcycle. Is she wearing a helmet?'), and cases where",
"ii) the pronoun refers to a region in the image or even to the entire image (Is this outside?').",
"We create foils based on VisDial v1.0 (Das et al., 2017) with images from MSCOCO (Lin et al., 2014).",
"VisDial captions and dialogues are Q&A sequences.",
"We select image descriptions of the form [ Caption. Question? Yes/No. ] where the question contains at least one pronoun.",
"When foiling, we exchange the answer from yes to no and vice-versa (see Table 1).",
"We ensure a 50-50% balance between yes / no answers.",
"The coreference piece consists of two instruments: coreference standard originating from the VisDial train set and a small coreference clean set from the validation set, containing 708 (916) and 104 (141) examples after (before) manual validation, respectively (cf. 4).",
"3 See A.6 for full details.",
"In VALSE, an instance consisting of an image-caption-foil triple is considered valid if: the foil minimally differs from the original caption; the foil does not accurately describe the image; and independent judges agree that the caption, but not the foil, is an accurate description of the image.",
"We consider a foiling method to be more reliable the more it ensures that a generated foil does not substantially differ from a human caption regarding distributional and plausibility bias, and cannot be easily solved unimodally.",
"In this section, we discuss automatic and manual means to reliably construct valid foils.",
"In this context, two types of bias are especially worthy of note: distributional bias (4.1) and plausibility bias (4.2).",
"In 4.3 we discuss how we apply a natural language inference model to filter examples in our data pipeline, and 4.4 show how we manually validate all examples in our benchmark.",
"Random samples from the final version of each instrument are shown in Tab.",
"611.",
"A first form of bias is related to distributional imbalance between captions and foils (e.g., certain words or phrases having a high probability only in foils).",
"Previous foiling datasets exhibit such imbalance, enabling models to solve the task disregarding the image (Madhyastha et al., 2019).",
"To mitigate this problem, for each phenomenon and throughout our data creation process, we ensure that the token frequency distributions in correct and foiled captions are approximately the same (cf. App. A and E).",
"A second form of bias may arise from automatic procedures yielding foils that are implausible or unnatural, which can facilitate their detection.",
"Often, VALSE pieces can be safely foiled by simple rules (e.g., switching from existence to non-existence, or from singular to plural or vice versa).",
"However, with spatial relations and actions , a foil could be deemed unlikely given only the textual modality and independently of the image, e.g., a man stands under / on a chair'.",
"Such plausibility biases may be detected by large language models that incorporate commonsense knowledge (Petroni et al., 2019; Wang et al., 2020a), and we expect future V&L models to exhibit similar capabilities.",
"To ensure that foiled and correct captions are similarly plausible, we use language models such as BERT (Devlin et al., 2019) and SpanBERT (Joshi et al., 2020) to suggest replacements in our foiling functions.",
"Additionally, in the case of spatial relations and plurals, we also apply a grammaticality filter using GRUEN (Zhu and Bhat, 2020).",
"GRUEN was originally proposed to automatically score generated sentences based on discourse-level and grammatical properties.",
"We use only the grammaticality component of GRUEN, and retain only foil candidates with a grammaticality score 0 .",
"8 .",
"benchmark could be solved by a multimodal model with strong linguistic capacities in unimodal collapse , whereby a model silently relies on a single modality within which biases are easier to exploit (Goyal et al., 2017; Shekhar et al., 2019a).",
"By evaluating VALSE with unimodal models, we establish a baseline that V&L models should exceed if we are to expect true multimodal integration.",
"When constructing foils, we need to ensure that they fail to describe the image.",
"To test this automatically, we apply natural language inference (NLI) with the following rationale: We consider an image and its caption as a premise and its entailed hypothesis, respectively (a similar rationale is applied in the visual entailment task; Xie et al., 2019).",
"In addition, we consider the caption as premise and the foil as its hypothesis .",
"If a NLI model predicts the foil to be entailed (E) by the caption, it cannot be a good foil since by transitivity it will give a truthful description of the image.",
"By contrast, if the foil is predicted to contradict (C) or to be neutral (N) with respect to the caption, we take this as an indicator of a valid (C) or a plausible (N) foil.",
"4 We use the NLI model ALBERT (Lan et al., 2020) finetuned on the task (see Appendix C for details).",
"Filtering with NLI was initially applied to relations, plurals and actions , on the grounds that foils in these pieces may induce substantive changes to lexical content.",
"5 Following automatic labelling of caption-foil pairs, we manually validated a sample labelled as E, C or N. For relations ( N = 30 ), labels were found to be near 100% accurate with only 2 (0.06%) errors overall.",
"For plurals ( N = 60 , 50% sg2pl and 50% pl2sg ), the error rate was also low, with 0 errors for C, 33% errors for E and 11% errors for N. Here, a number of entailment errors were due to odd formulations arising from the automatic foiling process, whereas no such oddities were observed for C. We therefore include only foils labelled C in the final relations and plurals pieces.",
"For actions , the model labelled 4 See the following examples from action replacement: P: A mother scolds her son.",
"H1: A mother encourages her son.",
"(C; good foil); H2: A mother camps with her son.",
"(N; needs image control); H3: A mother talks to her son.",
"(E; not a suitable foil)",
"If the NLI prediction is N, we still need to check the image, since the description might happen to fit the image content.",
"5 By contrast, existence and counting foils involve a more straightforward swap (e.g., between numerical quantities); similarly, coreference foils simply involve the replacement of a positive with a negative answer.",
"contradictions very accurately (0% error) but was erroneous up to 97.1% for E, meaning that a large number of valid foils would be spuriously excluded.",
"To avoid reducing the dataset too much, we did not use NLI filtering for actions, but relied on human annotation as a final validity check.",
"As a final step, the data for each instrument was submitted to a manual validation.",
"For each instance, annotators were shown the image, the caption and the foil.",
"Caption and foil were numbered and displayed above each other to make differences more apparent, with differing elements highlighted in boldface (Fig. 2, App. E).",
"Annotators were not informed which text was the caption and which was the foil, and captions appeared first (numbered 1 ) 50% of the time.",
"The task was to determine which of the two texts accurately described what could be seen in the image.",
"In each case, annotators had a forced choice between five options:",
"a) the first, but not the second;",
"b) the second, but not the first;",
"c) both of them;",
"d) neither of the two; and",
"e) I cannot tell.",
"Each item was annotated by three individuals.",
"The validation was conducted on Amazon Mechanical Turk with a fixed set of annotators who had qualified for the task.",
"For details see App.",
"E. For the final version of VALSE, we include instances which passed the following validation test: at least two out of three annotators identified the caption, but not the foil, as the text which accurately describes the image.",
"Across all instruments, 87.7% of the instances satisfied this criterion (min 77.3%; max 94.6%), with 73.6% of instances overall having a unanimous (3/3) decision that the caption, but not the foil, was an accurate description.",
"We consider these figures high, suggesting that the automatic construction and filtering procedures yield foils which are likely to be valid, in the sense discussed in 4 above.",
"We compute inter-annotator agreement for each instrument (Tab. 5).",
"On the valid subset, agreement is low to medium (Krippendorff's : min=0.23, max=0.64, mean=0.42, sd=0.12).",
"We note that there is considerable variation in the number of annotations made by individuals, and is computed over 5 categories.",
"Hence, this result cannot be straightforwardly interpreted as a ceiling of human performance for VALSE.",
"However, is higher for pieces on which models also perform better (e.g. 8258 existence, Foil-It!; cf. 5).",
"We propose VALSE as a task-independent, zero-shot benchmark to assess the extent to which models learn to ground specific linguistic phenomena as a consequence of their pretraining (or fine-tuning).",
"VALSE is built in the spirit of approaches such as Checklist (Ribeiro et al., 2020), including pairs consisting of captions and minimally edited foils.",
"The only requirement to evaluate a model on our benchmark is:",
"i) to have a binary classification head to predict whether an image-sentence pair is foiled, or",
"ii) to predict an image-sentence matching score between the image and the caption vs. the foil, returning the pair with the highest score.",
"Systems reporting results on VALSE are expected to report any data used in model training prior to testing on VALSE, for comparability.",
"We employ five metrics 6 for evaluation: overall accuracy ( acc ) on all classes (foil and cor-rect); precision ( p c ) measuring how well models identify the correct examples; foil precision ( p f ) measuring how well foiled cases are identified; pairwise ranking accuracy ( acc r ), which measures whether the image-sentence alignment score is greater for a correct image-text pair than for its foiled pair; and area under the receiver",
"operating characteristic curve (AUROC), which measures how well models distinguish correct vs. foiled examples across different prediction thresholds.",
"acc r is more permissive than acc as it accepts model predictions if the score for a foil is lower than the caption's score.",
"Our main metrics are AUROC and acc r .",
"acc r gives results for a pair (cid:104) image, caption (cid:105) and (cid:104) image, foil (cid:105) .",
"Both AUROC and acc r are well suited to evaluate minimally-edited pairs as neither uses a classification threshold.",
"As for p c and p f , since these are competing metrics where naively increasing one can decrease the other, we report the smaller of the two as an indicator of how informed model predictions are.",
"Since all instruments are implemented as a balanced binary classification, the random baseline is always 50%.",
"We benchmark five V&L models on VALSE: CLIP (Radford et al., 2021), LXMERT (Tan and Bansal,",
"6 All metrics are defined in Appendix B.",
"Multimodal results The best zero-shot results are achieved by ViLBERT 12-in-1 with the highest scores across the board, followed by ViLBERT, 8259 Metric Model Existence Plurality Counting Sp.rel.",
"2019), ViLBERT (Lu et al., 2019), ViLBERT 12-in-1 (Lu et al., 2020), and VisualBERT (Li et al., 2019).",
"These models have different architectures and are pretrained on a variety of tasks with different training data.",
"We also benchmark two unimodal text-only models, GPT1 (Radford et al., 2018) and GPT2 (Radford et al., 2019).",
"See Appendix D for details on all these models used in our evaluation.",
"Unimodal models GPT1 and GPT2 are autoregressive language models pretrained on English text.",
"We test whether VALSE is solvable by these unimodal models by computing the perplexity of the correct and foiled caption and predicting the entry with the lowest perplexity .",
"If the perplexity is higher for the foil, we take this as an indication that the foiled caption may suffer from plausibility bias or other linguistic biases (cf. 4.2).",
"We test V&L and unimodal models on VALSE in a zero-shot setting, and also evaluate on a number of correct captions and foils from the FOIL it! dataset (Shekhar et al., 2017b) (cf. App. A.7 for details).",
"All results are listed in Table 2. Unimodal results For most instruments, unimodal results are close to random and hence do not signal strong linguistic or plausibility biases.",
"One exception is the original FOIL it! dataset, in line with Madhyastha et al. (2019)'s findings.",
"Also the spatial relations (77.2%), action replacement (66.8%) and actant swap (76.9%) instruments suggest plausibility biases in foils.",
"Such biases are hard to avoid in automatic foil generation for actions due to the verb arguments' selectional restrictions, which are easily violated when flipping role fillers, or replacing the verb.",
"Similar considerations hold for relations: though SpanBERT proposals are intended to aid selection of likely replacements for prepositions, plausibility issues arise with relatively rare argument-preposition combinations.",
"While these might be the first instruments in VALSE to be solved in the future, current V&L models struggle to detect even blatant mismatches of actant swap, e.g., A ball throws a tennis player.' For VALSE, the unimodal scores will serve as a baseline for the pairwise accuracy of V&L models.",
"LXMERT, CLIP, 7 and finally VisualBERT.",
"The latter obtains high p f but very low p c values reflected in the min( p c , p f ) scoresindicating that VisualBERT learned a heuristic that does not generalise (see Hendricks and Nematzadeh, 2021, for similar observations with other models).",
"We hypothesise that this is due to the way image-sentence alignment is framed in VisualBERT's pretraining: the model expects an image and a correct sentence c 1 , and predicts whether a second sentence c 2 is a match.",
"8 During pretraining c 1 and c 2 are likely to differ in many ways, whereas in our setting, they are nearly identical.",
"This may bias the model against predicting foils, which would raise the value p f .",
"Instruments centered on individual objects like existence and the FOIL it! dataset are almost solved by ViLBERT 12-in-1, highlighting that models are capable of identifying named objects and their presence in images.",
"However, none of the remaining pieces can be reliably solved in our adversarial foiling settings:",
"i) distinguishing references to single vs. multiple objects or counting them in an 7 CLIP works in a contrastive fashion, therefore we report only acc r (cf. Appendix D for details).",
"8 c 1 is one of the 5 captions describing the relevant image in MSCOCO.",
"During VisualBERT's pretraining, c 2 can be an alternative caption out of these 5, or a randomly drawn caption which does not describe the image.",
"The pretraining task is to determine if c 2 correctly describes the image or not.",
"image (plurality and counting);",
"ii) correctly classifying a named spatial relation between objects in an image (relations);",
"iii) distinguishing actions and identifying their participants, even if supported by preference biases (actions); or,",
"iv) tracing multiple references to the same object in an image through the use of pronouns (coreference).",
"Correct vs. foil precision p c and p f show that V&L models struggle to solve the phenomena in VALSE.",
"When a model achieves high precision on correct captions p c this is often at the expense of very low precision on foiled captions p f (cf. ViL-BERT), or vice-versa (cf. VisualBERT).",
"This suggests that such models are insensitive to VALSE's inputs: models that almost always predict a match will inflate p f at the expense of p c .",
"min( p c , p f ) reveals that VisualBERT and ViLBERT perform poorly and below random baseline, and LXMERT close to or below it.",
"ViLBERT 12-in-1 performs strongly on existence, well on counting, but struggles on plurality, spatial relations, coreference, and actions.",
"These tendencies we see reflected in our main metrics, acc r and AUROC.",
"We present the VALSE benchmark to help the community improve V&L models by hard-testing their visual grounding capabilities through the lens of lin-8260",
"guistic constructs.",
"Our experiments show that V&L models identify named objects and their presence in images well (as shown by the existence piece), but struggle to ground their interdependence and relationships in visual scenes when forced to respect linguistic indicators.",
"We encourage the community to use VALSE for measuring progress towards V&L models capable of true language grounding.",
"Furthermore, VALSE could be used as an indirect assessment of datasets, as models could be evaluated before and after training or fine-tuning to see if a dataset helps models improve on any of the aspects tested by VALSE.",
"VALSE is designed as a living benchmark.",
"As future work we plan to extend it to further linguistic phenomena, and to source data from diverse V&L datasets to cover more linguistic variability and image distributions.",
"IC has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skodowska-Curie grant agreement No 838188.",
"AG and MC are supported by the European Union's Horizon 2020 research and innovation Programme under the Marie Skodowska-Curie grant agreement No 860621.",
"This collaboration was facilitated by the Multi3Generation COST Action CA18231."
] | [
"objective",
"abstain",
"abstain",
"result",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"objective",
"objective",
"objective",
"abstain",
"objective",
"objective",
"method",
"result",
"objective",
"objective",
"abstain",
"abstain",
"other",
"abstain",
"other",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"result",
"method",
"method",
"method",
"abstain",
"objective",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"method",
"abstain",
"abstain",
"objective",
"other",
"other",
"other"
] |
[
"Learning disentangled representations of natural language is essential for many NLP tasks, e.g. , conditional text generation, style transfer, personalized dialogue systems, etc .",
"Similar problems have been studied extensively for other forms of data, such as images and videos.",
"However, the discrete nature of natural language makes the disentangling of textual representations more challenging ( e.g. , the manipulation over the data space cannot be easily achieved).",
"Inspired by information theory, we propose a novel method that effectively manifests disentangled representations of text, without any supervision on semantics.",
"A new mutual information upper bound is derived and leveraged to measure dependence between style and content.",
"By minimizing this upper bound, the proposed method induces style and content embeddings into two independent low-dimensional spaces.",
"Experiments on both conditional text generation and text-style transfer demonstrate the high quality of our disentangled representation in terms of content and style preservation.",
"Disentangled representation learning (DRL), which maps different aspects of data into distinct and independent low-dimensional latent vector spaces, has attracted considerable attention for making deep learning models more interpretable.",
"Through a series of operations such as selecting, combining, and switching, the learned disentangled representations can be utilized for downstream tasks, such as domain adaptation (Liu et al., 2018), style transfer (Lee et al., 2018), conditional generation (Denton et al., 2017; Burgess et al., 2018), and few-shot learning (Kumar Verma et al., 2018).",
"Although widely used in various domains, such This work was conducted while the first author was doing an internship at NEC Labs America.",
"as images (Tran et al., 2017; Lee et al., 2018), videos (Yingzhen and Mandt, 2018; Hsieh et al., 2018), and speech (Chou et al., 2018; Zhou et al., 2019), many challenges in DRL have received limited exploration in natural language processing (John et al., 2019).",
"To disentangle various attributes of text, two distinct types of embeddings are typically considered: the style embedding and the content embedding (John et al., 2019).",
"The content embedding is designed to encapsulate the semantic meaning of a sentence.",
"In contrast, the style embedding should represent desired attributes, such as the sentiment of a review, or the personality associated with a post.",
"Ideally, a disentangled-text-representation model should learn representative embeddings for both style and content.",
"To accomplish this, several strategies have been introduced.",
"Shen et al. (2017) proposed to learn a semantically-meaningful content embedding space by matching the content embedding from two different style domains.",
"However, their method requires predefined style domains, and thus cannot automatically infer style information from unlabeled text.",
"Hu et al. (2017) and Lample et al. (2019) utilized one-hot vectors as style-related features (instead of inferring the style embeddings from the original data).",
"These models are not applicable when new data comes from an unseen style class.",
"John et al. (2019) proposed an encoder-decoder model in combination with an adversarial training objective to infer both style and content embeddings from the original data.",
"However, their adversarial training framework requires manually-processed supervised information for content embeddings ( e.g. , reconstructing sentences with manually-chosen sentiment-related words re-moved).",
"Further, there is no theoretical guarantee for the quality of disentanglement.",
"nformation-theoretic D isentangled E mbedding L earning method (IDEL) for text, based on guidance from information theory.",
"Inspired by Variation of Information (VI), we introduce a novel information-theoretic objective to measure how well the learned representations are disentangled.",
"Specifically, our IDEL reduces the dependency between style and content embeddings by minimizing a sample-based mutual information upper bound.",
"Furthermore, the mutual information between latent embeddings and the input data is also maximized to ensure the representativeness of the latent embeddings ( i.e. , style and content embeddings).",
"The contributions of this paper are summarized as follows: A principled framework is introduced to learn disentangled representations of natural language.",
"By minimizing a novel VI-based DRL objective, our model not only explicitly reduces the correlation between style and content embeddings, but also simultaneously preserves the sentence information in the latent spaces.",
"A general sample-based mutual information upper bound is derived to facilitate the minimization of our VI-based objective.",
"With this new upper bound, the dependency of style and content embeddings can be decreased effectively and stably.",
"The proposed model is evaluated empirically relative to other disentangled representation learning methods.",
"Our model exhibits competitive results in several real-world applications.",
"Mutual information (MI) is a key concept in information theory, for measuring the dependence between two random variables.",
"Given two random variables x and y , their MI is defined as I ( x ; y ) = E p ( x , y ) [log p ( x , y ) p ( x ) p ( y )] , (1) where p ( x , y ) is the joint distribution of the random variables, with p ( x ) and p ( y ) representing the respective marginal distributions.",
"In disentangled representation learning, a common goal is to minimize the MI between different types of embeddings (Poole et al., 2019).",
"However, the exact MI value is difficult to calculate in practice, because in most cases the integral in Eq.",
"(1) is Figure 1: The green and purple circles represent the entropy of x and y , respectively.",
"intractable.",
"To address this problem, various MI estimation methods have been introduced (Chen et al., 2016; Belghazi et al., 2018; Poole et al., 2019).",
"One of the commonly used estimation approaches is the Barber-Agakov lower bound (Barber and Agakov, 2003).",
"By introducing a variational distribution q ( x | y ) , one may derive I ( x ; y ) H ( x ) + E p ( x , y ) [log q ( x | y )] , (2) where H ( x ) = E p ( x ) [ log p ( x )] is the entropy of variable x .",
"In information theory, Variation of Information (VI, also called Shared Information Distance) is a measure of independence between two random variables.",
"The mathematical definition of VI between random variables x and y is VI ( x ; y ) = H ( x ) + H ( y ) 2 I ( x ; y ) , (3) where H ( x ) and H ( y ) are entropies of x and y , respectively (shown in Figure 1).",
"Kraskov et al. (2005) show that VI is a well-defined metric, which satisfies the triangle inequality: VI ( y ; x ) + VI ( x ; z ) VI ( y ; z ) , (4) for any random variables x , y and z .",
"Additionally, VI ( x ; y ) = 0 indicates x and y are the same variable (Meila, 2007).",
"From Eq.",
"(3), the VI distance has a close relation to mutual information: if the mutual information is a measure of dependence between two variables, then the VI distance is a measure of independence between them.",
"Consider data { ( x i , y i ) } Ni =1 , where each x i is a sentence drawn from a distribution p ( x ) , and y",
"is the label indicating the style of x i .",
"The goal is to encode each sentence x i into its corresponding style embedding s i and content embedding c i with an encoder q ( s , c | x ) : s i , c i | x i q ( s , c | x i ) .",
"The collection of style embeddings { s i } Ni =1 can be regarded as samples drawn from a variable s in the style embedding space, while the collection of content embeddings { c i } Ni =1 are samples from a variable c in the content embedding space.",
"In practice, the dimension of the content embedding is typically higher than that of the style embedding, considering that the content usually contains more information than the style (John et al., 2019).",
"We first give an intuitive introduction to our proposed VI-based objective, then in Section 3.1 we provide the theoretical justification for it.",
"To disentangle the style and content embedding, we try to minimize the mutual information I ( s ; c ) between s and c .",
"Meanwhile, we maximize I ( c ; x ) to ensure that the content embedding s sufficiently encapsulates information from the sentence x .",
"The embedding s is expected to contain rich style information.",
"Therefore, the mutual information I ( s ; y ) should be maximized.",
"Thus, our overall disentangled representation learning objective is: L Dis = I ( s ; c ) I ( c ; x ) I ( s ; y ) .",
"The objective L Dis has a strong connection with the independence measurement in information theory.",
"As described in Section 2.2, Variation of Information (VI) is a well-defined metric of independence between variables.",
"Applying the triangle inequality from Eq.",
"(4) to s , c and x , we have VI ( s ; x ) + VI ( x ; c ) VI ( s ; c ) .",
"Equality occurs if and only if the information from variable x is totally separated into two independent variable s and c , which is an ideal scenario for disentangling sentence x into its corresponding style embedding s and content embedding c .",
"D ( x ; s , c ) = VI ( s ; x ) + VI ( x ; c ) VI ( c ; s )",
"Since H ( x ) is a constant associated with the data, we only need to focus on I ( s ; c ) I ( x ; c ) I ( x ; s ) .",
"The measurement D ( x ; s , c ) is symmetric to style s and content c , giving rise to the problem that without any inductive bias in supervision, the disentangled representation could be meaningless (as observed by Locatello et al. (2019)).",
"Therefore, we add inductive biases by utilizing the style label y as supervised information for style embedding s .",
"Noting that s x y is a Markov Chain, we have I ( s ; x ) I ( s ; y ) based on the MI data-processing inequality (Cover and Thomas, 2012).",
"Then we convert the minimization of I ( s ; c ) I ( x ; c ) I ( x ; s ) into the minimization of the upper bound I ( s ; c ) I ( x ; c ) I ( y ; s ) , which further leads to our objective L Dis .",
"However, minimizing the exact value of mutual information in the objective L Dis causes numerical instabilities, especially when the dimension of the latent embeddings is large (Chen et al., 2016).",
"Therefore, we provide several MI estimations to the objective terms I ( s ; c ) , I ( x ; c ) and I ( s ; y ) in the following two sections.",
"To maximize I ( x ; c ) and I ( s ; y ) , we derive two variational lower bounds.",
"For I ( x ; c ) , we introduce a variational decoder q ( x | c ) to reconstruct the sentence x by the content embedding c .",
"Leveraging the MI variational lower bound from Eq.",
"(2), we have I ( x ; c ) H ( x ) + E p ( x ; c ) [log q ( x | c )] .",
"Similarly, for I ( s ; y ) , another variational lower bound can be obtained as: I ( s ; y ) H ( y ) + E p ( y, s ) [log q ( y | s )] , where q ( y | s ) is a classifier mapping the style embedding s to its corresponding style label y .",
"Based on these two lower bounds, L Dis has an upper bound: L Dis I ( s ; c ) [ H ( x ) + E p ( x , c ) [log q ( x | c )]] [ H ( y ) + E p ( y, s ) [log q ( y | s )]] .",
"(6) Noting that both H ( x ) and H ( y ) are constants from the data, we only need to minimize: L Dis = I ( s ; c ) E p ( x , c ) [log q ( x | c )] E p ( y, s ) [log q ( y | s )] .",
"As an intuitive explanation of L Dis , the style embedding s and content embedding c are expected to be independent by minimizing mutual information I ( s ; c ) , while they also need to be representative: the style embedding s is encouraged to give",
"a better prediction of style label y by maximizing E p ( y, s ) [log q ( y | s )] ; the content embedding should maximize the log-likelihood E p ( x , c ) [log q ( x | c )] to contain sufficient information from sentence x .",
"To estimate I ( s ; c ) , we propose a novel sample-based upper bound.",
"Assume we have M latent embedding pairs { ( s j , c j ) } Mj =1 drawn from p ( s , c ) .",
"As shown in Theorem 3.1, we derive an upper bound of mutual information based on the samples.",
"A detailed proof is provided in the Supplementary Material.",
"Theorem 3.1.",
"If { ( s j , c j ) } Mj =1 p ( s , c ) , then I( s ; c ) E [ 1 M (cid:80) Mj =1 R j ] =: I( s ; c ) , (8) where R j = log p ( s j | c j ) 1 M (cid:80) Mk =1 log p ( s j | c k ) .",
"Based on Theorem 3.1, given embedding samples { s j , c j } Mj =1 , we can minimize 1 M (cid:80) Mj =1 R j as an unbiased estimation of the upper bound I ( s ; c ) .",
"The calculation of R j requires the conditional distribution p ( s | c ) , whose closed form is unknown.",
"Therefore, we use a variational network p ( s | c ) to approximate p ( s | c ) with embedding samples.",
"To implement the upper bound in Eq.",
"(8), we first feed M sentences { x j } into encoder q ( s , c | x ) to obtain embedding pairs { ( s j , c j ) } .",
"Then, we train the variational distribution p ( c | x ) by maximizing the log-likelihood L ( ) = 1 M (cid:80) Mj =1 log p ( s j | c j ) .",
"After the training of p ( s | c ) is finished, we calculate R j for each embedding pair ( s j , c j ) .",
"Finally, the gradient for 1 M (cid:80) Mj =1 R j is calculated and back-propagated to encoder q ( s , c | x ) .",
"We apply the re-parameterization trick (Kingma and Welling, 2013) to ensure the gradient back-propagates through the sampled embeddings ( s j , c j ) .",
"When the encoder weights are updated, the distribution q ( s , c | x ) changes, which leads to the changing of conditional distribution p ( s | c ) .",
"Therefore, we need to update the approximation network p ( s | c ) again.",
"Consequently, the encoder network q ( s , c | x ) and the approximation network p ( s | c ) are updated alternately during training.",
"In each training step, the above algorithm requires M pairs of embedding samples { s j , c j } Mj =1 and the calculation of all conditional distributions p ( s j | c k ) .",
"This leads to O ( M 2 ) computational complexity.",
"To accelerate the training, we further approximate term 1 M (cid:80) Mk =1 log p ( s j | c k ) in R j by log p ( s j | c k (cid:48) ) , where k (cid:48) is selected uniformly from indices { 1 , 2 , . . . , M } .",
"This stochastic sampling not only leads to an unbiased estimation R j to R j , but also improves the model robustness (as shown in Algorithm 1).",
"Symmetrically, we can also derive an MI upper bound based on the conditional distribution p ( c | s ) .",
"However, the dimension of c is much higher than the dimension of s , which indicates that the neural approximation to p ( c | s ) would have worse performance compared with the approximation to p ( s | c ) .",
"Alternatively, the lower-dimensional distribution p ( s | c ) used in our model is relatively easy to approximate with neural networks.",
"One important downstream task for disentangled representation learning (DRL) is conditional generation.",
"Our MI-based text DRL method can be also embedded into an Encoder-Decoder generative model and trained end-to-end.",
"Since the proposed DRL encoder q ( s , c | x ) is a stochastic neural network, a natural extension is to add a decoder to build a variational autoencoder (VAE) (Kingma and Welling, 2013).",
"Therefore, we introduce another decoder network p ( x | s , c ) that generates a new sentence based on the given style s and content c .",
"A prior distribution p ( s , c ) = p ( s ) p ( c ) , as the product of two multivariate unit-variance Gaussians, is used to regularize the posterior distribution q ( s , c | x ) by KL-divergence minimization.",
"Meanwhile, the log-likelihood term for text reconstruction should be maximized.",
"The objective for VAE is: LVAE = KL ( q ( s , c | x ) (cid:107) p ( s , c )) E q ( s , c | x ) [log p ( x | s , c )] .",
"We combine the VAE objective and our MI-based disentanglement term to form an end-to-end learning framework (as shown in Figure 2).",
"The total Figure 2: Proposed framework: Each sentence x is encoded into style embedding s and content embedding c .",
"loss function is L total = L Dis + LVAE , where L Dis replaces I ( s ; c ) in L Dis (Eq.",
"(7)) with our MI upper bound I ( s ; c ) from Eq.",
"(8); > 0 is a hyper-parameter re-weighting DRL and VAE terms.",
"We call this final framework Information-theoretic Disentangled text Embedding Learning (IDEL).",
"Disentangled representation learning (DRL) can be classified into two categories: unsupervised disentangling and supervised disentangling.",
"Unsupervised disentangling methods focus on adding constraints on the embedding space to enforce that each dimension of the space be as independent as possible (Burgess et al., 2018; Chen et al., 2018).",
"However, Locatello et al. (2019) challenge the effectiveness of unsupervised disentangling without any induced bias from data or supervision.",
"For supervised disentangling, supervision is always provided on different parts of disentangled representations.",
"However, for text representation learning, supervised information can typically be provided only for the style embeddings ( e.g. sentiment or personality labels), making the task much more challenging.",
"John et al. (2019) tried to alleviate this issue by manually removing sentiment-related words from a sentence.",
"In contrast, our model is trained in an end-to-end manner without manually adding any supervision on the content embeddings.",
"Mutual information (MI) is a fundamental measurement of the dependence between two random variables.",
"MI has been applied to a wide range of tasks in machine learning, including generative modeling (Chen et al., 2016), the information bottleneck (Tishby et al., 2000), and domain adaptation (Gholami et al., 2020).",
"In our proposed method, we utilize MI to measure the dependence between content and style embedding.",
"By minimizing the MI, the learned content and style representations are explicitly disentangled.",
"However, the exact value of MI is hard to calculate, especially for high-dimensional embedding vectors (Poole et al., 2019).",
"To approximate MI, most previous work focuses on lower-bound estimations (Chen et al., 2016; Belghazi et al., 2018; Poole et al., 2019), which are not applicable to MI minimization tasks.",
"Poole et al. (2019) propose a leave-one-out upper bound of MI; however it is not numerically stable in practice.",
"Inspired by these observations, we introduce a novel MI upper bound for disentangled representation learning, which stably minimizes the correlation between content and style embedding in a principled manner.",
"the following real-world datasets: Yelp Reviews: The Yelp dataset contains online service reviews with associated rating scores.",
"We follow the pre-processing from Shen et al. (2017) for a fair comparison.",
"The resulting dataset includes 250,000 positive review sentences and 350,000 negative review sentences.",
"Personality Captioning: Personality Captioning dataset (Shuster et al., 2019) collects captions of images which are written according to 215 different personality traits.",
"These traits can be divided into three categories: positive , neutral , and negative .",
"We select sentences from positive and negative classes for evaluation.",
"We build the sentence encoder q ( s , c | x ) with a one-layer bi-directional LSTM plus a multi-head attention mechanism.",
"The style classifier q ( y | s ) is parameterized by a single fully-connected network with the softmax activation.",
"The content-based decoder q ( x | c ) is a one-layer uni-directional LSTM Figure 3: Latent spaces t-SNE plots of IDEL on Yelp.",
"appended with a linear layer with vocabulary size output, outputting the predicted probability of the next words.",
"The conditional distribution approximation p ( s | c ) is represented by a two-layer fully-connected network with ReLU activation.",
"The generator p ( x | s , c ) is built by a two-layer unidirectional LSTM plus a linear projection with output dimension equal to the vocabulary size, providing the next-word prediction based on previous sentence information and the current word.",
"We initialize and fix our word embeddings by the 300-dimensional pre-trained GloVe vectors (Pen-nington et al., 2014).",
"The style embedding dimension is set to 32 and the content embedding dimension is 512.",
"We use a standard multivariate normal distribution as the prior of the latent spaces.",
"We train the model with the Adam optimizer (Kingma and Ba, 2014) with initial learning rate of 5 10 5 .",
"The batch size is equal to 128.",
"We first examine the disentangling quality of learned latent embeddings, primarily studying the latent spaces of IDEL on the Yelp dataset.",
"Latent Space Visualization: We randomly select 1,000 sentences from the Yelp testing set and visualize their latent embeddings in Figure 3, via t-SNE plots (van der Maaten and Hinton, 2008).",
"The blue and red points respectively represent the positive and negative sentences.",
"The left side of the figure shows the style embedding space, which is well separated into two parts with different colors.",
"It supports the claim that our model learns a semantically meaningful style embedding space.",
"The right side of the figure is the content embedding space, which cannot be distinguished by the style labels (different colors).",
"The lack of difference in the pattern of content embedding also provides evidence that our content embeddings have little correlation with the style labels.",
"For an ablation study, we train another IDEL model under the same setup, while removing our MI upper bound I ( s ; c ) .",
"We call this model IDEL in the following experiments.",
"We encode the same sentences used in Figure 3, and display the corresponding embeddings in Figure 4.",
"Compared with results from the original IDEL, the style embedding space (left in Figure 4) is not separated in a clean manner.",
"On the other hand, the positive and negative embeddings become distinguishable in the content embedding space.",
"The difference between Figures 3 and 4 indicates the disentangling effectiveness of our MI upper bound I ( s ; c ) .",
"Label-Embedding Correlation: Besides visualization, we also numerically analyze the correlation between latent embeddings and style labels.",
"Inspired by the statistical two-sample test (Gretton et al., 2012), we use the sample-based divergence between the positive embedding distribution p ( c | y = 1) and the negative embedding distribution p ( c | y = 0) as a measurement of label-embedding correlation.",
"We consider four divergences: Mean Absolute Deviation (MAD) (Geary, 1935), Energy Distance (ED) (Se-jdinovic et al., 2013), Maximum Mean Discrepancy (MMD) (Gretton et al., 2012), and Wasserstein distance (WD) (Ramdas et al., 2017).",
"For a fair comparison, we re-implement previous text embedding methods and set their content embedding dimension to 512 and the style embedding dimension to 32 (if applicable).",
"Details about the divergences and embedding processing are shown in the Supplementary Material.",
"From Table 2, the proposed IDEL achieves the lowest divergences between positive and negative content embeddings compared with CtrlGen (Hu et al., 2017), CAAE (Shen et al., 2017), ARAE (Zhao et al., 2018), BackTranslation (BT) (Lample et al., 2019), and DRLST (John et al., 2019), indicating our model better disentangles the content embeddings from the style labels.",
"For style embeddings, we compare IDEL with DRLST, the Yelp Dataset Personality Captioning Dataset Conditional Generation Style Transfer Conditional Generation Style Transfer ACC BLEU GM ACC BLEU S-BLEU GM ACC BLEU GM ACC BLEU S-BLEU GM CtrlGen 82.5 20.8 41.4 83.4 19.4 31.4 37.0 73.6 18.9 37.0 73.3 18.9 30.0 34.6 CAAE 78.9 19.7 39.4 79.3 18.5 28.2 34.6 72.2 19.5 37.5 72.1 18.3 27.4 33.1 ARAE 78.3 23.1 42.4 78.5 21.3 32.5 37.9 72.8 22.5 40.4 71.5 20.4 31.6 35.8 BT 81.4 20.2 40.5 86.3 24.1 35.6 41.9 74.1 21.0 39.4 75.9 23.1 34.2 39.1 DRLST 83.7 22.8 43.7 85.0 23.9 34.9 41.4 74.9 22.0 40.5 75.7 21.9 33.8 38.3 IDEL 78.1 20.3 39.8 79.1 20.1 27.5 35.1 72.0 19.7 37.7 72.4 19.7 27.1 33.8 IDEL 83.9 23.0 43.9 85.7 24.3 35.2 41.9 75.1 22.3 40.9 75.6 23.3 34.6 39.4 Table 1: Performance comparison of text DRL models.",
"only prior method that infers the text style embeddings.",
"Table 3 shows a larger distribution gap between positive and negative style embeddings with IDEL than with DRLST, which demonstrates the proposed IDEL has better style information expression in the style embedding space.",
"The comparison between IDEL and IDEL supports the effectiveness of our MI upper bound minimization.",
"For style transfer, we encode two sentences into a disentangled representation, and then combine the style embedding from one sentence and the content embedding from another to generate a new sentence via the generator p ( x | s , c ) .",
"For conditional generation, we set one of the style or content embeddings to be fixed and sample the other part from the latent prior distribution, and then use the combination to generate text.",
"Since most previous work only embedded the content information, for fair comparison, we mainly focus on fixing style and sampling context embeddings under the conditional generation setup.",
"To measure generation quality for both tasks, we test the following metrics (more specific description is provided in the Supplementary Material).",
"Style Preservation: Following previous work (Hu et al., 2017; Shen et al., 2017; John et al., 2019), we pre-train a style classifier and use it to test whether a generated sentence can be categorized into the correct target style class.",
"Content Preservation: For style transfer, we measure whether a generation preserves the content information from the original sentence by the self-BLEU score (Zhang et al., 2019, 2020).",
"The self-BLEU is calculated between one original sentence and its style-transferred sentence.",
"Generation Quality: To measure the generation quality, we calculate the corpus-level BLEU score (Papineni et al., 2002) between a generated sentence and the testing data corpus.",
"Geometric Mean: We use the geometric mean (GM) (John et al., 2019) of the above metrics to obtain an overall evaluation metric of representive-ness of DRL models.",
"We compare our IDEL with previous state-of-the-art methods on Yelp and Personality Captioning datasets, as shown in Table 1.",
"The references to the other models are mentioned in Section 5.3.",
"Note that the original BackTranslation (BT) method (Lample et al., 2019) is a AutoEncoder framework, that is not able to do conditional generation.",
"To compare with BT fairly, we add a standard Gaussian prior in its latent space to make it a variational auto-encoder model.",
"From the results in Table 1, ARAE performs well on the conditional generation.",
"Compared to ARAE, our model performance is slightly lower on content preservation (BLEU).",
"In contrast, the style Content Source Style Source Transferred Result I enjoy it thoroughly! never before had a bad experience at the habit until tonight.",
"classification score of IDEL has a large margin above that of ARAE.",
"The BackTranslation (BT) has a better performance on style transfer tasks, especially on the Yelp dataset.",
"Our IDEL has a lower style classification accuracy (ACC) than BT on the style transfer task.",
"However, IDEL achieves high BLEU on style transfer, which leads to a high overall GM score on the Personality-Captioning dataset.",
"On the Yelp dataset, IDEL also has a competitive GM score compared with BT.",
"The experiments show a clear trade-off between style preservation and content preservation, in which our IDEL learns more representative disentangled representation and leads to a better balance.",
"Besides the automatic evaluation metrics mentioned above, we further test our disentangled representation effectiveness by human evaluation.",
"Due to the limitation of manual effort, we only evaluate the style transfer performance on Yelp datasets.",
"The generated sentences are manually evaluated on style accuracy (SA), content preservation (CP), and sentence fluency (SF).",
"The CP and SF scores are between 0 to",
"5. Details are provided in the Supplementary Material.",
"Our method achieves better style and content preservation, with a little performance sacrifice on sentence fluency.",
"Table 4 shows three style transfer examples from IDEL on the Yelp dataset.",
"The first example shows three sentences transferred with the style from a given sentence.",
"The other two examples transfer each given sentence based on the styles of three different sentences.",
"Our IDEL not only transfers sentences into target sentiment classes, but also ACC BLEU S-BLEU GM LVAE 52.1 24.7 20.8 29.9 LVAE + I ( s ; y ) 86.1 23.3 16.4 32.0 LVAE + I ( x ; c ) 50.2 24.0 36.3 34.7 IDEL 79.1 20.1 27.5 35.1 IDEL 85.5 24.0 35.0 41.5 IDEL 85.7 24.3 35.2 41.9 Table 6: Ablation tests for style transfer on Yelp.",
"information ( e.g. , the degree of the sentiment).",
"In addition, we conduct an ablation study to test the influence of different objective terms in our model.",
"We re-train the model with different training loss combinations while keeping all other setups the same.",
"In Table 1, IDEL surpasses IDEL (without MI upper bound minimization) with a large gap, demonstrating the effectiveness of our proposed MI upper bound.",
"The vanilla VAE has the best generation quality.",
"However, its transfer style accuracy is slightly better than a random guess.",
"When adding I ( s ; y ) , the ACC score significantly improves, but the content preservation (S-BLEU) becomes worse.",
"When adding I ( c ; x ) , the content information is well preserved, while the ACC even decreases.",
"By gradually adding MI terms, the model performance becomes more balanced on all the metrics, with the overall GM monotonically increasing.",
"Additionally, we test the influence of the stochastic calculation of R j in Algorithm 1 (IDEL) with the closed form from Theorem 3.1 (IDEL ).",
"The stochastic IDEL not only accelerates the training but also gains a performance improvement relative to IDEL .",
"We have proposed a novel information-theoretic disentangled text representation learning framework.",
"Following the theoretical guidance from information theory, our method separates the textual information into independent spaces, constituting style and content representations.",
"A sample-based mutual information upper bound is derived to help reduce the dependence between embedding spaces.",
"Concurrently, the original text information is well preserved by maximizing the mutual information between input sentences and latent representations.",
"In experiments, we introduce several two-sample test statistics to measure label-embedding correlation.",
"The proposed model achieves competitive performance compared with previous methods on both conditional generation and style transfer.",
"For future work, our model can be extended to disentangled representation learning with non-categorical style labels, and applied to zero-shot style transfer with newly-coming unseen styles.",
"This work was supported by NEC Labs America, and was conducted while the first author was doing an internship at NEC Labs America."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"abstain",
"other",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"objective",
"objective",
"objective",
"objective",
"objective",
"result",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"other"
] |
[
"Approaches to Grounded Language Learning typically focus on a single task-based final performance measure that may not depend on desirable properties of the learned hidden representations, such as their ability to predict salient attributes or to generalise to unseen situations.",
"To remedy this, we present GROLLA, an evaluation framework for Grounded Language Learning with Attributes with three subtasks: 1) Goal-oriented evaluation; 2) Object attribute prediction evaluation; and 3) Zero-shot evaluation.",
"We also propose a new dataset CompGuessWhat?!",
"as an instance of this framework for evaluating the quality of learned neural representations, in particular concerning attribute grounding.",
"To this end, we extend the original GuessWhat?!",
"dataset by including a semantic layer on top of the perceptual one.",
"Specifically, we enrich the VisualGenome scene graphs associated with the GuessWhat?!",
"images with abstract and situated attributes.",
"By using diagnostic classifiers, we show that current models learn representations that are not expressive enough to encode object attributes (average F1 of 44 . 27 ).",
"In addition, they do not learn strategies nor representations that are robust enough to perform well when novel scenes or objects are involved in gameplay (zero-shot best accuracy 50 . 06% ).",
"Several grounded language learning tasks have been proposed to capture perceptual aspects of language (Shekhar et al., 2017; Hudson and Manning, 2019; Suhr et al., 2019; Agrawal et al., 2018).",
"However, the advances in this field have been primarily driven by the final performance measures and less on the grounding capability of the models.",
"In fact, in some cases, high-performance models exploit dataset biases to achieve high scores on the final task (Zhang et al., 2016; Agrawal et al., 2016).",
"In Turn Question Answer 1 is it an appliance?",
"the literature, several methods have been proposed to analyse what kind of information is captured by neural network representations (Kadar et al., 2017; Belinkov and Glass, 2019).",
"Most of these works examine the hidden state representations learned by models trained on only textual data.",
"However, many aspects of human semantic representations are grounded in perceptual experience (Andrews et al., 2009; Riordan and Jones, 2011).",
"This paper explores the idea that visually grounded representations ought to be a result of systematic composition of grounded representations (Harnad, 1990).",
"For instance, the understanding of the word microwave is grounded in perception of objects with specific attributes such as shape, colour, and size see Figure 1 for an example.",
"Therefore, investigating whether the representations learned by a model exhibit forms of attribute composition is beneficial for assessing model interpretability and generalisation.",
"In this work, we propose GROLLA a multitask evaluation framework for Grounded Language Learning with Attributes that expands a goal-type accessories has shaft has handle open black red center Umbrella has eyes has 2 legs has 2 arms has mouth little girl Person Figure 2: CompGuessWhat?!",
"oriented evaluation based on the standard final task measure, with two auxiliary tasks: 1) Object attribute prediction (AP), and 2) Zero-shot evaluation (ZS).",
"The attribute prediction task is designed to evaluate the extent to which the model's latent representations associated with objects are useful for predicting their attributes.",
"The prediction performance on this task can be related to a degree of compositionality of the learned representations.",
"We adopt a behavioural, i.e., task-driven, approach to assessing aspects of compositionality for visually grounded representations, whereby the extent to which a representation is compositional depends on:",
"(a) its ability to predict object attributes, and",
"(b) its ability to generalise to novel contributions of object attributes.",
"To support",
"(b), we design a zero-shot evaluation that measures the extent to which the learned representations can be reused in a task involving objects unseen during training.",
"By optimising for both the final end-goal measure as well as the auxiliary tasks, we aim to drive the design of models that can solve the task more reliably and whose representations are easier to interpret as a result of being a composition of visual attributes.",
"This paper presents three main contributions: (1) We define GROLLA a multi-task evaluation framework for grounded language learning that augments the final end-goal measure(s) with auxiliary tasks aimed at assessing the degree of attribute grounding of the model's representations; (2) We propose an instance of this multi-task evaluation framework, namely CompGuessWhat?!",
"; and (3) We evaluate state-of-the-art models using the CompGuessWhat?!",
"dataset.",
"The evaluation shows that models with high performance in the end-goal task are not able to reliably predict the attributes of given objects and do not generalise to examples with unseen object categories.",
"CompGuessWhat?!",
"is a benchmark of 65 , 700 dialogues (see Section 3).",
"It is based on Guess-What?!",
"(de Vries et al., 2017) dialogues and enhanced by including object attributes coming from resources such as VISA attributes (Silberer and Lapata, 2012), VisualGenome (Krishna et al., 2017) and ImSitu (Yatskar et al., 2016).",
"Our evaluation framework for Grounded Language Learning tasks is based on three different sub-tasks: 1) Goal-oriented evaluation; 2) Object attribute prediction evaluation; 3) Zero-shot evaluation.",
"Goal-oriented evaluation We evaluate the models according to the multi-modal task that they have to solve, which can generally be categorised as classification or generation.",
"Classification tasks such as Visual Question Answering (Antol et al., 2015) or Visual Natural Language Inference (Suhr et al., 2019) involve predicting the correct label for a given example whose performance is measured in terms of predictive accuracy.",
"In generative tasks, such as Image Captioning (Bernardi et al., 2016), the model has to learn to generate a sequence of labels for a given input data whose performance measure is BLEU (Papineni et al., 2002).",
"Object attribute prediction evaluation We support the goal-oriented evaluation with the attribute prediction auxiliary task related to assessing the degree of compositionality of the representations learned for a specific task.",
"With an attribute prediction task, we can assess whether the learned representations capture what we think they should, in terms of object attributes, rather than spurious correlations.",
"The idea of using object attributes as an auxiliary task follows from the Characteristic Feature Hypothesis (Hamp-ton, 1979) according to which every concept category has a set of defining features , which provide a criterion for judging which objects are category members, and which are not.",
"Therefore, the higher the accuracy in the attribute prediction task, the more the representations learned by the model are composed of the set of attributes of the objects.",
"Zero-shot Evaluation Via the attribute prediction task, we can assess the ability of latent representations to recover some of the attributes associated with their object category.",
"Assuming that the model has learned to represent these attributes, we hypothesise that it should solve the original task even when objects that have never been seen during training are involved.",
"In our evaluation framework, inspired by other multi-task evaluation frameworks (Wang et al., 2018; McCann et al., 2018; Wang et al., 2019; Shuster et al., 2019), we define Grounded Language Learning with Attributes ( GROLLA ) as the final score assigned to the model.",
"It is computed as macro-average of the metrics over all tasks.",
"We define the GROLLA score for convenience only and we underline the importance of having multiple scores for assessing different model abilities.",
"In this work, we present CompGuessWhat?!",
"as a dataset implementing this evaluation framework.",
"Thanks to the high overlap between the image set of several datasets (Lu et al., 2019), future work will extend it to other grounded language learning tasks such as image captioning and visual navigation.",
"CompGuessWhat?!",
"is an instance of our evaluation framework that is based on a guessing game (Steels, 2015), which can be viewed as a first step in a curriculum of language games for artificial agents.",
"It involves two agents, a scene, and a target object: the Questioner asks questions in order to identify the target object in a scene, while the Oracle knows the target object and has to answer the questions.",
"A multi-word guessing game requires two essential properties for grounded language learning: 1) the ability to generate discriminative questions aimed at narrowing down the search space (Natural Language Generation), and 2) the ability to understand the information provided so far during the game and exploit it to guess the target object (Natural Language Understanding).",
"CompGuessWhat?!",
"extends the GuessWhat?!",
"dataset (de Vries et al., 2017) to promote the study of attribute -grounded language representations.",
"The original GuessWhat?!",
"dataset is extended with a semantic layer on top of the perceptual layer (i.e., images).",
"This layer consists of a collection of intentional and extensional attributes of the objects in the reference image (Figure 2).",
"We enrich the VisualGenome (Krishna et al., 2017) scene graphs associated with the GuessWhat?!",
"images with several attributes coming from resources such as VISA (Silberer and Lapata, 2012) and ImSitu (Yatskar et al., 2016).",
"Unfortunately, not all the GuessWhat?!",
"images are included in VisualGenome.",
"We were able to reuse 40 .",
"79% of the original GuessWhat?!",
"dialogues for a total of 65 , 700 dialogues (additional information can be found in the related Appendix A.1).",
"By relying on this set of attributes, we define an attribute prediction evaluation to assess the extent to which the learned neural representations can encode the attributes specified during the dialogue.",
"In order to determine the generalisation power of the learned representations and their ability to be transferred, we propose a novel zero-shot learning set of reference games involving target object belonging to an unseen object category.",
"The dataset and the code associated with this paper can be found online 1 .",
"Psycholinguistically-motivated attributes We extend the set of attributes for every object category in MSCOCO with psycholinguistically-motivated semantic representations based on the McRae Norms (McRae et al., 2005) developed by Silberer and Lapata (2012).",
"We use only the subset of so-called abstract attributes, and ignore attributes from the original set that can change depending on the reference image (e.g., shape, texture, etc.).",
"We use the WordNet synset identifier (e.g., person.n.01) associated with a given MSCOCO category (e.g., person) to automatically associate its corresponding abstract attributes with a specific object instance.",
"However, very often several VisualGenome objects have a synset associated with a class that is a hyponym of the MSCOCO category synset.",
"Therefore, we rely on the Wu-Palmer similarity (Wu and Palmer, 1994) to find the best match between the VisualGenome synset and the MSCOCO category synset (with a similarity threshold of 0 . 75 chosen by using as reference the distance between the synset of person and woman ).",
"The intuition behind this heuristic is that we assume that a hyponym will inherit the abstract attributes of its hypernym.",
"Affordances & Behaviours We extract the semantic roles associated to specific object categories using the ImSitu dataset (Yatskar et al., 2016), in order to include affordances and behaviours associated with every object category.",
"An object category is associated with a behaviour every time it appears as the agent of a given predicate.",
"For in-1 https://compguesswhat.github.io stance, the food mixer [agent] blends fruit, where the behaviour is the food mixer's ability to blend something.",
"We also consider affordances associated with a given category and divide them into two categories: 1) can be , every predicate having the object category as item , coagent , vehicle semantic role; 2) used to , every predicate having the object category as tool , heatsource , object .",
"For example, in the statement the person opens the oven [item] an affordance can be intended as the fact that an oven can be opened.",
"These attributes extend the set of abstract attributes.",
"The abstract attributes do not depend on the reference image so they can be reused in other contexts as well.",
"Situated attributes Since the images contained in GuessWhat?!",
"come from the MSCOCO dataset (see Figure 1 for an example), some of them are included in the VisualGenome (Krishna et al., 2017) dataset, which is composed of rich scene graphs for every image.",
"In particular, we veri-fied that 27 , 155 images from the GuessWhat?!",
"dataset are also contained in VisualGenome.",
"However, due to the presence of possible visual elements, the VisualGenome images are not the same as the MSCOCO ones.",
"We use a heuristic approach based on both Intersection over Union (IoU) and language-only features to match the object bounding boxes between the two images.",
"We report more details about the algorithm in Appendix A.2.",
"The set of object attributes from VisualGenome (attribute types, colour, size, etc.) and location/positional attributes (one of top/bottom/left/right/centre, based on bounding box location) make up the situated attributes , which are specific to the reference image.",
"As a final step, due to the image mismatch, we decided to include the original GuessWhat?!",
"object annotations in the VisualGenome graph in case a GuessWhat?!",
"object cannot be mapped to a VisualGenome one.",
"By doing this, we have access to the MSCOCO category of the object from which we can recover all its abstract attributes.",
"times the guesser model can select the correct target object among the candidate objects, given the dialogue generated so far.",
"Due to the importance of this language game for NLU and NLG model skills, we decide to keep the guesser accuracy as a reference metric to assess the ability of the questioner to play the game.",
"However, unlike the original dataset evaluation, we make sure that the score is evaluated ignoring duplicated dialogues.",
"3 4.2 Attribute Prediction Evaluation In a sequential guessing game like the one in Figure 1, we regard the representation for the last turn of the dialogue as a composition or aggregation of all the attributes specified so far.",
"Therefore, we can use it to predict with high accuracy the attributes associated with a specific target object because it should encode the information needed to correctly discriminate the target from all the other objects in the scene.",
"In the dialogue of Figure 1, when the model generates a representation for the last turn of the conversation (i.e., Q: Is it the microwave? A: Yes), it should encode the fact that it is an appliance, it is not the oven and it is the mi-crowave, allowing the agent to guess the target object correctly.",
"By playing several guessing games that have a microwave as the target object, the agent should learn a representation of microwave that is expressive enough to correctly discriminate a microwave from all the other objects in a scene.",
"In this setup we are not assuming that the model has a single representation for the concept of microwave; rather the concept of microwave develops from aggregating multimodal information related to microwaves across the situations in which the object is experienced (Barsalou, 2017).",
"In the context of CompGuessWhat?!",
", every successful dialogue involving a microwave as the target object will be considered as an experience .",
"We are interested in understanding whether the dialogue state representation generated by a neural model for the last turn of the dialogue can encode the attributes of the target object specified during the dialogue.",
"To do so, we define four attribute prediction tasks.",
"For every target object we predict the corresponding vector composed of: 1) abstract attributes only ( A ); 2) situated attributes only ( S ), 3 In the test dataset multiple conversations are associated with the same (image, target object) pair.",
"Therefore, we want the pair (image, target object) to be considered only once in the accuracy evaluation.",
"3) the union of abstract and situated attributes ( AS ), and 4) location attributes ( L ) such as center , top , bottom , right and left .",
"After training the model on the original GuessWhat?!",
"dataset, we can generate dialogue representations corresponding to all the CompGuessWhat?!",
"successful games.",
"Then, we can train a diagnostic classifier that predicts the attributes associated with a given object category using the dialogue hidden representation generated for a given game as features.",
"We hypothesise that a model that has learned grounded representations that are expressive enough to correctly guess the target object should retain the relevant features to predict its attributes.",
"We treat the attribute-prediction problem as a multi-label classification task.",
"We implement our diagnostic classifier as a linear transformation parameterised by a weight matrix R d d d a (where d d is the dialogue hidden state size and d a is the number of attributes to be predicted) followed by a sigmoid activation function.",
"We use a sigmoid activation function because it models a Bernoulli distribution.",
"The diagnostic classifier outputs d a logits where each of them models the probability P ( y k = 1 | d ) (where d is dialogue state represen-tation), one for each attribute y k to be predicted.",
"To mitigate a possible class-imbalance problem, we apply a filtering strategy to remove underrepresented attributes from our attribute set, which is a similar technique used to deal with out-of-vocabulary words.",
"We also decided to avoid using class-weighting so that we could evaluate the power of the learned representations with simple linear classifiers as done in previous work using probing classifiers (Belinkov and Glass, 2019).",
"Please refer to Appendix A.3 for details about the procedure to derive the reference set of attributes.",
"We use the CompGuessWhat?!",
"dataset split as the reference for our training and evaluation setup: we train the diagnostic classifiers on CompGuess-What?!",
"gold training dialogues and evaluate their performance on the test dialogues using the validation set dialogues for early stopping.",
"We consider Precision, Recall, and F1-measure for multi-label classification (Sorower, 2010) (computed as macro-average ) and evaluate them with 0 .",
"5 as the threshold value for the sigmoid activation function (selected after considering the models performance using threshold values of 0 . 75 and 0 . 9 ).",
"We report additional details in Appendix A.3.",
"Assuming that the model has learned to compose concepts during the turns of the dialogue, we hypothesise that it should also be able to use these representations to play games involving target objects that belong to categories that have never been seen before.",
"For example, humans can discriminate between a dolphin and a dog even though they might not know what it is called.",
"The measure presented in this section has the potential to demonstrate whether current models lack the ability to systematically generalise to new instances that are composed of attributes learned during training.",
"In order to assess the true generalisation power of the trained agents, we define a zero-shot learning scenario based on the nocaps dataset images (Agrawal et al., 2018).",
"The nocaps dataset is composed of 3 evaluation splits: 1) in-domain : annotated objects belong to MSCOCO categories only; 2) near-domain : contains a mixture of MSCOCO and OpenImages objects; 3) out-of-domain : contains only OpenImages object categories.",
"Since the number of categories in the original GuessWhat?!",
"dataset (80) is lower than the number of categories in the Open Images dataset (660) contained in nocaps there are many categories that are never seen during training.",
"Therefore, we can create zero-shot learning games by considering a target object for the game whose category has never been seen during training.",
"We define an automatic procedure to generate the set of reference games for the zero-shot learning setup using the nocaps images.",
"We split the nocaps images into near-domain or out-of-domain.",
"An image is considered near-domain if it contains at least one object whose category belongs to MSCOCO.",
"In contrast, we consider the image out-of-domain if it does not contain any MSCOCO category objects.",
"This procedure generates a dataset of 19 , 179 near-domain reference games and 18 , 672 out-of-domain reference games.",
"More details about the automatic procedure as well as the resulting reference set of games can be found in Appendix A.4.",
"As a last step of our evaluation framework, we evaluate the performance of the state-of-the-art models in the zero-shot gameplay setup.",
"For this task, the trained models need to interact with each other and generate dialogues given the pair (image, target object).",
"As an evaluation metric for this task, we consider gameplay guesser accuracy for the near-domain (ND-Acc) and out-of-domain (OD-Acc) reference games.",
"Guesser accuracy We evaluate the GDSE and DeVries models in gameplay mode using the set of reference games provided in CompGuessWhat?!",
".",
"As shown in Table 1, the results are in line with the performance of the models on the original Guess-What?!",
"dataset (de Vries et al., 2017; Shekhar et al., 2019) confirming that our filtering strategy did not affect the complexity of the task.",
"Attribute Prediction We use the CompGuess-What?!",
"benchmark to compare several dialogue state representations: DeVries-SL : the representation learned by the Questioner model presented in (de Vries et al., 2017) that generates the question tokens conditioned on the image features and is trained using Supervised Learning (SL).",
"DeVries-RL : the representations learned by the Questioner model presented in (de Vries et al., 2017), fine-tuned using the Reinforcement Learning procedure proposed in (Strub et al., 2017).",
"GDSE-SL : the grounded dialogue state learned by a seq2seq model trained using the multi-task Learning procedure in (Shekhar et al., 2019).",
"GDSE-CL : the grounded dialogue state learned by the Questioner model used in GDSE-SL , fine-tuned with the Collaborative Learning procedure presented in (Shekhar et al., 2019).",
"GDSE-SL-text : the learned LSTM (Hochreiter and Schmidhuber, 1997) dialogue encoder of the GDSE-SL model.",
"GDSE-CL-text : 4 the learned dialogue encoder of the GDSE-CL model.",
"In order to control for possible bias in our task, we consider unimodal (Thomason et al., 2019a) as well as random attribute predictors: GloVe : a dialogue is represented as the average of the GloVe embeddings associated with each word (Pennington et al., 2014).",
"ResNet : uses the latent representation of the reference scene generated by a ResNet152 as proposed in Shekhar et al. (2019).",
"Random : samples d a scores from U (0 , 1) where samples are independent from each other.",
"We incorporate this baseline as a lower bound performance on the attribute prediction task.",
"With the AP task, we try to answer the following question: Do the representations associated with the target object encoding provide useful information that can be exploited to predict the object attributes correctly?",
"We assume that, due to the na-ture of the CompGuessWhat?!",
"games, the final dialogue state representation should encode relevant features of the target object.",
"So, a high gameplay accuracy should correlate with a high AP score.",
"Table 1 summarises the results of the attribute prediction task evaluated on the CompGuessWhat?!",
"4 We could use the dialogue encoder of the GDSE models only due to their modular architecture.",
"It was not possible to properly separate the dialogue encoder from the visual representation in the DeVries models.",
"test games.",
"As the average best model performance was only 44 .",
"27 , far from ceiling, our hypothesis is only partially supported.",
"In particular, the models having the highest guesser accuracy, GDSE-CL and GDSE-SL , seem to learn better representations than unimodal baselines GloVe and ResNet , confirming the importance of multi-modal training for this task.",
"There is also a gap in performance between the GDSE and DeVries models.",
"This might be related to the multi-task learning strategy used by GDSE models that favours the emergence of more expressive representations than the ones learned by DeVries models which are trained in isolation.",
"By comparing the enhanced versions GDSE-CL and DeVries-RL with the less sophisticated ones, GDSE-SL and DeVries-SL , respectively, we observe that, despite their higher guesser accuracy, these models do not have any advantage in terms of the AP task.",
"We believe that this is because the Collaborative training strategy (for GDSE-CL ) and Reinforcement Learning (for DeVries-RL ) are optimising end-goal performance while sacrificing the expressiveness of the representations.",
"Finding a way to encode task-specific representations and generalise them to learn abstract representations becomes an important research direction to improve on this task.",
"As an additional ablation, we compared the representations learned by the LSTM module used by GDSE to encode the dialogue ( GDSE-*-text ) with their grounded dialogue state counterpart.",
"Differences in terms of F1 are minimal, confirming that the heavy lifting is done by the textual representations and it is not clear how well the grounded dialogue state retains the visual information.",
"Another confirmation of this issue is provided by the results in terms of location attributes prediction.",
"Performance in this task for all the models is around 40 meaning that both VGGNet and ResNet features (used for DeVries and GDSE , respectively) are not able to recover fine-grained object information.",
"This result sheds light on the ability of these models to ground the textual data in perceptual information of the reference scene.",
"We believe that models should be able to co-ground one modality with the other and, as a result, learn more expressive grounded representations.",
"Zero-shot Evaluation Results are summarised in Table 1; the most striking observation is that all models struggle with this dataset (guesser accuracy is barely above 40 ), although arguably humans would be able to solve the task despite their unfamiliarity with a specific object.",
"Indeed, in this zero-shot scenario, reusing previously learned attributes that are shared among the objects or leveraging mutual exclusivity (Markman and Wachtel, 1988) would result in a successful gameplay.",
"Even the most accurate model in the CompGuess-What?!",
"guesser evaluation performs poorly in this zero-shot setup (see Figure 3 for an example).",
"We attribute this drop in performance to the way that these models represent objects.",
"In particular, they all rely on category embeddings , i.e., latent representations associated to specific object categories (refer to (Shekhar et al., 2019; de Vries et al., 2017) for more details).",
"In the case of ZS evaluation, when an object is unknown, its category embedding is also not available.",
"This is true for both DeVries and GDSE models; it seems that GDSE models suffer more than DeVries models possibly due to overfitting.",
"On the other hand, we aim to learn object representations which are not associated with manually-provided categories but are obtained by playing the game and that encode both abstract and situated attributes.",
"Once again, we find that models optimised using Reinforcement Learning seem to learn a better game strategy that results in higher performance on both near-domain and out-of-domain games.",
"To better understand the quality of the generated dialogues, we classify each type of question according to a pre-defined set of types based on (Shekhar et al., 2019) (please refer to Appendix A.5 for a detailed description and a detailed summary of the evaluation results).",
"We noticed that the DeVries models generate dialogues with 70% of their turns comprising location questions (e.g., is it the person on the right?) compared to 20% for GDSE models.",
"We argue that to tackle zero-shot scenes, a model should instead learn features useful to discriminate the target object without relying on locations.",
"Of course, in some reference scenes, location questions are still useful attributes used by humans when playing the game.",
"In addition, asking location questions is an effective strategy because the Oracle has access to positional information that can be used to provide reliable answers but does not have any category embeddings for the target object.",
"task evaluation datasets proposed to mitigate the biases of task-specific datasets (Wang et al., 2018; McCann et al., 2018; Wang et al., 2019).",
"Despite their multi-task nature, these datasets focus on text-only data making the resulting models unable to learn meaning representations which are grounded in perceptual experience (Andrews et al., 2009; Riordan and Jones, 2011).",
"Another downside is that these benchmarks focus only on end-goal metrics, i.e., are not informative on what the model has learned.",
"Going beyond the end-goal metric is fundamental for designing models that are more generalisable and interpretable.",
"By introducing the attribute prediction task in our framework, we assess whether the learned representations are expressive enough to predict the attributes of relevant objects in the scene.",
"Also, we propose a zero-shot evaluation where the model has to generate predictions for examples that have never been seen during training, thus providing a way to understand the generalisation power of the learned representations.",
"Grounded Language Learning Evaluation Several grounded language learning tasks have been proposed in the literature that can be divided into discriminative (Shekhar et al., 2017; Hudson and Manning, 2019; Suhr et al., 2019) and generative grounded language learning tasks (Xu et al., 2015; Agrawal et al., 2018).",
"Recent works proposed models trained in a multi-task fashion by exploiting several language/vision tasks.",
"The dodecaDialogue task (Shuster et al., 2019) proposes twelve dialogue tasks, among which there are two language/vision tasks in which the agent has to generate a response for a given context.",
"Other works try to exploit multi-task learning to improve on single-task model performance in discriminative tasks (Pramanik et al., 2019; Lu et al., 2019).",
"Unfortunately, implementing multi-task learning using different datasets results is cumbersome (Subramanian et al., 2018).",
"We propose an evaluation framework that can be applied in the context of a single task and dataset (e.g. GuessWhat?! ) that allows to understand the extent to which the model can learn useful representations for the task at hand.",
"Inspecting the learned representations is important because, due to biases in the datasets, models might learn spurious correlations between input and output rather than actual grounding capabilities.",
"For instance, in Visual Question Answering, questions starting with What colour are have white as a correct answer 23% of the time; models learn to memorise this sort of association rather than using the visual information (Zhang et al., 2016; Agrawal et al., 2016).",
"This issue calls for a model evaluation aimed at inspecting the model representations as well as how these representations are used.",
"The GQA (Hudson and Manning, 2019) dataset goes in this direction.",
"It presents a Visual Question Answering dataset where images are supported by rich semantic annotations in the form of scene graphs.",
"The GQA task requires the model to select an answer among a set of candidates.",
"However, we advocate the importance of tasks that involve both Natural Language Understanding (NLU) and Natural Language Generation (NLG) skills in a curriculum for grounded language learning.",
"There are significant differences concerning the proposed auxiliary tasks as well.",
"First of all, GQA's tasks are specifically designed around the VQA tasks to make sure that the model is consistent and plausible .",
"It does not however tell us what the model's learned representations are encoding.",
"We propose the AP task as a diagnostic task aimed at better understanding the learned neural representations (Belinkov and Glass, 2017; Con-neau et al., 2018; Peters et al., 2018; Tenney et al., 2019).",
"In addition, going beyond simple object classification is considered beneficial for vision systems (Farhadi et al., 2009) because it allows generalisation across object categories, not just across instances within a category.",
"However, we believe that to truly assess the generalisation ability of a model, object attributes have to be used for the downstream task, which is not necessarily needed in object classification tasks.",
"With the ZS evaluation, we investigate the ability of the models to exploit more fine-grained visual attributes which is important for models able to learn from few examples and easily transfer to new domains.",
"Compositionality Evaluation Andreas (2019) presents a method to estimate the degree of compositionality of neural representations by using an oracle compositional model aware of the compositional structure (i.e., a derivation) of the input data.",
"Building a reference oracle is easy for synthetic scenes (as in Andreas (2019)) but is a significant challenge for real-world scenes.",
"Previous work has studied compositionality in real-world scenes for visual concept composition (Misra et al., 2017) and image captioning (Nikolaus et al., 2019).",
"In our benchmark CompGuessWhat?!",
", we use real-world scenes from the MSCOCO (Lin et al., 2014) and OpenImages (Kuznetsova et al., 2018) datasets.",
"Our AP task is related to measuring compositionality.",
"It relies on image annotations in the form of intensional and extensional attributes as a reference structure for the objects in the scene.",
"We proposed CompGuessWhat?!",
"as an implementation of GROLLA, a multi-task evaluation framework for Grounded Language Learning with Attributes.",
"We found that the best performing model achieves a GROLLA score of 50 .",
"06% ; notably this model's out-of-domain accuracy is under 30% , as compared to the human performance on the original GuessWhat?!",
"dataset of 90 .",
"2% (de Vries et al., 2017).",
"Clearly, even models with high in-domain gameplay success rates still have difficulty generalising to new scenarios.",
"In the following, we discuss insights gained from the evaluation and new research directions for this task.",
"attribute representations.",
"We argue that this result calls for new approaches to exploiting and representing textual and visual data.",
"We believe that models should be equipped with a co-grounding operator that fuses the textual and visual modalities.",
"For instance, in the context of CompGuessWhat?!",
", it would be used to learn a representation for the current turn that is influenced by both the language and visual modality.",
"CompGuessWhat?!",
"requires models to learn to combine the co-grounded information provided for every turn.",
"Therefore, we propose that CompGuessWhat?!",
"represents a benchmark dataset for evaluating the design of such an attribute compositionality operator that would be a possible implementation of compositionality `a la Barsalou (2017).",
"In this work, we have shown how our multi-task evaluation framework can be be applied to Guess-What?!",
".",
"However, the same framework could be applied to other multi-modal tasks.",
"For example, in image captioning, the goal-oriented evaluation would be the textual similarity metrics (e.g. BLEU); the attribute-prediction task would use the decoder representation to predict the attributes of the objects in the image (Elliott and Kadar, 2017, e.g.); and the zero-shot setting could leverage the nocaps dataset (Agrawal et al., 2018).",
"Likewise, in the Vision-and-Dialog navigation task (Thomason et al., 2019b), the goal-oriented evaluation is the navigation task; attribute prediction is based on predicting the attributes of the hidden object when the agent decides it is in the correct room, and the zero-shot setting could evaluate model performance on novel combinations of rooms and object types.",
"Finally, from the evaluation presented here, it emerges that these models learn task-specific representations that do not generalise to unseen object categories.",
"We hope that GROLLA and the CompGuessWhat?!",
"data will encourage the implementation of learning mechanisms that fuse task-specific representations with more abstract representations to encode attributes in a more compositional manner.",
"In addition, we will use the CompGuessWhat?!",
"image annotations to design a visual grounding evaluation to assess the ability of the model to attend to the correct objects during the turns of the dialogue.",
"We thank Arash Eshghi and Yonatan Bisk for fruitful discussions in the early stages of the project."
] | [
"abstain",
"objective",
"objective",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"method",
"objective",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"method",
"abstain",
"other",
"method",
"objective",
"other",
"other",
"other",
"abstain",
"method",
"method",
"other",
"objective",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other"
] |
[
"The mapping of lexical meanings to wordforms is a major feature of natural languages.",
"While usage pressures might assign short words to frequent meanings (Zipf's law of abbreviation), the need for a productive and open-ended vocabulary, local constraints on sequences of symbols, and various other factors all shape the lexicons of the world's languages.",
"Despite their importance in shaping lexical structure, the relative contributions of these factors have not been fully quantified.",
"Taking a coding-theoretic view of the lexicon and making use of a novel generative statistical model, we define upper bounds for the compressibility of the lexicon under various constraints.",
"Examining corpora from 7 typologically diverse languages, we use those upper bounds to quantify the lexicon's optimality and to explore the relative costs of major constraints on natural codes.",
"We find that (com-positional) morphology and graphotactics can sufficiently account for most of the complexity of natural codesas measured by code length.",
"Communication through language can be modeled under Shannon's classic communication framework (Shannon, 1948).",
"Under this perspective, linguistic utterances are codes which need to be decoded by a receiver (listener) who is interested in the message (meaning) they encode.",
"Famously, Zipf (1949) posited that language users shape these codes so to accommodate the principle of least effort .",
"The most widely discussed and investigated empirical evidence for this feature is the so-called law of abbreviation , an ostensive negative correlation between word frequency and word length (Zipf, 1935; Bentz and Ferrer-i-Cancho, 2016).",
"Communication effort decreases by encoding frequent messages in shorter words.",
"This correlation, however, is characteristically modest.",
"There are many instances of short low-frequency words, like wen and jib in English, 1 and long frequent words, like happiness and anything .",
"While the lexicon might be shaped by economy of expression, it is clearly not fully optimized for it.",
"There are multiplepossibly competingreasons why this could be the case.",
"First of all, the sequence of speech sounds, signs, or orthographic characters that serve as building blocks in a language comply with specific rules.",
"These are referred to as phonotactics (in the case of speech sounds) and graphotactics (in written language).",
"2 On top of these constraints, the lexicons of many languages of the world re-use subparts of words; these sub-parts can be productively composed to produce new meaningswhich is referred to as morphological composition .",
"This largely determines the family of wordforms associated with a given basic meaningfor instance, given the wordform health and its meaning, the nominal morphology of English readily provides 1 These mean, respectively, a benign tumor on the skin and a triangular sail on a boat.",
"2 All languages impose these constraints on their wordforms, which might be leveraged for production and learnability (Vitevitch and Luce, 1999; Boersma, 1998).",
"the forms for many of its derived meanings, including healthy , unhealthy , healthier , etc.",
"Beyond these well-attested constraints, it might be argued that the negative correlation between the length of a word and its frequency is not the locus of optimization given the economy of expression pressure.",
"Instead, wordforms might be efficiently encoding meanings based on their contextual surprise rather than frequency (Piantadosi et al., 2011).",
"Finally, there is no reason to expect lexicons to be fully optimized for the economy of expressionthis factor might steer languages in a given direction, but there is certainly room for non-compliance.",
"Languages are, after all, not engineered systems but cultural artifacts.",
"In this paper, we examine how marginally non-optimal the lexicon is by taking the vantage point of the law of abbreviation.",
"We develop a method to quantify the role of different linguistic constraints on determining wordforms, and we produce estimates on how compressible the lexicon could be in their absence (including morphology and phono-tactics/graphotactics).",
"We thus define an upper bound for the compressibility of a lexicon optimized purely for word length efficiency.",
"As stated above, our notion of optimality is derived from Zipf's principle of least effort in the form of the law of abbreviation (Zipf, 1949; Mandelbrot, 1953; Ferrer-i-Cancho et al., 2020).",
"However, this is by no means the only theory under which wordforms are optimized for encoding their messages.",
"One influential hypothesis is that languages optimize for uniform information density (Fenk and Fenk, 1980; Aylett and Turk, 2004; Levy and Jaeger, 2007)roughly keeping the amount of information conveyed in a unit of time constant.",
"In an information-theoretic setting, this would be equivalent to maximizing the use of a noisy channel between the speaker and an audiencekeeping the transmission rate close to the channel capacity.",
"Under this view, it is not necessarily the case that words should be as short as possible.",
"Rather, words that are infrequent or typically less predictable in context should be longer and take more time to produceperhaps because the increased duration makes them more robust to noise.",
"Consistent with this perspective, it has been shown that, in production, words with higher information content take longer to pronounce (Bell et al., 2003; Jurafsky et al., 2001; Gahl, 2008).",
"Additionally, words which are typically predictable in context are shorter than words which are less predictable in context (Piantadosi et al., 2011).",
"On another note, a purely coding-theoretically efficient language could make the lexical codes context dependent (Piantadosi et al., 2012), since context often disambiguates words (Dautriche et al., 2018; Pimentel et al., 2020a).",
"Additionally, the meaning or message being conveyed by a given word might bias its form.",
"Within languages, there seems to be a pressure for more semantically similar words to also be more phonologically similar (Monaghan et al., 2014; Dingemanse et al., 2015; Dautriche et al., 2017; Pimentel et al., 2019).",
"Across languages, words for the same referents exhibit detectable patterns in term of their phonological makeup (Blasi et al., 2016), phonotactics (Pi-mentel et al., 2021b), as well as word length (Lewis and Frank, 2016)this is driven by semantic features such as size, quality or complexity.",
"Finally, there is a cross-linguistic tendency for lexicons to place higher surprisal in word-initial segments (van Son and Pols, 2003a,b; King and Wedel, 2020; Pimentel et al., 2021a) making words more constrained in their choice of final segments.",
"These aspects of language might also collide with a purely Zipfian conception of lexicon optimality.",
"In this work, however, we consider optimality exclusively in the Zipfian sense of compressibility, and we ask how far natural language lexicons are from accommodating to this paradigm.",
"We build a number of models that differ in relation to whether they accommodate to the law of abbreviation, to compositional morphology and to graphotactics.",
"The comparison among these systems allows us to explore the extent to which each part of the linguistic system contributes to the overall cost of the linguistic code.",
"It should be noted, though, that the consequences of unmodeled sources of structure in the lexicon (such as persistent sound-meaning associations or the adaptation of the code to surprisal effects) will forcibly be confounded with the overall lack of fit between our models and the data.",
"The morphological cost i.e. the cost of morphology to a code's lengthis associated with the fact that, across many languages, words are often constructed of meaningful sub-parts that are productively reused across the lexicon.",
"Practically, this means that the wordforms of different meanings might not be independent if they overlap in a particular dimension that is captured by the morphology of the language.",
"For instance, most wordforms that express two or more referents of a kind share a word-final suffix -s in English ( towers , cats , ideas , etc).",
"We treat this cost by considering optimal codes where the basic unit in the lexicon is not the word but sub-pieces, as determined by the unsupervised morphological parser Morfessor (Creutz and Lagus, 2007; Smit et al., 2014).",
"3 Under this regime, a word like unmatched is parsed into the tokens un , match , and ed .",
"The graphotactic cost concurrently imposes a set of additional constraints, determining which sequences of grapheme segments can constitute a valid wordform in a given language.",
"While the main driver of these lexical constraints is actually phonotacticswhich imposes rules dictating the possible phoneme sequenceswe focus on graphotactics because our object of study is written language corpora.",
"The degree to which phonotactics and graphotactics mirror each other vary substantially across languages; thus, in this work (which uses corpora from Wikipedia) we make our claims about language in the written modality and leave it to future work to generalize this work to the phonological domain.",
"This could be done by applying the same method to phonemic representations of words.",
"This paper treats the lexicon , which we define as a set of pairs: L = { ( m n , w n ) } Nn =1 .",
"In general, this set will be infinite; m n refers to a lexical meaning , taken from an abstract set M , and w n refers to a wordform , taken from , the Kleene closure of a grapheme alphabet .",
"4 When the exact index is unnecessary in context, we will drop the subscripted n ; and we make use of uppercase letters to refer to random variables (e.g. M or W ) where necessary.",
"We will write meanings in typewriter font, e.g. cat , and wordforms in italics: cat (English), kissa (Finnish).",
"Viewing the lexicon from a coding-theoretic perspective, we consider the mapping from meaning to form as a code : C : M .",
"Every language comes endowed with a natural code C nat , which 3 We also present results using the additional sub-word tokenizers: byte pair encoding (Gage, 1994; Sennrich et al., 2016) and word piece (Schuster and Nakajima, 2012).",
"See Bostrom and Durrett (2020) for a discussion of the tradeoffs of these schemes, in terms of performance and compressibility.",
"4 This alphabet is augmented with an end-of-word symbol.",
"is the observed mapping from lexical meanings to forms.",
"As an example, consider the meaning cat and its Finnish form: we have C nat ( cat ) = kissa .",
"The topic of interest in this paper is the efficiency of language's natural codes.",
"The space of meanings and lexical ambiguity.",
"The space of meanings M is non-trivial to define, but could be operationalized as R d , which is infinite, continuous and uncountable (Pilehvar and Camacho-Collados, 2020).",
"Meanwhile, the space of wordforms is also infinite, but discrete and countable.",
"As such, many meanings m n must be mapped to the same form, resulting in lexical ambiguity.",
"See Pimentel et al. 2020a for a longer discussion on these operationalizations.",
"In this work, though, we do not engage with such ambiguity, considering M as an abstract set of meanings, each of which defined by a distinct wordformi.e. the code C nat is a bijection.",
"A consequence of this strategy is that we take the space of meanings to be infinite, but discrete and countable; we only distinguish as many meanings as there are words, therefore, we end up with a countable number of meanings.",
"Additionally, by considering a distinct meaning m n for each wordform w n in the lexicon, we only consider codes with as much lexical ambiguity as in the original language.",
"5 3.1 Words as Meanings The unigram distribution represents the frequency of each wordform in a text, i.e. the probability of a token without conditioning on context p ( W = kissa ) .",
"In this work, though, we assume the unigram distribution is a distribution over M , e.g. p ( M = cat ) this way we can analyze how changing the code C would affect its efficiency.",
"As stated above, though, we take C nat to be a bijection.",
"Such an assumption implies there is a deterministic function from wordforms to meanings in a specific lexicon C 1 nat ( w ) = m .",
"Probabilistically speaking, we write p ( M = m | W = w ) = 1 (cid:110) m = C 1 nat ( w ) (cid:111) (1) p ( W = w | M = m } = 1 (cid:110) w = C nat ( m ) (cid:111) (2) 5 Lexical ambiguity allows the mapping of multiple meanings to the same wordform and, in doing so, it enables the preferential re-use of short words (Piantadosi et al., 2012).",
"Thus, the mapping of multiple meanings to the same form could be a source of efficiency in the lexicon (Fenk-Oczlon and Fenk, 2008; Ferrer-i-Cancho and Vitevitch, 2018; Casas et al., 2019; Trott and Bergen, 2020; Xu et al., 2020).",
"Nonetheless, we do not treat it explicitly here.",
"This mapping implies p ( M = m n ) = (cid:88) w p ( M = m n , W = w ) (3) = p ( W = w n ) Given this equality, we can reduce the problem of estimating the unigram distribution over meanings p ( m ) to the one over wordforms p ( w ) .",
"3.2 Code-length and optimality As stated above, we assume the unigram distribution to be a distribution over M .",
"We now define the cost of a code as its expected length: cost( C ) = (cid:88) m M p ( m ) | C ( m ) | (4) A smaller cost, then, implies a more efficient code.",
"The famous source-coding theorem of Shannon (1948) gives us a theoretical limit on coding cost: H( M ) cost( C (cid:63) ) < H( M ) + 1 (5) where we define C (cid:63) to be the most efficient code, and where H( M ) is the entropy of distribution p : H( M ) = (cid:88) m M p ( m ) log | | 1 p ( m ) (6) According to the source-coding theorem, if we know the true distribution p over lexical meanings, then we know how to optimally code them.",
"This turns the problem of estimating the efficiency of the lexicon into the one of estimating the entropy of an unknown discrete distribution p , a well-defined task with a pool of previous work (Miller, 1955; Antos and Kontoyiannis, 2001; Paninski, 2003; Archer et al., 2014).",
"Because the distributions over wordforms and meanings are equivalent, we estimate the entropy H( M ) using wordforms: H( M ) = H( W ) = (cid:88) w p ( w ) log | | 1 p ( w ) (7) 3.3 Finite and Infinite Support This section reviews a few technical results as regards the construction of codes from a probability distribution.",
"If p had finite supporti.e. there were a finite set of possible meanings or wordformsa simple Huffman encoding (Huffman, 1952) would give us an optimal code for our lexicon.",
"However, this is not the case p ( w ) has support on all of so we might need a more complex strategy to get such a code.",
"Linder et al. (1997) proved the existence of an optimal encoding for a distribution with infinite support, given that it has finite entropy.",
"Proposition",
"1. If distribution p ( w ) has finite entropy, i.e. H( W ) < , then there exists an optimal encoding for it such that: cost( C (cid:63) ) < H( M ) + 1 .",
"Proof.",
"See Linder et al. (1997).",
"Luckily, under a weak assumption, this is the case for a well-trained language model.",
"Definition",
"1. Language model p ( w ) is -smooth if for all histories h we have p ( EoW | h ) .",
"6 This fairly weak assumption states that partial wordforms have a lowerbound on their probability of ending.",
"As such, there is an upperbound on the probability of a wordform which decreases exponentially with its length.",
"Armed with this assumption, we can now show that any -smooth language model has a finite entropy.",
"Proof.",
"See App.",
"C. Safe-guarded by Propositions 1 and 2, we now train a model to capture the unigram distribution.",
"We will then use this model to estimate the code-length of an optimal lexicon.",
"Zipf's (1935) law states that the frequency of a word in a corpus is inversely proportional to its rank, resulting in a power-law distribution where a small subset of the words dominate the corpus.",
"As such, navely training a character-level model on a language's tokens (i.e. predicting non-contextual wordforms with their natural corpus frequencies) would be unlikely to capture morphological regularities (Goldwater et al., 2011).",
"Furthermore, it would burden the model to learn a mostly arbitrary assignment between form and frequency.",
"As an example, the English verb make is much more common than the nouns cake and lake , even if graphotactically they may be equally probable.",
"A closer inspection of English shows that most frequent words tend to come from closed lexical classes including articles, pronouns, prepositions, and auxiliaries, such as the , of , it and be (Sinclair, 1999).",
"These words tend to be short and manifest fossilized graphotactics (and phonotactics) as well 6 Under this assumption our language model is also consistent, as defined by Welleck et al. (2020)sequences with infinite length have asymptotically zero probability mass. as a more abundant prevalence of otherwise rare segments, such as the voiced and voiceless dental fricatives (orthographically expressed with th ).",
"These rare segments would be overrepresented in such a nave training regime, making it hard for the character-level model to correctly represent the language's graphotactics.",
"In order to address the problem of skewed frequencies, we use a novel neuralization of Goldwater et",
"al.'s (2011) two-stage model to capture the unigram distribution.",
"This model consists of two components: a wordform generator and a token frequency adaptor .",
"The generator is a character-level model which produces wordforms, for which we use an LSTM; 7 this model should place similar probability mass on graphotactically good wordforms, such as make , cake , and lake .",
"Meanwhile, the adaptor sets the frequency with which these wordforms will appear as tokens.",
"Following Goldwater et al., we base our adaptor on the PitmanYor Chinese restaurant process (PYCRP; Pitman and Yor, 1997), which allows the adaptor to model a power-law distribution; this model is then responsible for capturing the fact that make is a more frequent token than cake , and lake .",
"The generative process of our two-stage model is presented graphically in Fig.",
"2. Our generator is a character-level LSTM language model, which generates a potentially infinite number of i.i.d. wordforms { (cid:96) k } Kk =1 .",
"Independently, the PYCRP adaptor assigns each observed token in a dataset to a cluster { z n } Nn =1 .",
"In the literature, the value of z n is the table assignmment of the n th token.",
"These clusters are then used as lookup indices to the wordforms, producing the observed word tokens { w n } Nn =1 where w n = (cid:96) z n .",
"In general N (cid:29) K , so tokens with the same wordform are grouped in few clusters.",
"In this way, the adaptor sets the frequency with which wordforms appear as tokens in a corpus by defining each cluster's probability.",
"Generating Wordforms.",
"As mentioned above, wordforms are sampled i.i.d. from a distribution p over strings defined by the generator.",
"Specifically, this distribution over forms is defined as follows: p ( (cid:96) ) = | (cid:96) | (cid:89) t =1 p ( (cid:96) t | (cid:96) <t ) (8) 7 LSTMs have been shown to be able to model phonotactics well by Pimentel et al. (2020b), and so we expect them to also work well with graphotactics.",
"where (cid:96) is a vector of characters forming a word and (cid:96) t is its t th character.",
"8 Each of these characters is encoded with a lookup vector, producing representations e t R d 1 where d 1 is the embedding size.",
"These embeddings are then used as input to an LSTM (Hochreiter and Schmidhuber, 1997), producing the representations h t R d 2 , where d 2 is the size of the LSTM's hidden layer.",
"The LSTM output is further used to obtain the distribution over potential characters: p ( (cid:96) t | (cid:96) <t ) = softmax( W h t + b ) (9) In this equation, both W R | | d 2 and b R | | are learnable parameters and the zero vector is used as the initial hidden state h 0 .",
"The distribution p , representing the generator, is then used to generate the set of wordforms { (cid:96) k } Kk =1 , which is expected to represent the graphotactics and morphology of the language.",
"Notedly, these wordforms do not explicitly capture any notion of token frequency.",
"9 Adapting Word Frequencies.",
"The adaptor is responsible for modeling the word frequencies, and it has no explicit notion of the wordforms themselves.",
"The PYCRP assigns each token n to a cluster z n .",
"Each cluster z n , in turn, has an associated wordform (cid:96) z n , sampled from the generator.",
"Consequently, all instances in a cluster share the same wordform.",
"The probability of an instance n being assigned to cluster z n is defined as follows: p ( Z n = z n | z <n ) (10) (cid:40) c ( z n ) <n a 1 z n K <n (old cluster) a K <n + b z n = K <n + 1 (new cluster) 8 We note two subscripts are used here: k refers to the k th wordform, while t indexes the t th character in the wordform.",
"In this equation, K <n is the current number of populated clusters; while c ( z n ) <n is the number of instances currently assigned to cluster z n .",
"The PYCRP has two hyperparameters: 0 a < 1 and b 0 .",
"The parameter a controls the rate in which the clusters grow (Teh, 2006), while b controls an initial preference for dispersion.",
"Together, these ensure the formation of a long-tailconcocting a power-law distribution for the cluster frequencies.",
"This property allows a cluster with wordform make , for example, to have an exponentially larger frequency than its graphotactic neighbor cake .",
"Modeling Word Tokens.",
"Finally, given the set of wordforms and the cluster assignments, defining the form associated with a token is deterministic.",
"Since each cluster only contains instances of one wordform, the form of a token is defined looking up the label of the cluster it was assigned to (cid:96) z n : p ( W n = w n | z n , (cid:96) ) = 1 { w n = (cid:96) z n } (11) This way, the adaptor captures the frequency information of the words in the corpuswhereas the generator can focus on learning the language's graphotactics and morphology.",
"Model training.",
"Unfortunately, we cannot directly infer the parameters of our model with a closed form solution.",
"We thus use a solution akin to expectation maximization (Wei and Tanner, 1990): We freeze our LSTM generator while learning the PYCRP parameters, and vice versa .",
"The PYCRP is trained using Gibbs sampling.",
"For each token, we fix all cluster assignments z n except for one z n .",
"This cluster is then re-sampled from the marginal p ( Z n = z n | z n , (cid:96) , w n ) , where we have access to w n since it is an observed variable.",
"During this optimization small clusters may vanish, and new clusters z n = K + 1 (previously with no samples) may be created.",
"This procedure, thus, may also produce new sets of wordforms { (cid:96) k } K (cid:48) k =1 , composed of the populated clusters' labels (where K (cid:48) is the new number of clusters).",
"We assume the distribution of these wordformswhich have dampened frequenciesto be more balanced than in the original full set of word tokens.",
"The LSTM is trained using stochastic gradient descent, minimizing the cross-entropy of precisely this set of cluster's wordforms.",
"As such, it is expected to be a more representative model of a language's graphotactics; the irregular common words are less dominant in this training set.",
"We give a longer explanation of our model training procedure, together with the used hyperparameters, in App.",
"Despite its slightly odd formulation, the two-stage model has an intuitive interpretation.",
"Once we have learned (and fixed) its parameters, we obtain the marginal probability of a wordform as: p ( w ) = (12) c w smoothing factor (cid:122) (cid:125)(cid:124) (cid:123) n w a | z | + b (cid:124) (cid:123)(cid:122) (cid:125) smoothed unigram frequencies + ( a K + b ) | z | + b (cid:124) (cid:123)(cid:122) (cid:125) interpolation weight p ( w ) (cid:124) (cid:123)(cid:122) (cid:125) LSTM In this equation, c w is the count of tokens with form w in the training set, while n w is the number of distinct clusters with this same form.",
"The model interpolates between a smoothed unigram corpus frequency and the probability an LSTM gives the analyzed wordform.",
"This interpolation enables the model to place a non-zero probability mass on all possible wordformsthus modeling an open vocabulary and having infinite supportwhile also placing a large probability mass on frequent wordforms.",
"Furthermore, the smoothing factors per word type, together with the interpolation weight, are holistically learned by the PYCRP model using the training set.",
"10 5 Experimental Setup 5.1 Evaluation The value in which we are interested in this work is the expected cost of a code, given in eq.",
"(4).",
"We can easily estimate this value for a natural code by using its sample estimate: cost( C nat ) 1 NN (cid:88) n =1 | C nat ( m n ) | = 1 NN (cid:88) n =1 | w n | (13) For an optimal code, we can upperbound it using the entropy of the distribution, while the entropy itself can be upperbounded by the cross-entropy of a model on it.",
"We can compute this upperbound with a sample estimate of the cross-entropy: cost( C (cid:63) ) H( W ) + 1 H ( W ) + 1 (14) (cid:46) 1 NN (cid:88) n =1 log | | 1 p ( w n ) + 1 10 Our model consistently produced lower cross-entropies (on held out tokens) to the ones of an LSTM baseline na vely trained on a language's tokens.",
"Shannon (1948) code's lengths directly: cost( C (cid:63) ) (cid:46) 1 NN (cid:88) n =1 (cid:24) log | | 1 p ( w n ) (cid:25) (15) where (cid:100)(cid:101) is the ceiling operation.",
"As mentioned in 2, we use Morfessor (Smit et al., 2014) to tokenize our corpus into morphological units.",
"Morfessor is a method for finding morphological segmentations from raw text data.",
"As an unsupervised model, Morfessor is inherently noisy, but we take it as a proxy for a language's morphological segmentation.",
"To compare the robustness of our results across different unsupervised segmentation algorithms, though, we also run our experiments using byte pair encoding (BPE; Gage, 1994; Sennrich et al., 2016) and WordPieces (Schuster and Nakajima, 2012).",
"We train Morfessor on all pre-tokenized sentences in our language-specific Wikipedia corpus (described in 5.4).",
"With this pre-trained model in hand, we tokenize all words in our training, development and test sets.",
"We get a set of morpheme tokens { u n,j } J n j =1 for each word w n , where this word is split into J n morphological units.",
"We can now get the optimal length of a morphologically constrained code.",
"With this in mind, we first train a fresh version of our two-stage model on the full set of morphological unit tokensi.e. { u n,j | n N, j J n } , as opposed to the set of full word tokens, { w n } Nn =1 .",
"We estimate the length of this code with the following equation: cost( C morph ) = (cid:88) m M p ( m ) (cid:12)(cid:12) C morph ( m ) (cid:12)(cid:12) (16) (cid:46) 1 NN (cid:88) n =1 J n (cid:88) j =1 (cid:24) log | | 1 p ( u n,j ) (cid:25) Note that this cost estimate is still the average code-length per word token, as such we take the expectation over the meanings distribution.",
"Each word's code-length, though, is now defined as the sum of the length of each of its constituent morphemes.",
"The second linguistic constraint we would like to impose on our codes is graphotactic well-formednessi.e. we wish our code to be composed only by sequences of characters that comply with the regularities observed in the language,",
"such as e.g. vowel harmony, syllable structure, or word-initial and word-final constraints.",
"We use our generator LSTM for this.",
"As mentioned before, this model is trained on wordforms with dampened frequencieswe thus expect it to learn a language's graphotactic patterns above a minimum quality threshold.",
"We use this character-level model to sample (without replacement) as many unique wordforms as there are word types in that language (see Tab. 3 in App. A).",
"11 We assign each of these sampled wordforms w (cid:48) n , ordered by word length, to one of the languages meanings m n , inversely ordered by unigram probability, i.e. C graph ( m n ) = w (cid:48) n thus generating an optimally Zipfian frequencylength correlation.",
"With these assignments, we estimate the cost of a graphotactically constrained code: cost( C graph ) 1 NN (cid:88) n =1 | w (cid:48) n | (17) Analogously, with the generator trained on morpheme units we get an optimal code under both morphological and graphotactic constraints.",
"We use Wikipedia data in our experiments.",
"The data is preprocessed by first splitting it into sentences and then into tokens using SpaCy's language-specific sentencizer and tokenizer (Hon-nibal et al., 2020).",
"After this, all punctuation is removed and the words are lower-cased.",
"We subsample (without replacement) one million sentences of each language for our experiments, due to computational constraints.",
"We then use an 80-10-10 split for our training, validation and test sets.",
"We choose typologically diverse languages for our experiments, each from a different language family: English, Finnish, Hebrew, Indonesian, Tamil, Turkish and Yoruba.",
"12 These languages vary in their graphotactic tendencies and morphological 11 Unfortunately, our LSTMs use a softmax non-linearity to assign probabilities and, as such, can't produce zeros.",
"Furthermore, due to the compositional nature of wordform probabilities (see eq.",
"(8)), short implausible forms may have larger probability mass than long plausible ones.",
"To mitigate this effect, when sampling wordforms we impose a minimum threshold of 0 .",
"01 on each transition probability p ( (cid:96) t | (cid:96) <t ) .",
"12 Dataset statistics are presented in App.",
"A. Figure 3: Bar plots of the code lengths under different constraints.",
"complexity.",
"In order to improve our data quality, we hand-defined an alphabet for each language and filter sentences with them, only considering sentences consisting exclusively of valid characters.",
"13 5.5 Summary In this paper we consider the following codes: Optimal.",
"An information-theoretically optimal code under our two-stage model, estimated as defined by eq.",
"(15).",
"This is our most compressed code and does not include either morphlogical or graphotactic contraints.",
"Morph.",
"A morphologically constrained code, as defined by eq.",
"(16).",
"Graph.",
"A code constrained by graphotactics, as defined by eq.",
"(17).",
"Morph+Graph.",
"A code constrained by both morphology and graphotactics; defined by eq.",
"(18).",
"Natural.",
"The natural codeequivalent to the average token length and defined by eq.",
"(13).",
"This is the code length actually observed in our corpora.",
"Zipfian.",
"A code estimated by re-pairing wordforms with meanings based on their frequencies; we then compute eq.",
"(13) in this new code.",
"This would be equivalent to the natural code length if lexicons had a perfect word lengthfrequency correlation (i.e., a Spearman's rank correlation of 1).",
"Shuffle.",
"A code estimated by randomly repairing wordforms with meanings and computing eq.",
"(13) in this new code.",
"This would be equivalent to the natural code length if Zipf's law of abbreviation did not exist, i.e. lexicons had no word lengthfrequency correlation.",
"The average length for each considered code is presented in Fig. 3 and Tab.",
"1. As expected, we find that the average code length across natural languages is shorter than the shuffle condition and longer than the optimal condition.",
"Interestingly, the codes produced by the other conditions investigated here also have the same identical order across all analyzed languages.",
"Adding morphological constraints on the code incurs no more than one extra character over the optimal conditionexcept for Finnish, for which the cost of morphology is slightly above one character.",
"Notably, the use of unsupervised morphological segmentation may introduce some noise into our measurements.",
"Consistently with our expectations, though, Yoruba (a morphologically poor language) pays the smallest cost for its morphology, while Finnish (a morphologically rich one) pays the largest.",
"BPE and WordPiece systematically produce shorter codes than Morfessor.",
"This is sensible, since the first two would keep most frequent wordforms intact, generating a unique code for each of them.",
"This would lead to codes in which the morphological productivity of frequent and infrequent words differ, amplifying frequency effects encountered in natural languages (Lieberman et al., 2007).",
"The graphotactic condition yields systematically longer codes than the morphological one, although here there are important differences between languages: English, Hebrew and Indonesian have similar code lengths for both code constraints; in the other languages the graphotactic code is substantially longer than the morphological one.",
"In all cases, the natural code is longer than the one with both graphotactic and morphological constraintssuggesting languages are not opti-Morph Morph + Graph Language Optimal Morfessor BPE WordPieces Graph Morfessor BPE WordPieces Zipfian Natural Shuffle English 3.09 3.82 3.34 3.31 4.39 5.34 4.67 4.70 3.93 6.11 8.91 Finnish 3.89 5.13 4.94 4.95 7.37 7.55 7.60 7.65 6.59 8.72 10.97 Hebrew 3.52 4.38 3.98 3.99 4.82 5.19 4.95 4.88 4.50 5.79 6.97 Indonesian 3.31 4.08 3.67 3.66 4.63 5.08 5.02 4.95 4.25 7.06 8.30 Tamil 3.38 4.15 4.07 4.01 7.52 8.01 8.16 8.22 6.41 9.21 11.48 Turkish 3.52 4.28 4.12 4.03 5.67 6.31 5.98 5.93 5.31 7.52 9.09 Yoruba 2.84 3.18 3.00 2.97 4.63 4.85 4.69 4.61 4.24 5.34 7.10 Table 1: The average code lengths under the different coding schemes.",
"mally compressed, even when accounting for these constraints.",
"That said, all of the natural languages are considerably more compressed than a lexicon produced by randomly reassigning wordforms.",
"In this paper, we introduced a model-based strategy to assess the relative contribution of different constraints on word (code) length at large.",
"In particular, we evaluated how much natural languages differ from systems optimized for Zipf's law of abbreviation.",
"Our proposed model improves upon an old method used to consider the efficiency of the lexicon: random typing models (Miller, 1957; Moscoso del Prado, 2013; Ferrer-i-Cancho et al., 2020).",
"Miller introduced the idea of monkeys typing randomly on a keyboard and analyzed the properties of its resulting language.",
"The monkeys' text, however, has no morphological or graphotactic constraints (but see Caplan et al., 2020) and does not follow a language's unigram distribution (Howes, 1968).",
"As such, it cannot directly encode the same meanings or messages as the original language.",
"Our results show that, while natural languages do tend to map frequent messages to shorter words, the magnitude of this effect varies widely across our set of diverse languages.",
"Notably, the distance between natural languages and the optimal codes is Figure 5: Fraction of code length accounted for by the combined morphology and graphotactics model larger than the distance between natural languages and their corresponding shuffled code (see Fig. 4).",
"In other words, natural codes are closer to not being optimized (in the Zipfian sense) than to being maximally compressed.",
"That said, our morphological and graphotactic baselines, when combined, yield codes that display mean code lengths that are (in most cases) closer to the natural code than to the optimal (see Fig. 5).",
"If our models are indeed able to capture the true patterns in our data, then this means that (composi-tional) morphology and graphotactics, along with the law of abbreviation, are sufficient to account for most of the length of natural codesas observed in real languages.",
"Graphotactic (primarily) and morphological constraints are enough to derive a code with a similar complexity to that of natural languages, which suggests the other factors discussed above (associated with, e.g., surprisal and non-arbitrary form-meaning mappings) likely play a more modest role in pushing natural languages away from the optimal Zipfian code.",
"The optimality of the lexicon occupies a major place in the scientific study of the structure and functional evolution of languages (Bentz and Ferrer-i-Cancho, 2016; Gibson et al., 2019; Mahowald et al., 2020).",
"We hope that the method presented herewhich allows for a more precise quantification of the (non-)optimality of lexicons will be used to further the goal of understanding why languages are structured in the ways that they are, while offering insight into the functional tradeoffs that underlie language variation and change.",
"This paper concerns itself with investigating lexi-cons' optimality under the perspective of Zipf's Law of Abbreviation.",
"As we focus on computational linguistic experiments, we see no clear ethical concerns here.",
"Nonetheless, we note that Wikipedia (from where we collect data) is not a fully representative source of a language's data the biases in the data will likely also be present in our results.",
"Damian E. Blasi acknowledges funding from the Branco Weiss Fellowship, administered by the ETH Zurich.",
"Damian E. Blasi's research was also executed within the framework of the HSE University Basic Research Program and funded by the Russian Academic Excellence Project 5-100'."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"other",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"other",
"other"
] |
[
"Text summarization aims to generate a short summary for an input text.",
"In this work, we propose a Non-Autoregressive Unsupervised Summarization (NAUS) approach, which does not require parallel data for training.",
"Our NAUS first performs edit-based search towards a heuristically defined score, and generates a summary as pseudo-groundtruth.",
"Then, we train an encoder-only non-autoregressive Transformer based on the search result.",
"We also propose a dynamic programming approach for length-control decoding, which is important for the summarization task.",
"Experiments on two datasets show that NAUS achieves state-of-the-art performance for unsupervised summarization, yet largely improving inference efficiency.",
"Further, our algorithm is able to perform explicit length-transfer summary generation.",
"1 1 Introduction Text summarization is an important natural language processing (NLP) task, aiming at generating concise summaries for given texts while preserving the key information.",
"It has extensive real-world applications such as headline generation (Nenkova et al., 2011).",
"In this paper, we focus on the setting of sentence summarization (Rush et al., 2015; Filippova et al., 2015).",
"State-of-the-art text summarization models are typically trained in a supervised way with large training corpora, comprising pairs of long texts and their summaries (Zhang et al., 2020; Aghajanyan et al., 2020, 2021).",
"However, such parallel data are expensive to obtain, preventing the applications to less popular domains and less spoken languages.",
"Unsupervised text generation has been attracting increasing interest, because it does not require parallel data for training.",
"One widely used approach 1 Our code, model, and output are released at: https: //github.com/MANGA-UOFA/NAUS is to compress a long text into a short one, and to reconstruct it to the long text by a cycle consistency loss (Miao and Blunsom, 2016; Wang and Lee, 2018; Baziotis et al., 2019).",
"Due to the in-differentiability of the compressed sentence space, such an approach requires reinforcement learning (or its variants), which makes the training difficult (Kreutzer et al., 2021).",
"Recently, Schumann et al. (2020) propose an edit-based approach for unsupervised summarization.",
"Their model maximizes a heuristically defined scoring function that evaluates the quality (fluency and semantics) of the generated summary, achieving higher performance than cycle-consistency methods.",
"However, the search approach is slow in inference because hundreds of search steps are needed for each data sample.",
"Moreover, their approach can only select words from the input sentence with the word order preserved.",
"Thus, it is restricted and may generate noisy summaries due to the local optimality of search algorithms.",
"To address the above drawbacks, we propose a Non-Autoregressive approach to Unsupervised Summarization (NAUS).",
"The idea is to perform search as in Schumann et al. (2020) and, inspired by Li et al. (2020), to train a machine learning model to smooth out such noise and to speed up the inference process.",
"Different from Li et al. (2020), we propose to utilize non-autoregressive decoders, which generate all output tokens in parallel due to our following observations: Non-autoregressive models are several times faster than autoregressive generation, which is important when the system is deployed.",
"The input and output of the summarization task have a strong correspondence.",
"Non-autoregressive generation supports encoder-only architectures, which can better utilize such inputoutput correspondence and even outperform autoregressive models for summarization.",
"sign a length-control algorithm based on dynamic programming to satisfy the constraint of output lengths, which is typical in summarization applications but cannot be easily achieved with autoregressive models.",
"We conducted experiments on Gigaword headline generation (Graff et al., 2003) and DUC2004 (Over and Yen, 2004) datasets.",
"Experiments show that our NAUS achieves state-of-the-art performance on unsupervised summarization; especially, it outperforms its teacher (i.e., the search approach), confirming that NAUS can indeed smooth out the search noise.",
"Regarding inference efficiency, our NAUS with truncating is 1000 times more efficient than the search approach; even with dynamic programming for length control, NAUS is still 100 times more efficient than search and several times more efficient than autoregressive models.",
"Our NAUS is also able to perform length-transfer summary generation, i.e., generating summaries of different lengths from training.",
"In our approach, we first follow Schumann et al. (2020) and obtain a summary by discrete search towards a heuristically defined objective function (2.1).",
"Then, we propose a non-autoregressive model for the summarization task (2.2).",
"We present the training strategy and the proposed length-control algorithm in 2.3.",
"Consider a given source text x = (x 1 , x 2 , . . . , x n )",
"The goal of summarization is to find a shorter text y = (y 1 , y 2 , . . . , y m ) as the summary.",
"Our work on unsupervised summarization follows the recent progress of search-based text generation (Liu et al., 2020, 2021a; Kumar et al., 2020).",
"Schumann et al. (2020) formulate summarization as word-level extraction (with order preserved), and apply edit-based discrete local search to maximize a heuristically designed objective.",
"Specifically, the objective function considers two aspects: (1) a language fluency score f LM ( y ) , given by the reciprocal of a language model's perplexity; and (2) a semantic similarity score f SIM ( y ; x ) , given by the cosine embeddings.",
"The overall objective combines the two aspects as f ( y ; x ) = f LM ( y ) f SIM ( y ; x ) (1) where is a weighting hyperparameter.",
"Further, the desired summary length can be spec-ified as a hard constraint, achieved by searching only among sentences of the correct length.",
"Suppose the desired summary length is T , the approach selects T random words from the input, and maximizes the scoring function (1) by changing the selection and non-selection of two words.",
"A greedy hill-climbing algorithm determines whether the change is accepted or not.",
"In other words, a change is accepted if the score improves, or rejected otherwise.",
"Such a process continues until a (possibly local) optimum is found.",
"A pilot analysis in Schumann et al. (2020) shows that words largely overlap between a source text and its reference summary.",
"This explains the high performance of such a word extraction approach, being a state-of-the-art unsupervised summarization system and outperforming strong competitors, e.g., cycle consistency (Wang and Lee, 2018; Baziotis et al., 2019).",
"Despite the high performance, such edit-based search has several drawbacks.",
"First, the search process is slow because hundreds of local search steps are needed to obtain a high-quality summary.",
"Second, their approach only extracts the original words with order preserved.",
"Therefore, the generated summary is restricted and may be noisy.",
"To this end, we propose a Non-Autoregressive approach to Unsupervised Summarization (NAUS) by learning from the search results.",
"In this way, the machine learning model can smooth out the search noise and is much faster, largely alleviating the drawbacks of search-based summarization.",
"Compared with training an autoregressive model from search (Li et al., 2020), non-autoregressive generation predicts all the words in parallel, further improving inference efficiency by several times.",
"Moreover, a non-autoregressive model enables us to design an encoder-only architecture, which is more suited to the summarization task due to the strong correspondence between input and output, which cannot be fully utilized by encoderdecoder models, especially autoregressive ones.",
"Specifically, we propose to use multi-layer Transformer (Vaswani et al., 2017) as the non-autoregressive architecture for summarization.",
"Each Transformer layer is composed of a multihead attention sublayer and a feed-forward sublayer.",
"Additionally, there is a residual connection in each sublayer, followed by layer normalization.",
"Let X ( n ) RT d be the representation at the n th layer, where T is the number of words and d is the dimension.",
"Specially, the input layer X (0) is the embeddings of words.",
"Suppose we have h attention heads.",
"The output of the i th head in the n th attention sublayer is A ( n ) i = softmax (cid:16) Q i K (cid:62) i d k (cid:17) V i , where Q i , K i , and V i are matrices calculated by three distinct multi-layer perceptrons (MLPs) from X ( n 1) ; d k is the attention dimension.",
"Multiple attention heads are then concatenated: A ( n ) = Concat (cid:0) A ( n ) 1 , . . . , A ( n ) h (cid:1) WO where WO R d d is a weight matrix.",
"Then, we have a residual connection and layer normalization by A ( n ) = LayerNorm (cid:0) X ( n 1) + A ( n ) (cid:1) (2) Further, an MLP sublayer processes A ( n ) , followed by residual connection and layer normalization, yielding the n th layer's representation X ( n ) = LayerNorm (cid:0) A ( n ) + MLP( A ( n ) ) (cid:1) (3) The last Transformer layer X ( N ) is fed to softmax to predict the words of the summary in a non-autoregressive manner, that is, the probability at the t th step is given by softmax( W x ( N ) t ) , where x ( N ) t is the t th row of the matrix X ( N ) and W is the weight matrix.",
"It is emphasized that, in the vocabulary, we include a special blank token (cid:15) , which is handled by dynamic programming during both training and inference (2.3).",
"This enables us to generate a shorter summary than the input with such a multi-layer Transformer.",
"Our model can be thought of as an encoder-only architecture, differing from a typical encoder decoder model with cross attention (Vaswani et al., 2017; Baziotis et al., 2019; Zhou and Rush, 2019).",
"Previously, Su et al. (2021) propose a seemingly similar model to us, but put multiple end-of-sequence (EOS) tokens at the end of the generation; thus, they are unable to maintain the correspondence between input and output.",
"Instead, we allow blank tokens scattering over the entire sentence; the residual connections in Eqns (2) and (3) can better utilize such inputoutput correspondence for summarization.",
"In this section, we first introduce the Connectionist Temporal Classification (CTC) training.",
"Then, we propose a length-control decoding approach for summary generation.",
"CTC Training.",
"The Connectionist Temporal Classification (CTC, Graves et al., 2006) algorithm allows a special blank token (cid:15) in the vocabulary, and uses dynamic programming to marginalize out such blank tokens, known as latent alignment (Sa-haria et al., 2020).",
"In addition, non-autoregressive generation suffers from a common problem that words may be repeated in consecutive steps (Gu et al., 2018; Lee et al., 2018); thus, CTC merges repeated words unless separated by (cid:15) .",
"For example, the sequence of tokens a(cid:15)(cid:15)aabb(cid:15) is reduced to the text aab , denoted by ( a(cid:15)(cid:15)aabb(cid:15) ) = aab .",
"Concretely, the predicted likelihood is marginal-ized over all possible fillings of (cid:15) , i.e., all possible token sequences that are reduced to the groundtruth text: P ( y | x ) = (cid:88) w :( w )= y P ( w | x ) (4) 7918 where P ( w | x ) is the probability of generating a sequence of tokens w .",
"Although enumerating every candidate in { w : ( w ) = y } is intractable, such marginalization fortunately can be computed by dynamic programming in an efficient way.",
"Let s,t = (cid:80) w 1: s :( w 1: s )= y 1: t P ( w 1: s | x ) be the marginal probability of generating y 1: t up to the s th decoding slot.",
"Moreover, s, 0 is defined to be the probability that w 1: s is all (cid:15) , thus not having matched any word in y .",
"The s,t variable can be further decomposed into two terms s,t = (cid:15)s,t + (cid:15) s,t , where the first term is such probability with w s = (cid:15) , and the second term w s (cid:54) = (cid:15) .",
"Apparently, the initialization of variables is (cid:15) 1 , 0 = P (w 1 = (cid:15) | x ) (5) (cid:15) 1 , 1 = P (w 1 = y 1 | x ) (6) (cid:15) 1 ,t = 0 , t 1 (7) (cid:15) 1 ,t = 0 , t > 1 or t = 0 (8) Eqn.",
"(7) is because, at the first prediction slot, the empty token (cid:15) does not match any target words; Eqn.",
"(8) is because the predicted non(cid:15) first token must match exactly the first target word.",
"The recursion formula for (cid:15)s,t is (cid:15)s,t = s 1 ,t P (w t = (cid:15) | x ) since the newly predicted token (cid:15) with probability P (w t = (cid:15) | x ) does not match any target word, inheriting s 1 ,t .",
"The recursion formula for (cid:15) s,t is (cid:15) s,t = (cid:0) (cid:15)s 1 ,t 1 + (cid:15) s 1 ,t (cid:1) P (w s = y t | x ) , if y t = y t 1 (cid:0) s 1 ,t 1 + (cid:15) s 1 ,t (cid:1) P (w s = y t | x ) , otherwise Here, w s is not (cid:15) , so we must have w s = y t , having the predicted probability P (w s = y t | x ) .",
"If y t = y t 1 , then we have two sub-cases: first, w 1: s 1 is reduced to y 1: t 1 with w s 1 = (cid:15) separating two repeating words in y , having probability (cid:15)s 1 ,t 1 ; or second, w 1: s 1 is reduced to y 1: t with w s 1 = y t (cid:54) = (cid:15) , having probability (cid:15) s 1 , which implies we are merging w s 1 and w s .",
"If y t (cid:54) = y t 1 , w 1: s 1 is reduced to either y 1: t 1 or y 1: t .",
"In the first case, w s 1 can be either (cid:15) or non(cid:15) , given by s 1 ,t 1 = (cid:15) s 1 ,t 1 + (cid:15) s 1 ,t 1 .",
"In the second case, we must have w s 1 (cid:54) = (cid:15) , which has a probability of (cid:15) s 1 ,t .",
"The CTC maximum likelihood estimation is to maximize the marginal probability, which is equivalent to minimizing the loss | w | , | y | .",
"Since the dynamic programming formulas are differentiable, the entire model can be trained by backpropagation in an end-to-end manner with auto-differentiation tools (such as PyTorch).",
"Length-Control Inference.",
"Controlling output length is the nature of the summarization task, for example, displaying a short news headline on a mo-bile device.",
"Moreover, Schumann et al. (2020) show that the main evaluation metric ROUGE (Lin, 2004) is sensitive to the summary length, and longer summaries tend to achieve higher ROUGE scores.",
"Thus, it is crucial to control the summary length for fair comparison.",
"We propose a length-control algorithm by dynamic programming (DP), following the nature of CTC training.",
"However, our DP is an approximate algorithm because of the dependencies introduced by removing consecutive repeated tokens.",
"Thus, we equip our DP with a beam search mechanism.",
"We define B s,t to be a set of topB sequences with s predicted tokens that are reduced to t words.",
"B s,t is constructed by three scenarios.",
"First, the blank token (cid:15) is predicted for the s th generation slot, and thus the summary length t remains the same, shown by the blue arrow in Figure 2.",
"This yields a set of candidates B (1) s,t = (cid:8) b (cid:15) : b B s 1 ,t (cid:9) (9) where refers to string/token concatenation.",
"Second, a repeated word is predicted for the s th generation slot, i.e., b s 1 for a subsequence b of length s 1 .",
"In this case, the summary length t also remains the same, also shown in the blue arrow in Figure 2.",
"This gives a candidate set B (2) s,t = (cid:8) b b s 1 : b B s 1 ,t (cid:9) (10) Third, a non(cid:15) , non-repeating word w s is generated, increasing the summary length from t 1 to 7919 t , shown by the red arrow in Figure 2.",
"This gives B (3) s,t = top B (cid:8) b w : b B s 1 ,t 1 , w s (cid:54) = (cid:15), w s (cid:54) = b s 1 (cid:9) (11) where top B selects the best B elements by the probability P (w s | x ) .",
"Based on the three candidates sets, we select topB sequences to keep the beam size fixed: B s,t = top B ( B (1) s,t B (2) s,t B (3) s,t ) (12) where the sequences are ranked by their predicted joint probabilities.",
"Theorem 1.",
"(1) If repeating tokens are not merged, then the proposed length-control algorithm with beam size B = 1 finds the exact optimum B S,T being the most probable lengthT sentence given by S prediction slots.",
"(2) If we merge repeating tokens predicted by CTC-trained models, the above algorithm may not be exact.",
"Appendix A presents the proof of the theorem and provides a more detailed analysis, showing that our length-control algorithm, although being approximate inference, can generate a summary of the desired length properly.",
"Compared with truncating an overlength output, our approach is able to generate more fluent and complete sentences.",
"Also, our length-control algorithm is different from conventional beam search, shown in Appendix C. 3 Experiments 3.1 Setup Datasets.",
"We evaluated our NAUS model on Gigaword headline generation and DUC2004 datasets.",
"The headline generation dataset (Rush et al., 2015) is constructed from the Gigaword news corpus (Graff et al., 2003), where the first sentence of a news article is considered as input text and the news title is considered as the summary.",
"The dataset contains 3.8M/198K/1951 samples for train-ing/validation/test.",
"Based on the analysis of the training size in Appendix B, we used 3M samples for training NAUS.",
"It should be emphasized that, when NAUS learns from search, we only use the input of the training corpus: we perform search (Schumann et al., 2020) for each input, and train our NAUS from the search results.",
"Therefore, we do not utilize any labeled parallel data, and our approach is unsupervised.",
"Moreover, we considered two settings with desired summary lengths of 8 and 10, following Schumann et al. (2020).",
"Our NAUS is trained from respective search results.",
"The DUC2004 dataset (Over and Yen, 2004) is designed for testing only with 500 samples, where we also take the first sentence of an article as the input text.",
"Our NAUS is transferred from the above headline generation corpus.",
"Based on the length of DUC2004 summaries, we trained NAUS from search results with 13 words, also following Schumann et al. (2020) for fair comparison.",
"Evaluation Metrics.",
"We evaluated the quality of predicted summaries by ROUGE scores 2 (Lin, 2004), which are the most widely used metrics in previous work (Wang and Lee, 2018; Baziotis et al., 2019; Zhou and Rush, 2019).",
"Specifically, ROUGEn evaluates n -gram overlap between a predicted summary and its reference summary; ROUGE-L, instead, measures the longest common sequence between the predicted and reference summaries.",
"Different ROUGE variants are adopted in previous work, depending on the dataset.",
"We followed the standard evaluation scripts and evaluated headline generation by ROUGE F1 (Wang and Lee, 2018; Baziotis et al., 2019; Schumann et al., 2020) and DUC2004 by Truncate ROUGE Recall (Dorr et al., 2003; West et al., 2019).",
"In addition to summary quality, we also evaluated the inference efficiency of different methods, as it is important for the deployment of deep learning models in real-time applications.",
"We report the average inference time in seconds for each data sample, and compare the speedup with Schumann et al. (2020)'s search approach, which achieves (previous) state-of-the-art ROUGE scores.",
"Our experiments were conducted on an i9-9940X CPU and an RTX6000 graphic card.",
"Appendix B presents additional implementation details.",
"Main Results.",
"Table 1 presents the performance of our model and baselines on the Gigaword headline test set.",
"For a fair comparison, we categorize all approaches by average summary lengths of ~8 and ~10 into Groups A and B, respectively.",
"The Lead baseline extracts the first several words of the input sentence.",
"Despite its simplicity, the Lead approach is a strong summarization baseline adopted in most previous work (Fvry and Phang, 2018; Baziotis et al., 2019).",
"Wang and Lee (2018) utilize cycle consistency (Miao and Blunsom, 2016) for unsupervised summarization; the performance is relatively low, because the cycle consistency loss cannot ensure the generated text is a valid summary.",
"Zhou and Rush (2019) perform beam search towards a step-by-step decomposable score of fluency and contextual matching.",
"Both are unable to explicitly control the summary length: in a fair comparison of length 10 (Group B, Table 1), their performance is worse than the (previous) state-of-the-art approach (Schu-mann et al., 2020), 3 which performs edit-based local search.",
"Our NAUS approach follows Schumann et al. (2020), but trains a non-autoregressive model from 3 Schumann et al. (2020) present a few variants that use additional datasets for training language models (in an unsupervised way).",
"In our study, we focus on the setting without data augmentation, i.e., the language model is trained on nonparallel the Gigawords corpus.",
"search results.",
"We consider two settings for controlling the summary length: truncating longer summaries and decoding with our proposed length-control algorithm.",
"Both of our variants outperform Schumann et al. (2020) by 1.212.73 in terms of the total ROUGE score (Rows 56 & 1314, Table 1).",
"As mentioned, Schumann et al. (2020) only extract original words with order preserved, yielding noisy sentences.",
"Our NAUS, as a student, learns from the search-based teacher model and is able to smooth out its noise.",
"This is a compelling result, as our student model outperforms its teacher.",
"Regarding inference efficiency, our NAUS method with truncating is more than 1300 times faster than Schumann et al. (2020), because we do not need iterative search.",
"Even with dynamic programming and beam search for length control, NAUS is still over 100 times faster.",
"This shows our NAUS is extremely efficient in inference, which is important for real-time applications.",
"Although the efficiency of Wang and Lee (2018) and Zhou and Rush (2019) is not available, we still expect our approach to be a few times faster (despite our higher ROUGE scores) because their models are autoregressive.",
"By contrast, our NAUS is non-autoregressive, meaning that it predicts all words simultaneously.",
"We will provide a controlled comparison between autoregressive and non-autoregressive models in Table 3. Table 2 shows the results on the DUC2004 dataset.",
"tis et al., 2019; West et al., 2019) does not perform well on this dataset, outperformed by an early rule-based syntax tree trimming approach (Za-jic et al., 2004) and the state-of-the-art edit-based search (Schumann et al., 2020).",
"The performance of our NAUS model is consistent with Table 1, outperforming all previous methods in terms of the total ROUGE score, and being 1001000 times faster than the search approach (Schumann et al., 2020).",
"In general, the proposed NAUS not only achieves state-of-the-art ROUGE scores for unsupervised summarization, but also is more efficient when deployed.",
"Results are consistent on both datasets, demonstrating the generality of our NAUS.",
"In-Depth Analyses.",
"We conduct in-depth analyses on the proposed NAUS model in Table 3. Due to the limit of time and space, we chose the Gigaword headline generation as our testbed.",
"All the autoregressive (AR) and non-autoregressive (NAR) variants learn from the search output of our replication (Rows 2 & 11), where we achieve very close results to those reported in Schumann et al. (2020).",
"We first tried vanilla encoderdecoder NAR Transformer (Rows 4 & 13, Gu et al., 2018), where we set the number of decoding slots as the desired summary length; thus, the blank token and the length-control algorithm are not needed.",
"As seen, a vanilla NAR model does not perform well, and CTC largely outperforms vanilla NAR in both groups (Rows 56 & 1415).",
"Such results are highly consistent with the translation literature (Sa-haria et al., 2020; Chan et al., 2020; Gu and Kong, 2021; Qian et al., 2021; Huang et al., 2022).",
"The proposed encoder-only NAUS model outperforms encoderdecoder ones in both groups in terms of the total ROUGE score, when the summary length is controlled by either truncating or length-control decoding (Rows 89 & 1718).",
"Profoundly, our non-autoregressive NAUS is even better than the autoregressive Transformer (Rows 3 & 12).",
"We also experimented with previous non-autoregressive work for supervised summarization (Su et al., 2021) 4 in our learning-from-search setting.",
"Although their approach appears to be encoder-only, it adds end-of-sequence (EOS) tokens at the end of the generation, and thus is unable to utilize the inputoutput correspondence.",
"Their performance is higher than vanilla NAR models, but lower than ours.",
"By contrast, NAUS is able to capture such correspondence with the residual connections, i.e., Eqns.",
"(2) and (3), in its encoder-only architecture.",
"Generally, the efficiency of encoder-only NAR 5 (without length-control decoding) is ~2 times faster than encoderdecoder NAR and ~20 times faster than the AR Transformer.",
"Further, our length-control decoding improves the total ROUGE score, compared with truncating, for both encoderdecoder CTC and encoder-only NAUS models (Rows 6, 9, 15, & 18), although its dynamic programming is slower.",
"Nevertheless, our non-autoregressive NAUS with length control is ~200 times faster than search and ~3 times faster than the AR Transformer.",
"results in our appendices: C. Analysis of Beam Search D. Case Study E. Human Evaluation F. Length-Transfer Summarization",
"4 To the best of our knowledge, the other two non-autoregressive supervised summarization models are Yang et al. (2021) and Qi et al. (2021).",
"Their code and pretrained models are not available, making replication difficult.",
"5 The standard minimal encoderdecoder NAR model has 6 layers for the encoder and another 6 layers for the decoder (Vaswani et al., 2017).",
"Our NAUS only has a 6-layer encoder.",
"Our pilot study shows that more layers do not further improve performance in our encoder-only architecture.",
"Summarization systems can be generally categorized into two paradigms: extractive and abstractive.",
"Extractive systems extract certain sentences and clauses from input, for example, based on salient features (Zhou and Rush, 2019) or feature construction (He et al., 2012).",
"Abstraction systems generate new utterances as the summary, e.g., by sequence-to-sequence models trained in a supervised way (Zhang et al., 2020; Liu et al., 2021b).",
"Recently, unsupervised abstractive summarization is attracting increasing attention.",
"Yang et al. (2020) propose to use the Lead baseline (first several sentences) as the pseudo-groundtruth.",
"However, such an approach only works with well-structured articles (such as CNN/DailyMail).",
"Wang and Lee (2018) and Baziotis et al. (2019) use cycle consistency for unsupervised summarization.",
"Zhou and Rush (2019) propose a step-by-step decomposable scoring function and perform beam search for summary generation.",
"Schumann et al. (2020) propose an edit-based local search approach, which allows a more comprehensive scoring function and outperforms cycle consistency and beam search.",
"Our paper follows Schumann et al. (2020) but trains a machine learning model to improve efficiency and smooth out search noise.",
"Previously, Li et al. (2020) fine-tune a GPT-2 model based on search results for unsupervised paraphrasing; Jolly et al. (2022) adopt the search-and-learning framework to improve the semantic coverage for few-shot data-to-text generation.",
"We extend previous work in a non-trivial way by designing a non-autoregressive generator and further proposing a length-control decoding algorithm.",
"The importance of controlling the output length is recently realized in the summarization community.",
"Baziotis et al. (2019) and Su et al. (2021) adopt soft penalty to encourage shorter sentences; Yang et al. (2021) and Qi et al. (2021) control the summary length through POS tag and EOS predictions.",
"None of these studies can control the length explicitly.",
"Song et al. (2021) is able to precisely control the length by progressively filling a predetermined number of decoding slots, analogous to the vanilla NAR model in our non-autoregressive setting.",
"Non-autoregressive generation is originally proposed for machine translation (Gu et al., 2018; Guo et al., 2020; Saharia et al., 2020), which is later extended to other text generation tasks.",
"Wiseman et al. (2018) address the table-to-text generation task, and model output segments by a hidden semi-Markov model (Ostendorf et al., 1996), simultaneously generating tokens for all segments.",
"Jia et al. (2021) apply non-autoregressive models to extractive document-level summarization.",
"Su et al. (2021) stack a non-autoregressive BERT model with a conditional random field (CRF) for abstractive summarization; since the summary is shorter than the input text, their approach puts multiple end-to-sequence (EOS) tokens at the end of the sentence, and thus is unable to utilize the strong inputoutput correspondence in the summarization task.",
"Yang et al. (2021) apply auxiliary part-of-speech (POS) loss and Qi et al. (2021) explore pretraining strategies for encoderdecoder non-autoregressive summarization.",
"All these studies concern supervised summarization, while our paper focuses on unsupervised summarization.",
"We adopt CTC training in our encoder-only architecture, allowing blank tokens to better align input and output words, which is more appropriate for summarization.",
"In this work, we propose a non-autoregressive unsupervised summarization model (NAUS), where we further propose a length-control decoding algorithm based on dynamic programming.",
"Experiments show that NAUS not only archives state-of-the-art unsupervised performance on Gigaword headline generation and DUC2004 datasets, but also is much more efficient than search methods and autoregressive models.",
"Appendices present additional analyses and length-transfer experiments.",
"Limitation and Future Work.",
"Our paper focuses on unsupervised summarization due to the importance of low-data applications.",
"One limitation is that we have not obtained rigorous empirical results for supervised summarization, where the developed model may also work.",
"This is because previous supervised summarization studies lack explicit categorization of summary lengths (Yang et al., 2020; Qi et al., 2021), making comparisons unfair and problematic (Schumann et al., 2020).",
"Such an observation is also evidenced by Su et al. (2021), where the same model may differ by a few ROUGE points when generating summaries of different lengths.",
"Nevertheless, we have compared with Su et al. (2021) in our setting and show the superiority of the NAUS under fair comparison.",
"We 7923 plan to explore supervised summarization in future work after we establish a rigorous experimental setup, which is beyond the scope of this paper.",
"We thank Raphael Schumann for providing valuable suggestions on the work.",
"We also thank the Action Editor and reviewers for their comments during ACL Rolling Review.",
"The research is supported in part by the Natural Sciences and Engineering Research Council of Canada (NSERC) under grant No.",
"RGPIN2020-04465, the Amii Fellow Program, the Canada CIFAR AI Chair Program, a UAHJIC project, a donation from DeepMind, and Compute Canada (www.computecanada.ca)."
] | [
"abstain",
"objective",
"objective",
"result",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"objective",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"method",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"result",
"objective",
"other",
"other",
"other",
"other"
] |
[
"Recent work in multilingual machine translation (MMT) has focused on the potential of positive transfer between languages, particularly cases where higher-resourced languages can benefit lower-resourced ones.",
"While training an MMT model, the supervision signals learned from one language pair can be transferred to the other via the tokens shared by multiple source languages.",
"However, the transfer is inhibited when the token overlap among source languages is small, which manifests naturally when languages use different writing systems.",
"In this paper, we tackle inhibited transfer by augmenting the training data with alternative signals that unify different writing systems, such as phonetic, romanized, and transliterated input.",
"We test these signals on Indic and Turkic languages, two language families where the writing systems differ but languages still share common features.",
"Our results indicate that a straightforward multi-source self-ensemble training a model on a mixture of various signals and ensembling the outputs of the same model fed with different signals during inference outperforms strong ensemble baselines by 1.3 BLEU on both language families.",
"Further, we find that incorporating alternative inputs via self-ensemble can be particularly effective in low-resource settings, leading to +5 BLEU when only 5% of the total training data is accessible.",
"Finally, our analysis demonstrates that including alternative signals yields more consistency and translates named entities more accurately, which is crucial for increased factuality of automated systems.",
"improvement benefits languages with large quantities of high-quality training data (high-resource languages).",
"Recently, researchers have focused on the development of multilingual translation models (Aharoni et al., 2019; Fan et al., 2020) capable of translating between many different language pairs rather than specialized models for each translation direction.",
"In particular, such multilingual models hold great promise for improving translation quality for low-resource languages, as grouping languages together allows them to benefit from linguistic similarities as well as shared data between related languages.",
"For example, training a translation system with combined Assamese and Bengali data would enable transfer learning between the two languages.",
"We investigate how to enable multilingual translation models to optimally learn these similarities between languages and leverage this similarity to improve translation quality.",
"The fundamental unit representing lingual similarity is the token languages that are similar often have similar words or phrases and during training, translation models can learn strong representations of tokens in low-resource languages if they are also present in high-resource languages.",
"However, a challenge arises when similar languages share only a small amount of tokens, which inhibits the transfer to limited and trivial cases of token sharing, e.g., punctuation marks and digits.",
"This is particularly clear in cases where similar languages are written in different scripts, as the amount of shared tokens is small compared to languages using the same written script.",
"An example would be Hindi and Gujarati, which have phonetic similarity but are written in their own native scripts.",
"To tackle inhibited transfer due to distinct writing systems, we transform the original input via transliteration , the process of converting text from one script to another, to get alternative signal from the original source sentences.",
"Transliteration has 5291 Model __bn_romani__ pradhanmantri __bn__ \u0000\u0000 __bn_ipa__ p hanm nt i Average Input Log probabilities Prime Minister Output Decoding __bn_inscrip__ \u0000 \u0000 \u0000 Figure 1: A generic illustration of self-ensemble for a multilingual translation system while translating Bengali to English.",
"been used in many real world cases, such as converting Cyrillic Serbian to Latin Serbian, as the language is commonly written with both scripts, or typing in romanized Hindi for convenience on a Latin-script keyboard.",
"To unify various writing scripts to increase token overlap, we experiment with three types of transliteration: (1) transliterate into phonemes expressed by international phonetic alphabet (IPA), (2) transliterate into Latin script (ROMANI ), and (3) transliterate into a script used by another language within the same language family (INSCRIP ).",
"Beyond training on alternative inputs created through transliteration, we also systematically examine approaches to combining different signals.",
"Our experimental results on Indic and Turkic datasets demonstrate that",
"(i) a self-ensemble (Figure 1) training a model on the mixture of different signals and using an ensemble of the same model given different input signals during inference time, outperforms other methods such as multi-source ensemble and multi-encoder architecture, which require training multiple models or significant architectural changes.",
"(ii) Further, without the need for additional bitext, a self-ensemble over the original and transliterated input consistently outperforms baselines, and is particularly effective when the training set is small (e.g. low-resource languages) with improvements of up to +5 BLEU.",
"(iii) Finally, the improvements in BLEU originate from clear gain in the accuracy and consistency in the translation of named entities, which has strong implications for increased factuality of automated translation systems.",
"languages can benefit from similarities to high-resource languages where data is plentiful.",
"However, surface-level differences between languages, such as writing system, can obscure semantic similarities.",
"We describe an approach to transliterating input sentences to various alternative forms that maximize transfer learning between different languages, and various modeling approaches to incorporating such varied inputs.",
"While training a multilingual translation system, tokens shared by multiple source languages serve as anchors to transfer information obtained from learning one language pair to the other.",
"For example, the translation of terisini' in low-resourced Uzbek data can benefit from the word derisinin' in relatively high-resourced Turkish data after tok-enizing into sub-word units.",
"However, the transfer is hindered when the amount of shared tokens is small exacerbated by cases where the source and target languages are written in different scripts.",
"1 To alleviate the issue of various writing systems and encourage languages to transfer, we focus on alternative signals that unify the script of source languages and have larger token overlap.",
"The core concept we explore is how to best leverage transliteration , or the process of converting the text from one script to the other.",
"We demonstrate that transliteration can be an effective data augmentation approach that improves the translation performance without the need of acquiring additional parallel data.",
"We explore three alternative inputs that allow 1 Muller et al. (2021) show that the discrepancy in scripts causes failure to transfer in multilingual models and further hurts performance in downstream tasks.",
"models to share information more easily across languages with low token overlap but high semantic similarity.",
"Figure 4 in Appendix C shows example alternative signals of the same Oriya sentence.",
"Phonetic Input.",
"Related languages in the same language family usually sound similar, such as languages in the Romance language family and those in the Indo-Aryan language family.",
"Although cognates can be captured to some degree for Romance languages on subword-level, it is difficult for the Indo-Aryan family as those languages use different writing systems.",
"Therefore, to fully exploit shared information, we transform the original textual input (BASE ) into the phonetic space, where the basic units are phonemes expressed in international phonetic alphabet (IPA).",
"For example, ' in Bengali looks like phanmnti' in IPA form.",
"Romanized Input.",
"Many languages use Latin alphabet (or Roman alphabet) in their default writing system, if not, they more or less have romanization of their default script in order to accommodate conventional keyboards, e.g., Chinese can be typed on U.S. keyboards through Pinyin, the romanization of Chinese.",
"To utilize this existing form of alternative input, the romanized input is another signal we explore in this work.",
"For example, ' looks like pradhanmantri' in romanized form.",
"In-family Script Input.",
"The two previous alternative representations introduce tokens not present in the existing vocabulary, which increases the number of input and output representations the translation models must learn.",
"Further, phonetic input is artificial in the sense that it is not used by people to communicate to each other in written form and only used for pronunciation.",
"Romanization naturally would introduce many additional tokens if the source language does not use Latin script.",
"A third alternative that does not suffer these drawbacks is transliterate source language into the script of any of the other source languages in the multilingual translation model.",
"To take advantage of language relatedness (Dhamecha et al., 2021), we unify the source languages with the script used by a language within the same language family (INSCRIP ).",
"This method has the additional advantage of not needing to learn new subword tokenization models or replace the old vocabulary with a new one since all the inputs are expressed in one of the existing multilingual model's source language scripts.",
"For example, ' looks like ' when transliterated into Hindi script.",
"Advantages of Transliterated Inputs.",
"Various different input representations have been inserted into translation models, from parse trees (Li et al., 2017; Currey and Heafield, 2018) to pretrained embeddings (Artetxe et al., 2018; Conneau et al., 2018).",
"Compared to these alternatives, transliteration has several clear advantages.",
"Most importantly, transliteration is fast and accurate.",
"Several existing alternatives often use other models to produce a different input, such as a parse tree, which cascades error from the first model into the translation model.",
"Comparatively, the alphabet alignment between various writing systems is quite well known, even for many low-resource languages, as alphabet is one of the foundational aspects of studying any new language.",
"Similarly, phonetic pronunciation guides are often widely available.",
"These resources are also easily accessible programmatically, making them ideal for converting large quantities of supervised training data, for instance, the espkea-ng tool supports phonemization of more than 100 languages and accents.",
"Beyond the ease of creating transliterations, we emphasize that this technique does not require any data annotation or collection of parallel data.",
"Thus, it can be utilized in any existing translation system.",
"How can additional transliterated inputs be incorporated into modern machine translation architectures?",
"Since each alternative signal could capture a different view of the original input, in addition to training on each of the individual alternative signal alone, we investigate different approaches to combining them.",
"Straight Concatenation The simplest combination strategy is to concatenate different input signals and separate them by a special token.",
"For instance, to combine the original and phonetic input, we re-arrange the input to be of the format: [original input] [SEP] [phonetic input] .",
"During training, the decoder explicitly attends to tokens in both input signals.",
"The advantage of this method is that no architectural change is required as all modi-fication is operated on the input data.",
"However, as the concatenated input becomes longer, this method requires more computation to train compared to the baseline model trained on the original input only.",
"Multi-Encoder Architectures Prior works have found multi-encoder architecture to be effective for multi-source machine translation (Nishimura et al., 2018).",
"To cope with input from different sources, each encoder in the multi-encoder architecture deals with one type of input.",
"To attend to multiple encoders on the decoder side, four cross-attention mechanisms can be adopted.",
"We direct the reader to Appendix A for a detailed description of these attention variations.",
"Although prior work investigates the efficacy of this approach, it is a complicated model choice requiring non-trivial architectural changes.",
"Multi-Source Ensemble Ensembles are usually employed to boost the performance of a translation system.",
"In a standard setup, each ensemble component is trained with identical configuration except for the random seed.",
"We generalize this method to multi-source ensemble, i.e., individual ensemble components are trained on different transliterated inputs.",
"During inference time, each component is fed with the type of transliteration it was trained on and produces the predicted log probabilities, which are averaged over all components for the subsequent decoding process.",
"It is important for models trained on different source signals to have the same target vocabulary so that the average of log probabilities can happen.",
"Unlike the previous two methods, this approach requires training multiple full models, thus requiring even more computation.",
"Multi-Source Self-Ensemble Ensembling models that are trained on different input transliterations has the advantage that each individual model is maximally simple only the input data for training changes.",
"However, it comes with the downside that multiple different models need to be trained.",
"This creates challenges particularly when models grow in size, as a new model would need to be created for each different transliterated input.",
"Instead, we propose the Multi-Source Self-Ensemble , which has all the advantages of traditional ensembling, but only requires one model to be trained.",
"Previous works in self-ensembles have focused on model robustness (Liu et al., 2018), which is distinct from varying input representations.",
"Other work creates inputs in different languages (Fan et al., 2020), but have to use a translation model to create those inputs first.",
"target sentence.",
"Concretely, the model is trained on the mixture of various input signals, each preceded by a special language token indicating which type of signal this input belongs to.",
"At inference time, the alternative transliterated signals of the same test sentence are fed to the same model and the log probabilities produced by these separate passes are averaged as in multi-source ensemble.",
"This approach is simple to implement as it requires no architectural change, meaning the transliterated inputs we propose can be added seamlessly to any existing translation library.",
"Unlike multi-source ensemble, only one model needs to be trained, stored and loaded for inference, greatly simplifying the ensembling process and increasing the scalability of our approach (particularly as translation models increase in size).",
"To enforce fair comparison between multi-source self-ensemble and multi-source ensemble, we scale the former so that it has the same number of parameters as that of all ensemble components of the latter.",
"For the purpose of minimally impacting inference speed, the scaling is done only to the encoder embedding dimension so that the decoder remains the same.",
"Dataset We train our model on two language families: Indic and Turkic.",
"The Indic dataset is from the WAT MultiIndic MT task 2 , including 10 Indic languages and in total around 11 million Indic-English bi-texts.",
"Six of the Indic languages are Indo-Aryan languages and the rest are Dravidian languages.",
"All of these languages use a different writing system.",
"The Turkic dataset is collected from the open parallel corpus (Tiedemann, 2012) 3 .",
"For relatively high-resourced language Turkish, we randomly select 4 million subset from the CCAligned (El-Kishky et al., 2020) corpus.",
"Within this dataset, two languages use Cyrillic alphabet (Kazakh and Kyrgyz) and the rest use Latin alphabet.",
"Detailed dataset statistics are displayed in Table 7 in Appendix B. Single-input model To test the effectiveness of each input signal, we train models on each single type of input: original input (BASE ), phonetic input (IPA), romanized input (ROMANI ) or input all expressed in the script of a language within the same language family (INSCRIP ).",
"On the Indic dataset, 2 https://lotus.kuee.kyoto-u.ac.jp/WAT/ indic-multilingual/index.html 3 https://opus.nlpl.eu/ 5294 93 M parameters 2 93 M parameters Indic Turkic Indic Turkic Single-input Original Standard Ensemble BASE 33.6 20.3 BASE +B ASE 34.5 21.1 Single-input Alternative Multi-Source Ensemble IPA 32.7 17.9 BASE +IPA 34.3 20.9 ROMANI 32.5 20.7 BASE +R OMANI 34.4 21.4 INSCRIP 33.4 20.5 BASE +I NSCRIP 34.5 21.5 Multi-Source Self-Ensemble Multi-Source Self-Ensemble BASE +IPA 34.1 20.5 BASE +IPA 35.7 21.9 BASE +R OMANI 33.8 20.9 BASE +R OMANI 35.7 22.2 BASE +I NSCRIP 34.2 21.3 BASE +I NSCRIP 35.8 22.4 Table 1: BLEU scores on Indic test set and FloRes Turkic Devtest set.",
"for the INSCRIP signal, all Indo-Aryan languages are transliterated into Hindi script, and all Dravidian languages into Tamil script.",
"On the Turkic dataset, all languages in Latin script are transliterated into Cyrillic script.",
"Multi-Source Ensemble A baseline for ensembling models trained on different signals is the standard ensemble (BASE +B ASE ) where two BASE models are ensembled, each trained with a different random seed.",
"Although there are multiple combinations of input signals, we only discuss the cases where BASE is combined with one of {IPA, ROMANI , INSCRIP }, since in our preliminary experiments, we found dropping the BASE model leads to significantly degraded performance.",
"Multi-Source Self-Ensemble Similar to above, we train a single model on the mixture of original input and one of {IPA, ROMANI , INSCRIP } input for multi-source self-ensemble.",
"To enforce fair comparisons with the ensembled models, which have more parameters in total, we train two sizes of the self-ensemble (SE) model, one having the same size of a single baseline model, the other scaled to have twice the number of parameters of a single BASE model.",
"Data Preprocessing We use espeak-ng 4 to convert the original input to phonetic input.",
"For Indic languages, we use indic-trans 5 (Bhat et al., 2015) to obtain the romanized as well as the in-family transliterated input.",
"On the 4 https://github.com/espeak-ng/ espeak-ng 5 https://github.com/libindic/ indic-trans Turkic dataset, we manually align the Cyrillic and Latin alphabet and substitute the letter(s) in one script with the corresponding one in another.",
"6 The Indic languages are tokenized with indic_nlp_library and the rest are tokenized with mosesdecoder 7 .",
"We use sentencepiece 8 to create 32K BPE (Sennrich et al., 2016) subword vocabularies for each type of input signal.",
"Examples longer than 250 tokens are discarded.",
"We merge the source dictionaries of different signals by dropping duplicated tokens, while keeping the decoder dictionaries all the same in order to compute the average log probabilities in ensemble settings.",
"Training & Evaluation We train many-to-En language directions during training (10 and 5 directions for Indic and Turkic dataset respectively).",
"The architecture is a standard 6-layer encoder 6-layer decoder Transformer model, with 512 embedding dimension and 2048 hidden dimension in the default setting.",
"For the scaled self-ensemble model, we increase the encoder hidden dimension such that the number of parameters in this model approximately matches that of n baseline models ( n = 2 for results in Table 1).",
"We use 4000 warmup steps and learning rate 0 .",
"0003 .",
"Both the dropout and attention dropout rate are set to 0 .",
"2 .",
"Label smoothing is set to 0.1.",
"Data from different language pairs are sampled with 1.5 temperature sampling.",
"We 6 The substitution process starts from the letter in the target script that corresponds to the most number of letters in the source script.",
"train all models for 18 epochs and 40 epochs for Indic and Turkic dataset respectively and evaluate the best checkpoint selected by dev loss.",
"We use spBLEU 9 (Goyal et al., 2021; Guzmn et al., 2019) to compute the BLEU scores.",
"10 4 Results In this section, we compare the performance of our proposed multi-source self-ensemble model to various alternative ways of input combinations on two low-resource language families: Indic and Turkic languages.",
"Furthermore, we show multi-source self-ensemble learns faster and generates more consistent and accurate translations.",
"Our method is based on the hypothesis that incorporating alternative inputs increases the token overlap of source languages, which benefits the transfer during training.",
"To verify this, we compute average sentence-level uni-gram overlap of all source language pairs (Table 2) and find that alternative signals do have higher token overlap compared to the original input.",
"For instance, the IPA signal, having similar average sentence length as BASE , has much higher token overlap (0.15 vs. 0.03).",
"Do increased token overlaps result in better translation performance?",
"We train models on each of the alternative inputs alone and report the results in the left column of Table 1.",
"We find that using only one alternative input in the source has either worse or similar performance as the original baseline, indicating higher token overlap among source languages does not guarantee better BLEU scores.",
"The degraded performance is likely due to unfavorable interference introduced by shared tokens in 9 https://github.com/facebookresearch/ flores#spm-bleu 10 While prior work (Kocmi et al., 2021) has shown better correlation between neural metrics and human ratings, there have not been extensive evaluations for low-resource languages, especially for systems dealing with various writing scripts.",
"Therefore we use spBLEU which is consistent with previous works (Goyal et al., 2021).",
"the alternative signals.",
"The interference may create information loss 11 or increased ambiguity 12 , which reinforces the importance of combining alternative inputs with the original input.",
"Due to undesired interference exhibited in the alternative input spaces, we therefore adopt the input combination using our proposed Multi-Source Self-Ensemble to combine the original input and alternative signals.",
"Results in left lower part of Table 1 demonstrate improvements over the single-input baseline.",
"Our best performing alternative input configuration improves +1.0 BLEU on Turkic languages and +0.6 BLEU on Indic languages for 93M parameter models.",
"In production, model ensembles are often employed to achieve the best possible performance.",
"This is usually done by training multiple models each initialized with a different random seed (Baw-den et al., 2020; Tran et al., 2021b), and averaging the predicted next token probabilities at inference time.",
"We also provide results against these strong ensemble baselines and observe +1.3 BLEU improvements on both Indic and Turkic languages.",
"Note that, to enforce a fair comparison, we compare a scaled version of the multi-source self-ensemble model which has the same number of parameters as multiple ensemble baseline components.",
"11 For example, the punctuation marks are lost during phonemization process.",
"12 For instance, multiple words may have the same pronunciation and thus have the same input in IPA form, which makes the learning harder.",
"Architectural Simplicity.",
"As introduced in 2.2, there are various ways to incorporate multiple inputs, such as concatenation to form a longer input or using multiple encoders networks.",
"In Table 3, we show that using multiple encoders has no improvements over the comparable baseline with raw text input, and straight concatenation only brings marginal gains (+0.1 BLEU).",
"Further, our simple but effective Multi-Source Self-Ensemble technique reaches the same performance as that of a much larger quad-encoder model, which requires non-trivial architectural changes and takes more compute to train.",
"Thus, our technique is suitable to be used out of the box in any seq-to-seq library.",
"Faster Learning in Low-Resource Settings.",
"To understand how self-ensemble performs with different amounts of data, we plot the learning curve of both the baseline and the self-ensemble model on 5% 13 to 80% of the total Indic training set.",
"14 As 13 When the training set is very small ( 5% and 10% ), we train for 60 epochs and select the model by dev loss.",
"shown in Figure 2, the self-ensemble model outperforms the baseline model by a large margin when the amount of training data is small ( +5 BLEU when only 5% of the total set is used for train-ing).",
"This is the scenario for most low-resource languages, as the gap gradually closes when more data is available.",
"Overall, the multi-source self-ensemble model is consistently better than the baseline model irrespective of training data scale.",
"This suggests that transliteration can be a cheap and effective data augmentation approach when used in conjunction with multi-source self-ensemble.",
"Improved Output Consistency.",
"We conduct a deeper analysis to understand the performance improvement of Multi-Source Self-Ensembles beyond BLEU scores alone.",
"We find that our proposed technique generates much more consistent output, which could be a benefit of alternative signals transferring information more easily amongst source languages.",
"We propose consistency BLEU ( C-BLEU ) to quantify the consistency of multi-way evaluation output of a many-to-En translation model.",
"We treat the output of L 1 -En direction as reference and output of all other L i -En directions as hypothesis.",
"We compute this for all N source languages in the dataset, accounting for total N ( N 1) C-BLEU scores, then take the average of all (Table 4).",
"While training on IPA or ROMANI alone does not outperform the baseline in terms of C-BLEU, model trained on INSCRIP input improves the score by +1.3.",
"Self-ensemble over BASE and IPA increases the C-BLEU to 36.2 (and from 36.3 to 38.1 with scaled model), indicating the alternative signals are best trained together with the original input.",
"Improved Named Entity Accuracy.",
"The previous analysis implies the self-ensemble model outputs more consistent translation, yet this does not mean the consistent translations are accurate.",
"In this section, we conduct an analysis targeted at named entities.",
"We use spaCy (Honnibal et al., 2020) NER tagger to extract all named entities, and then compute the exact match of the extracted entities.",
"According to the results in Table 4, self-ensemble introduces small gains (+0.5) in terms of named entity F1 (NE-F1), whereas the scaled self-ensemble boosts NE-F1 score by +1.1 .",
"Although the improvement is small in aggregate, we find significant improvement when breaking down by entity type.",
"As shown in Figure 3, the multi-source self-ensemble model (without scaling) outperforms the baseline model on certain entity types, e.g., person, organization, time and event by a large margin.",
"Our work can be viewed as multilingual MT (Firat et al., 2016) combined with multi-source MT (Zoph and Knight, 2016), where the sources are not other languages but rather alternative transliterated signals.",
"The transliterated input has been explored in the past for translation system.",
"Nakov and Ng (2009) use transliteration as a preprocessing step for their phrase-based SMT model to tackle systematic spelling variation.",
"Both Chakravarthi et al. (2019) and Koneru et al. (2021) convert Dravidian languages to Latin script and train multilingual models with both source and target in Latin script; the latter identify code-switching to be a challenge during back-transliteration.",
"Besides converting to Latin script, Dabre et al. (2018) use another common script, Devanagari, for Indic languages.",
"In addition to the natural written scripts, previous works also explored artificial script, such as IPA.",
"Liu et al. (2019) incorporate phonetic representations, specifically for Chinese Pinyin, to cope with homophone noise.",
"Unlike our work, Chakravarthi et al. (2019) adopt transliteration to IPA for both the source and target.",
"Apart from transliterated input, other potential alternative signals we did not fully explored include orthographic syllable units (Kunchukuttan and Bhattacharyya, 2016, 2020), morpheme-based units (Ataman et al., 2017; Dhar et al., 2020), and character (Lee et al., 2017) or byte (Wang et al., 2019a) level input in addition to the subword-level units (Sennrich et al., 2016).",
"Multi-encoder architecture is the most common way to combine input from different sources.",
"While previous works mainly use additional encoders to encode syntactic information (Li et al., 2017; Currey and Heafield, 2018) or input in another language (Nishimura et al., 2018), we feed in each encoder with different signals of the same sentence.",
"Prior works also investigated approaches to combining input at different granularity (Ling et al., 2015; Chen et al., 2018; Casas et al., 2020).",
"Wang et al. (2019b) combine the decoupled lexical and semantic representations through an attention mechanism.",
"Another common method of utilizing additional input signal is multi-task learning, force the model to output extra labels (Luong et al., 2016; Grnroos et al., 2017).",
"Apart from combining the sources during training, inference-time ensemble (Garmash and Monz, 2016) is often adopted by recent submissions to shared MT tasks (Ng et al., 2019; Tran et al., 2021a).",
"The ensemble components are usually separate systems trained with different random initialization or language pairs.",
"Fan et al. (2020) ensemble the same 5298 model by feeding in source sentences in different languages.",
"The self-ensemble approach was also found to make networks more robust after adding random noises (Liu et al., 2018).",
"Prior work also uses the term \"self-ensemble\" to refer to an ensemble of models using weights from different time steps during training (Xu et al., 2020).",
"To overcome the low token-overlap issue exhibited in multilingual MT systems due to distinct writing system, we examined three alternative signals (phonetic, romanized and in-family transliterated input) and investigated four approaches (input concatenation, multi-encoder, multi-source ensemble, self-ensemble) to combining them with the original input.",
"Our results show that training a single model with a mixture of diverse signals and performing self-ensemble during inference time can improve BLEU by 1.3 points on Indic and Turkic dataset.",
"The improvements can reach +5 BLEU when training data size is small.",
"Further, we show this approach generate more accurate and consistent translation of named entities which greatly impacts the factuality accuracy of news translation.",
"We thank Shiyue Zhang, Xiang Zhou, Jean Mail-lard, Yixiao Song, and Marzena Karpinska for the helpful discussions during the course of this work."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"result",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"other",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"result",
"abstain",
"result",
"other"
] |
[
"In this work, we present an information-theoretic framework that formulates cross-lingual language model pre-training as maximizing mutual information between multilingual-multi-granularity texts.",
"The unified view helps us to better understand the existing methods for learning cross-lingual representations.",
"More importantly, inspired by the framework, we propose a new pretraining task based on contrastive learning.",
"Specifically, we regard a bilingual sentence pair as two views of the same meaning and encourage their encoded representations to be more similar than the negative examples.",
"By leveraging both monolingual and parallel corpora, we jointly train the pretext tasks to improve the cross-lingual transferability of pre-trained models.",
"Experimental results on several benchmarks show that our approach achieves considerably better performance.",
"The code and pre-trained models are available at https://aka.ms/infoxlm .",
"Learning cross-lingual language representations plays an important role in overcoming the language barrier of NLP models.",
"The recent success of cross-lingual language model pre-training (Devlin et al., 2019; Conneau and Lample, 2019; Conneau et al., 2020a; Chi et al., 2020; Liu et al., 2020) significantly improves the cross-lingual transferability in various downstream tasks, such as cross-lingual classification, and question answering.",
"State-of-the-art cross-lingual pre-trained models are typically built upon multilingual masked language modeling (MMLM; Devlin et al. 2019; Conneau et al. 2020a), and translation language modeling (TLM; Conneau and Lample 2019).",
"The goal of both pretext tasks is to predict masked tokens given input context.",
"The difference is that Contribution during internship at Microsoft Research.",
"MMLM uses monolingual text as input, while TLM feeds bilingual parallel sentences into the model.",
"Even without explicit encouragement of learning universal representations across languages, the derived models have shown promising abilities of cross-lingual transfer.",
"In this work, we formulate cross-lingual pretraining from a unified information-theoretic perspective.",
"Following the mutual information maximization principle (Hjelm et al., 2019; Kong et al., 2020), we show that the existing pretext tasks can be viewed as maximizing the lower bounds of mutual information between various multilingual-multi-granularity views.",
"Specifically, MMLM maximizes mutual information between the masked tokens and the context in the same language while the anchor points across languages encourages the correlation between cross-lingual contexts.",
"Moreover, we present that TLM can maximize mutual information between the masked tokens and the parallel context, which implicitly aligns encoded representations of different languages.",
"The unified information-theoretic framework also inspires us to propose a new cross-lingual pre-training task, named as cross-lingual contrast (XLCO ).",
"The model learns to distinguish the translation of an input sentence from a set of negative examples.",
"In comparison to TLM that maximizes token-sequence mutual information, XLCO maximizes sequence-level mutual information between translation pairs which are regarded as cross-lingual views of the same meaning.",
"We employ the momentum contrast (He et al., 2020) to realize XLCO .",
"We also propose the mixup contrast and conduct the contrast on the universal layer to further facilitate the cross-lingual transferability.",
"Under the presented framework, we develop a cross-lingual pre-trained model (INFOXLM) to leverage both monolingual and parallel corpora.",
"We jointly train INFOXLM with MMLM, TLM and XLCO .",
"We conduct extensive experiments on several cross-lingual understanding tasks, including cross-lingual natural language inference (Con-neau et al., 2018), cross-lingual question answering (Lewis et al., 2020), and cross-lingual sentence retrieval (Artetxe and Schwenk, 2019).",
"Experimental results show that INFOXLM outperforms strong baselines on all the benchmarks.",
"Moreover, the analysis indicates that INFOXLM achieves better cross-lingual transferability.",
"Multilingual BERT (mBERT; Devlin et al. 2019) is pre-trained with the multilingual masked language modeling (MMLM) task on the monolingual text.",
"mBERT produces cross-lingual representations and performs cross-lingual tasks surprisingly well (Wu and Dredze, 2019).",
"XLM (Conneau and Lample, 2019) extends mBERT with the translation language modeling (TLM) task so that the model can learn cross-lingual representations from parallel corpora.",
"Unicoder (Huang et al., 2019) tries several pre-training tasks to utilize parallel corpora.",
"ALM (Yang et al., 2020) extends TLM to code-switched sequences obtained from translation pairs.",
"XLM-R (Conneau et al., 2020a) scales up MMLM pre-training with larger corpus and longer training.",
"LaBSE (Feng et al., 2020) learns cross-lingual sentence embeddings by an additive translation ranking loss.",
"In addition to learning cross-lingual encoders, several pre-trained models focus on generation.",
"MASS (Song et al., 2019) and mBART (Liu et al., 2020) pretrain sequence-to-sequence models to improve machine translation.",
"XNLG (Chi et al., 2020) focuses on the cross-lingual transfer of language generation, such as cross-lingual question generation, and abstractive summarization.",
"Various methods have successfully learned visual or language representations by maximizing mutual information between different views of input.",
"It is difficult to directly maximize mutual information.",
"In practice, the methods resort to a tractable lower bound as the estimator, such as InfoNCE (Oord et al., 2018), and the variational form of the KL divergence (Nguyen et al., 2010).",
"The estimators are also known as contrastive learning (Arora et al., 2019) that measures the representation similarities between the sampled positive and negative pairs.",
"In addition to the estimators, various view pairs are employed in these methods.",
"The view pair can be the local and global features of an image (Hjelm et al., 2019; Bachman et al., 2019), the random data augmentations of the same image (Tian et al., 2019; He et al., 2020; Chen et al., 2020), or different parts of a sequence (Oord et al., 2018; Henaff, 2020; Kong et al., 2020).",
"Kong et al. (2020) show that learning word embeddings or contextual embeddings can also be unified under the framework of mutual information maximization.",
"In representation learning, the learned representations are expected to preserve the information of the original input data.",
"However, it is intractable to directly model the mutual information between the input data and the representations.",
"Alternatively, we can maximize the mutual information between the representations from different views of the input data, e.g., different parts of a sentence, a translation pair of the same meaning.",
"In this section, we start from a unified information-theoretic perspective, and formulate cross-lingual pre-training with the mutual information maximization principle.",
"Then, under the information-theoretic framework, we propose a new cross-lingual pre-training task, named as cross-lingual contrast (XLCO ).",
"Finally, we present the pre-training procedure of our INFOXLM.",
"The goal of multilingual masked language modeling (MMLM; Devlin et al. 2019) is to recover the masked tokens from a randomly masked sequence.",
"For each input sequence of MMLM, we sample a text from the monolingual corpus for pretraining.",
"Let ( c 1 , x 1 ) denote a monolingual text sequence, where x 1 is the masked token, and c 1 is the corresponding context.",
"Intuitively, we need to maximize their dependency (i.e., I ( c 1 ; x 1 ) ), so that the context representations are predictive for masked tokens (Kong et al., 2020).",
"For the example pair ( c 1 , x 1 ) , we construct a set N that contains x 1 and |N | 1 negative samples drawn from a proposal distribution q .",
"According to the InfoNCE (Oord et al., 2018) lower bound, we have: I ( c 1 ; x 1 ) (cid:62) E q ( N ) (cid:20) log f ( c 1 , x 1 ) (cid:80) x (cid:48) N f ( c 1 , x (cid:48) ) (cid:21) + log |N | (1) where f is a function that scores whether the input c 1 and x 1 is a positive pair.",
"Given context c 1 , MMLM learns to minimize the cross-entropy loss of the masked token x 1 : LMMLM = log exp( g T ( c 1 ) (cid:62) g E ( x 1 )) (cid:80) x (cid:48) V exp( g T ( c 1 ) (cid:62) g E ( x (cid:48) )) (2) where V is the vocabulary, g E is a look-up function that returns the token embeddings, g T is a Transformer that returns the final hidden vectors in position of x 1 .",
"According to Equation (1) and Equation (2), if N = V and f ( c 1 , x 1 ) = exp( g T ( c 1 ) (cid:62) g E ( x 1 )) , we can find that MMLM maximizes a lower bound of I ( c 1 ; x 1 ) .",
"Next, we explain why MMLM can implicitly learn cross-lingual representations.",
"Let ( c 2 , x 2 ) denote a MMLM instance that is in different language as ( c 1 , x 1 ) .",
"Because the vocabulary, the position embedding, and special tokens are shared across languages, it is common to find anchor points (Pires et al., 2019; Dufter and Schutze, 2020) where x 1 = x 2 (such as subword, punctuation, and digit) or I ( x 1 , x 2 ) is positive (i.e., the representations are associated or isomorphic).",
"With the bridge effect of { x 1 , x 2 } , MMLM obtains a v-structure dependency c 1 { x 1 , x 2 } c 2 , which leads to a negative co-information (i.e., interaction information) I ( c 1 ; c 2 ; { x 1 , x 2 } ) (Tsujishita, 1995).",
"Specifically, the negative value of I ( c 1 ; c 2 ; { x 1 , x 2 } ) indicates that the variable { x 1 , x 2 } enhances the correlation between c 1 and c 2 (Fano, 1963).",
"In summary, although MMLM learns to maximize I ( c 1 , x 1 ) and I ( c 2 , x 2 ) in each language, we argue that the task encourages the cross-lingual correlation of learned representations.",
"Notice that for the setting without word-piece overlap (Artetxe et al., 2020; Conneau et al., 2020b; K et al., 2020), we hypothesize that the information bottleneck principle (Tishby and Zaslavsky, 2015) tends to transform the cross-lingual structural similarity into isomorphic representations, which has similar bridge effects as the anchor points.",
"Then we can explain how the cross-lingual ability is spread out as above.",
"We leave more discussions about the setting without word-piece overlap for future work.",
"Similar to MMLM, the goal of translation language modeling (TLM; Conneau and Lample 2019) is also to predict masked tokens, but the prediction is conditioned on the concatenation of a translation pair.",
"We try to explain how TLM pre-training enhances cross-lingual transfer from an information-theoretic perspective.",
"Let c 1 and c 2 denote a translation pair of sentences, and x 1 a masked token taken in c 1 .",
"So c 1 and x 1 are in the same language, while c 1 and c 2 are in different ones.",
"Following the derivations of MMLM in Section 3.1, the objective of TLM is maximizing the lower bound of mutual information I ( c 1 , c 2 ; x 1 ) .",
"By re-writing the above mutual information, we have: I ( c 1 , c 2 ; x 1 ) = I ( c 1 ; x 1 ) + I ( c 2 ; x 1 | c 1 ) (3) The first term I ( c 1 ; x 1 ) corresponds to MMLM, which learns to use monolingual context.",
"In contrast, the second term I ( c 2 ; x 1 | c 1 ) indicates cross-lingual mutual information between c 2 and x 1 that is not included by c 1 .",
"In other words, I ( c 2 ; x 1 | c 1 ) encourages the model to predict masked tokens by using the context in a different language.",
"In conclusion, TLM learns to utilize the context in both languages, which implicitly improves the cross-lingual transferability of pre-trained models.",
"Inspired by the unified information-theoretic framework, we propose a new cross-lingual pre-training task, named as cross-lingual contrast (XLCO ).",
"The goal of XLCO is to maximize mutual information between the representations of parallel sentences c 1 and c 2 , i.e., I ( c 1 , c 2 ) .",
"Unlike maximizing token-sequence mutual information in MMLM and TLM, XLCO targets at cross-lingual sequence-level mutual information.",
"where N is a set that contains the positive pair c 2 and |N | 1 negative samples.",
"In order to maximize the lower bound of I ( c 1 ; c 2 ) , we need to design the function f that measures the similarity between the input sentence and the proposal distribution q ( N ) .",
"Specifically, we implement $f$ as $f(c_1, c_2) = g(c_1)^{\top} g(c_2)$ (5), where $g$ is the Transformer encoder that we are pre-training.",
"Following (Devlin et al., 2019), a special token [CLS] is added to the input, whose hidden vector is used as the sequence representation.",
"Additionally, we use a linear projection head after the encoder in g .",
"Momentum Contrast Another design choice is how to construct N .",
"As shown in Equation (4), a large |N | improves the tightness of the lower bound, which has been proven to be critical for contrastive learning (Chen et al., 2020).",
"In our work, we employ the momentum contrast (He et al., 2020) to construct the set N , where the previously encoded sentences are progressively reused as negative samples.",
"Specifically, we construct two encoders with the same architecture which are the query encoder g Q and the key encoder g K .",
"The loss function of XLCO is: $\mathcal{L}_{\text{XLCO}} = -\log \frac{\exp(g_Q(c_1)^{\top} g_K(c_2))}{\sum_{c' \in \mathcal{N}} \exp(g_Q(c_1)^{\top} g_K(c'))}$ (6). During training, the query encoder $g_Q$ encodes $c_1$ and is updated by backpropagation.",
"The key encoder g K encodes N and is learned with momentum update (He et al., 2020) towards the query encoder.",
"The negative examples in N are organized as a queue, where a newly encoded example is added while the oldest one is popped from the queue.",
"We initialize the query encoder and the key encoder with the same parameters, and pre-fill the queue with a set of encoded examples until it reaches the desired size |N | .",
"Notice that the size of the queue remains constant during training.",
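As a concrete illustration of the momentum-contrast design above, the following is a minimal PyTorch-style sketch of the XLCO loss (Eq. 6) with a key-encoder momentum update and a FIFO queue of negatives. It is a hedged sketch, not the authors' implementation: `encoder_q` and `encoder_k` are placeholder encoders assumed to map a batch of sentences to one vector each (the projected [CLS] state), and the queue handling is simplified.

```python
import torch
import torch.nn.functional as F

MOMENTUM = 0.9999    # momentum coefficient reported for the base model
QUEUE_SIZE = 131072  # |N|; the queue size stays constant during training

@torch.no_grad()
def momentum_update(encoder_q, encoder_k, m=MOMENTUM):
    # Move the key encoder slowly towards the query encoder.
    for p_q, p_k in zip(encoder_q.parameters(), encoder_k.parameters()):
        p_k.data.mul_(m).add_(p_q.data, alpha=1.0 - m)

def xlco_step(encoder_q, encoder_k, c1, c2, queue):
    """One XLCO step. c1, c2: a batch of parallel sentences (whatever
    input form the encoders accept); queue: (QUEUE_SIZE, dim) negatives."""
    q = encoder_q(c1)                    # (batch, dim), receives gradients
    with torch.no_grad():
        k = encoder_k(c2)                # (batch, dim), no gradients
    pos = (q * k).sum(dim=-1, keepdim=True)   # g_Q(c1)^T g_K(c2), (batch, 1)
    neg = q @ queue.t()                       # scores against queued negatives
    logits = torch.cat([pos, neg], dim=1)     # the positive sits at index 0
    labels = torch.zeros(q.size(0), dtype=torch.long, device=q.device)
    loss = F.cross_entropy(logits, labels)    # -log softmax, matching Eq. (6)
    queue = torch.cat([queue[k.size(0):], k.detach()], dim=0)  # FIFO update
    return loss, queue
```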
"Mixup Contrast For each pair, we concatenate it with a randomly sampled translation pair from another parallel corpus.",
"For example, consider the pairs (cid:104) c 1 , c 2 (cid:105) and (cid:104) d 1 , d 2 (cid:105) sampled from two different parallel corpora.",
"The two pairs are concatenated in a random order, such as (cid:104) c 1 d 1 , c 2 d 2 (cid:105) , and (cid:104) c 1 d 2 , d 1 c 2 (cid:105) .",
"The data augmentation of mixup encourages pre-trained models to learn sentence boundaries and to distinguish the order of multilingual texts.",
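A hypothetical sketch of this mixup construction, assuming raw sentence strings; the two concatenation orders mirror the ⟨c1 d1, c2 d2⟩ and ⟨c1 d2, d1 c2⟩ examples above.

```python
import random

def mixup_pair(pair_c, pair_d):
    """pair_c = (c1, c2) and pair_d = (d1, d2), sampled from two different
    parallel corpora; returns one concatenated pair in a random order."""
    (c1, c2), (d1, d2) = pair_c, pair_d
    if random.random() < 0.5:
        return (c1 + " " + d1, c2 + " " + d2)   # <c1 d1, c2 d2>
    return (c1 + " " + d2, d1 + " " + c2)       # <c1 d2, d1 c2>
```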
"Contrast on Universal Layer As a pre-training task maximizing the lower bound of sequence-level mutual information, XLCO is usually jointly learned with token-sequence tasks, such as MMLM, and TLM.",
"In order to make XLCO more compatible with the other pretext tasks, we propose to conduct contrastive learning on the most universal (or transferable) layer in terms of MMLM and TLM.",
"Previous analysis (Sabet et al., 2020; Dufter and Schutze, 2020; Conneau et al., 2020b) shows that specific middle layers of MMLM learn more universal representations and work better on cross-lingual retrieval tasks than other layers, and we choose the contrast layer following the same principle.",
"In our implementations, we use the hidden vectors of [CLS] at layer 8 to perform contrastive learning for base-size (12-layer) models, and at layer 12 for large-size (24-layer) models.",
"The intuition behind the method is that MMLM and TLM encourage the last layer to produce language-distinguishable token representations because of the masked token classification.",
"But XLCO tends to learn similar representations across languages.",
"So we do not directly use the hidden states of the last layer in XLCO .",
"We pretrain a cross-lingual model INFOXLM by jointly maximizing the lower bounds of three types of mutual information, including monolingual token-sequence mutual information (MMLM), cross-lingual token-sequence mutual information (TLM), and cross-lingual sequence-level mutual information (XLCO ).",
"Formally, the loss of cross-lingual pre-training in INFOXLM is defined as: L = LMMLM + LTLM + LXLCO (7) where we apply the same weight for the loss terms.",
"Both TLM and XLCO use parallel data.",
"The number of bilingual pairs increases with the square of the number of languages.",
"In our work, we set English as the pivot language following (Conneau and Lample, 2019), i.e., we only use the parallel corpora that contain English.",
"In order to balance the data size between high-resource and low-resource languages, we apply a multilingual sampling strategy (Conneau and Lample, 2019) for both monolingual and parallel data.",
"An example in language $l$ is sampled with probability $p_l \propto (n_l / n)^{0.7}$, where $n_l$ is the number of instances in language $l$, and $n$ is the total number of instances.",
"Empirically, the sampling algorithm alleviates the bias towards high-resource languages (Conneau et al., 2020a).",
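A small sketch of that sampling rule; the language sizes below are made-up numbers for illustration.

```python
def sampling_probs(sizes, alpha=0.7):
    """sizes: language -> number of instances n_l; returns normalized
    sampling probabilities proportional to (n_l / n) ** alpha."""
    n = sum(sizes.values())
    weights = {lang: (n_l / n) ** alpha for lang, n_l in sizes.items()}
    z = sum(weights.values())
    return {lang: w / z for lang, w in weights.items()}

# Example: a low-resource language gets more than its raw 1% share.
probs = sampling_probs({"en": 990_000, "sw": 10_000})
```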
"In this section, we first present the training configuration of INFOXLM.",
"Then we compare the fine-tuning results of INFOXLM with previous work on three cross-lingual understanding tasks.",
"We also conduct ablation studies to understand the major components of INFOXLM.",
"Corpus We use the same pre-training corpora as previous models (Conneau et al., 2020a; Conneau and Lample, 2019).",
"Specifically, we reconstruct CC-100 (Conneau et al., 2020a) for MMLM, retaining the 94 languages whose corpora are larger than 0.1GB.",
"Following Conneau and Lample (2019), for the TLM and XLCO tasks, we employ parallel data in 14 language pairs that involve English.",
"We collect translation pairs from MultiUN (Ziemski et al., 2016), IIT Bombay (Kunchukuttan et al., 2018), OPUS (Tiedemann, 2012), and WikiMatrix (Schwenk et al., 2019).",
"The size of parallel corpora is about 42GB.",
"More details about the pre-training data are described in the appendix.",
"Model Size We follow the model configurations of XLM-R (Conneau et al., 2020a).",
"For the Transformer (Vaswani et al., 2017) architecture, we use 12 layers and 768 hidden states for INFOXLM (i.e., base size), and 24 layers and 1,024 hidden states for INFOXLMLARGE (i.e., large size).",
"Hyperparameters We initialize the parameters of INFOXLM with XLM-R.",
"We optimize the model with Adam (Kingma and Ba, 2015) using a batch size of 2048 for a total of 150K steps for INFOXLM, and 200K steps for INFOXLMLARGE .",
"The same number of training examples are fed to three tasks.",
"The learning rate is scheduled with linear decay and 10K warmup steps, where the peak learning rate is set as 0.0002 for INFOXLM and 0.0001 for INFOXLMLARGE.",
"The momentum coefficient is set as 0.9999 for INFOXLM and 0.999 for INFOXLMLARGE.",
"The length of the queue is set as 131,072.",
"The training procedure takes about 2.3 days on 2 Nvidia DGX-2 stations for INFOXLM, and 5 days on 16 Nvidia DGX-2 stations for INFOXLMLARGE.",
"Details about the pre-training hyperparameters can be found in the appendix.",
"We conduct experiments over three cross-lingual understanding tasks, i.e., cross-lingual natural language inference, cross-lingual sentence retrieval, and cross-lingual question answering.",
"Cross-Lingual Natural Language Inference The Cross-Lingual Natural Language Inference corpus (XNLI; Conneau et al. 2018) is a widely used cross-lingual classification benchmark.",
"The goal of NLI is to identify the relationship of an input sentence pair.",
"We evaluate the models under the following two settings.",
"(1) Cross-Lingual Transfer: fine-tuning the model with the English training set and directly evaluating on the multilingual test sets.",
"(2) Translate-Train-All: fine-tuning the model with the English training data and the pseudo data that are translated from English to the other languages.",
"Cross-Lingual Sentence Retrieval The goal of the cross-lingual sentence retrieval task is to extract parallel sentences from bilingual comparable corpora.",
"We use the subset of 36 language pairs of the Tatoeba dataset (Artetxe and Schwenk, 2019) for the task.",
"The dataset is collected from Tatoeba, which is an open collection of multilingual parallel sentences in more than 300 languages.",
"Following (Hu et al., 2020), we use the averaged hidden vectors in the seventh Transformer layer to compute cosine similarity for sentence retrieval.",
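The retrieval protocol can be sketched as follows. This is a hedged illustration, assuming an encoder that exposes per-layer hidden states so that index 7 corresponds to the seventh Transformer layer; it is not the authors' evaluation code.

```python
import torch
import torch.nn.functional as F

def sentence_vectors(layer7_states, attention_mask):
    """Mean-pool the layer-7 hidden states over non-padding tokens.
    layer7_states: (batch, seq, dim); attention_mask: (batch, seq)."""
    mask = attention_mask.unsqueeze(-1).float()
    return (layer7_states * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1.0)

def top1_retrieve(src_vecs, tgt_vecs):
    """For each source sentence, return the index of the most similar
    target sentence under cosine similarity (top-1 retrieval)."""
    src = F.normalize(src_vecs, dim=-1)
    tgt = F.normalize(tgt_vecs, dim=-1)
    return (src @ tgt.t()).argmax(dim=-1)
```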
"Cross-Lingual Question Answering We use the Multilingual Question Answering (MLQA; Lewis et al. 2020) dataset for the cross-lingual QA task.",
"MLQA provides development and test data in seven languages in the format of SQuAD v1.1 (Rajpurkar et al., 2016).",
"We follow the fine-tuning method introduced in (Devlin et al., 2019) that concatenates the question-passage pair as the input.",
"We compare INFOXLM with the following pretrained Transformer models: (1) Multilingual BERT (MBERT; Devlin et al. 2019) is pre-trained with MMLM on Wikipedia in 102 languages; (2) XLM (Conneau and Lample, 2019) pretrains both the MMLM and TLM tasks on Wikipedia in 100 languages; (3) XLM-R (Conneau et al., 2020a) scales up MMLM to the large CC-100 corpus in 100 languages with many more training steps; (4) UNICODER (Liang et al., 2020) continues training XLM-R with MMLM and TLM; (5) INFOXLM −XLCO continues training XLM-R with MMLM and TLM, using the same pre-training datasets as INFOXLM.",
"Cross-Lingual Natural Language Inference Table 1 reports the classification accuracy on each XNLI test set under the above evaluation settings.",
"The final scores on test set are averaged over five random seeds.",
"INFOXLM outperforms all baseline models on the two evaluation settings of XNLI.",
"In the cross-lingual transfer setting, INFOXLM achieves 76.5 averaged accuracy, outperforming XLM-R (reimpl) by 1.5.",
"Similar improvements can be observed for large-size models.",
"Moreover, the −XLCO ablation results show that cross-lingual contrast is helpful for zero-shot transfer in most languages.",
"We also find that INFOXLM improves the results in the translate-train-all setting.",
"In Table 2 and Table 3, we report the top-1 accuracy scores of cross-lingual sentence retrieval with the base-size models.",
"The evaluation results demonstrate that INFOXLM produces better aligned cross-lingual sentence representations.",
"On the 14 language pairs that are covered by parallel data, INFOXLM obtains 77.8 and 80.6 averaged top-1 accuracy in the xx→en and en→xx directions, outperforming XLM-R by 20.2 and 21.1, respectively.",
"Even on the 22 language pairs that are not covered by parallel data, INFOXLM outperforms XLM-R on 16 out of 22 language pairs, providing 8.1% improvement in averaged accuracy.",
"In comparison, the ablation variant −XLCO (i.e., MMLM + TLM) obtains better results than XLM-R in Table 2, while performing worse than XLM-R in Table 3.",
"The results indicate that XLCO encourages the model to learn universal representations even on the language pairs without parallel supervision.",
"Cross-Lingual Question Answering Table 4 compares INFOXLM with baseline models on MLQA, where we report the F1 and the exact match (EM) scores on each test set.",
"Both INFOXLM and INFOXLMLARGE obtain the best results against the four baselines.",
"In addition, the results of the ablation variant −XLCO indicate that the proposed cross-lingual contrast is beneficial on MLQA.",
| Model | Direction | ar | bg | zh | de | el | fr | hi | ru | es | sw | th | tr | ur | vi | Avg |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| XLM-R | xx→en | 36.8 | 67.6 | 60.7 | 89.9 | 53.7 | 74.1 | 54.2 | 72.5 | 74.0 | 18.7 | 38.3 | 61.1 | 36.6 | 68.4 | 57.6 |
| INFOXLM | xx→en | 59.0 | 78.6 | 86.3 | 93.9 | 62.1 | 79.4 | 87.1 | 83.8 | 88.2 | 39.5 | 84.9 | 83.3 | 73.0 | 89.6 | 77.8 |
| −XLCO | xx→en | 42.9 | 65.5 | 69.5 | 91.1 | 55.6 | 76.4 | 71.6 | 74.9 | 74.8 | 20.5 | 68.1 | 69.8 | 51.6 | 81.8 | 65.3 |
| XLM-R | en→xx | 38.6 | 69.9 | 60.3 | 89.4 | 57.3 | 74.3 | 49.3 | 73.0 | 74.6 | 14.4 | 58.4 | 64.0 | 36.9 | 72.5 | 59.5 |
| INFOXLM | en→xx | 68.6 | 78.6 | 86.4 | 95.1 | 72.6 | 84.0 | 88.3 | 85.7 | 87.2 | 40.8 | 91.2 | 84.7 | 73.3 | 92.0 | 80.6 |
| −XLCO | en→xx | 45.4 | 64.0 | 69.3 | 88.1 | 56.5 | 72.3 | 69.6 | 73.6 | 71.5 | 22.1 | 79.7 | 64.3 | 48.2 | 79.8 | 64.6 |
"Table 2: Evaluation results on Tatoeba cross-lingual sentence retrieval (top-1 accuracy).",
"To understand INFOXLM and the cross-lingual contrast task more deeply, we conduct analysis from the perspectives of cross-lingual transfer and cross-lingual representations.",
"Furthermore, we perform comprehensive ablation studies on the major components of INFOXLM, including the cross-lingual pre-training tasks, mixup contrast, the contrast layer, and the momentum contrast.",
"To reduce the computation load, we use INFO XLM15 in our ablation studies, which is trained on 15 languages for 100K steps.",
"Cross-Lingual Transfer Gap Cross-lingual transfer gap (Hu et al., 2020) is the difference between the performance on the English test set and the averaged performance on the test sets of all other languages.",
"A lower cross-lingual transfer gap score indicates more end-task knowledge from the English training set is transferred to other languages.",
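The gap itself is a one-line computation; this sketch assumes a simple mapping from language codes to test scores.

```python
def transfer_gap(scores):
    """scores: language code -> test score, including 'en'; returns the
    English score minus the average over all other languages."""
    others = [s for lang, s in scores.items() if lang != "en"]
    return scores["en"] - sum(others) / len(others)
```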
"In Table 5, we compare the cross-lingual transfer gap scores of INFOXLM with baseline models on MLQA and XNLI.",
"Note that we do not include the results of XLM because it is pre-trained on only 15 languages.",
"The results show that INFOXLM reduces the gap scores on both MLQA and XNLI, providing better cross-lingual transferability than the baselines.",
"In addition to cross-lingual transfer, learning good cross-lingual representations is also the goal of cross-lingual pre-training.",
"In order to analyze how the cross-lingual contrast task affects the alignment of the learned cross-lingual representations, we evaluate the representations of different middle layers on the Tatoeba test sets of the 14 languages that are covered by parallel data.",
"Figure 1 presents the averaged top-1 accuracy of cross-lingual sentence retrieval in the xx→en direction.",
"INFOXLM outperforms XLM-R on all of the 12 layers, demonstrating that our proposed task improves the cross-lingual alignment of the learned representations.",
"From the results of XLM-R, we observe that the model suffers from a performance drop in the last few layers.",
"The reason is that MMLM encourages the representations of the last hidden layer to be similar to token embeddings, which is contradictory with the goal of learning cross-lingual representations.",
"In contrast, INFOXLM still provides high retrieval accuracy at the last few layers, which indicates that INFOXLM provides better aligned representations than XLM-R.",
| Model | en | es | de | ar | hi | vi | zh | Avg |
|---|---|---|---|---|---|---|---|---|
| MBERT* | 77.7 / 65.2 | 64.3 / 46.6 | 57.9 / 44.3 | 45.7 / 29.8 | 43.8 / 29.7 | 57.1 / 38.6 | 57.5 / 37.3 | 57.7 / 41.6 |
| XLM* | 74.9 / 62.4 | 68.0 / 49.8 | 62.2 / 47.6 | 54.8 / 36.3 | 48.8 / 27.3 | 61.4 / 41.8 | 61.1 / 39.6 | 61.6 / 43.5 |
| UNICODER | 80.6 / − | 68.6 / − | 62.7 / − | 57.8 / − | 62.7 / − | 67.5 / − | 62.1 / − | 66.0 / − |
| XLM-R | 77.1 / 64.6 | 67.4 / 49.6 | 60.9 / 46.7 | 54.9 / 36.6 | 59.4 / 42.9 | 64.5 / 44.7 | 61.8 / 39.3 | 63.7 / 46.3 |
| XLM-R (reimpl) | 80.2 / 67.0 | 67.7 / 49.9 | 62.1 / 47.7 | 56.1 / 37.2 | 61.1 / 44.0 | 67.0 / 46.3 | 61.4 / 38.5 | 65.1 / 47.2 |
| INFOXLM | 81.6 / 68.3 | 69.8 / 51.6 | 64.3 / 49.4 | 60.6 / 40.9 | 65.2 / 47.1 | 70.2 / 49.0 | 64.8 / 41.3 | 68.1 / 49.6 |
| −XLCO | 81.2 / 68.1 | 69.6 / 51.9 | 64.0 / 49.3 | 59.7 / 40.2 | 64.0 / 46.3 | 69.3 / 48.0 | 64.1 / 40.6 | 67.4 / 49.2 |
| XLM-RLARGE | 80.6 / 67.8 | 74.1 / 56.0 | 68.5 / 53.6 | 63.1 / 43.5 | 69.2 / 51.6 | 71.3 / 50.9 | 68.0 / 45.4 | 70.7 / 52.7 |
| XLM-RLARGE (reimpl) | 84.0 / 71.1 | 74.4 / 56.4 | 70.2 / 55.0 | 66.5 / 46.3 | 71.1 / 53.2 | 74.4 / 53.5 | 68.6 / 44.6 | 72.7 / 54.3 |
| INFOXLMLARGE | 84.5 / 71.6 | 75.1 / 57.3 | 71.2 / 56.2 | 67.6 / 47.6 | 72.5 / 54.2 | 75.2 / 54.1 | 69.2 / 45.4 | 73.6 / 55.2 |
"Table 4: Evaluation results (F1 / EM) on MLQA cross-lingual question answering.",
"Moreover, we find that the performance is further improved when removing TLM, demonstrating that XLCO is more effective than TLM for aligning cross-lingual representations, although TLM helps to improve zero-shot cross-lingual transfer.",
"Effect of Cross-Lingual Pre-training Tasks To better understand the effect of the cross-lingual pre-training tasks, we perform ablation studies on the pre-training tasks of INFOXLM, by removing XLCO , TLM, or both.",
"We present the experimental results in Table 7.",
"Comparing the results of −TLM and −XLCO with the results of −TLM −XLCO, we find that both XLCO and TLM effectively improve the cross-lingual transferability of the pre-trained INFOXLM model.",
"TLM is more effective for XNLI while XLCO is more effective for MLQA.",
"Moreover, the performance can be further improved by jointly learning XLCO and TLM.",
"Effect of Contrast on Universal Layer We examine whether conducting the contrast on the universal layer improves cross-lingual pretraining.",
"As shown in Table 6, we compare the evaluation results of four variants of INFOXLM, where XLCO is applied on layer 8 (i.e., the universal layer) or on layer 12 (i.e., the last layer).",
"We find that contrast on layer 8 provides better results for INFOXLM.",
"However, conducting XLCO on layer 12 performs better when the TLM task is excluded.",
"The results show that maximizing context-sequence (TLM) and sequence-level (XLCO ) mutual information at the last layer tends to interfere with each other.",
"Thus, we suggest applying XLCO on the universal layer for pre-training INFOXLM.",
"Effect of Mixup Contrast To evaluate the effect of mixup contrast, we pretrain a model that directly uses translation pairs for XLCO without mixup contrast (−TLM −Mixup).",
"As shown in Table 7, we present the evaluation results on XNLI and MLQA.",
"We observe that mixup contrast improves the performance of INFOXLM on both datasets.",
"Effect of Momentum Contrast In order to show whether our pre-trained model benefits from momentum contrast, we pretrain a revised version of INFOXLM without momentum contrast.",
"In other words, the parameters of the key encoder are always the same as the query encoder.",
"As shown in Table 7, we report the evaluation results (indicated by −TLM −Momentum) of removing momentum contrast on XNLI and MLQA.",
"We observe a performance drop after removing the momentum contrast from INFOXLM, which indicates that momentum contrast improves the learned language representations of INFOXLM.",
"In this paper, we present a cross-lingual pre-trained model INFOXLM that is trained with both monolingual and parallel corpora.",
"The model is motivated by the unified view of cross-lingual pretraining from an information-theoretic perspective.",
"Specifically, in addition to the masked language modeling and translation language modeling tasks, INFOXLM is jointly pre-trained with a newly introduced cross-lingual contrastive learning task.",
"The cross-lingual contrast leverages bilingual pairs as the two views of the same meaning, and encourages their encoded representations to be more similar than the negative examples.",
"Experimental results on several cross-lingual language understanding tasks show that INFOXLM can considerably improve the performance.",
"Currently, most NLP research and applications are English-centric, which makes it hard for non-English users to access NLP-related services.",
"Our work focuses on cross-lingual language model pretraining.",
"With the pre-trained model, we are able to transfer end-task knowledge from high-resource languages to low-resource languages, which helps to build more accessible NLP applications.",
"Additionally, incorporating parallel corpora into the pretraining procedure improves the training efficiency, which potentially reduces the computational cost for building multilingual NLP applications.",
"We appreciate the helpful discussions with Bo Zheng, Shaohan Huang, Shuming Ma, and Yue Cao.",
"H.H. is the corresponding author.",
"X.M. and H.H. are partly supported by National Key R&D Plan (No. 2018YFB1005100), National Natural Science Foundation of China (No. 61751201, 61602197 and 61772076), Natural Science Fund of Beijing (No. Z181100008918002), and the funds of Beijing Advanced Innovation Center for Language Resources (No. TYZ19005)."
] | [
"method",
"abstain",
"objective",
"method",
"result",
"result",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"method",
"method",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"method",
"objective",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"other",
"other",
"other"
] |
[
"In modular dialogue systems, natural language understanding (NLU) and natural language generation (NLG) are two critical components, where NLU extracts the semantics from the given texts and NLG is to construct corresponding natural language sentences based on the input semantic representations.",
"However, the dual property between understanding and generation has been rarely explored.",
"The prior work (Su et al., 2019) is the first attempt that utilized the duality between NLU and NLG to improve the performance via a dual supervised learning framework.",
"However, the prior work still learned both components in a supervised manner; instead, this paper introduces a general learning framework to effectively exploit such duality, providing flexibility of incorporating both supervised and unsupervised learning algorithms to train language understanding and generation models in a joint fashion.",
"The benchmark experiments demonstrate that the proposed approach is capable of boosting the performance of both NLU and NLG.",
"1 1 Introduction Spoken dialogue systems that assist users to solve complex tasks such as booking a movie ticket have become an emerging research topic in artificial intelligence and natural language processing areas.",
"With a well-designed dialogue system as an intelligent personal assistant, people can accomplish certain tasks more easily via natural language interactions.",
"Nowadays, there are several virtual intelligent assistants, such as Apple's Siri, Google Assistant, Microsoft's Cortana, and Amazon's Alexa.",
"The recent advance of deep learning has inspired many applications of neural dialogue systems (Wen et al., 2017; Bordes et al., 2017).",
"A typical dialogue system pipeline can be divided into several components: a speech recognizer that transcribes a user's speech input into texts, and a natural language understanding (NLU) module to classify the domain along with domain-specific intents and fill in a set of slots to form a semantic frame (Tur and De Mori, 2011; Hakkani-Tur et al., 2016).",
"(Footnote 1: The source code is available at: https://github. )",
"A dialogue state tracking (DST) module predicts the current dialogue state according to the multi-turn conversations, then the dialogue policy determines the system action for the next turn given the current dialogue state (Peng et al., 2018; Su et al., 2018a).",
"Finally, the semantic frame indicating the policy is fed into a natural language generation (NLG) module to construct a response utterance to the user (Wen et al., 2015b; Su et al., 2018b).",
"Generally, NLU is to extract core semantic concepts from the given utterances, while NLG is to construct corresponding sentences based on the given semantic representations.",
"However, the dual property between understanding and generation has rarely been investigated; Su et al. (2019) first introduced the duality into the typical supervised learning schemes to train these two models.",
"Different from the prior work, this paper proposes a general learning framework leveraging the duality between understanding and generation, providing flexibility of incorporating not only supervised but also unsupervised learning algorithms to jointly train NLU and NLG modules.",
"The contributions can be summarized as three-fold: (1) this paper proposes a general learning framework using the duality between NLU and NLG, where supervised and unsupervised learning can be flexibly incorporated for joint training; (2) this work is the first attempt to exploit the dual relationship between NLU and NLG towards unsupervised learning; (3) the benchmark experiments demonstrate the effectiveness of the proposed framework.",
"This paper focuses on modeling the duality between understanding and generation towards unsupervised learning of the two components; related work is summarized below.",
"Natural Language Understanding In dialogue systems, the first component is a natural language understanding (NLU) module, which parses user utterances into semantic frames that capture the core meaning (Tur and De Mori, 2011).",
"A typical NLU first determines the domain given the input utterances, predicts the intent, and then fills the associated slots (Hakkani-Tur et al., 2016; Chen et al., 2016).",
"However, the above work focused on single-turn interactions, where each utterance is treated independently.",
"To overcome the error propagation and further improve understanding performance, contextual information has been leveraged and shown useful (Chen et al., 2015; Sun et al., 2016; Shi et al., 2015; Weston et al., 2015).",
"Also, different speaker roles provided informative signal for capturing speaking behaviors and achieving better understanding performance (Chen et al., 2017; Su et al., 2018c).",
"Natural Language Generation NLG is another key component in dialogue systems, where the goal is to generate natural language sentences conditioned on the given semantics from the dialogue manager.",
"As an endpoint of interacting with users, the quality of generated sentences is crucial for better user experience.",
"In spite of robustness and adequacy of the rule-based methods, poor diversity makes talking to a template-based machine unsatisfactory.",
"Furthermore, scalability is an issue, because designing sophisticated rules for a specific domain is time-consuming.",
"Previous work proposed an RNNLM-based NLG that can be trained on any corpus of dialogue act-utterance pairs without hand-crafted features or any semantic alignment (Wen et al., 2015a).",
"The following work based on sequence-to-sequence (seq2seq) models further obtained better performance by employing encoder-decoder structure with linguistic knowledge such as syntax trees (Sutskever et al., 2014; Su et al., 2018b).",
"Dual Learning Various tasks may have diverse goals, which are usually independent of each other.",
"However, some tasks may hold a dual form; that is, we can swap the input and target of a task to formulate another task.",
"Such structural duality emerges as one of the important relationships for further investigation.",
"Two AI tasks are of structural duality if the goal of one task is to learn a function mapping from space X to Y, while the other's goal is to learn the reverse mapping from Y to X.",
"Machine translation is an example (Wu et al., 2016): translation from English to Chinese has a dual task, namely translation from Chinese to English; likewise, the goal of automatic speech recognition (ASR) is the opposite of that of text-to-speech (TTS) (Tjandra et al., 2017), and so on.",
"Previous work first exploited the duality of the task pairs and proposed supervised (Xia et al., 2017) and unsupervised (reinforcement learning) (He et al., 2016) learning frameworks.",
"These recent studies magnified the importance of the duality by revealing that exploiting it could boost the learning of both tasks.",
"Su et al. (2019) employed the dual supervised learning framework to train NLU and NLG and improve both models simultaneously.",
"Recently, Shen et al. (2019) improved models for conditional text generation using techniques from computational pragmatics.",
"The techniques formulated language production as a game between speakers and listeners, where a speaker should generate text which a listener can use to correctly identify the original input the text describes.",
"However, although the duality has been considered in the learning objective, the two models in previous work are still trained separately.",
"In contrast, this work proposes a general learning framework that trains the models jointly, so that unsupervised learning methods in this research field can be better explored.",
"In this section, we describe the problem formulation and the proposed learning framework, which is illustrated in Figure 1.",
"The problems we aim to solve are NLU and NLG; for both tasks, there are two spaces: the semantics space X and the natural language space Y .",
"NLG is to generate sentences associated with the given semantics, where the goal is to learn a mapping function f : X Y that transforms semantic representations into natural language.",
"On the other hand, NLU is to capture the core meaning of sentences, where the goal is to find a function g : Y → X that transforms natural language into the corresponding semantic representations.",
"Given $n$ data pairs $\{(x_i, y_i)\}_{i=1}^{n}$ i.i.d. sampled from the joint space $X \times Y$.",
"A typical strategy for the optimization problem is based on maximum likelihood estimation (MLE) of the parameterized conditional distributions with the trainable parameters $\theta_{x \to y}$ and $\theta_{y \to x}$ as below: $f(x; \theta_{x \to y}) = \arg\max_{y} P(y \mid x; \theta_{x \to y})$, $g(y; \theta_{y \to x}) = \arg\max_{x} P(x \mid y; \theta_{y \to x})$.",
"The E2E NLG challenge dataset (Novikova et al., 2017) 2 is adopted in our experiments, which is a crowd-sourced dataset of 50k instances in the restaurant domain.",
"Each instance is a pair of a semantic frame, containing specific slots and corresponding values, and an associated natural language utterance with the given semantics.",
"For example, a semantic frame with the slot-value pairs name[Bibimbap House], food[English], priceRange[moderate], area [riverside], near [Clare Hall] corresponds to the target sentence Bibimbap House is a moderately priced restaurant who's main cuisine is English food. You will find this local gem near Clare Hall in the Riverside area. .",
"Although the original dataset is for NLG, where the goal is to generate sentences based on the given slot-value pairs, we further formulate the NLU task as predicting slot-value pairs based on the utterances, which can be viewed as a multi-label classification problem where each possible slot-value pair is treated as an individual label. (Dataset: http://www.macs.hw.ac.uk/InteractionLab/E2E/)",
"The formulation is similar to the prior work (Su et al., 2019).",
"Although previous work has introduced learning schemes that exploit the duality of AI tasks, most of it was based on reinforcement learning or standard supervised learning, and the models of the primal and dual tasks (f and g, respectively) are trained separately.",
"Intuitively, if the models of primal and dual tasks are optimally learned, a complete cycle of transforming data from the original space to another space then back to the original space should be exactly the same as the original data, which could be viewed as the ultimate goal of a dual problem.",
"In our scenario, if we generate sentences from given semantics x via the function f and transform them back to the original semantics perfectly via the function g, it implies that our generated sentences are grounded in the original given semantics, which gives the mathematical condition: $g(f(x)) = x$.",
"Therefore, our objective is to achieve the perfect complete cycle of data transforming by training two dual models ( f and g ) in a joint manner.",
"As illustrated in Figure 1, the framework is composed of two parts: Primal Cycle and Dual Cycle .",
"Primal Cycle starts from semantic frames x: (1) it first transforms the semantic representation into sentences by the function f, (2) then computes the loss by the given loss function l1, (3) predicts the semantic meaning from the generated sentences, (4) computes the loss by the given loss function l2, and (5) finally trains the models based on the computed loss; Dual Cycle starts from utterances and is formulated symmetrically.",
Algorithm 1: Joint dual learning algorithm
1: Input: a mini-batch of $n$ data pairs $\{(x_i, y_i)\}_{i=1}^{n}$, the function of the primal task $f$, the function of the dual task $g$, the loss functions $l_1(\cdot)$ and $l_2(\cdot)$, and the learning rates $\gamma_1, \gamma_2$;
2: repeat
3:   Start from data $x$, transform $x$ by function $f$: $f(x_i; \theta_{x \to y})$;  ▷ Primal Cycle
4:   Compute the loss by $l_1(\cdot)$;
5:   Transform the output of the primal task by function $g$: $g(f(x_i; \theta_{x \to y}); \theta_{y \to x})$;
6:   Compute the loss by $l_2(\cdot)$;
7:   Update model parameters:
8:   $\theta_{x \to y} \leftarrow \theta_{x \to y} - \gamma_1 \nabla_{\theta_{x \to y}} \sum_{i=1}^{n} \big[ l_1(f(x_i; \theta_{x \to y})) + l_2(g(f(x_i; \theta_{x \to y}); \theta_{y \to x})) \big]$;
9:   $\theta_{y \to x} \leftarrow \theta_{y \to x} - \gamma_2 \nabla_{\theta_{y \to x}} \sum_{i=1}^{n} \big[ l_2(g(f(x_i; \theta_{x \to y}); \theta_{y \to x})) \big]$;
10:  Start from data $y$, transform $y$ by function $g$: $g(y_i; \theta_{y \to x})$;  ▷ Dual Cycle
11:  Compute the loss by $l_2(\cdot)$;
12:  Transform the output of the dual task by function $f$: $f(g(y_i; \theta_{y \to x}); \theta_{x \to y})$;
13:  Compute the loss by $l_1(\cdot)$;
14:  Update model parameters:
15:  $\theta_{y \to x} \leftarrow \theta_{y \to x} - \gamma_2 \nabla_{\theta_{y \to x}} \sum_{i=1}^{n} \big[ l_2(g(y_i; \theta_{y \to x})) + l_1(f(g(y_i; \theta_{y \to x}); \theta_{x \to y})) \big]$;
16:  $\theta_{x \to y} \leftarrow \theta_{x \to y} - \gamma_1 \nabla_{\theta_{x \to y}} \sum_{i=1}^{n} \big[ l_1(f(g(y_i; \theta_{y \to x}); \theta_{x \to y})) \big]$;
17: until convergence
"The learning algorithm is described in Algorithm 1, which is agnostic to types of learning objective.",
"Either a supervised learning objective or an unsupervised learning objective can be conducted at the end of the training cycles, and the whole framework can be trained in an end-to-end manner.",
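For concreteness, the following is a minimal PyTorch-style sketch of one Primal Cycle step from Algorithm 1, under the assumption that the two modules are connected differentiably (via the Straight-Through estimator or distribution inputs described later); `nlg`, `nlu`, and the optimizer names are illustrative, not the paper's code.

```python
def primal_cycle_step(nlg, nlu, x, y, l1, l2, opt_f, opt_g):
    """nlg: f (semantics -> text), nlu: g (text -> semantics); l1, l2 are
    the loss functions; opt_f/opt_g optimize theta_{x->y}/theta_{y->x}."""
    y_hat = nlg(x)               # f(x; theta_{x->y})
    loss1 = l1(y_hat, y)         # loss in the middle of the cycle
    x_hat = nlu(y_hat)           # g(f(x); theta_{y->x})
    loss2 = l2(x_hat, x)         # loss at the end of the cycle
    opt_f.zero_grad()
    opt_g.zero_grad()
    # theta_{x->y} receives gradients from both losses (line 8 of Alg. 1),
    # theta_{y->x} only from loss2 (line 9), since loss1 does not touch g.
    (loss1 + loss2).backward()
    opt_f.step()
    opt_g.step()

# The Dual Cycle is symmetric: start from y, apply nlu then nlg, computing
# l2 in the middle and l1 at the end (lines 10-16 of Algorithm 1).
```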
"As the language understanding task in our experiments is to predict corresponding slot-value pairs of utterances, which is a multi-label classification problem, we utilized the binary cross entropy loss as the supervised objective function for NLU.",
"Likewise, the cross entropy loss function is used as the supervised objective for NLG.",
"Take NLG for example: the objective of the model is to optimize the conditional probability of predicting word tokens given semantics, $p(y \mid x)$, so that the difference between the predicted distribution and the target distribution $q(y \mid x)$ can be minimized: $-\sum^{n} \sum_{y} q(y \mid x) \log p(y \mid x)$, (1) where $n$ is the number of samples.",
"On the other hand, we can also introduce the reinforcement learning objective into our framework, the objective aims to maximize the expected value of accumulated reward.",
"In our experiments, we conduct policy gradient (REINFORCE) method (Sutton et al., 2000) for optimization, the gradient could be written as: E [ r ] = E [ r ( y ) log p ( y | x )] , (2) where the variety of reward r will be elaborated in the next section.",
"The loss function $l_1$ for both tasks could be (1), (2), or a combination of them.",
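A hedged sketch of the REINFORCE objective in (2): minimizing the loss below performs gradient ascent on the expected reward, treating the reward as a constant with respect to the parameters (PyTorch tensors assumed).

```python
def reinforce_loss(log_probs, rewards):
    """log_probs: (batch,) summed log p(y|x) of sampled outputs;
    rewards: (batch,) scalar reward r(y) per sample (e.g., BLEU or the
    reconstruction likelihood). Gradient is -E[r(y) * grad log p(y|x)]."""
    return -(rewards.detach() * log_probs).mean()
```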
"Different types of rewards reflect various objectives and would result in different behaviors in the learned policy.",
"Hence, we design various reward functions to explore the model behavior, including explicit and implicit feedback.",
"Reconstruction Likelihood In our scenario, if we generate sentences based on given semantics x by the function f and can transform them back to the original semantics perfectly by the function g, it implies that our generated sentences are grounded in the original given semantics.",
"Therefore, we use the reconstruction likelihood at the end of the training cycles as a reward function: $r = \begin{cases} \log p(x \mid f(x_i; \theta_{x \to y}); \theta_{y \to x}) & \text{Primal}, \\ \log p(y \mid g(y_i; \theta_{y \to x}); \theta_{x \to y}) & \text{Dual}. \end{cases}$",
"Automatic Evaluation Score The goal of most NLP tasks is to predict word tokens correctly, so the loss functions used to train these models focus on the word level, such as cross entropy maximizing the continuous probability distribution of the next correct word given the preceding context.",
"However, the performance of these models is typically evaluated using discrete metrics.",
"For instance, BLEU and ROUGE measure n-gram overlaps between the generated outputs and the reference texts.",
"In order to enforce our NLG to generate better results in terms of the evaluation metrics, we utilize these automatic metrics as rewards to provide the sentence-level information.",
"Moreover, we also leverage the F-score in our NLU model to indicate the understanding performance.",
"In addition to explicit signals like reconstruction likelihood and the automatic evaluation metrics, a softer feedback signal may be informative.",
"For both tasks, we design model-based methods estimating data distribution in order to provide such soft feedback.",
"Language Model For NLG, we utilize pretrained language models which estimate the whole data distribution to compute the joint probability of generated sentences, measuring their naturalness and fluency.",
"In this work, we use a simple language model based on RNN (Mikolov et al., 2010; Sundermeyer et al., 2012).",
"The language model is learned with a cross entropy objective in an unsupervised manner: $p(y) = \prod_{i}^{L} p(y_i \mid y_1, \ldots, y_{i-1}; \theta_y)$, (3) where $y_i$ are the words in a sentence $y$, and $L$ is the length of the utterance.",
"Masked Autoencoder for Distribution Estimation (MADE) For NLU, the output contains a set of discrete labels, which do not fit the sequential model scenarios such as language models.",
"Each semantic frame x in our work contains the core concept of a certain sentence; furthermore, the slot-value pairs are not independent of each other, because they correspond to the same individual utterance.",
"For example, McDonald's would probably be inexpensive; therefore the correlation should be taken into account when estimating the joint distribution.",
"Following Su et al. (2019), we measure the soft feedback signal for NLU using masked autoencoder (Germain et al., 2015) to estimate the joint distribution.",
"By interrupting certain connections between hidden layers, we can enforce the variable unit $x_d$ to depend only on a specific set of variables $S_d$, not necessarily on $x_{<d}$; eventually we still obtain the joint distribution by the product rule: $p(x) = \prod_{d}^{D} p(x_d \mid S_d)$, where $d$ is the index of a variable unit, $D$ is the total number of variables, and $S_d$ is a specific set of variable units.",
"Because there is no explicit rule specifying the exact dependencies between slot-value pairs in our data, we consider various dependencies by ensembles of multiple decomposition by sampling different sets S d and averaging the results.",
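A minimal sketch of MADE-style mask construction (one hidden layer), which realizes the sets $S_d$: each sampled degree assignment yields one valid decomposition, and sampling several of them gives the ensemble described above. The sizes are illustrative assumptions (79 matches the label count used later in the experiments).

```python
import numpy as np

def made_masks(D, H, rng):
    """Build connectivity masks so that output unit d depends only on a
    subset S_d of the inputs. D: number of variables; H: hidden units."""
    m_in = rng.permutation(D) + 1            # input degrees, a random order
    m_hid = rng.integers(1, D, size=H)       # hidden degrees in [1, D-1]
    mask_in = (m_hid[:, None] >= m_in[None, :]).astype(float)   # (H, D)
    mask_out = (m_in[:, None] > m_hid[None, :]).astype(float)   # (D, H)
    return mask_in, mask_out

rng = np.random.default_rng(0)
# Different seeds give different orderings S_d for the ensemble average.
mask_in, mask_out = made_masks(D=79, H=200, rng=rng)
```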
"The proposed framework provides great flexibility for designing and extending the learning scheme, as described below.",
"Straight-Through Estimator In many NLP tasks, the learning targets are discrete, so the goal is to predict discrete labels such as words.",
"In practice, we perform argmax operations on the output distribution of learned models to select the most probable candidates.",
"However, such an operation has no gradient, preventing the networks from being trained via back-propagation.",
"Therefore, it is difficult to directly connect a primal task (NLU in our scenario) and a dual task (NLG in our scenario) and jointly train these two models due to the above issue.",
"The Straight-Through (ST) estimator (Bengio et al., 2013) is a widely applied method due to its simplicity and effectiveness.",
"The idea of Straight-Through estimator is directly using the gradients of discrete samples as the gradients of the distribution parameters.",
"Because discrete samples can be generated as the output of hard threshold functions or some operations on the continuous distribution, Bengio et al. (2013) explained the estimator by setting the gradients of hard threshold functions to 1. The structure of the Straight-Through estimator is illustrated in Figure 2. In this work, we introduce the ST estimator for connecting the two models, so that the gradient can be estimated and the two models can be jointly trained in an end-to-end manner.",
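A common PyTorch realization of this trick (a sketch, not necessarily the authors' exact implementation): the forward pass emits the discrete one-hot argmax, while the backward pass passes gradients through to the soft distribution as if the operation were the identity.

```python
import torch

def straight_through(probs):
    """probs: (batch, num_classes) output distribution of one model.
    Forward value is the one-hot argmax; gradients flow into `probs`."""
    index = probs.argmax(dim=-1, keepdim=True)
    hard = torch.zeros_like(probs).scatter_(-1, index, 1.0)
    return hard + probs - probs.detach()
```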
"Distribution as Input In addition to employing the Straight-Through estimator, an alternative solution is to use continuous distribution as the input of models.",
"For NLU, the inputs are the word tokens from NLG, so we use the predicted distribution over the vocabulary to perform the weighted-sum of word embeddings.",
"Figure 2: Straight-Through estimator.",
"For NLG, the model requires semantic frame vectors predicted by NLU as the input condition; in this case, the probability distribution of slot-value pairs predicted by NLU can directly serve as the input vector.",
"By utilizing the output distribution in this way, two models can be trained jointly in an end-to-end fashion.",
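A sketch of this alternative on the NLG-to-NLU side: instead of discrete tokens, NLU consumes the expected word embedding under NLG's output distribution (on the NLU-to-NLG side, the label distribution is simply fed as the condition vector). The names are illustrative.

```python
import torch

def soft_word_inputs(word_probs, embedding):
    """word_probs: (batch, seq, vocab) from NLG; embedding: nn.Embedding
    of NLU. Returns weighted sums of embeddings, (batch, seq, dim)."""
    return word_probs @ embedding.weight
```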
"Hybrid Objective As described before, the proposed approach is agnostic to learning algorithms; in other words, we could apply different learning algorithms at the middle and end of the cycles.",
"For example, we could apply supervised learning on NLU in the first half of Primal Cycle and reinforcement learning on NLG to form a hybrid training cycle.",
"Because two models are trained jointly, the objective applied on one model would potentially impact on the behavior of the other.",
"Furthermore, we could also apply multiple objective functions including supervised or unsupervised ones to formulate multi-objective learning schemes.",
"Towards Unsupervised Learning Because the whole framework can be trained jointly and propagate the gradients, we could apply only one objective in one learning cycle at the end of it.",
"Specifically, in Algorithm 1, we can apply only l 2 in line 8 and only l 1 in line 15.",
"Such flexibility potentially enables us to train the models on unpaired data in an unsupervised manner.",
"For example, we can sample unpaired data x, transform it by the function f, feed the result into the function g, and then compare the predicted results with the original input to compute the loss.",
"Likewise, we can perform the training cycle symmetrically from y .",
"It is also possible to utilize limited data and perform the autoencoding cycle described above to apply semi-supervised learning.",
"Our models are trained on the official training set and verified on the official testing set of the E2E NLG challenge dataset (Novikova et al., 2017).",
"The data preprocessing includes trimming punctuation marks, lemmatization, and turning all words into lowercase.",
"Each possible slot-value pair is treated as an individual label and the total number of labels is 79.",
"To evaluate the quality of the generated sequences regarding both precision and recall, for NLG, the evaluation metrics include BLEU and ROUGE (1, 2, L) scores with multiple references, while F1 measure is reported for evaluating NLU.",
"The proposed framework and algorithm are agnostic to model structures.",
"In our experiments, we use a gated recurrent unit (GRU) (Cho et al., 2014) with fully-connected layers at both ends of the GRU for both NLU and NLG, as illustrated in the right part of Figure 1. Thus the models take semantic frame representations as initial and final hidden states and sentences as the sequential input.",
"In all experiments, we use mini-batch Adam as the optimizer with each batch of 64 examples.",
"We train for 10 epochs without early stopping; the hidden size of the network layers is 200, and the word embeddings are of size 50.",
"The experimental results are shown in Table 1; each reported number is averaged over three runs on the official testing set.",
"Row (a) is the baseline where the NLU and NLG models are trained independently and separately by supervised learning.",
"The best performance in Su et al. (2019) is reported in row (b), where NLU and NLG are trained separately by supervised learning with regularization terms exploiting the duality.",
"To overcome the issue of non-differentiability, we introduce Straight-Through estimator when connecting two tasks.",
"Based on our framework, another baseline for comparison is to train the two models jointly by supervised loss and Straight-Through estimators, of which the performance is reported in row (c).",
"Specifically, the cross entropy loss (1) is utilized as both l1 and l2 in Algorithm 1.",
"(Table 1 columns: Learning Scheme; NLU: Micro-F; NLG: BLEU, ROUGE-1, ROUGE-2, ROUGE-L.)",
"Because the models in the proposed framework are trained jointly, the gradients are able to flow through the whole network, and thus the two models directly influence each other's learning.",
"Rows (d)-(f) show the ablation experiments for exploring the interaction between the two models (f and g).",
"For instance, row (e) does not use ST at the output of the NLU module; instead, we feed the continuous distribution over slot-value labels, rather than discrete semantic frames, into NLG as the input.",
"Instead of discrete word labels, rows (d) and (f) feed weighted sums over word embeddings based on the output distributions.",
"Since the goal of NLU is to learn a many-to-one function, considering all possibilities would potentially benefit learning (rows (d)-(f)).",
"On the contrary, the goal of NLG is to learn a one-to-many function, and applying the ST estimator at the output of NLU only, rather than at both sides, degrades the performance of generation (row (e)).",
"However, this model achieves an unexpected improvement in understanding by over 10%; the reason may be the following.",
"The semantic representation is very compact, so a slight noise in the semantics space can result in a large difference in the target space and a totally different semantic meaning.",
"Hence the continuous distribution over slot-value pairs may potentially cover the unseen mixture of semantics and further provide rich gradient signals.",
"This could also be explained from the perspective of data augmentation.",
"Moreover, connecting the two models with continuous distributions at both joints further achieves improvement in both NLU and NLG (row (f)).",
"Although row (f) performs best in our experiments and dataset, as most AI tasks are classification problems, the proposed framework with ST estimators provides a general way to connect two tasks with duality.",
"The proposed methods also significantly outperform the previously proposed dual supervised learning framework (Su et al., 2019) on F1 score of NLU and BLEU score of NLG, demonstrating the benefit of learning NLU and NLG jointly.",
"The proposed framework provides the flexibility of applying multiple objectives and different types of learning methods.",
"In our experiments, apart from training the two models jointly by supervised loss, reinforcement learning objectives are also incorporated into the training schemes (rows (g)-(l)).",
"The ultimate goal of reinforcement learning is to maximize the expected reward in (2).",
"In the proposed dual framework, taking the expectation over different distributions reflects different physical meanings.",
"For instance, if we receive a reward at the end of Primal Cycle and the expectation is taken over the output distribution of NLG (middle) or NLU (end), the derivatives of the objective functions differ: $\nabla \mathbb{E}[r] = \begin{cases} \mathbb{E}[r_i \nabla \log p(y_i \mid x; \theta_{x \to y})] & \text{RL}_{\text{mid}}, \\ \mathbb{E}[r_i \nabla \log p(x_i \mid f(x; \theta_{x \to y}); \theta_{y \to x})] & \text{RL}_{\text{end}}. \end{cases}$",
"The upper one ( RL mid ) assesses the expected reward earned by the sentences constructed by the policy of NLG, which is a direct signal for the primal task NLG.",
"The lower one ($\text{RL}_{\text{end}}$) estimates the expected reward earned by the semantics predicted by the policy of NLU based on the state.",
| | Baseline | Proposed |
|---|---|---|
| x | area[riverside], eatType[pub], name[blue spice] | area[riverside], eatType[pub], name[blue spice] |
| y | at the riverside there is a pub called the blue spice | at the riverside there is a pub called the blue spice |
| f(x; θx→y) | blue spice is a pub in riverside that has a price range of more than 30e | in riverside there is a pub called blue spice |
| g(f(x; θx→y); θy→x) | area[city centre], customer rating[5 out of 5], priceRange[more than 30], priceRange[cheap], name[blue spice], name[the vaults] | area[riverside], eatType[pub], name[blue spice] |
"Table 2: An example of the Primal Cycle comparing the baseline and the proposed model.",
"In the proposed framework, the models of two tasks are trained jointly, thus an objective function will simultaneously influence the learning of both models.",
"Different reward designs could guide reinforcement learning agents to different behaviors.",
"To explore the impact of the reinforcement learning signal, various rewards are applied on top of the joint framework (row (f)): 1. token-level likelihood (rows (g) and (h)); 2. sentence/frame-level automatic evaluation metrics (rows (i) and (j)); 3. corpus-level joint distribution estimation (rows (k) and (l)).",
"In other words, the models in rows (g)-(l) have both supervised and reinforcement learning signals.",
"The results show that token-level feedback may not provide extra guidance (rows (g) and (h)); directly optimizing towards the evaluation metrics used at the testing phase benefits learning in both tasks and performs best (rows (i) and (j)); and the models utilizing learning-based joint distribution estimation also obtain improvement (row (k)).",
"In sum, the explicit feedback is more useful for boosting the NLG performance, because the reconstruction and automatic scores directly reflect the generation quality.",
"However, the implicit feedback is more informative for improving NLU, where MADE captures the salient information for building better NLU models.",
"The results align well with the finding in Su et al. (2019).",
"Table 2 and Table 3 show selected examples of the proposed model and the baseline model in Primal and Dual Cycle.",
"As depicted in Algorithm 1, Primal Cycle is designed to start from semantic frames x , then transform the representation by the NLG model f , finally feed the generated sentences into the NLU model g and compare the results with the original input to compute loss.",
"In the example of Primal Cycle (Table 2), we can find that $g(f(x; \theta_{x \to y}); \theta_{y \to x})$ equals $x$, which means the proposed method can successfully restore the original semantics.",
"On the other hand, Dual Cycle starts from natural language utterances; from the generated results (Table 3) we can find that our proposed method does not lose semantic concepts in the middle of the training cycle ($g(y; \theta_{y \to x}) = x$).",
"Based on the qualitative analysis, we can find that by considering the duality into the objective and jointly training, the proposed framework can improve the performance of NLU and NLG simultaneously.",
"Though theoretically sound and empirically validated, the formulation of the proposed framework depends on the characteristics of data.",
"Not every NLU dataset is suitable for being used as an NLG task, and vice versa.",
"Moreover, although the proposed framework makes it possible to train the two models in a fully unsupervised manner, we found such training unstable and hard to optimize in our experiments.",
"Therefore, better dual learning algorithms, as well as leveraging pretrained models and other learning techniques such as adversarial learning, are worth exploring to improve the proposed framework.",
"We leave the potential exploration as the future work.",
"This paper proposes a general learning framework leveraging the duality between language understanding and generation, providing the flexibility of incorporating supervised and unsupervised learning algorithms to jointly train two models.",
"The proposed framework provides a potential method towards unsupervised learning of both language understanding and generation models by considering their data distribution.",
"The experiments on the benchmark dataset demonstrate that the proposed approach is capable of boosting the performance of both NLU and NLG models, motivating the potential research directions in this area.",
"We thank reviewers for their insightful comments.",
"This work was financially supported from the Young Scholar Fellowship Program by Ministry of Science and Technology (MOST) in Taiwan, under Grant 109-2636-E-002-026."
] | [
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"other",
"other"
] |
[
"We argue that semantic meanings of a sentence or clause can not be interpreted independently from the rest of a paragraph, or independently from all discourse relations and the overall paragraph-level discourse structure.",
"With the goal of improving implicit discourse relation classification, we introduce a paragraph-level neural network that models inter-dependencies between discourse units as well as discourse relation continuity and patterns, and predicts a sequence of discourse relations in a paragraph.",
"Experimental results show that our model outperforms the previous state-of-the-art systems on the benchmark corpus of PDTB.",
"PDTB-style discourse relations, mostly defined between two adjacent text spans (i.e., discourse units, either clauses or sentences), specify how two discourse units are logically connected (e.g., causal, contrast).",
"Recognizing discourse relations is one crucial step in discourse analysis and can be beneficial for many downstream NLP applications such as information extraction, machine translation and natural language generation.",
"Commonly, explicit discourse relations were distinguished from implicit ones, depending on whether a discourse connective (e.g., because and after) appears between two discourse units (Prasad et al., 2008a).",
"While explicit discourse relation detection can be framed as a discourse connective disambiguation problem (Pitler and Nenkova, 2009; Lin et al., 2014) and has achieved reasonable performance (F1 score > 90%), implicit discourse relations have no discourse connective and are especially difficult to identify (Lin et al., 2009, 2014; Xue et al., 2015).",
"To fill the gap, implicit discourse relation prediction has drawn significant research interest recently and progress has been made (Chen et al., 2016; Liu and Li, 2016) by modeling compositional meanings of two discourse units and exploiting word interactions between discourse units using neural tensor networks or attention mechanisms in neural nets.",
"However, most of existing approaches ignore wider paragraph-level contexts beyond the two discourse units that are examined for predicting a discourse relation in between.",
"To further improve implicit discourse relation prediction, we aim to improve discourse unit representations by positioning a discourse unit (DU) in its wider context of a paragraph.",
"The key observation is that semantic meaning of a DU can not be interpreted independently from the rest of the paragraph that contains it, or independently from the overall paragraph-level discourse structure that involve the DU.",
"Considering the following paragraph with four discourse relations, one relation between each two adjacent DUs: (1): [The Butler, Wis., manufacturer went pub-lic at $ 15.75 a share in August 1987,] DU 1 and (Explicit-Expansion) [Mr. Sim's goal then was a $ 29 per-share price by 1992.] DU 2 (Implicit-Expansion) [Strong earnings growth helped achieve that price far ahead of schedule, in August 1988.] DU 3 (Implicit-Comparison) [The stock has since softened, trading around $ 25 a share last week and closing yesterday at $ 23 in national over-the-counter trading.] DU 4 But (Explicit-Comparison) [Mr. Sim has set a fresh target of $ 50 a share by the end of reaching that goal.] DU 5 Clearly, each DU is an integral part of the paragraph and not independent from other units.",
"First , predicting a discourse relation may require understanding wider paragraph-level contexts beyond two relevant DUs and the overall discourse structure of a paragraph.",
"For example, the implicit Comparison discourse relation between DU3 and DU4 is difficult to identify without the back-141 ground information (the history of per-share price) introduced in DU1 and DU2.",
"Second , a DU may be involved in multiple discourse relations (e.g., DU4 is connected with both DU3 and DU5 with a Comparison relation), therefore the pragmatic meaning representation of a DU should reflect all the discourse relations the unit was involved in.",
"Third , implicit discourse relation prediction should benefit from modeling discourse relation continuity and patterns in a paragraph that involve easy-to-identify explicit discourse relations (e.g., Implicit-Comparison relation is followed by Explicit-Comparison in the above example).",
"Following these observations, we construct a neural net model to process a paragraph each time and jointly build meaning representations for all DUs in the paragraph.",
"The learned DU representations are used to predict a sequence of discourse relations in the paragraph, including both implicit and explicit relations.",
"Although explicit relations are not our focus, predicting an explicit relation will help to reveal the pragmatic roles of its two DUs and reconstruct their representations, which will facilitate predicting neighboring implicit discourse relations that involve one of the DUs.",
"In addition, we introduce two novel designs to further improve discourse relation classification performance of our paragraph-level neural net model.",
"First, previous work has indicated that recognizing explicit and implicit discourse relations requires different strategies, we therefore untie parameters in the discourse relation prediction layer of the neural networks and train two separate classifiers for predicting explicit and implicit discourse relations respectively.",
"This unique design has improved both implicit and explicit discourse relation identification performance.",
"Second, we add a CRF layer on top of the discourse relation prediction layer to fine-tune a sequence of predicted discourse relations by modeling discourse relation continuity and patterns in a paragraph.",
"Experimental results show that the intuitive paragraph-level discourse relation prediction model achieves improved performance on PDTB for both implicit discourse relation classification and explicit discourse relation classification.",
"Since the PDTB (Prasad et al., 2008b) corpus was created, a surge of studies (Pitler et al., 2009; Lin",
"et al., 2009; Liu et al., 2016; Rutherford and Xue, 2016) have been conducted for predicting discourse relations, primarily focusing on the challenging task of implicit discourse relation classification when no explicit discourse connective phrase was presented.",
"Early studies (Pitler et al., 2008; Lin et al., 2009, 2014; Rutherford and Xue, 2015) focused on extracting linguistic and semantic features from two discourse units.",
"Recent research (Zhang et al., 2015; Rutherford et al., 2016; Ji and Eisenstein, 2015; Ji et al., 2016) tried to model compositional meanings of two discourse units by exploiting interactions between words in two units with more and more complicated neural network models, including the ones using neural tensor (Chen et al., 2016; Qin et al., 2016; Lei et al., 2017) and attention mechanisms (Liu and Li, 2016; Lan et al., 2017; Zhou et al., 2016).",
"Another trend is to alleviate the shortage of annotated data by leveraging related external data, such as explicit discourse relations in PDTB (Liu et al., 2016; Lan et al., 2017; Qin et al., 2017) and unlabeled data obtained elsewhere (Rutherford and Xue, 2015; Lan et al., 2017), often in a multi-task joint learning framework.",
"However, nearly all the previous works assume that a pair of discourse units is independent from its wider paragraph-level contexts and build their discourse relation prediction models based on only two relevant discourse units.",
"In contrast, we model inter-dependencies of discourse units in a paragraph when building discourse unit representations; in addition, we model global continuity and patterns in a sequence of discourse relations, including both implicit and explicit relations.",
"Hierarchical neural network models (Liu and Lapata, 2017; Li et al., 2016) have been applied to RST-style discourse parsing (Carlson et al., 2003) mainly for the purpose of generating text-level hierarchical discourse structures.",
"In contrast, we use hierarchical neural network models to build context-aware sentence representations in order to improve implicit discourse relation prediction.",
"Abstracting latent representations from a long sequence of words, such as a paragraph, is a challenging task.",
"While several novel neural network models (Zhang et al., 2017b,a) have been introduced in recent years for encoding a paragraph, Recurrent Neural Network (RNN)-based 142 methods remain the most effective approaches.",
"RNNs, especially the long-short term memory (LSTM) (Hochreiter and Schmidhuber, 1997) models, have been widely used to encode a paragraph for machine translation (Sutskever et al., 2014), dialogue systems (Serban et al., 2016) and text summarization (Nallapati et al., 2016) because of its ability in modeling long-distance dependencies between words.",
"In addition, among four typical pooling methods (sum, mean, last and max) for calculating sentence representations from RNN-encoded hidden states for individual words, max-pooling along with bidirectional LSTM (Bi-LSTM) (Schuster and Paliwal, 1997) yields the current best universal sentence representation method (Conneau et al., 2017).",
"We adopted a similar neural network architecture for paragraph encoding.",
"Figure 1 illustrates the overall architecture of the discourse-level neural network model that consists of two Bi-LSTM layers, one max-pooling layer in between and one softmax prediction layer.",
"The input of the neural network model is a paragraph containing a sequence of discourse units, while the output is a sequence of discourse relations with one relation between each pair of adjacent discourse units 1 .",
"Given the words sequence of one paragraph as input, the lower Bi-LSTM layer will read the whole paragraph and calculate hidden states as word representations, and a max-pooling layer will be applied to abstract the representation of each discourse unit based on individual word representations.",
"Then another Bi-LSTM layer will run over the sequence of discourse unit representations and compute new representations by further modeling semantic dependencies between discourse units within paragraph.",
"The final softmax prediction layer will concatenate representations of two adjacent discourse units and predict the discourse relation between them.",
"model is a sequence of word vectors, one vector per word in the paragraph.",
"In this work, we used the pre-trained 300-dimension Google English word2vec embeddings 2 .",
"For each word that is not in the vocabulary of Google word2vec, we will randomly initialize a vector with each dimension sampled from the range [ 0 . 25 , 0 . 25] .",
"In addition, recognizing key entities and discourse connective phrases is important for discourse relation recognition, therefore, we concatenate the raw word embeddings with extra linguistic features, specifically one-hot Part-Of-Speech tag embeddings and one-hot named entity tag embeddings 3 .",
"Building Discourse Unit Representations: We aim to build discourse unit (DU) representations that sufficiently leverage cues for discourse relation prediction from paragraph-wide contexts, including the preceding and following discourse units in a paragraph.",
"To process long paragraph-wide contexts, we take a bottom-up two-level abstraction approach and progressively generate a compositional representation of each word first (low level) and then generate a compositional representation of each discourse unit (high level), with a max-pooling operation in between.",
"At both word-level and DU-level, we choose Bi-LSTM as our basic component for generating compositional representations, mainly considering its capability to capture long-distance dependencies between words (discourse units) and to incorporate influences of context words (discourse units) in each side.",
"Given a variable-length words sequence X = ( x 1 , x 2 , ..., x L ) in a paragraph, the word-level Bi-LSTM will process the input sequence by using two separate LSTMs, one process the word sequence from the left to right while the other follows the reversed direction.",
"Therefore, at each word position t , we obtain two hidden states h t , h t .",
"We concatenate them to get the word representation h t = [ h t , h t ] .",
"Then we apply max-pooling over the sequence of word representations for words in a discourse unit in order to get the discourse unit embedding: 2 Downloaded from https://docs.google.com/",
"uc?id=0B7XkCwpI5KDYNlNUTTlSS21pQmM 3 Our feature-rich word embeddings are of dimension 343, including 300 dimensions for word2vec embeddings + 36 dimensions for Part-Of-Speech (POS) tags + 7 dimensions for named entity tags.",
"We used the Stanford CoreNLP to generate POS tags and named entity tags.",
"MPDU [ j ] = DU end max i = DU start h i [ j ] (1) where, 1 j hidden node size (2) Next, the DU-level Bi-LSTM will process the sequence of discourse unit embeddings in a paragraph and generate two hidden states hDU t and hDU t at each discourse unit position.",
"We concatenate them to get the discourse unit representation hDU t = [ hDU t , hDU t ] .",
"The Softmax Prediction Layer: Finally, we concatenate two adjacent discourse unit representations hDU t 1 and hDU t and predict the discourse relation between them using a softmax function: y t 1 = softmax ( W y [ hDU t 1 , hDU t ] + b y ) (3) 3.2 Untie Parameters in the Softmax Prediction Layer (Implicit vs. Explicit) Previous work (Pitler and Nenkova, 2009; Lin et al., 2014; Rutherford and Xue, 2016) has reFigure 2: Untie Parameters in the Prediction Layer vealed that recognizing explicit vs. implicit discourse relations requires different strategies.",
"Note that in the PDTB dataset, explicit discourse relations were distinguished from implicit ones, depending on whether a discourse connective exists between two discourse units.",
"Therefore, explicit discourse relation detection can be simplified as a discourse connective phrase disambiguation problem (Pitler and Nenkova, 2009; Lin et al., 2014).",
"On the contrary, predicting an implicit discourse relation should rely on understanding the overall 144 contents of its two discourse units (Lin et al., 2014; Rutherford and Xue, 2016).",
"Considering the different natures of explicit vs. implicit discourse relation prediction, we decide to untie parameters at the final discourse relation prediction layer and train two softmax classifiers, as illustrated in Figure 2.",
"The two classifiers have different sets of parameters, with one classifier for only implicit discourse relations and the other for only explicit discourse relations.",
"The loss function used for the neural network training considers loss induced by both implicit relation prediction and explicit relation prediction:",
"Loss = Loss imp + Loss exp (5)",
"The , in the full system, is set to be 1, which means that minimizing the loss in predicting either type of discourse relations is equally important.",
"In the evaluation, we will also evaluate a system variant, where we will set = 0 , which means that the neural network will not attempt to predict explicit discourse relations and implicit discourse relation prediction will not be influenced by predicting neighboring explicit discourse relations.",
"Data analysis and many linguistic studies (Pitler et al., 2008; Asr and Demberg, 2012; Lascarides and Asher, 1993; Hobbs, 1985) have repeatedly shown that discourse relations feature continuity and patterns (e.g., a temporal relation is likely to be followed by another temporal relation).",
"Especially, Pitler et al. (2008) firstly reported that patterns exist between implicit discourse relations and their neighboring explicit discourse relations.",
"Motivated by these observations, we aim to improve implicit discourse relation detection by making use of easily identifiable explicit discourse relations and taking into account global patterns of discourse relation distributions.",
"Specifically, we add an extra CRF layer at the top of the softmax prediction layer (shown in figure",
"3) to fine-tune predicted discourse relations by considering their inter-dependencies.",
"The Conditional Random Fields (Lafferty et al., 2001) (CRF) layer updates a state transition matrix, which can effectively adjust the current la-Figure 3: Fine-tune Discourse Relations with a CRF layer.",
"bel depending on proceeding and following labels.",
"Both training and decoding of the CRF layer can be solved efficiently by using the Viterbi algorithm.",
"With the CRF layer, the model jointly assigns a sequence of discourse relations between each two adjacent discourse units in a paragraph, including both implicit and explicit relations, by considering relevant discourse unit representations as well as global discourse relation patterns.",
"The Penn Discourse Treebank (PDTB) : We experimented with PDTB v2.0 (Prasad et al., 2008b) which is the largest annotated corpus containing 36k discourse relations in 2,159 Wall Street Journal (WSJ) articles.",
"In this work, we focus on the top-level 4 discourse relation senses which are consist of four major semantic classes: Comparison (Comp), Contingency (Cont), Expansion (Exp) and Temporal (Temp).",
"We followed the same PDTB section partition (Rutherford and Xue, 2015) as previous work and used sections 2-20 as training set, sections 21-22 as test set, and sections 0-1 as development set.",
"Table 1 presents the data distributions we collected from PDTB.",
"Preprocessing : The PDTB dataset documents its annotations as a list of discourse relations, with each relation associated with its two discourse units.",
"To recover the paragraph context for a discourse relation, we match contents of its two annotated discourse units with all paragraphs in corresponding raw WSJ article.",
"When all the matching was completed, each paragraph was split into a sequence of discourse units, with one discourse relation (implicit or explicit) between each two ad-4 In PDTB, the sense label of discourse relation was annotated hierarchically with three levels.",
"jacent discourse units 5 .",
"Following this method, we obtained 14,309 paragraphs in total, each contains 3.2 discourse units on average.",
"Table 2 shows the distribution of paragraphs based on the number of discourse units in a paragraph.",
"We tuned the parameters based on the best performance on the development set.",
"We fixed the weights of word embeddings during training.",
"All the LSTMs in our neural network use the hidden state size of 300.",
"To avoid overfitting, we applied dropout (Hinton et al., 2012) with dropout ratio of 0.5 to both input and output of LSTM layers.",
"To prevent the exploding gradient problem in training LSTMs, we adopt gradient clipping with gradient L2-norm threshold of 5.0.",
"These parameters remain the same for all our proposed models as well as our own baseline models.",
"We chose the standard cross-entropy loss function for training our neural network model and adopted Adam (Kingma and Ba, 2014) optimizer with the initial learning rate of 5e-4 and a mini-batch size of 128 6 .",
"If one instance is annotated with two labels (4% of all instances), we use both of them in loss calculation and regard the prediction as correct if model predicts one of the annotated labels.",
"All the proposed models were imple-5 In several hundred discourse relations, one discourse unit is complex and can be further separated into two elementary discourse units, which can be illustrated as [DU1-DU2]-DU3.",
"We simplify such cases to be a relation between DU2 and DU3.",
"6 Counted as the number of discourse relations rather than paragraph instances.",
"mented with Pytorch 7 and converged to the best performance within 20-40 epochs.",
"To alleviate the influence of randomness in neural network model training and obtain stable experimental results, we ran each of the proposed models and our own baseline models ten times and report the average performance of each model instead of the best performance as reported in many previous works.",
"We compare the performance of our neural network model with several recent discourse relation recognition systems that only consider two relevant",
"relevant discourse units.",
"(Rutherford and Xue, 2015): improves implicit discourse relation prediction by creating more training instances from the Giga-word corpus utilizing explicitly mentioned discourse connective phrases.",
"(Chen et al., 2016): a gated relevance network (GRN) model with tensors to capture semantic interactions between words from two discourse units.",
"(Liu et al., 2016): a convolutional neural network model that leverages relations between different styles of discourse relations annotations (PDTB and RST (Carlson et al., 2003)) in a multi-task joint learning framework.",
"(Liu and Li, 2016): a multi-level attention-over-attention model to dynamically exploit features from two discourse units for recognizing an implicit discourse relation.",
"(Qin et al., 2017): a novel pipelined adversarial framework to enable an adaptive imitation competition between the implicit network and a rival feature discriminator with access to connectives.",
"(Lei et al., 2017): a Simple Word Interaction Model (SWIM) with tensors that captures both linear and quadratic relations between words from two discourse units.",
"(Lan et al., 2017): an attention-based LSTM neural network that leverages explicit discourse relations in PDTB and unannotated external data in a multi-task joint learning framework.",
"7 http://pytorch.org/ 146 Implicit Explicit Model Macro Acc Comp Cont Exp Temp Macro Acc (Rutherford and Xue, 2015) 40.50 57.10 ---(Liu et al., 2016) 44.98 57.27 ---(Liu and Li, 2016) 46.29 57.57 ---(Lei et al., 2017) 46.46 ----(Lan et al., 2017) 47.80 57.39 ---DU-pair level Discourse Relation Recognition (Our Own Baselines) Bi-LSTM 40.01 53.50 30.52 42.06 65.52 21.96 -+ tensors 45.36 57.18 36.88 44.85 68.70 30.74 -Paragraph level Discourse Relation Recognition Basic System Variant ( = 0 ) 47.56 56.88 37.12 46.47 67.72 38.92 -Basic System ( = 1 ) 48.10 57.52 37.33 47.89 68.39 38.80 91.93 92.89 + Untie Parameters 48.69 58.20 37.68 49.19 68.86 39.04 93.70 94.46 + the CRF Layer 48.82 57.44 37.72 49.39 67.45 40.70 93.21 93.98 Table 3: Multi-class Classification Results on PDTB.",
"On the PDTB corpus, both binary classification and multi-way classification settings are commonly used to evaluate the implicit discourse relation recognition performance.",
"We noticed that all the recent works report class-wise implicit relation prediction performance in the binary classification setting, while none of them report detailed performance in the multi-way classification setting.",
"In the binary classification setting, separate one-versus-all binary classifiers were trained, and each classifier is to identify one class of discourse relations.",
"Although separate classifiers are generally more flexible in combating with imbalanced distributions of discourse relation classes and obtain higher class-wise prediction performance, one pair of discourse units may be tagged with all four discourse relations without proper conflict resolution.",
"Therefore, the multi-way classification setting is more appropriate and natural in evaluating a practical end-to-end discourse parser, and we mainly evaluate our proposed models using the four-way multi-class classification setting.",
"Since none of the recent previous work reported class-wise implicit relation classification performance in the multi-way classification setting, for better comparisons, we re-implemented the neural tensor network architecture (so-called SWIM in (Lei et al., 2017)) which is essentially a Bi-LSTM model with tensors and report its detailed evaluation result in the multi-way classification setting.",
"As another baseline, we report the performance of a Bi-LSTM model without tensors as well.",
"Both baseline models take two relevant discourse units as the only input.",
"For additional comparisons, We also report the performance of our proposed models in the binary classification setting.",
"Multi-way Classification : The first section of table 3 shows macro average F1-scores and accuracies of previous works.",
"The second section of table 3 shows the multi-class classification results of our implemented baseline systems.",
"Consistent with results of previous works, neural tensors, when applied to Bi-LSTMs, improved implicit discourse relation prediction performance.",
"However, the performance on the three small classes (Comp, Cont and Temp) remains low.",
"The third section of table 3 shows the multi-class classification results of our proposed paragraph-level neural network models that capture inter-dependencies among discourse units.",
"The first row shows the performance of a variant of our basic model, where we only identify implicit relations and ignore identifying explicit relations by setting the in equation (5) to be 0.",
"Compared with the baseline Bi-LSTM model, the only difference is that this model considers paragraph-wide contexts and model inter-dependencies among discourse units when building representation for individual DU.",
"We can see that this model has greatly improved implicit relation classification perfor-147 Model Comp Cont Exp Temp (Chen et al., 2016) 40.17 54.76 -31.32 (Liu et al., 2016) 37.91 55.88 69.97 37.17 (Liu and Li, 2016) 36.70 54.48 70.43 38.84 (Qin et al., 2017) 40.87 54.56 72.38 36.20 (Lei et al., 2017) 40.47 55.36 69.50 35.34 (Lan et al., 2017) 40.73 58.96 72.47 38.50 Paragraph level Discourse Relation Recognition Basic System ( = 1 ) 42.68 55.17 68.94 41.03 + Untie Parameters 46.79 57.09 70.41 45.61 Table 4: Binary Classification Results on PDTB.",
"mance across all the four relations and improved the macro-average F1-score by over 7 percents.",
"In addition, compared with the baseline Bi-LSTM model with tensor, this model improved implicit relation classification performance across the three small classes, with clear performance gains of around 2 and 8 percents on contingency and temporal relations respectively, and overall improved the macro-average F1-score by 2.2 percents.",
"The second row shows the performance of our basic paragraph-level model which predicts both implicit and explicit discourse relations in a paragraph.",
"Compared to the variant system (the first row), the basic model further improved the classification performance on the first three implicit relations.",
"Especially on the contingency relation, the classification performance was improved by another 1.42 percents.",
"Moreover, the basic model yields good performance for recognizing explicit discourse relations as well, which is comparable with previous best result (92.05% macro F1-score and 93.09% accuracy as reported in (Pitler et al., 2008)).",
"After untying parameters in the softmax prediction layer, implicit discourse relation classification performance was improved across all four relations, meanwhile, the explicit discourse relation classification performance was also improved.",
"The CRF layer further improved implicit discourse relation recognition performance on the three small classes.",
"In summary, our full paragraph-level neural network model achieves the best macro-average F1-score of 48.82% in predicting implicit discourse relations, which outperforms previous neural tensor network models (e.g., (Lei et al., 2017)) by more than 2 percents and outperforms the best previous system (Lan et al., 2017) by 1 percent.",
"Binary Classification : From table 4, we can see that compared against the best previous systems, our paragraph-level model with untied parameters in the prediction layer achieves F1-score improvements of 6 points on Comparison and 7 points on Temporal, which demonstrates that paragraph-wide contexts are important in detecting minority discourse relations.",
"Note that the CRF layer of the model is not suitable for binary classification.",
"As we explained in section 4.2, we ran our models for 10 times to obtain stable average performance.",
"Then we also created ensemble models by applying majority voting to combine results of ten runs.",
"From table 5, each ensemble model obtains performance improvements compared with single model.",
"The full model achieves performance boosting of (51.84 48.82 = 3.02) and (94.17 -93.21 = 0.96) in macro F1-scores for predicting implicit and explicit discourse relations respectively.",
"Furthermore, the ensemble model achieves the best performance for predicting both implicit 148 Figure 4: Impact of Paragraph Length.",
"To understand the influence of paragraph lengths to our paragraph-level models, we divide paragraphs in the PDTB test set into several subsets based on the number of DUs in a paragraph, and then evaluate our proposed models on each subset separately.",
"From Figure 4, we can see that our paragraph-level models (the latter three) overall outperform DU-pair baselines across all the subsets.",
"As expected, the paragraph-level models achieve clear performance gains on long paragraphs (with more than 5 DUs) by extensively modeling mutual influences of DUs in a paragraph.",
"But somewhat surprisingly, the paragraph-level models achieve noticeable performance gains on short paragraphs (with 2 or 3 DUs) as well.",
"We hypothesize that by learning more appropriate discourse-aware DU representations in long paragraphs, our paragraph-level models reduce bias of using DU representations in predicting discourse relations, which benefits discourse relation prediction in short paragraphs as well.",
"For the example (1), the baseline neural tensor model predicted both implicit relations wrongly (Implicit-Contingency between DU2 and DU3; Implicit-Expansion between DU3 and DU4), while our paragraph-level model predicted all the four discourse relations correctly, which indicates that paragraph-wide contexts play a key role in implicit discourse relation prediction.",
"For another example: (2): [Marshall came clanking in like Marley's ghost dragging those chains of brigades and air wings and links with Arab despots.] DU 1 (Implicit-Temporal) [He wouldn't leave] DU 2 until (Explicit-Temporal) [Mr. Cheney promised to do whatever the Pentagon systems analysts told him.] DU 3 Our basic paragraph-level model wrongly predicted the implicit discourse relation between DU1 and DU2 to be Implicit-Comparison, without being able to effectively use the succeeding Explicit-Temporal relation.",
"On the contrary, the full model corrected this mistake by modeling discourse relation patterns with the CRF layer.",
"We have presented a paragraph-level neural network model that takes a sequence of discourse units as input, models inter-dependencies between discourse units as well as discourse relation continuity and patterns, and predicts a sequence of discourse relations in a paragraph.",
"By building wider-context informed discourse unit representations and capturing the overall discourse structure, the paragraph-level neural network model outperforms the best previous models for implicit discourse relation recognition on the PDTB dataset.",
"We acknowledge the support of NVIDIA Corporation for their donation of one GeForce GTX TI-TAN X GPU used for this research."
] | [
"abstain",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"result",
"objective",
"objective",
"abstain",
"method",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"objective",
"other",
"other",
"other",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"other"
] |
[
"In this paper, we propose a novel representation for text documents based on aggregating word embedding vectors into document embeddings.",
"Our approach is inspired by the Vector of Locally-Aggregated Descriptors used for image representation, and it works as follows.",
"First, the word embeddings gathered from a collection of documents are clustered by k-means in order to learn a codebook of semnatically-related word embeddings.",
"Each word embedding is then associated to its nearest cluster centroid (codeword).",
"The Vector of Locally-Aggregated Word Embeddings (VLAWE) representation of a document is then computed by accumulating the differences between each codeword vector and each word vector (from the document) associated to the respective codeword.",
"We plug the VLAWE representation, which is learned in an unsupervised manner, into a classifier and show that it is useful for a diverse set of text classification tasks.",
"We compare our approach with a broad range of recent state-of-the-art methods, demonstrating the effectiveness of our approach.",
"Furthermore, we obtain a considerable improvement on the Movie Review data set, reporting an accuracy of 93 .",
"3% , which represents an absolute gain of 10% over the state-of-the-art approach.",
"Our code is available at https://github.com/raduionescu/vlawe-boswe/.",
"In recent years, word embeddings (Bengio et al., 2003; Collobert and Weston, 2008; Mikolov et al., 2013; Pennington et al., 2014) have had a huge impact in natural language processing (NLP) and related fields, being used in many tasks including sentiment analysis (Dos Santos and Gatti, 2014; Fu et al., 2018), information retrieval (Clin-chant and Perronnin, 2013; Ye et al., 2016) and word sense disambiguation (Bhingardive et al.,",
"2015; Butnaru et al., 2017; Chen et al., 2014; Ia-cobacci et al., 2016), among many others.",
"Starting from word embeddings, researchers proposed various ways of aggregating word embedding vectors to obtain efficient sentence-level or document-level representations (Butnaru and Ionescu, 2017; Cheng et al., 2018; Clinchant and Perronnin, 2013; Conneau et al., 2017; Cozma et al., 2018; Fu et al., 2018; Hill et al., 2016; Kiros et al., 2015; Kusner et al., 2015; Le and Mikolov, 2014; Shen et al., 2018; Torki, 2018; Zhao et al., 2015; Zhou et al., 2016, 2018).",
"Although the mean (or sum) of word vectors is commonly adopted because of its simplicity (Mitchell and Lapata, 2010), it seems that more complex approaches usually yield better performance (Cheng et al., 2018; Conneau et al., 2017; Cozma et al., 2018; Fu et al., 2018; Hill et al., 2016; Kiros et al., 2015; Torki, 2018; Zhao et al., 2015; Zhou et al., 2016, 2018).",
"To this end, we propose a simple yet effective approach for aggregating word embeddings into document embeddings.",
"Our approach is inspired by the Vector of Locally-Aggregated Descriptors (VLAD) (Jegou et al., 2010, 2012) used in computer vision to efficiently represent images for various image classification and retrieval tasks.",
"To our knowledge, we are the first to adapt and use VLAD in the text domain.",
"Our document-level representation is constructed as follows.",
"First, we apply a pre-trained word embedding model, such as GloVe (Penning-ton et al., 2014), on all the words from a set of training documents in order to obtain a set of training word vectors.",
"The word vectors are clustered by k-means in order to learn a codebook of semnatically-related word embeddings.",
"Each word embedding is then associated to its nearest cluster centroid (codeword).",
"The Vector of Locally-Aggregated Word Embeddings (VLAWE) representation of a text document is then computed by accumulating the differences between each codeword vector and each word vector that is both present in the document and associated to the respective codeword.",
"Since our approach considers cluster centroids as reference for building the representation, it can easily accommodate new words, not seen during k-means training, simply by associating them to the nearest cluster centroids.",
"Thus, VLAWE is robust to vocabulary distribution gaps between training and test, which can appear when the training set is particularly smaller or from a different domain.",
"Certainly, the robustness holds as long as the word embeddings are pre-trained on a very large set of documents, e.g. the entire Wikipedia.",
"We plug the VLAWE representation, which is learned in an unsupervised manner, into a classifier, namely Support Vector Machines (SVM), and show that it is useful for a diverse set of text classification tasks.",
"We consider five benchmark data sets: Reuters-21578 (Lewis, 1997), RT-2k (Pang and Lee, 2004), MR (Pang and Lee, 2005), TREC (Li and Roth, 2002) and Subj (Pang and Lee, 2004).",
"We compare VLAWE with recent state-of-the-art methods (Butnaru and Ionescu, 2017; Cheng et al., 2018; Fu et al., 2018; Hill et al., 2016; Iyyer et al., 2015; Kim, 2014; Kiros et al., 2015; Le and Mikolov, 2014; Liu et al., 2017; Shen et al., 2018; Torki, 2018; Xue and Zhou, 2009; Zhao et al., 2015; Zhou et al., 2016, 2018), demonstrating the effectiveness of our approach.",
"Furthermore, we obtain a considerable improvement on the Movie Review (MR) data set, surpassing the state-of-the-art approach of Cheng et al. (2018) by almost 10% .",
"The rest of the paper is organized as follows.",
"We present related works on learning document-level representations in Section",
"2. We describe the Vector of Locally-Aggregated Word Embeddings in Section",
"3. We present experiments and results on various text classification tasks in Section",
"4. Finally, we draw our conclusion in Section",
"5. 2 Related Work There are various works (Butnaru and Ionescu, 2017; Cheng et al., 2018; Conneau et al., 2017; Fu et al., 2018; Hill et al., 2016; Iyyer et al., 2015; Kim, 2014; Kiros et al., 2015; Kusner et al., 2015; Le and Mikolov, 2014; Clinchant and Perronnin, 2013; Shen et al., 2018; Torki, 2018; Zhao et al., 2015; Zhou et al., 2018) that propose to build effective sentence-level or document-level representations based on word embeddings.",
"While most of these approaches are based on deep learning (Cheng et al., 2018; Conneau et al., 2017; Hill et al., 2016; Iyyer et al., 2015; Kim, 2014; Kiros et al., 2015; Le and Mikolov, 2014; Zhao et al., 2015; Zhou et al., 2018), there have been some approaches that are inspired by computer vision research, namely by the bag-of-visual-words (But-naru and Ionescu, 2017) and by Fisher Vectors (Clinchant and Perronnin, 2013).",
"The relationship between the bag-of-visual-words, Fisher Vectors and VLAD is discussed in (Jegou et al., 2012).",
"The discussion can be transferred to describe the relantionship of our work and the closely-related works of Butnaru and Ionescu (2017) and Clinchant and Perronnin (2013).",
"The Vector of Locally-Aggregated Descriptors (VLAD) (J egou et al., 2010, 2012) was introduced in computer vision to efficiently represent images for various image classification and retrieval tasks.",
"We propose to adapt the VLAD representation in order to represent text documents instead of images.",
"Our adaptation consists of replacing the Scale-Invariant Feature Transform (SIFT) image descriptors (Lowe, 2004) useful for recognizing object patterns in images with word embeddings (Mikolov et al., 2013; Pennington et al., 2014) useful for recognizing semantic patterns in text documents.",
"We coin the term Vector of Locally-Aggregated Word Embeddings (VLAWE) for the resulting document representation.",
"The VLAWE representation is derived as follows.",
"First, each word in the collection of training documents is represented as a word vector using a pre-trained word embeddings model.",
"The result is a set X = { x 1 , x 2 , ..., x n } of n word vectors.",
"As for the VLAD model, the next step is to learn a codebook { 1 , 2 , ..., k } of representative meta-word vectors (codewords) using k-means.",
"Each codeword i is the centroid of the cluster C i X : i = 1 | C i | (cid:88) x t C i x t , i { 1 , 2 , ..., k } , (1) where | C i | is the number of word vectors assigned to cluster C i and k is the number of clusters.",
"Since word embeddings carry semantic information by projecting semantically-related words in the same region of the embedding space, it means that the resulting clusters contain semantically-related words.",
"The formed centroids are stored in a randomized forest of k-d trees to reduce search cost, as described in (Philbin et al., 2007; Ionescu et al., 2013; Ionescu and Popescu, 2014, 2015a).",
"Each word embedding x t is associated to a single cluster C i , such that the Euclidean distance between x t and the corresponding codeword i is minimum, for all i { 1 , 2 , ..., k } .",
"For each document D and each codeword i , the differences x t i of the vectors x t C i D and the codeword i are accumulated into column vectors: v i,D = (cid:88) x t C i D x t i , (2) where D X is the set of word embeddings in a given text document.",
"The final VLAWE embedding for a given document D is obtained by stacking together the d -dimensional residual vectors v i,D , where d is equal to the dimension of the word embeddings: D = v 1 ,D v 2 ,D ... v k,D .",
"Therefore, the VLAWE document embedding is has k d components.",
"The VLAWE vector D undergoes two normalization steps.",
"First, a power normalization is performed by applying the following operator independently on each component (element): f ( z ) = sign ( z ) | z | , (4) where 0 1 and | z | is the absolute value of z .",
"Since words in natural language follow the Zipf's law (Powers, 1998), it seems natural to apply the power normalization in order to reduce the influence of highly frequent words, e.g. common words or stopwords, which can corrupt the representation.",
"As Jegou et al. (2012), we empirically observed that this step consistently improves the quality of the representation.",
"The power normalized document embeddings are then L 2 -normalized.",
"After obtaining the normalized VLAWE representations, we employ a classification method to learn a discriminative model for each specific text classification task.",
"We exhibit the performance of VLAWE on five public data sets: Reuters-21578 (Lewis, 1997), RT-2k (Pang and Lee, 2004), MR (Pang and Lee,",
"2005), TREC (Li and Roth, 2002) and Subj (Pang and Lee, 2004).",
"The Reuters-21578 data set (Lewis, 1997) contains articles collected from Reuters newswire.",
"Following Joachims (1998) and Yang and Liu (1999), we select the categories (topics) that have at least one document in the training set and one in the test set, leading to a total of 90 categories.",
"We use the ModeApte evaluation (Xue and Zhou, 2009), in which unlabeled documents are eliminated, leaving a total of 10787 documents.",
"The collection is already divided into 7768 documents for training and 3019 documents for testing.",
"The RT-2k data set (Pang and Lee, 2004) consists of 2000 movie reviews taken from the IMDB movie review archives.",
"There are 1000 positive reviews rated with four or five stars, and 1000 negative reviews rated with one or two stars.",
"The task is to discriminate between positive and negative reviews.",
"The Movie Review (MR) data set (Pang and Lee, 2005) consists of 5331 positive and 5331 negative sentences.",
"Each sentence is selected from one movie review.",
"The task is to discriminate between positive and negative sentiment.",
"TREC (Li and Roth, 2002) is a question type classification data set, where questions are divided into 6 classes.",
"The collection is already divided into 5452 questions for training and 500 questions for testing.",
"The Subjectivity (Subj) (Pang and Lee, 2004) data set contains 5000 objective and 5000 subjective sentences.",
"The task is to classify a sentence as being either subjective or objective.",
"In the experiments, we used the pre-trained word embeddings computed with the GloVe toolkit provided by Pennington et al. (2014).",
"The pre-trained GloVe model contains 300 -dimensional vectors for 2 .",
"2 million words and phrases.",
"Most of the steps required for building the VLAWE representation, such as the k-means clustering and the randomized forest of k-d trees, are implemented using the VLFeat library (Vedaldi and Fulkerson, 2008).",
"We set the number of clusters (size of the codebook) to k = 10 , leading to a VLAWE representation of k d = 10 300 = 3000 components.",
"Similar to Jegou et al. (2012), we set = 0 .",
"5 for the power normalization step in Equation (4), which consistently leads to near-optimal results on all data sets.",
"In the learning stage, we employ the Method Reuters-21578 RT-2k MR TREC Subj Average of word embeddings (baseline) 85 .",
"Support Vector Machines (SVM) implementation provided by LibSVM (Chang and Lin, 2011).",
"We set the SVM regularization parameter to C = 1 in all our experiments.",
"In the SVM, we use the linear kernel.",
"For optimal results, the VLAWE representation is combined with the BOSWE representation (Butnaru and Ionescu, 2017), which is based on the PQ kernel (Ionescu and Popescu, 2013, 2015b).",
"We follow the same evaluation procedure as Kiros et al. (2015) and Hill et al. (2016), using 10 fold cross-validation when a train and test split is not pre-defined for a given data set.",
"As evaluation metrics, we employ the micro-averaged F 1 measure for the Reuters-21578 data set and the standard classification accuracy for the RT-2k, the MR, the TREC and the Subj data sets, in order to fairly compare with the related art. 4.3 Results We compare VLAWE with several state-of-the-art methods (Butnaru and Ionescu, 2017; Cheng et al., 2018; Fu et al., 2018; Hill et al., 2016; Iyyer et al., 2015; Kim, 2014; Kiros et al., 2015; Le and Mikolov, 2014; Liu et al., 2017; Shen et al., 2018; Torki, 2018; Xue and Zhou, 2009; Zhao et al., 2015; Zhou et al., 2016, 2018) as well as two baseline methods, namely the average of word embeddings and the standard bag-of-words (BOW).",
"The corresponding results are presented in Table 1.",
"First, we notice that our approach outperforms both baselines on all data sets, unlike other related methods (Le and Mikolov, 2014; Hill et al., 2016).",
"In most cases, our improvements over the baselines are higher than 5% .",
"On the Reuters-21578 data set, we surpass the closely-related approach of Butnaru and Ionescu (2017) by around 2% .",
"On Method MR VLAWE ( k = 2 ) 93 .",
"the RT-2k data set, we surpass the related works of Fu et al. (2018) and Butnaru and Ionescu (2017) by around 4% .",
"To our knowledge, our accuracy of 94 .",
"1% on RT-2k (Pang and Lee, 2004) surpasses all previous results reported in literature.",
"On the MR data set, we surpass most related works by more than 10% .",
"To our knowledge, the best accuracy on MR reported in previous literature is 83 .",
"6% , and it is obtained by Cheng et al. (2018).",
"We surpass the accuracy of Cheng et al. (2018) by almost 10% , reaching an accuracy of 93 .",
"3% using VLAWE.",
"On the TREC data set, we reach the third best performance, after methods such as (Cheng et al., 2018; Zhou et al., 2016, 2018).",
"Our performance on TREC is about 2% lower than the state-of-the-art accuracy of 96 .",
"1% .",
"On the Subj data set, we obtain an accuracy of 95 .",
"0% .",
"There are two state-of-the-art methods (Cheng et al., 2018; Zhao et al., 2015) reporting better performance on Subj.",
"Compared to the best one of them (Cheng et al., 2018), our accuracy is 1% lower.",
"Overall, we consider that our results are noteworthy.",
"The k-means clustering algorithm and, on some data sets, the cross-validation procedure can induce accuracy variations due to the random choices involved.",
"We have conducted experiments to determine how large are the accuracy variations.",
"We observed that the accuracy can decrease by up to 1% , which does not bring any significant differences to the results reported in Table 1.",
"Even for a small number of clusters, e.g. k = 10 , the VLAWE document representation can grow up to thousands of features, as the number of features is k d , where d = 300 is the dimensionality of commonly used word embeddings.",
"However, there are several document-level representations that usually have a dimensionality much smaller than k d .",
"Therefore, it is desirable to obtain a more compact VLAWE representation.",
"We hereby propose two approaches that lead to more compact representations.",
"The first Figure 1: Accuracy on MR for different numbers of k-means clusters.",
"one is simply based on reducing the number of clusters.",
"By setting k = 2 for instance, we obtain a 600 -dimensional representation.",
"The second one is based on applying Principal Component Analysis (PCA), to reduce the dimension of the feature vectors.",
"Using PCA, we propose to reduce the size of the VLAWE representation to 300 components.",
"In Table 2, the resulting compact representations are compared against the full VLAWE representation on the MR data set.",
"Although the compact VLAWE representations provide slightly lower results compared to the VLAWE representation based on 3000 components, we note that the differences are insignificant.",
"Furthermore, both compact VLAWE representations are far above the state-of-the-art method (Cheng et al., 2018).",
"In Figure 1, we illustrate the performance variation on MR, when using different values for k .",
"We notice that the accuracy tends to increase slightly, as we increase the number of clusters from 2 to 30 .",
"Overall, the VLAWE representation seems to be robust to the choice of k , always surpassing the state-of-the-art approach (Cheng et al., 2018).",
"We proposed a novel representation for text documents which is based on aggregating word embeddings using k-means and on computing the residuals between each word embedding allocated to a given cluster and the corresponding cluster centroid.",
"Our experiments on five benchmark data sets prove that our approach yields competitive results with respect to the state-of-the-art methods.",
"We thank the reviewers for their useful comments.",
"This research is supported by University of Bucharest, Faculty of Mathematics and Computer Science, through the 2019 Mobility Fund."
] | [
"objective",
"method",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"result",
"abstain",
"other",
"other",
"abstain",
"other",
"other",
"objective",
"method",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"result",
"method",
"other",
"result",
"abstain",
"method",
"abstain",
"result",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"other",
"other"
] |
[
"Most chatbot literature that focuses on improving the fluency and coherence of a chatbot, is dedicated to making chatbots more humanlike.",
"However, very little work delves into what really separates humans from chatbots humans intrinsically understand the effect their responses have on the interlocutor and often respond with an intention such as proposing an optimistic view to make the interlocutor feel better.",
"This paper proposes an innovative framework to train chatbots to possess humanlike intentions.",
"Our framework includes a guiding chatbot and an interlocutor model that plays the role of humans.",
"The guiding chatbot is assigned an intention and learns to induce the interlocutor to reply with responses matching the intention, for example, long responses, joyful responses, responses with specific words, etc.",
"We examined our framework using three experimental setups and evaluated the guiding chatbot with four different metrics to demonstrate flexibility and performance advantages.",
"Additionally, we performed trials with human interlocutors to substantiate the guiding chatbot's effectiveness in influencing the responses of humans to a certain extent.",
"Code will be made available to the public.",
"Humans have evolved to become sensitive to their social interactions.",
"The more they interact, the more they generally learn what to say and what not to say to light up people's mood or to avoid upsetting others.",
"In this paper, we aimed to train a chatbot to emulate these human-like qualities by making it learn from interactive conversation.",
"A chatbot that understands the effect its utterances have on the interlocutor could be a significant step towards achieving human-level chatbots.",
"A chatbot that understands the effect of its utterances on the interlocutor is also critical in real-world applications.",
"For instance, as shown in Figure 1, given a context from the interlocutor, both Figure 1: An example of the dialogue to show that how the chatbot interacts with the interlocutor, and how our chatbot affects the interlocutor's response when assigned the intention, making people respond joyful.",
"responses \"I did. They were really nice and fun and smart people.\" and \"I did. I was so bummed out. I was so lonely.\" were relevant and reasonable responses, and were equally suitable for a typical chatbot.",
"However, we could give an intention to the proposed chatbot (guiding chatbot), such as making the interlocutor feel joyful.",
"In this way, the chatbot would respond in a positive way to induce joy in the interlocutor.",
"Much literature combine Reinforcement Learn-ing(RL) (Kaelbling et al., 1996) with transformer-based (Vaswani et al., 2017) models to control the chatbot's output.",
"Gupta et al. (2019) proposed models to concentrate on crucial keyphrases presented in the context.",
"Their models tended to generate outputs that were more coherent and specific to the conditionals, which leaded to more non-generic words.",
"By training with a combination of the above criteria, their approach leaded to more diverse and interesting responses.",
"However, these previous works focused on controlling the chatbot's responses and completely neglected the interlocutor in their training.",
"In this paper, we made extensive use of the interlocutor's responses as interactive experiences to train our guiding chatbot to influence the interlocutor with intentions.",
"We introduce a novel training framework, in which there were two conversational models that simulated chatbot-interlocutor interaction.",
"One model acted as the interlocutor model, while the other was the guiding chatbot to be trained.",
"The interlocutor took the guiding chatbot's output as its input and generated corresponding responses.",
"The learning framework was given a controllable factor, which represented the intention it had.",
"We defined reward functions according to three different controllable factors, sentence length, emotion, and specific words, to make the guiding chatbot learn to induce the interlocutor model to generate desired responses using RL.",
"To evaluate our guiding chatbot, we designed several experiments to examine the rewards corresponding to three controllable factors, and empirical results demonstrate that our guiding chatbot can influence humans' responses.",
"Moreover, we found that training with more interlocutor models together improved the guiding chatbot's performance on the human evaluation experiment.",
"Furthermore, we analyzed recent off-the-shelf chatbots based on experimental results, aiming to find hidden tendencies these chatbot models had, such as cursing more or being more irritative.",
"The most common chatbot model is sequence-to-sequence based (Sutskever et al., 2014).",
"Recently, numerous researchers applied transformer to build coherent chatbots by retrieval-based (Zhou et al., 2016; Wu et al., 2019; Yang et al., 2018; Henderson et al., 2017; Yan et al., 2016) and generative-based (Ritter et al., 2011; Serban et al., 2016; Shang et al., 2015; Tammewar et al., 2018) approaches.",
"Despite the decent fluency and coherence these chatbots achieved, they still hardly converse like a human.",
"The reason might be that they are essentially devoid of emotions.",
"Furthermore, some used RL to improve their chatbot's performance (Serban et al., 2016; Williams, 1992) and others combined RL with GPT-2 (Radford et al., 2019) models to control the sentiment of their chatbot's response to make it more user-friendly (Han et al., 2019; Lee et al., 2018).",
"Beyond the viewpoint of sentiment, the Empathet-icDialogues (ED) dataset was collected (Rashkin et al., 2019) to train a chatbot that could recognize the feeling of the interlocutor and know how to reply accordingly (Lin et al., 2020).",
"However, these researchers neglected what really separated humans from chatbots humans understand the impact their responses have on the interlocutor and often responded with intentions and expectations.",
"Note that it is not just about being empathetic as human's intentions could vary widely.",
"One previous work also considered interlocutor responses (Shin et al., 2019).",
"It used a sentiment predictor to predict the interlocutor's sentiment given the chatbot's response, and also trained the chatbot with RL.",
"Unlike this previous work, our proposed framework explicitly modeled the possible responses of interlocutors.",
"Explicitly modeling interlocutor responses give the proposed framework more flexibility.",
"For example, in addition to steering the interlocutor's sentiment as in this paper, the framework could be used to build a chatbot that induce the interlocutor to become more talkative by setting its learning target to be making the interlocutor generate longer sentences.",
"Moreover, we also developed techniques to preserve the dialogue's coherence, so our chatbot could still generate fluent and appropriate responses in addition to having a particular intention.",
"Apart from influencing the interlocutor, the proposed framework also served as a way to analyze the underlying inclination of the off-the-shelf chatbots playing the role of interlocutor.",
"Through the interaction, we could know what factors are apt to influence these off-the-shelf chatbots.",
"Holzinger et al. (2017) claimed that the appealing performance of recent robust and SOTA models belied a potential problem of black-box models: these models lacked an explicit declarative knowledge representation.",
"Hence, calling for a transparent representation, they dug into explaining trained models.",
"In contrast to the previous contributions, we tried to explain the implied tendency of a chatbot, which was not obvious to recognize.",
"According to the experiments, we were capable of telling whether the off-the-shelf black box chatbot possessed certain predispositions, such as tending to swear more or having a short temper.",
"The proposed framework is shown in Figure 2.",
"It consisted of two conversational models: the guiding chatbot and the interlocutor model.",
"The interlocutor and guiding chatbot simulated the dialogue between a human and a chatbot.",
"The guiding chatbot aimed to generate a response that maximize Figure 2: The framework that we proposed to teach the guiding chatbot how to achieve the intention assigned by the controllable factors.",
"rewards according to different controllable factors; the interlocutor models produced responses based on the guiding chatbot's response in order to simulate a human's response.",
"Therefore, grounded in different controllable factors, we examined corresponding rewards to optimize the guiding chatbot to influence the interlocutor.",
"Interlocutor The model I represented the interlocutor.",
"I could be any off-the-shelf chatbot whose parameters were fixed during training; that is, it was unnecessary to know its parameters in the framework.",
"I was only used during the training phase to train the guiding chatbot via interaction.",
"In the testing phase, the guiding chatbot will interact with real human beings.",
"The interlocutor models' settings will be described in Section 5.3.",
"Guiding Chatbot The guiding chatbot model C was the chatbot we trained to induce desired responses in the interlocutor.",
"We built the guiding chatbot model C based on DialoGPT (Zhang et al., 2020).",
"To train model C , given the input sentence x , our chatbot C generated a sentence C ( x ) .",
"The generated sentence C ( x ) then became the input for I , and I output its response I ( C ( x )) .",
"We defined the reward R for C based on C ( x ) and I ( C ( x )) , and C was trained to maximize the value of R by the policy gradient.",
"The definition of the reward R depended on the controllable factors, that is, the intention of the guiding chatbot (how the guiding chatbots wanted the interlocutor to respond).",
"The definition of our reward functions is in Section 3.3, and the controllable factors are in Section 4.",
"We introduce two kinds of reward functions: intention reward RI and coherence reward RC .",
"The final reward that the guiding chatbot C learned to maximize will be a combination of RI and RC .",
"Intention To influence the interlocutor, the guiding chatbot C ought to learn from the interlocutor's reaction.",
"To be more specific, we collected responses I ( C ( x )) from the off-the-shelf chatbots when interacting with our guiding chatbot.",
"Then the intention reward RI was obtained by evaluating the interlocutor's responses, that is, I ( C ( x )) , based on the controllable factors of guiding chatbot C .",
"Using the intention reward allowed the guiding chatbot to induce the interlocutor to perform specifically according to the controllable factors, namely our intentions.",
"The formulation of RI depended on the controllable factors.",
"To observe the effectiveness of guiding these interlocutor models, in this paper, we had three controllable factors, which were equal to our intentions: to extend the sentence length, to make the interlocutor speak with a particular emotion, and to induce the interlocutor to speak specific words.",
"Exact formulation of rewards for different controllable factors will be given in Section 4.",
"Coherence Using the intention reward as the only reward leaded to a drawback that the guiding chatbot ignored the coherence between the input x and the generated response C ( x ) .",
"To avoid this problem, an extra constraint on the guiding chatbot to maintain coherent responses was necessary: we applied another conversational model C (cid:48) that served as a constraint maintaining coherence.",
"Here we used the open-domain GPT-2 model as the C (cid:48) .",
"To be more specific, we estimated the difference in generated probability between C and C (cid:48) and minimized the estimated difference.",
"As a result, C would be less likely to produce responses unrelated to input x coherent to responses generated by C (cid:48) .",
"The additional reward RC is defined as below.",
"RC was the likelihood that C (cid:48) generated the sentence C ( x ) given the input sentence x .",
"This term served as a kind of regularization that avoids drift during training.",
"To sum up, the total reward is defined as: R = R I + (1 ) RC , (2) where is the hyper-parameter.",
"Below are the three types of controllable factors studied in this paper.",
"RI in Section 3.3 could be either RL for sentence length, RE for emotion, or RW for specific words, introduced below.",
"Sentence Length A chatbot that could inspire the interlocutor to become more talkative is desirable in many real world applications.",
"We aimed to observe whether our chatbot was able to make the interlocutor more talkative, and extend the length of conversations.",
"Hence, we counted the sentence length of interlocutor models' responses as RL .",
"By optimizing this reward, we anticipated that the guiding chatbot might extend sentence length from the interlocutor.",
"Emotion We studied whether our chatbot was capable of inducing the interlocutor to respond with different emotions.",
"We selected eight emotions, including anger, anxiety, contentment, disgust, hope, joy, sadness, surprise.",
"We selected the eight emotions such that two emotions are located in each of the four different quadrants of the Valence-Arousal coordinate (Russell, 1980).",
"To ascertain the emotion of sentences, we established an Emotion Detector.",
"The Emotion Detector was an emotion classifier used to classify emotion given an input sentence.",
"We trained the Emotion Detector on the EnpatheticDialogue (ED) dataset (Rashkin et al., 2019).",
"For each sentence, the Emotion Detector will employ a Valence-Arousal (VA) projection grounded on the Valence-Arousal coordinate (Russell, 1980; G. Paltoglou, 2013).",
"Given an input sequence, the Emotion Detector would output a two-dimensional vector representing se-quence's emotion, defined as emotional valence 1 .",
"More details related to the Valence-Arousal Coordinate will be discussed in Section 5.2.",
"We utilized the BERT (Devlin et al., 2019) architecture, a pretrained contextualized embedding model, to improve language understanding.",
"Next, we fine-tuned the BERT model on an emotional classification task to enhance the model's capability of categorizing each emotion.",
"The accuracy of our emotion detector was up to 82%, and, therefore, we could obtain a detected emotion and its emotional valence given an input sentence.",
"The Emotion Detector takes I ( C ( x )) as input and predicted its emotional valence according to the VA coordinate.",
"Therefore, we could calculate the Mean Square Error (MSE) between the emotional valence of the interlocutor models' responses and the target emotion's emotional valence as the reward RE .",
"Specific Words We aimed to induce the interlocutor to speak with words from specific groups.",
"These word groups, including Bad Words, Sports, and Food, were collected from Google's team 2 and the Enchanted Learning website 3 .",
"To provoke the interlocutor to respond to the sentence including the specific words we want, we calculated the frequency of the specific words in a sentence.",
"We counted the frequency of interlocutor models' responses that contain words in these word groups as RW .",
"We anticipated that the interlocutor can generate a sentence that contains more words from the specific group and still be coherent as well as fluent.",
"EmpatheticDialouges Dataset Rashkin et (2019) created an innovative dataset with around",
"1 e.g. fear=[-0.12, 0.79], joy=[0.85, 0.15] 2 https://gist.github.com/jamiew/ 1112488 3 https://www.enchantedlearning.com/ home.shtml",
"25K conversations, each consisting of a speaker and a listener.",
"The participants, acting as the speaker, initiated the talks, and the psychologists, serving as the listener, responded to the speaker empathetically.",
"The dataset covers 32 different emotion labels including positive, negative, and neutral emotions.",
"They firmly ensure that each emotion in the dataset was evenly distributed.",
"Nonetheless, a few emotion classes were quite similar, such as \"sentimental\" and \"nostalgic\".",
"Thus, we merged these equivalent emotion classes into one emotion class.",
"In Valence-Arousal Coordinate study (Russell, 1980; G. Paltoglou, 2013), researchers assigned emotional values to nineteen kinds of emotions.",
"We performed supervised training of the Emotion Detector based on these known emotions on the ED dataset.",
"Each emotion could be represented as a two-dimensional vector.",
"Therefore, we could map each emotion to the coordinate on the VA space.",
"RL Training Details We applied the Policy gradient (Sutton et al., 2000) as our RL algorithm.",
"To implement an RL training chatbot, we applied the DialoGPT model, which fine-tuned the GPT-2 model on 147M multi-turn dialogues from Red-dit discussion threads.",
"The GPT-2 model was a transformer-based model with 36 layers, 20 attention heads in each layer, 345M parameters, and an embedding size was 1024.",
"This model was trained on the WebText dataset and 50,257 tokens with invertible byte pair encoding to preserve capitalization and punctuation.",
"In our training procedure, we fine-tuned the DialoGPT model on the ED dataset based on the reward function mentioned in Section 3.3.",
"The Publicly available Google bot (Vinyals and Le, 2015) 4 was trained on the dataset proposed by Danescu-Niculescu-Mizil and Lee (2011) with 220,579 conversational exchanges between 10,292 pairs.",
"The whole corpus was split into training and testing sets.",
"4 https://github.com/Conchylicultor/ DeepQA The same DialoGPT model mentioned in Section 5.3 was used here to act as the interlocutor.",
"The weights of the model were fixed.",
"A BERT-based Retrieval chatbot trained on the ED dataset.",
"Given input sentences, the chatbot chose the corresponding response from the candidate pool.",
"The BERT encoder first embedded the sentences into sentence embedding and then computed cosine similarity between the input sentences and all candidates to select the most likely option.",
"The candidate pool was comprised of all sentences in the ED dataset, which contained approximately 100K sentences.",
"Aside from the reward scores related to the intentions, we also reported the following three metrics in the experiments.",
"Conditional Perplexity The Conditional Perplexity here was to measure the dialogue coherence between the output sentence and input sentence x .",
"The equation is shown below.",
"CP P L was the conditional perplexity, which was equal to the inverse of the product of each word's probability in the sentence C ( x ) given the input sentence x .",
"T was the length of the sentence C ( x ) .",
"Perplexity Here we employed the pretrained GPT-2 language model to judge if the output sentence C ( x ) was an acceptable sentence.",
"The computation of Perplexity (Chen et al., 1998) is shown below.",
"Self-BLEU While BLEU score (Papineni et al., 2002) is usually used to measure the correctness in machine translation, Self-BLEU (Zhu et al., 2018) was used here to measure the diversity of chatbot responses; we calculated the average BLEU score between sentences in our testing result as the Self-BLEU score.",
"For human evaluation, we recruited participants online.",
"There were 19 participants; most of them were graduate or undergraduate students.",
"Each participant was given several conversations, including an opening sentence and a corresponding response.",
"They were asked to try to understand the conversation and provide a response to reply to the conversation.",
"Therefore, we were able to collect numerous participants' responses to calculate rewards.",
"Moreover, participants were asked to score the relevance of the guiding chatbot's response to the opening sentence.",
"This task was rated on a Lik-ert scale(Likert, 1932), ranging from 1 to 5: Score 1 means a firm disagreement, Score 3 meant neutral, and Score 5 meant an undoubted approval.",
"Finally, we counted rewards from humans' responses corresponding to the methods mentioned in Section 4.",
"The first controllable factor was sentence length.",
"We aimed to guide the interlocutor to say more words in a single sentence.",
"Table 1 reveals that our chatbot possessed the ability to encourage the interlocutor to be more talkative.",
"The guiding chatbot interacted with the Google model while training could induce the interlocutor model to increase its sentence length from 3 to 10 words on average.",
"However, as the sentence length increased, the conditional perplexity rose simultaneously.",
"The result reflected that the guiding chatbot trained with the Google model was forced to generate weird sentences so that the interlocutor model would produce a longer sentence.",
"In contrast, although the guiding chatbot trained with the Retrieval model suffered from the same problem, the conditional perplexity increased only slightly, from 50.3 to 76.55, and the sentence length was much longer.",
"Still, the high Self-BLEU3 score indicates that our chatbot might encounter a low-diversity problem.",
"Therefore, the guiding chatbot trained with the GPT-2 model was the most desirable and stable chatbot to extend the interlocutor's sentence length.",
"The second task was to induced the interlocutor to speak with a particular emotion.",
"These emotions included anger, anxiety, contentment, disgust, hope, joy, sadness, surprise .",
"We examined the MSE loss between these emotions and the detected emotions of test sentences.",
"Fig. 3a demonstrated that after training, all three interlocutors had similar performance in each emotion.",
"Furthermore, Table 1 indicates that all guiding chatbots trained with any interlocutor model significantly decreased the MSE loss against baseline performance.",
"As a result, in-dependent of the choice of interlocutor model, our chatbot could successfully guide the interlocutor to speak with a specific emotion.",
"terlocutors that interacted with our model without any fine-tuning.",
"Table 2 shows that all three interlocutors responded more with positive emotions than with negative emotions.",
"Then, we evaluated how our chatbot realizes the way to influence the interlocutor.",
"Figure 3a shows the difference between the MSE scores of the ground truth sentences and the MSE scores of the test sentences.",
"We found that the improvements for negative emotions are greater than those of positive emotions.",
"Table 2 shows that the average MSE scores of negative emotions is greater than positive emotions.",
"According to the Fig. 3a, the Google model was easier to guided to reply with negative emotions, such as anxiety, sadness, and disgust.",
"In comparison, the GPT-2 model was more easily encouraged to speak with positive emotion, such as joy, surprise, hope, and contentment.",
"We attribute this phenomenon to the datasets underpinning each of these chatbots.",
"The Google model was trained on the Cornell Movie Dialogue dataset, whereas the GPT-2 model was fine-tuned using the ED dataset.",
"The movie dataset is full of simple, dramatic, exaggerated sentences.",
"On the other hand, the ED dataset, designed to arouse the participants' sympathy tends be more positive.",
"Furthermore, the Fig. 3a also displays that our chatbot performs exceptionally well on inducing the interlocutor speak with anxiety.",
"The difference in the Google model's reward was up to 0.7, which means that we can significantly induce the interlocutor to speak with anxious emotion.",
"In another set of trials, our chatbot managed to make the interlocutor sentences contain certain groups of words, such as Food, Sports, Jobs, and Bad Words.",
"We calculated the frequency of a word in a specific group.",
"Table 1 shows that the ground truth's reward was close to 0, which suggests that the interlocutor models barely spoke words in the \"Food\" group before being exposure to by our guiding chatbot.",
"Fig. 3b shows that our chatbot could successfully influence the interlocutor to talk about a sentence containing a word from the \"Sports\" group and \"Food\" group.",
"On average, after interacting with the guiding chatbot, the Google model spoke 0.7 more words in the \"Job\" group, and the Retrieval model was induced to say 0.6 more words in the \"Food\" group.",
"However, since the rewards of the ground truth are all near 0, Figure 3b indicates that fine-tuning the guiding chatbot using the RL approach can lead the interlocutor to say words they did not previously say.",
"We also found that the guiding chatbot trained with the GPT-2 model could only weakly induce the interlocutors to use words from the \"Bad Word\" group.",
"This is almost certainly because bad words rarely appear in the ED dataset.",
"The guiding chatbot trained with the Google model was more likely to induce the Google model interlocutor to say words in the \"Bad Word\" groups.",
"We further an-Interlocutor while training Interlocutor while testing Sentence Length Emotion (Anxiety) Specific Words (Food) RL Relevance RE Relevance RW Relevance Human 5.82 3.10 0.41 3.10 0.05 3.10 GPT-2 6.05 2.10 0.27 3.89 0.16 2.63 Google Human 2.74 2.31 0.47 4.21 0.05 2.42 Ret 5.90 1.52 0.46 3.68 0.21 1.47 GPT-2 + Google + Ret Human 7.21 2.79 0.39 3.21 0.68 1.53 Table 3: Human Evaluation Results.",
"alyzed the Cornell Movies dataset and found that, there are 24547 bad words out of 220579 sentences.",
"We likewise concluded that dramatic utterances in the Cornell Movies dataset brought about the tendency for the interlocutor to say more bad words.",
"Having proven that our guiding chatbot can significantly improve all three rewards against ground truth while training with a given interlocutor model, we experimented with the more formidable task of having the guiding chatbot consider all three interlocutor models at once.",
"Table 1 demonstrates that the guiding chatbot could increase the performance, which indicates that the guiding chatbot could learn more experiences when interacting with more and different interlocutor models.",
"While interacting with more models, the guiding chatbot can improve the \"Emotion\" and \"Specific words\" rewards against the guiding chatbot that was only trained with a single interlocutor model.",
"Although the \"Sentence Length\" reward subtly decreased, the rewards still surpassed the ground truth reward, showing that the guiding chatbot could influence the interlocutor.",
"Moreover, since we could not assume that our interlocutor models are capable of representing all kinds of humans, we conducted an experiment to evaluate our guiding chatbot all-around.",
"The detailed procedures are as follow: we tested our guiding chatbot on the interlocutor model that our guiding chatbot had seen before during training.",
"For example, the guiding chatbot was trained with the GPT-2 and Google models but would be tested with the Retrieval model.",
"Results in Table 1 shows that all guiding chatbots trained with different interlocutor models could improve the rewards in three controllable factors.",
"Also, we found that while testing on the Retrieval interlocutor model, this model was more likely to be induced to speak longer sentences than other interlocutor models.",
"It is mainly because retrieving a longer response is easier than generating.",
"Human evaluation results sufficiently verify the guiding chatbot's effectiveness of influencing humans' responses to certain extents.",
"Since the performances of the \"anxiety\" emotion and \"Food\" group were relatively well, shown in Table 1, we focused on these factors when conducting the human evaluation.",
"Table 3 shows that the guiding chatbot could significantly induce humans to speak with anxiety, as well as maintain, or even enhance, the relevance within a conversation.",
"This performance was consistent with the results in Table 1, in which the guiding chatbot acquired the ability to gain better rewards.",
"Nonetheless, the results of \"Sentence Length\" and \"Specific Words\" can hardly show a promising effect.",
"Although the reward gained improvement slightly, humans generally felt the guiding chatbot's response irrelevant: as the reward increased, the relevance decreased dramatically.",
"This result demonstrates that the guiding chatbot might learn a tricky approach to gain higher rewards during training, but this method was not fully adaptive to humans.",
"For instance, when training the guiding chatbot to influence the interlocutor to speak the sentence with the \"Food\" group, the guiding chatbot usually ended up with \"What is your favorite food?\", ignoring the context.",
"In contrast, the guiding chatbot could not only increase RE reward but also improve the coherence between responses of the guiding chatbot and the interlocutor models.",
"We analyzed the effects bring by RC .",
"We trained a guiding chatbot model without RC reward on aforementioned experimental settings in Section 5.3 and observed that the model was more prone to giving low diversity responses that were irrelevant to the context.",
"In our experiments, the Self-BLEU3 score was near 0.99 and the CPPL was over 10000 without RC reward.",
"This paper introduced a novel framework that aims to train a guiding chatbot to influence the interlocutor.",
"We designed three different controllable factors for the guiding chatbot to induce the interlocutor to reply with responses matching the intention.",
"We managed to prolong the length of the interlocutor's responses, influence the interlocutor to reflect with a particular emotion, and induce the interlocutor to use some specific words more frequently.",
"Furthermore, we further enhanced the performance of the guiding chatbot by training it with more interlocutor models.",
"Experiment results show that our proposed framework can successfully train chatbot with intentions.",
"In this paper, we proposed a learning framework that trains chatbots to influence humans.",
"We defined several rewards to reflect different behaviors that we want to induce to humans.",
"We undertook this work because we envisioned a future in which a chatbot can become a digital companion for humans.",
"To that end, we need the chatbot to be able to understand a human's mental state and reply with appropriate responses.",
"As a concrete example, chatbots could act as healthcare or relationship coaches for people who could not afford such services.",
"Having a healthcare chatbot to talk to at anytime could alleviate the workload of nurses and therapists.",
"Moreover, since our framework is reward-agnostic that could be optimize for any reward, we also expect that the experts could customize the profession reward definitions in their fields to bring the technique to higher level usage.",
"However, we also acknowledge the potential that this technique could be misused.",
"Using our framework, ill-intentioned people could train chatbots with negative intentions and could threaten the stability of our society.",
"For example, we have iden-tified the following means by which a malicious actor could take advantage of our proposed technology: Emotional Manipulation : One could train chatbots with the intention of arousing negative emotions such as anxiety, sadness, or anger to influence human's mental state.",
"Social Antagonism : One could train chatbots with the Specific Words Intention Reward to induce the interlocutors to exhibit gender biases or use racist terms to purposefully destabilize society.",
"Political Interference : One could train chatbots with the malicious intentions of manipulating the public's political opinion.",
"propose the following methods to counter them.",
"Intention Classifier : We could train a dialogue classifier that classifies whether a chatbot is purposefully influencing humans.",
"We believe this is technically achievable as we could find many works that aim to distinguish whether a sentence is generated by humans or not (Gao et al., 2020).",
"To further refine this work, we could easily collect training datasets for this classifier by interacting with chatbots trained by our framework and other general-purpose chatbots.",
"By doing this, we could inform humans when we detect that the chatbot they are conversing with is being manipulative.",
"Special Token : In the future, biomimetic technologies could blur the boundary between a living being and an artifact.",
"We suggest that if the chatbot model generates the sentences, the sentence needs to be labeled with some special flag to tell people whether the chatbot generates the sentence with the intention.",
"For instance, we can add <chatbot | intention> before any chatbot's response with the intention to inform people that a chatbot is trying to influence them.",
"This will make users aware that they are interacting with a chatbot and can undermine the effectiveness of a malevolent attack.",
"Safety Layer : Inspired by (Adiwardana et al., 2020), we could use a safety layer (e.g., an additional classifier) to filter out sensitive or toxic responses from chatbots during inference.",
"Future Work To avoid malicious actors taking our framework and train their own chatbot.",
"The development of the Intention Classifier become an essential research topic.",
"In future work, we would set the development of the Intention Classifier as the top priority.",
"The functions of the Intention Classifier are not only detect the intention of a dialogue system, it can also have an ability to generalize to any other dialogue systems.",
"With the power of Meta-Learning (Finn et al., 2017) the classifier is expected to train on a dialogue system with few data and could have the ability to detect whether sentences generated by the dialogue system are with intention.",
"As developers of emerging technologies, we also take responsibility for defining the boundaries of these technologies.",
"We will continue to refine the aforementioned methods to ensure that the proposed methodology improves public welfare as we intend it to."
] | [
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"result",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"result",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"method",
"objective",
"other",
"abstain",
"other",
"other",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"method",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"abstain",
"objective",
"method",
"method",
"method",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain"
] |
[
"Reasoning about tabular information presents unique challenges to modern NLP approaches which largely rely on pre-trained contextualized embeddings of text.",
"In this paper, we study these challenges through the problem of tabular natural language inference.",
"We propose easy and effective modifications to how information is presented to a model for this task.",
"We show via systematic experiments that these strategies substantially improve tabular inference performance.",
"Natural Language Inference (NLI) is the task of determining if a hypothesis sentence can be inferred as true, false, or undetermined given a premise sentence (Dagan et al., 2013).",
"Contextual sentence embeddings such as BERT (Devlin et al., 2019) and RoBERTa (Liu et al., 2019), applied to large datasets such as SNLI (Bowman et al., 2015) and MultiNLI (Williams et al., 2018), have led to near-human performance of NLI systems.",
"In this paper, we study the harder problem of reasoning about tabular premises, as instantiated in datasets such as TabFact (Chen et al., 2019) and InfoTabS (Gupta et al., 2020).",
"This problem is similar to standard NLI, but the premises are Wikipedia tables rather than sentences.",
"Models similar to the best ones for the standard NLI datasets struggle with tabular inference.",
"Using the InfoTabS dataset as an example, we present a focused study that investigates",
"(a) the poor performance of existing models,",
"(b) connections to information deficiency in the tabular premises, and,",
"(c) simple yet effective mitigations for these problems.",
"We use the table and hypotheses in Figure 1 as a running example through this paper, and re*The first two authors contributed equally to the work.",
"The first author was a remote intern at University of Utah during the work.",
"fer to the left column as its keys.",
"1 Tabular inference is challenging for several reasons:",
"(a) Poor table representation : The table does not explicitly state the relationship between the keys and values.",
"(b) Missing implicit lexical knowledge due to limited training data: This affects interpreting words like fewer' , and over' in H1 and H2 respectively.",
"(c) Presence of distracting information : All keys except No.",
"of listings are unrelated to the hypotheses H1 and H2.",
"(d) Missing domain knowledge about keys : We need to interpret the key Volume in the financial context for this table.",
"In the absence of large labeled corpora, any modeling strategy needs to explicitly address these problems.",
"In this paper, we propose effective approaches for addressing them, and show that they lead to substantial improvements in prediction quality, especially on adversarial test sets.",
"This focused study makes the following contributions: 1. We analyse why the existing state-of-the-art BERT class models struggle on the challenging task of NLI over tabular data.",
"2. We propose solutions to overcome these challenges via simple modifications to inputs using existing language resources.",
"1 Keys in the InfoTabS tables are similar to column headers in the TabFact database-style tables.",
"3. Through extensive experiments, we show significant improvements to model performance, especially on challenging adversarial test sets.",
"The updated dataset, along with associated scripts, are available at https://github.com/ utahnlp/knowledge_infotabs .",
"We examine the issues highlighted in 1 and propose simple solutions to mitigate them below.",
"Better Paragraph Representation (BPR): One way to represent the premise table is to use a universal template to convert each row of the table into sentence which serves as input to a BERT-style model.",
"Gupta et al. (2020) suggest that in a table titled t , a row with key k and value v should be converted to a sentence using the template: The k of t are v .",
"Despite the advantage of simplicity, the approach produces ungrammatical sentences.",
"In our example, the template converts the Founded row to the sentence The Founded of New York Stock Exchange are May 17, 1792; 226 years ago. .",
"We note that keys are associated with values of specific entity types such as MONEY , DATE , CARDINAL , and BOOL , and the entire table itself has a category.",
"Therefore, we propose type-specific templates, instead of using the universal one.",
"2 In our example, the table category is Organization and the key Founded has the type DATE .",
"A better template for this key is t was k on v , which produces the more grammatical sentence \"New York Stock Exchange was Founded on May 17, 1792; 226 years ago.\" .",
"Furthermore, we observe that including the table category information i.e. New York Stock Exchange is an Organization. helps in better premise context understanding.",
"3 Appendix A provides more such templates.",
"Implicit Knowledge Addition (KG implicit): Tables represent information implicitly ; they do not employ connectives to link their cells.",
"As a result, a model trained only on tables struggles to make lexical inferences about the hypothesis, such as the difference between the meanings of before' and after' , and the function of negations.",
"This is surprising, because the models have the benefit of being pre-trained on large textual corpora.",
"The construction of the template sentences based on entity type is a one-time manual step.",
"3 This category information is provided in the InfoTabS and TabFact datasets.",
"For other datasets, it can be inferred easily by clustering over the keys of the training tables.",
"Recently, Andreas (2020) and Pruksachatkun et al. (2020) showed that we can pre-train models on specific tasks to incorporate such implicit knowledge.",
"Eisenschlos et al. (2020) use pre-training on synthetic data to improve the performance on the TabFact dataset.",
"Inspired by these, we first train our model on the large, diverse and human-written MultiNLI dataset.",
"Then, we fine tune it to the InfoTabS task.",
"Pre-training with MultiNLI data exposes the model to diverse lexical constructions.",
"Furthermore, it increases the training data size by 433 K (MultiNLI) example pairs.",
"This makes the representation better tuned to the NLI task, thereby leading to better generalization.",
"Distracting Rows Removal (DRR) Not all premise table rows are necessary to reason about a given hypothesis.",
"In our example, for the hypotheses H1 and H2, the row corresponding to the key No.",
"of listings is sufficient to decide the label for the hypothesis.",
"The other rows are an irrelevant distraction.",
"Further, as a practical concern, when longer tables are encoded into sentences as described above, the resulting number of tokens is more than the input size restrictions of existing models, leading to useful rows potentially being cropped.",
"Appendix F shows one such example on the InfoTabS.",
"Therefore, it becomes important to prune irrelevant rows.",
"To identify relevant rows, we employ a simpli-fied version of the alignment algorithm used by Yadav et al. (2019, 2020) for retrieval in reading comprehension.",
"First, every word in the hypothesis sentence is aligned with the most similar word in the table sentences using cosine similarity.",
"We use fastText (Joulin et al., 2016; Mikolov et al., 2018) embeddings for this purpose, which preliminary experiments revealed to be better than other embeddings.",
"Then, we rank rows by their similarity to the hypothesis, by aggregating similarity over content words in the hypothesis.",
"Yadav et al. (2019) used inverse document frequency for weighting words, but we found that simple stop word pruning was sufficient.",
"We took the top k rows by similarity as the pruned representative of the table for this hypothesis.",
"The hyper-parameter k is selected by tuning on a development set.",
"Appendix B gives more details about these design choices.",
"keys improves a model's ability to disambiguate and understand them.",
"We expand the pruned table premises with contextually relevant key information from existing resources such as WordNet (definitions) or Wikipedia (first sentence, usually a definition).",
"4 To find the best expansion of a key, we use the sentential form of a row to obtain the BERT embedding (on-the-fly) for its key.",
"We also obtain the BERT embeddings of the same key from WordNet examples (or Wikipedia sentences).",
"5 Finally, we concatenate the WordNet definition (or the Wikipedia sentence) corresponding to the highest key embedding similarity to the table.",
"As we want the contextually relevant definition of the key, we use the BERT embeddings rather than noncontextual ones (e.g., fastText).",
"For example, the key volume can have different meanings in various contexts.",
"For our example, the contextually best definition is In capital markets, volume , is the total number of a security that was traded during a given period of time. rather than the other definition In thermodynamics, the volume of a system is an extensive parameter for describing its thermodynamic state. .",
"Our experiments are designed to study the research question: Can today's large pre-trained models exploit the information sources described in 2 to better reason about tabular information?",
"Datasets Our experiments uses InfoTabS, a tabular inference dataset from Gupta et al. (2020).",
"The dataset is heterogeneous in the types of tables and keys, and relies on background knowledge and common sense.",
"Unlike the TabFact dataset (Chen et al., 2019), it has all three inference labels, namely entailment, contradiction and neutral.",
"Importantly, for the purpose of our evaluation, it has three test sets.",
"In addition to the usual development set and the test set (called 1 ), the dataset has two adversarial test sets: a contrast set 2 that is lexically similar to 1 , but with minimal changes in the hypotheses 4 Usually multi-word keys are absent in WordNet, in this case we use Wikipedia.",
"The WordNet definition of each word in the key is used if the multi-word key is absent in Wikipedia.",
"5 We prefer using WordNet examples over definition for BERT embedding because",
"(a) an example captures the context in which key is used, and",
"(b) the definition may not always contain the key tokens.",
"and flip entail-contradict label, and a zero-shot set 3 which has long tables from different domains with little key overlap with the training set.",
"Models For a fair comparison with earlier baselines, we use RoBERTa-large (RoBERTa L ) for all our experiments.",
"We represent the premise table by converting each table row into a sentence, and then appending them into a paragraph, i.e. the Para representation of Gupta et al. (2020).",
"Hyperparameters Settings 6 For the distracting row removal (+DRR) step, we have a hyperparameter k .",
"We experimented with k { 2 , 3 , 4 , 5 , 6 } , by predicting on +DRR development premise on model trained on orignal training set (i.e. BPR), as shown in Table 1. The development accuracy increases significantly as k increases from 2 to 4 and then from 4 to 6 , increases marginally ( 1 . 5% improvement).",
"Since our goal is to remove distracting rows, we use the lowest hyperparameter with good performance i.e. k = 4 .",
"7 .",
"Table 2 shows the results of our experiments.",
"BPR As shown in Table 2, with BPR, we observe that the RoBERTa L model improves performance on all dev and test sets except 3 .",
"There are two main reasons behind this poor performance on 3 .",
"First, the zero-shot 3 data includes unseen keys.",
"The number of keys common to 3 and the training set is 94 , whereas for, dev, 1 and 2 it is 334 , 312 , and 273 respectively (i.e., 3-5 times more).",
"Second, despite being represented by better sentences, due to the input size restriction of RoBERTa L some relevant rows are still ignored.",
"KG implicit We observe that implicit knowledge addition via MNLI pre-training helps the model reason and generalize better.",
"From Table 2, we can see significant performance improvement in the dev and all three test sets.",
"DRR This leads to significant improvement in the 3 set.",
"We attribute this to two primary reasons: First, 3 tables are longer ( 13 . 1 keys per table on average, vs. 8 . 8 keys on average in the others), and DRR is important to avoid automatically removing keys from the bottom of a table due to the limitations in RoBERTa L model's input size.",
"Without these relevant rows, the model incorrectly predicts the neutral label.",
"Second, 3 is a zero-shot dataset and has significant proportion of unseen keys which could end up being noise for the model.",
"The slight decrease in performance on the dev, 1 and 2 sets can be attributed to model utilising spurious patterns over irrelevant keys for prediction.",
"8 We validated this experimentally by testing the original premise trained model on the DRR test tables.",
"Table 5 in the Appendix C shows that without pruning, the model focuses on irrelevant rows for prediction.",
"KG explicit With explicit contextualized knowledge about the table keys, we observe a marginal improvement in dev, 1 test sets and a significant performance gain on the 2 and 3 test sets.",
"Improvement in the 3 set shows that adding external knowledge helps in the zero-shot setting.",
"With 2 , the model can not utilize spurious lexical correlations 9 due to its adversarial nature, and is forced to use the relevant keys in the premise tables, thus 8 Performance drop of dev and 2 is also marginal i.e. (dev: 79.57 to 78.77, 1 : 78.27 to 78.13, 2 : 71.87 to 70.90), as compared to InfoTabS WMD-top3 i.e (dev: 75.5 to 72.55, 1 : 74.88 to 70.38, 2 : 65.44 to 62.55), here WMD-top3 performance numbers are taken from Gupta et al. (2020).",
"9 The hypothesis-only baseline for 2 is 48.5 % vs. 1 : 60.5 % and dev: 60.5 % (Gupta et al., 2020) adding explicit information about the key improves performance more for 2 than 1 or dev.",
"Appendix F shows some qualitative examples.",
"We perform an ablation study as shown in table 3, where instead of doing all modification sequentially one after another (+), we do only one modification at a time to analyze its effects.",
"Through our ablation study we observe that:",
"(a) DRR improves performance on the dev, 1 , and 2 sets, but slightly degrades it on the 3 set.",
"The drop in performance on 3 is due to spurious artifact deletion as explained in details in Appendix E.",
"(b) KG explicit gives performance improvement in all sets.",
"Furthermore, there is significant boost in performance of the adversarial 2 and 3 sets.",
"10",
"(c) Similarly, KG implicit shows significant improvement in all test sets.",
"The large improvements on the adversarial sets 2 and 3 sets, suggest that the model can now reason better.",
"Although, implicit knowledge provides most performance gain, all modifications are needed to obtain the best performance for all sets (especially on the 3 set).",
"11 Premise Dev 1 2 3 Para 75.55 74.88 65.55 64.94 DRR 76.39 75.78 67.22 64.88 KG explicit 77.16 75.38 67.88 65.50 KG implicit 79.06 78.44 71.66 67.55 Table 3: Ablation results with individual modifications.",
"Recently, there have been many papers which study several NLP tasks on semi-structured tabular data.",
"These include tabular NLI and fact verification tasks such as TabFact (Chen et al., 2019), and InfoTabS (Gupta et al., 2020), various question answering and semantic parsing tasks (Pasupat and Liang, 2015; Krishnamurthy et al., 2017; Abbas et al., 2016; Sun et al., 2016; Chen et al., 2020; Lin et al., 2020, inter alia ), and table-to-text generation and its evaluation (e.g., Parikh et al., 2020; Radev et al., 2020).",
"Several, models for better representation of tables such as TAPAS (Herzig 10 The KG explicit step is performed only for relevant keys (after DRR).",
"11 We show in Appendix D, Table 6, that implicit knowledge addition to a non-sentential table representation i.e. Struc (Chen et al., 2019; Gupta et al., 2020) leads to performance improvement as well.",
"et al., 2020), TaBERT (Yin et al., 2020), and Tab-Struc (Zhang et al., 2020) were recently proposed.",
"Yu et al. (2018, 2021) and Eisenschlos et al. (2020) study pre-training for improving tabular inference, similar to our MutliNLI pre-training.",
"The proposed modifications in this work are simple and intuitive.",
"Yet, existing table reasoning papers have not studied the impact of such input modifications.",
"Furthermore, much of the recent work focuses on building sophisticated neural models, without explicit focus on how these models (de-signed for raw text) adapt to the tabular data.",
"In this work, we argue that instead of relying on the neural network to magically work for tabular structures, we should carefully think about the representation of semi-structured data, and the incorporation of both implicit and explicit knowledge into neural models.",
"Our work highlights that simple pre-processing steps are important, especially for better generalization, as evident from the significant improvement in performance on adversarial test sets with the same RoBERTa models.",
"We recommend that these pre-processing steps should be standardized across table reasoning tasks.",
"We introduced simple and effective modifications that rely on introducing additional knowledge to improve tabular NLI.",
"These modifications governs what information is provided to a tabular NLI and how the given information is presented to the model.",
"We presented a case study with the recently published InfoTabS dataset and showed that our proposed changes lead to significant improvements.",
"Furthermore, we also carefully studied the effect of these modifications on the multiple test-sets, and why a certain modification seems to help a particular adversarial set.",
"We believe that our study and proposed solutions will be valuable to researchers working on question answering and generation problems involving both tabular and textual inputs, such as tabular/hybrid question answering and table-to-text generation, especially with difficult or adversarial evaluation.",
"Looking ahead, our work can be extended to include explicit knowledge for hypothesis tokens as well.",
"To increase robustness, we can also integrate structural constraints via data augmentation through NLI training.",
"Moreover, we expect that structural information such as position encoding could also help better represent tables.",
"We thank members of the Utah NLP group for their valuable insights and suggestions at various stages of the project; and reviewers their helpful comments.",
"We also thank the support of NSF grants #1801446 (SATC) and #1822877 (Cyberlearning) and a generous gift from Verisk Inc."
] | [
"abstain",
"method",
"objective",
"result",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"abstain",
"result",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"other",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"abstain",
"other",
"abstain",
"objective",
"other",
"other",
"abstain",
"abstain",
"method",
"result",
"abstain",
"objective",
"method",
"objective",
"objective",
"abstain",
"abstain",
"other",
"other"
] |
[
"State-of-the-art unsupervised multilingual models (e.g., multilingual BERT) have been shown to generalize in a zero-shot crosslingual setting.",
"This generalization ability has been attributed to the use of a shared subword vocabulary and joint training across multiple languages giving rise to deep multilingual abstractions.",
"We evaluate this hypothesis by designing an alternative approach that transfers a monolingual model to new languages at the lexical level.",
"More concretely, we first train a transformer-based masked language model on one language, and transfer it to a new language by learning a new embedding matrix with the same masked language modeling objectivefreezing parameters of all other layers.",
"This approach does not rely on a shared vocabulary or joint training.",
"However, we show that it is competitive with multilingual BERT on standard cross-lingual classification benchmarks and on a new Cross-lingual Question Answering Dataset (XQuAD).",
"Our results contradict common beliefs of the basis of the generalization ability of multilingual models and suggest that deep monolingual models learn some abstractions that generalize across languages.",
"We also release XQuAD as a more comprehensive cross-lingual benchmark, which comprises 240 paragraphs and 1190 question-answer pairs from SQuAD v1.1 translated into ten languages by professional translators.",
"Multilingual pre-training methods such as multilingual BERT (mBERT, Devlin et al., 2019) have been successfully used for zero-shot cross-lingual transfer (Pires et al., 2019; Conneau and Lample, 2019).",
"These methods work by jointly training a Work done as an intern at DeepMind.",
"transformer model (Vaswani et al., 2017) to perform masked language modeling (MLM) in multiple languages, which is then fine-tuned on a downstream task using labeled data in a single language typically English.",
"As a result of the multilingual pre-training, the model is able to generalize to other languages, even if it has never seen labeled data in those languages.",
"Such a cross-lingual generalization ability is surprising, as there is no explicit cross-lingual term in the underlying training objective.",
"In relation to this, Pires et al. (2019) hypothesized that: ...having word pieces used in all languages (numbers, URLs, etc), which have to be mapped to a shared space forces the co-occurring pieces to also be mapped to a shared space, thus spreading the effect to other word pieces, until different languages are close to a shared space.",
"...mBERT's ability to generalize cannot be attributed solely to vocabulary memorization, and that it must be learning a deeper multilingual representation.",
"Cao et al. (2020) echoed this sentiment, and Wu and Dredze (2019) further observed that mBERT performs better in languages that share many subwords.",
"As such, the current consensus of the crosslingual generalization ability of mBERT is based on a combination of three factors:",
"(i) shared vocabulary items that act as anchor points;",
"(ii) joint training across multiple languages that spreads this effect; which ultimately yields",
"(iii) deep cross-lingual representations that generalize across languages and tasks.",
"In this paper, we empirically test this hypothesis by designing an alternative approach that violates all of these assumptions.",
"As illustrated in Figure 1, our method starts with a monolingual transformer trained with MLM, which we transfer to a new language by learning a new embedding matrix through MLM in the new language while freezing parameters of all other layers.",
"This approach only learns new lexical parameters and does not rely on shared Python [MASK] an interpreted [MASK] language Python is an interpreted programming language pos 0 pos 1 tok 1 pos 2 MASK tok 2 pos N tok N seg A seg A seg A seg B ... ... ...",
"vocabulary items nor joint learning.",
"However, we show that it is competitive with joint multilingual pre-training across standard zero-shot cross-lingual transfer benchmarks (XNLI, MLDoc, and PAWS-X).",
"We also experiment with a new Cross-lingual Question Answering Dataset (XQuAD), which consists of 240 paragraphs and 1190 question-answer pairs from SQuAD v1.1 (Rajpurkar et al., 2016) translated into ten languages by professional translators.",
"Question answering as a task is a classic probe for language understanding.",
"It has also been found to be less susceptible to annotation artifacts commonly found in other benchmarks (Kaushik and Lipton, 2018; Gururangan et al., 2018).",
"We believe that XQuAD can serve as a more comprehensive cross-lingual benchmark and make it publicly available at https://github.",
"com/deepmind/xquad .",
"Our results on XQuAD show that the monolingual transfer approach can be made competitive with mBERT by learning second language-specific transformations via adapter modules (Rebuffi et al., 2017).",
"Our contributions in this paper are as follows:",
"(i) we propose a method to transfer monolingual representations to new languages in an unsupervised fashion ( 2) 1 ;",
"(ii) we show that neither a shared subword vocabulary nor joint multilingual training is necessary for zero-shot transfer and find that the effective vocabulary size per language is an important factor for learning multilingual models ( 3 and 4);",
"(iii) we show that monolingual models learn abstractions that generalize across languages ( 5); and",
"(iv) we present a new cross-lingual question answering dataset ( 4).",
"1 This is particularly useful for low-resource languages, since many pre-trained models are currently in English.",
"In this section, we propose an approach to transfer a pre-trained monolingual model in one language L 1 (for which both task supervision and a monolingual corpus are available) to a second language L 2 (for which only a monolingual corpus is available).",
"The method serves as a counterpoint to existing joint multilingual models, as it works by aligning new lexical parameters to a monolingually trained deep model.",
"As illustrated in Figure 1, our proposed method consists of four steps: 1. Pre-train a monolingual BERT (i.e. a transformer) in L 1 with masked language modeling (MLM) and next sentence prediction (NSP) objectives on an unlabeled L 1 corpus.",
"2. Transfer the model to a new language by learning new token embeddings while freezing the transformer body with the same training objectives (MLM and NSP) on an unlabeled L 2 corpus.",
"3. Fine-tune the transformer for a downstream task using labeled data in L 1 , while keeping the L 1 token embeddings frozen .",
"4. Zero-shot transfer the resulting model to L 2 by swapping the L 1 token embeddings with the L 2 embeddings learned in Step 2. We note that, unlike mBERT, we use a separate subword vocabulary for each language, which is trained on its respective monolingual corpus, so the model has no notion of shared subwords.",
"However, the special [CLS] , [SEP] , [MASK] , [PAD] , and [UNK] symbols are shared across languages, and fine-tuned in Step 3. 2 We observe further improvements on several downstream tasks using the following extensions to the above method.",
"Language-specific position embeddings.",
"The basic approach does not take into account different word orders commonly found in different languages, as it reuses the position embeddings in L 1 for L 2 .",
"We relax this restriction by learning a separate set of position embeddings for L 2 in Step 2 (along with L 2 token embeddings).",
"3 We treat the [CLS] symbol as a special case.",
"In the original implementation, BERT treats [CLS] as a regular word with its own position and segment embeddings, even if it always appears in the first position.",
"However, this does not provide any extra capacity to the model, as the same position and segment embeddings are always added up to the [CLS] embedding.",
"Following this observation, we do not use any position and segment embeddings for the [CLS] symbol.",
"Noised fine-tuning.",
"The transformer body in our proposed method is only trained with L 1 embeddings as its input layer, but is used with L 2 embeddings at test time.",
"To make the model more robust to this mismatch, we add Gaussian noises sampled from the standard normal distribution to the word, position, and segment embeddings during the fine-tuning step (Step 3).",
"Adapters.",
"We also investigate the possibility of allowing the model to learn better deep representations of L 2 , while retaining the alignment with L 1 using residual adapters (Rebuffi et al., 2017).",
"Adapters are small task-specific bottleneck layers that are added between layers of a pre-trained model.",
"During fine-tuning, the original model parameters are frozen, and only parameters of the adapter modules are learned.",
"In Step 2, when we transfer the L 1 transformer to L 2 , we add a feed-forward adapter module after the projection following multi-headed attention and after the two feed-forward layers in each transformer layer, similar to Houlsby et al. (2019).",
"Note that the original transformer body is still frozen, and only parameters of 2 The rationale behind this is that special symbols are generally task dependent, and given that the fine-tuning in downstream tasks is done exclusively in English, we need to share these symbols to zero-shot transfer to other languages.",
"3 We also freeze the L 1 position embeddings in Step 3 accordingly, and the L 2 position embeddings are plugged in together with the token embeddings in Step 4. the adapter modules are trainable (in addition to the embedding matrix in L 2 ).",
"Our goal is to evaluate the performance of different multilingual models in the zero-shot cross-lingual setting to better understand the source of their generalization ability.",
"We describe the models that we compare ( 3.1), the experimental setting ( 3.2), and the results on three classification datasets: XNLI ( 3.3), MLDoc ( 3.4) and PAWS-X ( 3.5).",
"We discuss experiments on our new XQuAD dataset in 4. In all experiments, we fine-tune a pre-trained model using labeled training examples in English, and evaluate on test examples in other languages via zero-shot transfer.",
"Joint multilingual models ( JOINTMULTI ).",
"A multilingual BERT model trained jointly on 15 languages 4 .",
"This model is analogous to mBERT and closely related to other variants like XLM.",
"Joint pairwise bilingual models ( JOINTPAIR ).",
"A multilingual BERT model trained jointly on two languages (English and another language).",
"This serves to control the effect of having multiple languages in joint training.",
"At the same time, it provides a joint system that is directly comparable to the monolingual transfer approach in 2, which also operates on two languages.",
"Cross-lingual word embedding mappings ( CLWE ).",
"The method we described in 2 operates at the lexical level, and can be seen as a form of learning cross-lingual word embeddings that are aligned to a monolingual transformer body.",
"In contrast to this approach, standard cross-lingual word embedding mappings first align monolingual lexical spaces and then learn a multilingual deep model on top of this space.",
"We also include a method based on this alternative approach where we train skip-gram embeddings for each language, and map them to a shared space using VecMap (Artetxe et al., 2018).",
"5 We then train an English BERT model using MLM and NSP on top of the frozen mapped embeddings.",
"The model is 4 We use all languages that are included in XNLI (Conneau et al., 2018b).",
"5 We use the orthogonal mode in VecMap and map all languages into English.",
"then fine-tuned using English labeled data while keeping the embeddings frozen.",
"We zero-shot transfer to a new language by plugging in its respective mapped embeddings.",
"( MONOTRANS ).",
"Our method described in 2. We use English as L 1 and try multiple variants with different extensions.",
"Vocabulary.",
"We perform subword tokenization using the unigram model in SentencePiece (Kudo and Richardson, 2018).",
"In order to understand the effect of sharing subwords across languages and the size of the vocabulary, we train each model with various settings.",
"We train 4 different JOINTMULTI models with a vocabulary of 32k, 64k, 100k, and 200k subwords.",
"For JOINTPAIR , we train one model with a joint vocabulary of 32k subwords, learned separately for each language pair, and another one with a disjoint vocabulary of 32k subwords per language, learned on its respective monolingual corpus.",
"The latter is directly comparable to MONOTRANS in terms of vocabulary, in that it is restricted to two languages and uses the exact same disjoint vocabulary with 32k subwords per language.",
"For CLWE , we use the same subword vocabulary and investigate two choices:",
"(i) the number of embedding dimensions300d (the standard in the crosslingual embedding literature) and 768d (equivalent to the rest of the models); and",
"(ii) the self-learning initializationweakly supervised (based on identically spelled words, Sgaard et al., 2018) and unsupervised (based on the intralingual similarity distribution, Artetxe et al., 2018).",
"Pre-training data.",
"We use Wikipedia as our training corpus, similar to mBERT and XLM (Con-neau and Lample, 2019), which we extract using the WikiExtractor tool.",
"6 We do not perform any lowercasing or normalization.",
"When working with languages of different corpus sizes, we use the same upsampling strategy as Conneau and Lample (2019) for both the subword vocabulary learning and the pre-training.",
"Training details.",
"Our implementation is based on the BERT code from Devlin et al. (2019).",
"For adapters, we build on the code by Houlsby et al. (2019).",
"We use the model architecture of 6 https://github.com/attardi/ wikiextractor BERTBASE , similar to mBERT.",
"We use the LAMB optimizer (You et al., 2020) and train on 64 TPUv3 chips for 250,000 steps using the same hyperparameters as You et al. (2020).",
"We describe other training details in Appendix A. Our hyperparameter configuration is based on preliminary experiments on the development set of the XNLI dataset.",
"We do not perform any exhaustive hyperparameter search, and use the exact same settings for all model variants, languages, and tasks.",
"Evaluation setting.",
"We perform a single training and evaluation run for each model, and report results in the corresponding test set for each downstream task.",
"For MONOTRANS , we observe stability issues when learning language-specific position embeddings for Greek, Thai and Swahili.",
"The second step would occasionally fail to converge to a good solution.",
"For these three languages, we run Step 2 of our proposed method ( 2) three times and pick the best model on the XNLI development set.",
"In natural language inference (NLI), given two sentences (a premise and a hypothesis), the goal is to decide whether there is an entailment , contradiction , or neutral relationship between them (Bowman et al., 2015).",
"We train all models on the MultiNLI dataset (Williams et al., 2018) in English and evaluate on XNLI (Conneau et al., 2018b)a cross-lingual NLI dataset consisting of 2,500 development and 5,000 test instances translated from English into 14 languages.",
"JOINTMULTI is comparable with the literature.",
"Our best JOINTMULTI model is substantially better than mBERT, and only one point worse (on average) than the unsupervised XLM model, which is larger in size.",
"A larger vocabulary is beneficial.",
"JOINTMULTI variants with a larger vocabulary perform better.",
"More languages do not improve performance.",
"JOINTPAIR models with a joint vocabulary perform comparably with JOINTMULTI .",
"7 mBERT covers 102 languages and has a shared vocabulary of 110k subwords.",
"XLM covers 15 languages and uses a larger model size with a shared vocabulary of 95k subwords, which contributes to its better performance.",
"A shared subword vocabulary is not necessary for joint multilingual pre-training.",
"The equivalent JOINTPAIR models with a disjoint vocabulary for each language perform better.",
"CLWE performs poorly.",
"Even if it is competitive in English, it does not transfer as well to other languages.",
"Larger dimensionalities and weak supervision improve CLWE , but its performance is still below other models.",
"MONOTRANS is competitive with joint learning.",
"The basic version of MONOTRANS is 3.3 points worse on average than its equivalent JOINTPAIR model.",
"Language-specific position embeddings and noised fine-tuning reduce the gap to only 1.1 points.",
"Adapters mostly improve performance, except for low-resource languages such as Urdu, Swahili, Thai, and Greek.",
"In subsequent experiments, we include results for all variants of MONOTRANS and JOINTPAIR , the best CLWE variant (768d ident), and JOINTMULTI with 32k and 200k voc.",
"In MLDoc (Schwenk and Li, 2018), the task is to classify documents into one of four different genres: corporate/industrial , economics , govern-ment/social , and markets .",
"The dataset is an improved version of the Reuters benchmark (Klemen-tiev et al., 2012), and consists of 1,000 training and 4,000 test documents in 7 languages.",
"models tend to perform better, and the best overall results are from CLWE .",
"We believe that this can be attributed to:",
"(i) the superficial nature of the task itself, as a model can rely on a few keywords to identify the genre of an input document without requiring any high-level understanding and",
"(ii) the small size of the training set.",
"Nonetheless, all of the four model families obtain generally similar results, corroborating our previous findings that joint multilingual pre-training and a shared vocabulary are not needed to achieve good performance.",
"PAWS is a dataset that contains pairs of sentences with a high lexical overlap (Zhang et al., 2019).",
"The task is to predict whether each pair is a paraphrase or not.",
"While the original dataset is only in English, PAWS-X (Yang et al., 2019) provides human translations into six languages.",
"We evaluate our models on this dataset and show our results in Table 2. Similar to experiments on other datasets, MONOTRANS is competitive with the best joint variant, with a difference of only 0.6 points when we learn language-specific position embeddings.",
"Our classification experiments demonstrate that MONOTRANS is competitive with JOINTMULTI and JOINTPAIR , despite being multilingual at the embedding layer only (i.e. the transformer body is trained",
"exclusively on English).",
"One possible explanation for this behaviour is that existing cross-lingual benchmarks are flawed and solvable at the lexical level.",
"For example, previous work has shown that models trained on MultiNLIfrom which XNLI was derivedlearn to exploit superficial cues in the data (Gururangan et al., 2018).",
"To better understand the cross-lingual generalization ability of these models, we create a new Crosslingual Question Answering Dataset (XQuAD).",
"Question answering is a classic probe for natural language understanding (Hermann et al., 2015) and has been shown to be less susceptible to annotation artifacts than other popular tasks (Kaushik and Lipton, 2018).",
"In contrast to existing classification benchmarks, extractive question answering requires identifying relevant answer spans in longer context paragraphs, thus requiring some degree of structural transfer across languages.",
"XQuAD consists of a subset of 240 paragraphs and 1190 question-answer pairs from the development set of SQuAD v1.1 8 together with their translations into ten languages: Spanish, German, Greek, Russian, Turkish, Arabic, Vietnamese, Thai, Chinese, and Hindi.",
"Both the context paragraphs and the questions are translated by professional human translators from Gengo 9 .",
"In order to facilitate easy annotations of answer spans, we choose the most frequent answer for each question and mark its beginning and end in the context paragraph using special symbols, instructing translators to keep these symbols in the relevant positions in 8 We choose SQuAD 1.1 to avoid translating unanswerable questions.",
"their translations.",
"Appendix B discusses the dataset in more details.",
"We show F 1 scores on XQuAD in Table 3 (we include exact match scores in Appendix C).",
"Similar to our findings in the XNLI experiment, the vocabulary size has a large impact on JOINTMULTI , and JOINTPAIR models with disjoint vocabularies perform the best.",
"The gap between MONOTRANS and joint models is larger, but MONOTRANS still performs surprisingly well given the nature of the task.",
"We observe that learning language-specific position embeddings is helpful in most cases, but completely fails for Turkish and Hindi.",
"Interestingly, the exact same pre-trained models (after Steps 1 and 2) do obtain competitive results in XNLI ( 3.3).",
"In contrast to results on previous tasks, adding adapters to allow a transferred monolingual model to learn higher level abstractions in the new language significantly improves performance, resulting in a MONOTRANS model that is comparable to the best joint system.",
"Joint multilingual training.",
"We demonstrate that sharing subwords across languages is not necessary for mBERT to work, contrary to a previous hypothesis by Pires et al. (2019).",
"We also do not observe clear improvements by scaling the joint training to a large number of languages.",
"Rather than having a joint vs. disjoint vocabulary or two vs. multiple languages, we find that an important factor is the effective vocabulary size per language .",
"When using a joint vocabulary, only a subset of the tokens is effectively shared, while the en es de el ru tr ar vi th zh hi avg mBERT 88.9 75.5 70.6 62.6 71.3 55.4 61.5 69.5 42.7 58.0 59.2 65.0 CLWE 768d ident 84.2 58.0 51.2 41.1 48.3 24.2 32.8 29.7 23.8 19.9 21.7 39.5 JOINTMULTI 32k voc 79.3 59.5 60.3 49.6 59.7 42.9 52.3 53.6 49.3 50.2 42.3 54.5 200k voc 82.7 74.3 71.3 67.1 70.2 56.6 64.8 67.6 58.6 51.5 58.3 65.7 JOINTPAIR Joint voc 82.8 68.3 73.6 58.8 69.8 53.8 65.3 69.5 56.3 58.8 57.4 64.9 Disjoint voc 83.3 72.5 72.8 67.3 71.7 60.5 66.5 68.9 56.1 60.4 56.7 67.0 MONOTRANS Token emb 83.9 67.9 62.1 63.0 64.2 51.2 61.0 64.1 52.6 51.4 50.9 61.1 + pos emb 84.7 73.1 65.9 66.5 66.2 16.2 59.5 65.8 51.5 56.4 19.3 56.8 + noising 82.1 68.4 68.2 67.3 67.5 17.5 61.2 65.9 57.5 58.5 21.5 57.8 + adapters 82.1 70.8 70.6 67.9 69.1 61.3 66.0 67.0 57.5 60.5 61.9 66.8 Table 3: XQuAD results (F1).",
"rest tends to occur in only one language.",
"As a result, multiple languages compete for allocations in the shared vocabulary.",
"We observe that multilingual models with larger vocabulary sizes obtain consistently better results.",
"It is also interesting that our best results are generally obtained by the JOINTPAIR systems with a disjoint vocabulary, which guarantees that each language is allocated 32k subwords.",
"As such, we believe that future work should treat the effective vocabulary size as an important factor.",
"Transfer of monolingual representations.",
"MONOTRANS is competitive even in the most challenging scenarios.",
"This indicates that joint multilingual pre-training is not essential for cross-lingual generalization, suggesting that monolingual models learn linguistic abstractions that generalize across languages.",
"To get a better understanding of this phenomenon, we probe the representations of MONOTRANS .",
"As existing probing datasets are only available in English, we train monolingual representations in non-English languages and transfer them to English.",
"We probe representations from the resulting English models with the Word in Context (WiC; Pilehvar and Camacho-Collados, 2019), Stanford Contextual Word Similarity (SCWS; Huang et al., 2012), and the syntactic evaluation (Marvin and Linzen, 2018) datasets.",
"We provide details of our experimental setup in Appendix D and show a summary of our results in Table 4. The results indicate that monolingual semantic representations learned from non-English languages transfer to English to a degree.",
"On WiC, models transferred from non-English languages are comparable with models trained on English.",
"On SCWS, while there are more variations, models trained on other languages still perform surprisingly well.",
"In contrast, we observe larger gaps in the syntactic evaluation dataset.",
"This suggests that transferring syntactic abstractions is more challenging than semantic abstractions.",
"We leave a more thorough investigation of whether joint multilingual pre-training reduces to learning a lexical-level alignment for future work.",
"CLWE .",
"CLWE modelsalthough similar in spirit to MONOTRANS are only competitive on the easiest and smallest task (MLDoc), and perform poorly on the more challenging ones (XNLI and XQuAD).",
"While previous work has questioned evaluation methods in this research area (Glavas et al., 2019; Artetxe et al., 2019), our results provide evidence that existing methods are not competitive in challenging downstream tasks and that mapping between two fixed embedding spaces may be overly restrictive.",
"For that reason, we think that designing better integration techniques of CLWE to downstream models is an important future direction.",
"Lifelong learning.",
"Humans learn continuously and accumulate knowledge throughout their lifetime.",
"In contrast, existing multilingual models focus on the scenario where all training data for all languages is available in advance.",
"The setting to transfer a monolingual model to other languages is suitable for the scenario where one needs to incorporate new languages into an existing model, while no longer having access to the original data.",
"Such a scenario is of significant practical interest, since models are often released without the data they are trained on.",
"In that regard, our work provides a baseline for multilingual lifelong learning.",
"Unsupervised lexical multilingual representations.",
"A common approach to learn multilingual representations is based on cross-lingual word embedding mappings.",
"These methods learn a set of monolingual word embeddings for each language and map them to a shared space through a linear transformation.",
"Recent approaches perform this mapping with an unsupervised initialization based on heuristics (Artetxe et al., 2018) or adversarial training (Zhang et al., 2017; Conneau et al., 2018a), which is further improved through self-learning (Artetxe et al., 2017).",
"The same approach has also been adapted for contextual representations (Schus-ter et al., 2019).",
"Unsupervised deep multilingual representations.",
"In contrast to the previous approach, which learns a shared multilingual space at the lexical level, state-of-the-art methods learn deep representations with a transformer.",
"Most of these methods are based on mBERT.",
"Extensions to mBERT include scaling it up and incorporating parallel data (Conneau and Lample, 2019), adding auxiliary pretraining tasks (Huang et al., 2019), and encouraging representations of translations to be similar (Cao et al., 2020).",
"Concurrent to this work, Tran (2020) propose a more complex approach to transfer a monolingual BERT to other languages that achieves results similar to ours.",
"However, they find that post-hoc embedding learning from a random initialization does not work well.",
"In contrast, we show that monolingual representations generalize well to other languages and that we can transfer to a new language by learning new subword embeddings.",
"Contemporaneous work also shows that a shared vocabulary is not important for learning multilingual representations (K et al., 2020; Wu et al., 2019), while Lewis et al. (2019) propose a question answering dataset that is similar in spirit to ours but covers fewer languages and is not parallel across all of them.",
"We compared state-of-the-art multilingual representation learning models and a monolingual model that is transferred to new languages at the lexical level.",
"We demonstrated that these models perform comparably on standard zero-shot crosslingual transfer benchmarks, indicating that neither a shared vocabulary nor joint pre-training are necessary in multilingual models.",
"We also showed that a monolingual model trained on a particular language learns some semantic abstractions that are generalizable to other languages in a series of probing experiments.",
"Our results and analysis contradict previous theories and provide new insights into the basis of the generalization abilities of multilingual models.",
"To provide a more comprehensive benchmark to evaluate cross-lingual models, we also released the Cross-lingual Question Answering Dataset (XQuAD).",
"We thank Chris Dyer and Phil Blunsom for helpful comments on an earlier draft of this paper and Tyler Liechty for assistance with datasets."
] | [
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"result",
"objective",
"objective",
"result",
"result",
"objective",
"abstain",
"objective",
"abstain",
"objective",
"objective",
"objective",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"other",
"abstain",
"abstain",
"result",
"result",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"objective",
"other",
"objective",
"objective",
"result",
"objective",
"method",
"other"
] |
[
"Complete Multi-lingual Neural Machine Translation (C-MNMT) achieves superior performance against the conventional MNMT by constructing multi-way aligned corpus, i.e., aligning bilingual training examples from different language pairs when either their source or target sides are identical.",
"However, since exactly identical sentences from different language pairs are scarce, the power of the multi-way aligned corpus is limited by its scale.",
"To handle this problem, this paper proposes \"Extract and Generate\" (EAG), a two-step approach to construct large-scale and high-quality multi-way aligned corpus from bilingual data.",
"Specifically, we first extract candidate aligned examples by pairing the bilingual examples from different language pairs with highly similar source or target sentences; and then generate the final aligned examples from the candidates with a well-trained generation model.",
"With this two-step pipeline, EAG can construct a large-scale and multi-way aligned corpus whose diversity is almost identical to the original bilingual corpus.",
"Experiments on two publicly available datasets i.e., WMT-5 and OPUS-100, show that the proposed method achieves significant improvements over strong baselines, with +1.1 and +1.4 BLEU points improvements on the two datasets respectively.",
"Multilingual Neural Machine Translation (MMMT) (Dong et al., 2015; Firat et al., 2017; Johnson et al., 2017; Aharoni et al., 2019) has achieved promising results on serving translations between multiple language pairs with one model.",
"With sharing parameters of the model, MNMT can facilitate information sharing between similar languages and make it possible to translate between low-resource Equal contribution.",
"and zero-shot language pairs.",
"Since the majority of available MT training data are English-centric, i.e., English either as the source or target language, most non-English language pairs do not see a single training example when training MNMT models (Freitag and Firat, 2020).",
"Therefore, the performance of MNMT models on non-English translation directions still left much to be desired: 1) Lack of training data leads to lower performance for non-English language pairs(Zhang et al., 2021); 2) MNMT models cannot beat the pivot-based baseline systems which translate non-English language pairs by bridging through English (Cheng et al., 2016; Habash and Hu, 2009).",
"Recently, Freitag and Firat (2020) re-kindle the flame by proposing C-MNMT, which trains the model on the constructed multi-way aligned corpus.",
"Specifically, they extract the multi-way aligned examples by aligning training examples from different language pairs when either their source or target sides are identical (i.e., pivoting through English, for German English and English French to extract German-French-English examples).",
"Since they directly extract the multi-way aligned examples from the bilingual corpus, we refer to their approach as the extraction-based approach.",
"Despite improving the performance, the scale of multi-way aligned corpus extracted by Freitag and Firat (2020) is always limited compared to English-centric bilingual corpus, e.g., only 0.3M German-Russian-English multi-way aligned corpus extracted from 4.5M German-English and 33.5M English-Russian bilingual corpus.",
"A simple idea for remedying this problem is to add the roughly-aligned corpus by extracting the training examples when either their source or target sides are highly similar.",
"However, our preliminary experiments show that the performance of the model decreases dramatically when we train the model with appending the roughly-aligned corpus.",
"1 One possible solution, referred 1 Detailed descriptions about the preliminary experiment 8141 to as the generation-based approach, is to generate the multi-way aligned examples by distilling the knowledge of the existing NMT model, e.g., extracting German-English-French synthetic three-way aligned data by feeding the English-side sentences of German-English bilingual corpus into the English-French translation model.",
"Although the generation-based approach can theoretically generate non-English corpus with the same size as original bilingual corpus, its generated corpus has very low diversity as the search space of the beam search used by NMT is too narrow to extract diverse translations (Wu et al., 2020; Sun et al., 2020; Shen et al., 2019), which severely limits the power of the generation-based approach.",
"In order to combine advantages of the two branches of approaches mentioned above, we propose a novel two-step approach, named EAG (Extract and Generate), to construct large-scale and high-quality multi-way aligned corpus for C-MNMT.",
"Specifically, we first extract candidate aligned training examples from different language pairs when either their source or target sides are highly similar; and then we generate the final aligned examples from the pre-extracted candidates with a well-trained generation model.",
"The motivation behind EAG is two-fold: 1) Although identical source or target sentences between bilingual examples from different language pairs are scarce, highly similar sentences in source or target side are more wide-spread; 2) Based on the pre-extracted candidate aligned examples which have highly similar source or target sentences, EAG can generate the final aligned examples by only refining the sentences partly with a few modifications.",
"Therefore, the non-English corpus constructed by EAG has almost identical diversity to the original bilingual corpus.",
"Experiments on the publicly available data sets, i.e., WMT-5 and OPUS-100, show that the proposed method achieves substantial improvements over strong baselines.",
"Bilingual NMT Neural machine translation (Sutskever et al., 2014; Cho et al., 2014; Vaswani et al., 2017) achieves great success in recent years due to its end-to-end learning approach and large-scale bilingual corpus.",
"Given a set of sentence pairs D = { ( x, y ) ( X Y ) } , the NMT model is trained to learn the parameter by maximizing can be found in Section 5.1.",
"MNMT Considering training a separate model for each language pair is resource consuming, MNMT (Dong et al., 2015; Johnson et al., 2017; Gu et al., 2020) is introduced to translate between multiple language pairs using a single model (John-son et al., 2017; Ha et al.; Lakew et al., 2018).",
"We mainly focus on the mainstream MNMT model proposed by Johnson et al. (2017), which only introduces an artificial token to the input sequence to indicate which target language to translate.",
"C-MNMT C-MNMT is proposed to build a complete translation graph for MNMT, which contains training examples for each language pair (Freitag and Firat, 2020).",
"A challenging task remaining is how to get direct training data for non-English language pairs.",
"In Freitag and Firat (2020), non-English training examples are constructed by pairing the non-English sides of two training examples with identical English sides.",
"However, this method can't get large-scale training examples since the quantity of exactly identical English sentences from different language pairs is small.",
"Another feasible solution is to generate training examples with pivot-based translation where the source sentence cascades through the pre-trained source English and English target systems to generate the target sentence (Cheng et al., 2016).",
"Despite a large quantity of corpus it can generate, its generated corpus has very low diversity (Wu et al., 2020; Sun et al., 2020; Shen et al., 2019).",
"The proposed EAG has a two-step pipeline.",
"The first step is to extract the candidate aligned examples from the English-centric bilingual corpus.",
"The second step is to generate the final aligned examples from the candidates extracted in the first step.",
"Different from Freitag and Firat (2020) who extract non-English training examples by aligning the English-centric bilingual training examples with identical English sentences, we extract the candidate aligned examples by pairing two English-centric training examples with highly similar English sentences.",
"Various metrics have been proposed to measure the superficial similarity of two sentences, such as TF-IDF (Aizawa, 2003; Huang et al., 2011), edit distance (Xiao et al., 2008; Deng 8142 Figure 1: Examples constructed by EAG.",
"et al., 2013), etc.",
"In this paper, we take edit distance as the measurement to decide the superficial similarity of two English sentences.",
"Three main considerations are behind.",
"Firstly, since edit distance measures the similarity of two sentences with the minimum number of operations to transform one into the other, it tends to extract sentences with similar word compositions and sentence structures.",
"Secondly, since edit distance only utilizes three operations, i.e., removal, insertion, or substitution, it is easier to mimic these operations in the process of generating the final aligned examples (we leave the explanation in the next subsection).",
"Finally, unlike TF-IDF which only considers word bags in two sentences, edit distance also considers the word order in each sentence.",
"Formally, given two English-centric bilingual corpora from two different language pairs { X 1 , Y 1 } and { X 2 , Y 2 } , where X 1 and X 2 are English sides, Y 1 and Y 2 belong to language L a and L b respectively.",
"For sentence pair ( x 1 i , y 1 i ) { X 1 , Y 1 } and ( x 2 j , y 2 j ) { X 2 , Y 2 } , we take ( x 1 i , y 1 i , x 2 j , y 2 j ) as a candidate aligned example if the two English sentences x 1 i and x 2 j meets: f ed ( x 1 i , x 2 j ) min ( | x 1 i | , | x 2 j | ) , (0 , 1) (1) where f ed refers to the function of edit distance calculation, | x | represents the length of the sentence x , is the similarity threshold which can be set by users beforehand to control the similarity of sentences in the candidate aligned examples.",
"With setting = 0 , we can directly extract the same multi-way aligned examples with Freitag and Firat (2020).",
"With larger , more candidate aligned examples can be extracted for looser restriction.",
"Accordingly, there are more noises in the extracted candidate aligned examples.",
"In the extracted candidate aligned example ( x 1 i , y 1 i , x 2 j , y 2 j ) , ( x 2 j , y 2 j ) is not well aligned to ( x 1 i , y 1 i ) if f ed ( x 1 i , x 2 j ) does not equal to zero.",
"To construct the final three-way aligned example, we search for one sentence pair ( x 2 j , y 2 j ) in the language pair { X 2 , Y 2 } , where x 2 j has the same meaning to x 1 i (thus ( x 1 i , y 1 i , y 2 j ) is a three-way aligned example).",
"Unfortunately, it is very difficult for us to directly find such a sentence pair in the large search space.",
"However, considering x 2 j and x 1 i are both in English, we can take an extreme case where x 2 j is identical to x 1 i in the superficial form.",
"Now, the remained question is that we need to search for the sentence y 2 j in language L b , which has the same meaning to x 1 i .",
"By comparing ( x 1 i , y 2 j ) with ( x 2 j , y 2 j ) , as x 1 i can be transformed from x 2 j with the operations performed by edit distance, it is naturally to suppose that we can find such a y 2 j which can be transformed from y 2 j with these operations similarly.",
"Therefore, we can limit the search space for y 2 j with two restrictions: Firstly, sentence y 2 j has the same meaning with x 1 i ; Secondly, y 2 j is transformed from y 2 j with the operations performed by edit distance.",
"Considering the restrictions mentioned above, we apply an NMT model m to search and generate y 2 j .",
"There are two main questions left to be resolved: how to train such a model m and how to generate y 2 j with a well-trained m .",
"Training Motivated by the recent success of self-supervised training (Devlin et al., 2018; Conneau and Lample, 2019; Song et al., 2019; Yang et al., 2020) in natural language processing, we automatically construct the training corpus for m from the candidate aligned examples.",
"Given the candidate aligned example ( x 1 i , y 1 i , x 2 j , y 2 j ) , the training ex-8143 ample for m is built as: ([ x 2 j ; y 2 j ] , y 2 j ) (2) where y 2 j is the target sentence, the concatenation of x 2 j and y 2 j is the source-side input.",
"y 2 j is the noisy form of y 2 j which we build by mimicking the operations of edit distance, i.e, performing insertion, removal, or substitution on some pieces of y 2 j randomly.",
"Specifically, with probability , each position of sentence y 2 j can be noised by either removed directly, inserted or substituted with any other words in the dictionary W b , which is constructed from the corpus Y 2 .",
"With the self-constructed training examples, the model m is trained to generate the target sentence, which is recovered from the right-side of the concatenated input with the operations performed by edit distance, and has the same meaning to the left-side of the input.",
"Generating With a well-trained m , we generate the final aligned examples by running the inference step of m .",
"Formally, for the final aligned example ( x 1 i , y 1 i , y 2 j ) , the sentence y 2 j is calculated by: y 2 j = m ([ x 1 i ; y 2 j ]) (3) where [ ; ] represents the operation of concatenation, and m ( x ) refers to running the inference step of m with x fed as input.",
"With this generation process, y 2 j is not only has the same meaning to x 1 i (thus also aligned to y 1 i ), but also keeps the word composition and sentence structure similar to y 2 j .",
"Therefore, EAG can construct the final aligned corpus for each non-English language pair, and keep the diversity of the constructed corpus almost identical to the original English-centric corpus.",
"For a clear presentation, Algorithm 1 in Appendix A.2 summarizes the process of generating the final aligned examples.",
"We also provide a toy example in Figure 1 to illustrate how the proposed EAG works.",
"For fair comparison, we evaluate our methods on the publicly available dataset WMT-5, which is used by Freitag and Firat (2020).",
"Additionally, we test the scalability of our method by further conducting experiments on Opus-100, which contains English-centric bilingual data from 100 language pairs (Zhang et al., 2020).",
"In the extraction process, we run our extraction code on the CPU with 24 cores and 200G memory.",
"2 In the generation process, we take transformer-big (Vaswani et al., 2017) as the configuration for m , and m is trained with the self-constructed examples mentioned in Section 3.2 on eight V100 GPU cards.",
"3 We choose Transformer as the basic structure for our model and conduct experiments on two standard configurations, i.e, transformer-base and transformer-big.",
"All models are implemented based on the open-source toolkit fairseq (Ott et al., 2019) and trained on the machine with eight V100 GPU cards.",
"4 All bilingual models are trained for 300,000 steps and multi-lingual models are trained for 500,000 steps.",
"We add a language token at the beginning of the input sentence to specify the required target language for all of the multi-lingual models.",
"For the hyper-parameters and , we set them as 0.5 and 0.3 by default and also investigate how their values produce effects on the translation performance.",
"Following Freitag and Firat (2020), we take WMT13EnEs, WMT14EnDe, WMT15EnFr, WMT18EnCs and WMT18EnRu as the training data, the multi-way test set released by WMT2013 evaluation campaign (Bojar et al., 2014) as the test set.",
"The size of each bilingual training corpus (the non-English corpus constructed by Freitag and Firat (2020) included) is presented in Table 1.",
"For the bilingual translation task, the source and target languages are jointly tokenized into 32,000 subword units with BPE (Sennrich et al., 2016).",
"The multi-lingual models use a vocabulary of 64,000 sub-word units tokenized from the combination of all the training corpus.",
"Similar to Freitag and Firat (2020), we use a temperature-based data sampling strategy to over-sample low-resource language pairs in standard MNMT models and low-resource target-languages in C-MNMT models (temperature T = 5 for both cases).",
"We use BLEU scores (Papineni et al., 2002) to measure the model performance and all BLEU scores are calculated with sacreBLEU (Post, 2018).",
"5 2 Intel(R) Xeon(R) Platinum 8255C CPU @ 2.50GHz 3 Detained training process for m can be found in the Appendix A.1.",
"Table 2 shows the training data after constructing non-English examples from English-centric corpus by the proposed EAG.",
"By comparing Table 2 with Table 1, we can find that EAG can construct much more multi-way aligned non-English training examples than Freitag and Firat (2020), e.g., EAG constructs 1.4M bilingual training corpus for the language pair German Russian which is almost up to 4 times more than the corpus extracted by Freitag and Firat (2020).",
"In all, EAG constructs no less than 1M bilingual training examples for each non-English language pair.",
"In order to properly and thoughtfully evaluate the proposed method, we take the following five kinds of baseline systems for comparison:",
"Bilingual systems (Vaswani et al., 2017) Apart from training bilingual baseline models on the original English-centric WMT data, we also train bilingual models for non-English language pairs on the direct bilingual examples extracted by Freitag and Firat (2020).",
"Bridging (pivoting) systems (Cheng et al., 2016) In the bridging or pivoting system, the source sentence cascades through the pre-trained source English and English target systems to generate the target sentence.",
"Extraction-based C-MNMT systems (Freitag and Firat, 2020) Freitag and Firat (2020) construct the multi-way aligned examples by directly extracting and pairing bilingual examples from different language pairs with identical English sentences.",
"Generation-based C-MNMT systems The generation-based C-MNMT baselines construct non-English bilingual examples by distilling the knowledge of the system which cascades the source English and English target models.",
"Different from the bridging baselines which just feed the test examples into the cascaded system and then measure the performance on the test examples, the generation-based C-MNMT baselines feed the non-English sides of the bilingual training examples into the cascaded systems and then get the non-English bilingual training examples by pairing the inputs and outputs.",
"The combination of the generated non-English corpus and original English-centric corpus is used to train the C-MNMT model.",
"We first report the results of our implementations and then present the comparisons with previous works.",
"In our implementations, we take the transformer-base as the basic model structure since it takes less time and computing resources for training.",
"To make a fair comparison with previous works, we conduct experiments on transformer-big which is used by baseline models.",
"Results of our implementation Table 3 shows the results of our implemented systems.",
"Apart from the average performance of the translation directions from each language to others, we also report the average performance on the English-centric and non-English language pairs.",
"6 As shown in Table 3, we can find that the proposed EAG achieves better performance than all of the baseline systems.",
"Compared to the extraction-based C-MNMT, the proposed method achieves an improvement up to 1.1 BLEU points on non-English language pairs.",
"6 Readers can find the detailed results for each language pair in the Appendix A.3.",
"The generation-based C-MNMT performs worse than the extraction-based one even if it generates much larger corpus.",
"Since there is no any training example for non-English language pairs in standard MNMT, the standard MNMT system achieves inferior performance to the pivot and bilingual systems on non-English translation directions.",
"However, with the constructed non-English training examples, EAG achieves 3.2 and 7.5 BLEU points improvements compared with the pivot and bilingual systems respectively.",
"Results compared with previous works Table 4 shows the results of the proposed EAG.",
"We can find that the proposed EAG surpasses Freitag and Firat (2020) almost on all of the translation directions, and achieves an improvement with up to 2.4 BLEU points on the Russian-to-German direction.",
"sampling from the OPUS collection (Tiedemann, 2012).",
"Opus-100 is an English-centric dataset which contains 100 languages on both sides and up to 1M training pairs for each language pair.",
"To evaluate the performance of non-English language pairs, Zhang et al. (2020) sample 2000 sentence pairs of test data for each of the 15 pairings of Arabic, Chinese, Dutch, French, German, and Russian.",
"Following Zhang et al. (2020), we report the sacreBLEU on the average of the 15 non-English language pairs.",
"7 The statistics about the non-English corpus constructed by Freitag and Firat (2020) and EAG are presented in Table",
"5. We can find that EAG is able to construct much more bilingual corpus for non-English language pairs (almost nine times more than Freitag and Firat (2020) for each language pair).",
"We use a vocabulary of 64,000 sub-word units for all of the multi-lingual models, which is tokenized from the combination of all the training corpus with SentencePiece.",
"Results Apart from the baselines mentioned above, we also compare with other two systems proposed by Zhang et al. (2020) and Fan et al. (2020).",
"Zhang et al. (2020) propose the online 7 Signature: BLEU+case.mixed+numrefs.1+smooth.exp+ tok.13a+version.1.4.1 8146 System non-English MNMT system 4.5 pivot system 13.1 generation-based C-MNMT 13.8 extraction-based C-MNMT 16.5 Zhang et al. (2020) 14.1 Fan et al. (2020) 18.4 EAG 17.9 Table 6: Results on Opus-100.",
"Fan et al. (2020) build a C-MNMT model, named m 2 m 100 , which is trained on 7.5B training examples built in house.",
"Following Zhang et al. (2020), we take the transformer-base as the basic model structure for the experiments and results are reported in Table",
"6. We can find that EAG achieves comparable performance to Fan et al. (2020) which utilizes much more data than ours.",
"This is not a fair comparison as the data used Fan et al. (2020) is 75 times as much as ours.",
"Additionally, our model surpasses all other baseline systems and achieves +1.4 BLEU points improvement compared with the extraction-based C-MNMT model.",
"We analyze the proposed method on Opus-100 and take the transformer-base as the model structure.",
"The similarity threshold and the noise ratio are important hyper-parameters in EAG.",
"In this section, we want to test how these two hyper-parameters affect the final translation performance and how they work with each other.",
"We investigate this problem by studying the translation performance with different and , where we vary and from 0 to 0.7 with the interval 0.2.",
"We report the average BLEU score for the translation directions from Arabic to other five languages on the development sets built in house.",
"Figure 2 shows the experimental results.",
"With = 0 , it means that the generation process is not applied and we directly train the NMT model with the extracted roughly aligned examples.",
"And this is the setting of our motivated experiments mentioned in Section 1.",
"We can find that, the final performance drops sharply when we directly train the model with the roughly aligned sentence pairs.",
"For each curve in Figure 2, we can find that the model achieves the best performance when the is around , and then the performance decreases with growing.",
"A relatively unexpected result is that the model usually achieves the best performance when = 0 .",
"5 rather than when = 0 .",
"7 (with a larger , m is trained to handle more complex noise).",
"We conjecture the main reason is that the noise in the training data when = 0 .",
"7 is beyond the capacity of m , which makes m converge poorly during training.",
"Overall, with set as 0.5 and set as 0.3, the model achieves the best performance.",
"We test how the ability of m affects the final performance.",
"Apart from the Transformer-base, i.e., the default setting used in EAG, we also test other two settings, namely Transformer-big (Vaswani et al., 2017) and Transformer-deep (20 encoder layers and 4 decoder layers).",
"With different settings, m is expected to perform different abilities in the generation process.",
"The experimental results are presented in Table",
"7. We can find that if we remove m , the final performance drops dramatically (compar-ing #0 with #1).",
"This shows that the generation step 8147 Figure 3: Examples constructed by EAG.",
"plays a significant role in the proposed EAG.",
"However, by comparing among #0, #2 and #3, we can find that the ability for m shows little effect on the final performance.",
"Taking all of #0, #1, #2 and #3 into consideration, we can reach a conclusion that, the generation step is very important for EAG and a simple generation model, i.e., a baseline NMT model, is enough to achieve strong performance.",
"We are very curious about how EAG works with back-translation (BT).",
"To investigate this problem, we utilize the extraction-based C-MNMT model to decode the non-English monolingual sentences in the candidate aligned examples extracted by EAG, and then get the synthetic non-English sentence pairs by pairing the decoded results with the original sentences.",
"The reversed sentence pairs are appended into the training corpus for the MNMT models.",
"The experimental results are presented in Table",
"8. We find that BT improves the performances of both the two systems.",
"Additionally, BT can work as a complementary to the proposed EAG.",
"Figure 3 presents some examples constructed by EAG, each of which includes the extracted candidate aligned example and the generated sentence for Arabic Chinese.",
"The extracted candidate aligned example contains two bilingual examples, which are extracted from Arabic English and Chinese English respectively.",
"In Figure 3 the two bilingual examples in case one are extracted as a candidate aligned example as their English sentences have high similarity.",
"And there is a composition gap between x 1 and x 2 since \"Bobby Jordan\" is mentioned in x 1 , but not in x 2 .",
"By comparing the generated Chinese sentence y 2 with the original sentence y 2 , we can find that y 2 is modified from y 2 by inserting the Chinese words \", which has the same meaning with \"Bobby Jordan\". Therefore, the generated y 2 is aligned to x 1 and y 1 . In case 2, the Chinese word (out)\" in y 2 has been replaced with Chinese words \" (justice)\" in y 2 , which makes the y 2 aligned to x 1 and y 1 .",
"Case 3 in Figure 3 behaves similarly.",
"While achieving promising performance, the proposed approach still has some weaknesses in the real application: 1) The two-step pipeline performed by EAG is somewhat time-consuming compared to Freitag and Firat (2020); 2) The generated multi-way aligned examples by EAG are sometimes not strictly aligned as the generation process does not always perform perfectly.",
"In this paper, we propose a two-step approach, i.e., EAG, to construct large-scale and high-quality multi-way aligned corpus from English-centric bilingual data.",
"To verify the effectiveness of the proposed method, we conduct extensive experiments on two publicly available corpora, WMT-5 and Opus-100.",
"Experimental results show that the proposed method achieves substantial improvements over strong baselines consistently.",
"There are three promising directions for the future work.",
"Firstly, we plan to test whether EAG is applicable to the domain adaptation problem in NMT.",
"Sec-8148 ondly, we are interested in applying EAG to other related tasks which need to align different example pairs.",
"Finally, we want to investigate other model structures for the generation process.",
"The authors would like to thank the anonymous reviewers of this paper, and the anonymous reviewers of the previous version for their valuable comments and suggestions to improve our work."
] | [
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"result",
"abstain",
"other",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"other"
] |
[
"In relation extraction with distant supervision, noisy labels make it difficult to train quality models.",
"Previous neural models addressed this problem using an attention mechanism that attends to sentences that are likely to express the relations.",
"We improve such models by combining the distant supervision data with an additional directly-supervised data, which we use as supervision for the attention weights.",
"We find that joint training on both types of supervision leads to a better model because it improves the model's ability to identify noisy sentences.",
"In addition, we find that sigmoidal attention weights with max pooling achieves better performance over the commonly used weighted average attention in this setup.",
"Our proposed method 1 achieves a new state-of-the-art result on the widely used FB-NYT dataset.",
"Early work in relation extraction from text used directly supervised methods, e.g., Bunescu and Mooney (2005), which motivated the development of relatively small datasets with sentence-level annotations such as ACE 2004/2005, BioInfer and SemEval 2010 Task 8.",
"Recognizing the difficulty of annotating text with relations, especially when the number of relation types of interest is large, others (Mintz et al., 2009; Craven and Kumlien, 1999) introduced the distant supervision approach of relation extraction, where a knowledge base (KB) and a text corpus are used to automatically generate a large dataset of labeled bags of sentences (a set of sentences that might express the relation) which are then used to train a relation classifier.",
"The large number of labeled instances produced with distant supervision make it a practical alternative to manual annotations.",
"However, distant supervision implicitly assumes that all the KB facts are mentioned in the text (at least one of the sentences in each bag expresses the relation) and that all relevant facts are in the KB (use entities that are not related in the KB as negative examples).",
"These two assumptions are generally not true, which introduces many noisy examples in the training set.",
"Although many methods have been proposed to deal with such noisy training data (e.g., Hoffmann et al., 2011; Surdeanu et al., 2012; Roth et al., 2013; Fan et al., 2014; Zeng et al., 2015; Jiang et al., 2016; Liu et al., 2017), a rather obvious approach has been understudied: combine distant supervision data with additional direct supervision.",
"Intuitively, directly supervising the model can improve its performance by helping it identify which of the input sentences for a given pair of entities are more likely to express a relation.",
"A straightforward way to combine distant and direct supervision is to concatenate instances from both datasets into one large dataset.",
"We show in Section 4.2 that this approach doesn't help the model.",
"Pershina et al. (2014) also observed similar results; instead, they train a graphical model on the distantly supervised instances while using the directly labeled instances to supervise a subcomponent of the model.",
"We discuss prior work in more detail in Section 5.",
"In our paper, we demonstrate a similar approach with neural networks.",
"Specifically, our neural model attends over sentences to distinguish between sentences that are likely to express some relation between the entities and sentences that do not.",
"We use the additional direct supervision to supervise these attention weights.",
"We train this model jointly on both types of supervision in a multitask learning setup.",
"In addition, we experimentally find that sigmoidal attention weights with max pooling achieves better perfor-sentence encoder sentence encoder sentence encoder Sentence-level supervision Bag-level supervision Entity 1 (e1) Entity 2 (e2) Relation type Steve Jobs Apple founder_of Steve Jobs Apple ceo_of > Jobs and Wozniak co-founded Apple in 1976 to sell Wozniak'sApple Ipersonal computer.",
"We propose an effective neural network model for improving distant supervision by combining it with a directly supervised data in the form of sentence-level annotations.",
"The model is trained jointly on both types of supervision in a multitask learning setup, where the direct supervision data is employed as supervision for attention weights.",
"We show experimentally that our model setup benefits from sigmoidal attention weights with max pooling over the commonly used softmax-based weighted averaging attention.",
"Our best model achieves a new state-of-the-art result on the FB-NYT dataset, previously used by Lin et al. (2016); Vashishth et al. (2018).",
"Specifically, combining both forms of supervision achieves a 4.4% relative AUC increase than our baseline without the additional supervision.",
"The following section defines the notation we use, describes the problem and provides an overview of our approach.",
"Our goal is to predict which relation types are expressed between a pair of entities ( e 1 , e 2 ), given all sentences in which both entities are mentioned in a large collection of unlabeled documents.",
"Following previous work on distant supervision, we use known tuples ( e 1 , r, e 2 ) in a knowledge base K to automatically annotate sentences where both entities are mentioned with the relation type r .",
"In particular, we group all sentences s with one or more mentions of an entity pair ( e 1 , e 2 ) into a bag of sentences B e 1 ,e 2 , then automatically annotate this bag with the set of relation types L distant = { r R : ( e 1 , r, e 2 ) K} , where R is the set of relations we are interested in.",
"We use positive instances' to refer to cases where | L | > 0 , and negative instances' when | L | = 0 .",
"In this paper, we leverage an existing dataset of direct supervision for relations.",
"Each direct supervision instance consists of a token sequence s containing mentions of an entity pair ( e 1 , e 2 ) and one relation type (or no relation').",
"We do not require that the entities or relation types in the direct supervision annotations align with those in the KB.",
"Furthermore, we replace the relation label associated with each sentence with a binary indicator of 1 if the sentence expresses one of the relationships of interest and 0 otherwise.",
"Figure 1 illustrates how we modify neural architectures commonly used in distant supervision, e.g., Lin et al. (2016); Liu et al. (2017) to effectively incorporate direct supervision.",
"The model consists of two components: 1) A sentence encoder (displayed in blue) reads a sequence of tokens and their relative distances from e 1 and e 2 , and outputs a vector s representing the sentence encoding, as well as P ( e 1 e 2 | s ) representing the probability that the two entities are related given this sentence.",
"2) The bag encoder (dis-played in green) reads the encoding of each sentence in the bag for the pair ( e 1 , e 2 ) and predicts P ( r = 1 | e 1 , e 2 ) , r R .",
"We combine both types of supervision in a multi-task learning setup by minimizing the weighted sum of the cross entropy losses for P ( e 1 e 2 | s ) and P ( r = 1 | e 1 , e 2 ) .",
"By sharing the parameters of sentence encoders used to compute either loss, the sentence encoders become less susceptible to the noisy bag labels.",
"The bag encoder also benefits from the direct supervision by using the supervised distribution P ( e 1 e 2 | s ) to decide the weight of each sentence in the bag.",
"The model predicts a set of relation types L pred R given a pair of entities e 1 , e 2 and a bag of sentences B e 1 ,e 2 .",
"In this section, we first describe the sentence encoder part of the model (Fig-ure 2, bottom), then describe the bag encoder (Fig-ure 2, top), then we explain how the two types of supervision are jointly used for training the model end-to-end.",
"Given a sequence of words w 1 , . . . , w | s | in a sentence s , a sentence encoder translates this sequence into a fixed length vector s .",
"Input Representation.",
"The input representation is illustrated graphically with a table at the bottom of Figure 2. We map word token i in the sentence w i to a pre-trained word embedding vector w i .",
"2 Another crucial input signal is the position of entity mentions in each sentence s B e 1 ,e 2 .",
"Following Zeng et al. (2014), we map the distance between each word in the sentence and the entity mentions 3 to a small vector of learned parameters, namely d e 1 i and d e 2 i .",
"We find that adding a dropout layer with a small probability ( p = 0 . 1 ) before the sentence encoder reduces overfitting and improves the results.",
"To summarize, the input layer for a sentence s is a sequence of vectors: v i = [ w i ; d e 1 i ; d e 2 i ] , for i 1 , . . . , | s | Word Composition.",
"Word composition is illustrated with the block CNN in the bottom part of Figure 2, which represents a convolutional neural network (CNN) with multiple filter sizes.",
"The outputs of the max pool operations for different 2 Following Lin et al. (2016), we do not update the word embeddings while training the model.",
"3 If an entity is mentioned more than once in the sentence, we use the distance from the word to the closest entity mention.",
"Distances greater than 30 are mapped to the embedding for distance = 30.",
"smaller vector using one feed forward linear layer.",
"Sentence encoding s is computed as follows: c x = CNN x ( v 1 , . . . , v | s | ) , for x { 2 , 3 , 4 , 5 } s = W 1 [ c 2 ; c 3 ; c 4 ; c 5 ] + b 1 , where CNN x is a standard convolutional neural network with filter size x , W 1 and b 1 are model parameters and s is the sentence encoding.",
"We feed the sentence encoding s into a ReLU layer followed by a sigmoid layer with output size 1, representing P ( e 1 e 2 | s ) , as illustrated in Figure 2 (bottom): P ( e 1 e 2 | s ) = (1) p = ( W 3 ReLU ( W 2 s + b 2 ) + b 3 ) , where is the sigmoid function and W 2 , b 2 , W 3 , b 3 are model parameters.",
"Given a bag B e 1 ,e 2 of n 1 sentences, we compute their encodings s 1 , . . . , s n as described earlier and feed them into the bag encoder, which combines the information in all of the sentence encodings and predicts the probability P ( r = 1 | e 1 , e 2 ) , r R .",
"The bag encoder also incorporates the signal p = P ( e 1 e 2 | s ) from Equation 1 as an estimate of the degree to which sentence s expresses some relation between e 1 and e 2 .",
"Attention.",
"To aggregate the sentence encodings s 1 , . . . , s n into a fixed length vector that captures the important features in the bag, we use attention.",
"Attention has two steps: (1) computing weights for the sentences and (2) aggregating the weighted sentences.",
"Weights can be uniform, or computed using a sigmoid or softmax.",
"Weighted sentences can be aggregated using average pooling or max pooling.",
"Prior work (Jiang et al., 2016; Lin et al., 2016; Ji and Smith, 2017) have explored some of these combinations but not all of them.",
"In the ablation experiments, we try all combinations and we find that the (sigmoid, max pooling) attention gives the best result.",
"We discuss the intuition behind this in Section 4.2.",
"For the rest of this section, we will explain the architecture of our network assuming a (sigmoid, max pooling) attention.",
"Given the encoding s j and an unnormalized weight u j for each sentence s j B e 1 ,e 2 , the bag encoding g is a vector with the same dimensionality as s j .",
"With (sigmoid, max pooling) attention, each sentence vector is multiplied by the corresponding weight, then we do a dimension-wise max pooling (taking the maximum of each dimension across all sentences, not the other way around).",
"The k -th element of the bag encoding g is computed as: g j [ k ] = max j 1 ,...,n { s j [ k ] ( u j ) } .",
"As shown in Figure 2, we do not directly use the p from Equation 1 as attention weights.",
"Instead, we found it useful to feed it into more nonlinearities.",
"The unnormalized attention weight for s j is computed as: u j = W 7 ReLU ( W 6 p + b 6 ) + b 7 .",
"of relations in the distant supervision setting, although our formulation is closer to that of Yang et al. (2015) who used point-wise multiplication of entity embeddings: m = e 1 (cid:12) e 2 , where (cid:12) is point-wise multiplication, and e 1 and e 2 are the embeddings of e 1 and e 2 , respectively.",
"In order to improve the coverage of entity embeddings, we use pretrained GloVe vectors (Pennington et al., 2014) (same embeddings used in the input layer).",
"For entities with multiple words, like Steve Jobs, the vector for the entity is the average of the GloVe vectors of its individual words.",
"If the entity is expressed differently across sentences, we average the vectors of the different mentions.",
"As discussed in Section 4.2, this leads to big improvement in the results, and we believe there is still big room for improvement from having better representation for entities.",
"We feed the output m as additional input to the last block of our model.",
"Output Layer.",
"The final step is to use the bag encoding g and the entity pair encoding m to predict a set of relations L pred which is a standard multilabel classification problem.",
"We concatenate g and m and feed them into a feedforward layer with ReLU non-linearity, followed by a sigmoid layer with an output size of |R| : t = ReLU ( W 4 [ g ; m ] + b 4 ) P ( r = 1 | e 1 , e 2 ) = ( W 5 t + b 5 ) , where r is a vector of Bernoulli variables each of which corresponds to one of the relations in R .",
"This is the final output of the model.",
"To train the model on the distant supervision data, we use binary cross-entropy loss between the model predictions and the labels obtained with distant supervision, i.e., DistSupLoss = (cid:88) log P ( r = r distant | e , e )",
"In addition to the distant supervision, we want to improve the results by incorporating an additional direct supervision.",
"A straightforward way to combine them is to create singleton bags for direct supervision labels, and add the bags to those obtained with distant supervision.",
"However, results in Section 4.2 show that this approach does not improve the results.",
"Instead, a better use of the direct supervision is to improve the model's ability to predict the potential usefulness of a sentence.",
"According to our analysis of baseline models, distinguishing between positive and negative examples is the real bottleneck in the task.",
"Therefore, we use the direct supervision data to supervise P ( e 1 e 2 | s ) .",
"This supervision serves two purposes: it improves our encoding of each sentence, and improves the weights used by the attention to decide which sentences should contribute more to the bag encoding.",
"It also has the side ben-efit of not requiring the same set of relation types as that of the distant supervision data, because we only care about if there exists some relevant relation or not between the entities.",
"where D is all the direct supervision data and all distantly-supervised negative examples.",
"4 We jointly train the model on both types of supervision.",
"The model loss is a weighted sum of the direct supervision and distant supervision losses, loss = 1 + 1 DistSupLoss + + 1 DirectSupLoss (2) where is a parameter that controls the contribution of each loss, tuned on a validation set.",
"This section discusses datasets, metrics, configu-rations and the models we are comparing with.",
"Distant Supervision Dataset (DistSup).",
"The FB-NYT dataset 5 introduced in Riedel et al. (2010) was generated by aligning Freebase facts with New York Times articles.",
"The dataset has 52 relations with the most common being loca-tion, nationality, capital, place lived and neighborhood of.",
"They used the articles of 2005 and 2006 for training, and 2007 for testing.",
"Recent prior work (Lin et al., 2016; Liu et al., 2017; Huang and Wang, 2017) changed the original dataset.",
"They used all articles for training except those from 2007 , which they left for testing as in Riedel et al. (2010).",
"We use the modified dataset 4 We note that the distantly supervised negative examples may still be noisy.",
"which was made available by Lin et al. (2016).",
"6 The table below shows the dataset size.",
"Direct Supervision Dataset (DirectSup).",
"Our direct supervision dataset was made available by Angeli et al. (2014) and it was collected in an active learning framework.",
"The dataset consists of sentences annotated with entities and their relations.",
"It has 22,766 positive examples for 41 relation types in addition to 11,049 negative examples.",
"To use this dataset as supervision for P ( e 1 e 2 | s ) , we replace the relation types of positive examples with 1 s and label negative examples with 0 s.",
"Metrics.",
"Prior work used precision-recall (PR) curves to show results on the FB-NYT dataset.",
"In this multilabel classification setting, the PR curve is constructed using the model predictions on all entity pairs in the test set for all relation types sorted by the confidence scores from highest to lowest.",
"Different thresholds correspond to different points on the PR curve.",
"We use the area under the PR curve (AUC) for early stopping and hyperparameter tuning.",
"Following previous work on this dataset, we only keep points on the PR curve with recall below 0 .",
"4 , focusing on the high-precision low-recall part of the PR curve.",
"As a result, the largest possible value for AUC is 0 .",
"4 .",
"Configurations.",
"The FB-NYT dataset does not have a validation set for hyperparameter tuning and early stopping.",
"Liu et al. (2017) use the test set for validation, Lin et al. (2016) use 3-fold cross validation, and Vashishth et al. (2018) split the training set into 80% training and 20% testing.",
"In our experiments, we use 90% of the training set for training and keep the other 10% for validation.",
"The main hyperparameter we tune is lambda (sec-tion 4.3).",
"The pre-trained word embeddings we use are 300-dimensional GloVe vectors, trained on 42B tokens.",
"Since we do not update word embeddings while training the model, we define our vocabulary as any word which appears in the training, validation or test sets with frequency greater than two.",
"When a word with a hyphen (e.g., five-star') is not 6 https://github.com/thunlp/NRE in the GloVe vocabulary, we average the embeddings of its subcomponents.",
"Otherwise, all OOV words are assigned the same random vector (nor-mal with mean 0 and standard deviation 0.05).",
"Our model is implemented using PyTorch and AllenNLP (Gardner et al., 2017) and trained on machines with P100 GPUs.",
"Each run takes five hours on average.",
"We train for a maximum of 50 epoch, and use early stopping with patience = 3 .",
"Each dataset is split into minibatches of size 32 and randomly shuffled before every epoch.",
"We use the Adam optimizer with its default PyTorch parameters.",
"We run every configuration with three random seeds and report the PR curve for the run with the best validation AUC.",
"In the controlled experiments, we report the mean and standard deviation of the AUC across runs.",
"Compared Models.",
"Our best model (Section 3) is trained on the DistSup and DirectSup datasets in our multitask setup and it uses (sigmoid, max pooling) attention.",
"Baseline is the same model described in Section 3 but trained only on the DistSup dataset and uses the more common (softmax, average pooling) attention.",
"This baseline is our implementation of the PCNN+ATT model (Lin et al., 2016) with two main differences; they use piecewise convolutional neural networks (PC-NNs Pennington et al., 2014) instead of CNNs, and we add entity embeddings before the output layer.",
"7 We also compare our results to the state of the art model RESIDE (Vashishth et al., 2018), which uses graph convolution over dependency parse trees, OpenIE extractions and entity type constraints.",
"Figure 3 summarizes the main results of our experiments.",
"First, we note that our baseline outperforms PCNN+ATT (Lin et al., 2016) despite using the same training data (DistSup) and the same form of attention (softmax, average pooling), which confirms that we are building on a strong baseline.",
"The improved results in our baseline are due to using CNNs instead of PCNNs, and using entity embeddings.",
"7 Contrary to the results in Pennington et al. (2014), we found CNNs to give better results than PCNNs in our experiments.",
"Lin et al. (2016) also compute unnormalized attention weights as o j = s j A q where s j is the sentence encoding, A is a diagonal matrix and q is the query vector.",
"In our experiments, we found that implementing it as a feedforward layer with output size = 1 works better.",
"All our results use the feedforward implementation.",
"Adding DirectSup in our multitask learning setup and using (sig-moid, max pooling) attention gives us the best result, outperforming our baseline that doesn't use either by 4.4% relative AUC increase, and achieves a new state-of-the-art result outperforming (Vashishth et al., 2018).",
"We note that the improved results reported here conflate additional supervision and model improvements.",
"Next, we report the results of controlled experiments to tease apart the contributions of different components.",
"Table 1 summarizes results of our controlled experiments showing the impact of how the training data is used, and the impact of different config-urations of the attention (computing weights and aggregating vectors).",
"The model can be trained on DistSup only, DistSup + DirectSup together as one dataset with DirectSup expressed as singleton bags, or DistSup + DirectSup in our multitask setup.",
"Attention weights can be uniform, or computed using softmax or sigmoid.",
"8 Sentence vectors are aggregated by weighting them then averaging (average pooling) or weighting them then taking the max of each dimension (max pooling).",
"(Uniform weights, average pooling) and (softmax, average pooling) were used by Lin et al. (2016), (sigmoid, average pooling) was proposed by Ji and Smith (2017) but for a different task, and (uni-form weights, max pooling) is used by Jiang et al. (2016).",
"To the best of our knowledge, (softmax, max pooling) and (sigmoid, max pooling) have not been explored before.",
"Pooling type.",
"Results in Table 1 show that aggregating sentence vectors using max pooling generally works better than average pooling.",
"This is because max pooling might be better at picking out useful features (dimensions) from each sentence.",
"Supervision signal.",
"The second dimension of comparison is the use of the supervision signal used to train the model.",
"The table shows that training on DistSup + DirectSup, where the DirectSup dataset is simply used as additional bags, can hurt the performance.",
"We hypothesize that this is because the DirectSup data change the distribution of relation types in the training set from the test set.",
"However, using DirectSup as supervision for the attention weights in our multitask learning setup leads to considerable improvements (1% and 3% relative AUC increase using softmax and sigmoid respectively) because it leads to better attention weights and improves the model's ability to filter noisy sentences.",
"Attention weight computation.",
"Finally, comparing uniform weights, softmax and sigmoid.",
"We found the result to depend on the available level of supervision.",
"With DistSup only, the results of all three are comparable with softmax being slightly better.",
"However, when we have good attention weights (as provided by the multitask learning), softmax and sigmoid work better than uniform weights where sigmoid gives the best result with 6% relative AUC increase.",
"Sigmoid works better 8 We also tried normalizing sigmoid weights as suggested in (Rei and Sgaard, 2018), but this did not work better than regular sigmoid or softmax.",
"than softmax, because softmax assumes that exactly one sentence is correct by forcing the probabilities to sum to 1. This assumption is not correct for this task, because zero or many sentences could potentially be relevant.",
"On the other hand, sigmoidal attention weights does not make this assumption, which gives rise to more informative attention weights in cases where all sentences are not useful, or when multiple ones are.",
"This makes the sigmoidal attention weights a better modeling for the problem (assuming reliable attention weights).",
"Although we did not spend much time tuning hy-perparameters, we made sure to carefully tune (Equation 2) which balances the contribution of the two losses of the multitask learning.",
"Early experiments showed that DirectSupLoss is typically smaller than DistSupLoss, so we experimented with { 0 , 0 .",
"5 , 1 , 2 , 4 , 8 , 16 , 32 , 64 } .",
"Figure 4 shows AUC results for different values of , where each point is the average of three runs.",
"It is clear that picking the right value for has a big impact on the final result.",
"An example of a positive bag is shown in Table 2. Our best model (Multitask, sigmoid, max pooling) assigns the most weight to the first sentence while the baseline (DistSup, softmax, average pooling) assigns the most weight to the last sentence (which is less informative for the relation between the two entities).",
"Also, the baseline does not use the other two sentences because their weights are dominated by the last one.",
"term distant supervi-sion' was coined by Mintz et al. (2009) who used relation instances in a KB to identify sentences in",
"a text corpus where two related entities are mentioned, then developed a classifier to predict the relation.",
"Researchers have since extended this approach further (e.g., Takamatsu et al., 2012; Min et al., 2013; Riedel et al., 2013; Koch et al., 2014).",
"A key source of noise in distant supervision is that sentences may mention two related entities without expressing the relation between them.",
"Hoffmann et al. (2011) used multi-instance learning to address this problem by developing a graphical model for each entity pair which includes a latent variable for each sentence to explicitly indicate the relation expressed by that sentence, if any.",
"Our model can be viewed as an extension of Hoffmann et al. (2011) where the sentence-bound latent variables can also be directly supervised in some of the training examples.",
"Neural Models for Distant Supervision.",
"More recently, neural models have been effectively used to model textual relations (e.g., Hashimoto et al., 2013; Zeng et al., 2014; Nguyen and Grishman, 2015).",
"Focusing on distantly supervised models, Zeng et al. (2015) proposed a neural implementation of multi-instance learning to leverage multiple sentences which mention an entity pair in distantly supervised relation extraction.",
"However, their model picks only one sentence to represent an entity pair, which wastes the information in the neglected sentences.",
"Jiang et al. (2016) addresses this limitation by max pooling the vector encodings of all input sentences for a given entity pair.",
"Lin et al. (2016) independently proposed to use attention to address the same limitation, and Du et al. (2018) improved by using multilevel self-attention.",
"To account for the noise in distant supervision labels, Liu et al. (2017); Luo et al. (2017); Wang et al. (2018) suggested different ways of using soft labels that do not necessarily agree with the distant supervision labels.",
"Ye et al. (2017) proposed a method for leveraging dependencies between different relations in a pairwise ranking framework, while Han et al. (2018) arranged the relation types in a hierarchy aiming for better generalization for relations that do not have enough training data.",
"To improve using additional resources, Vashishth et al. (2018) used graph convolution over dependency parse, OpenIE extractions and entity type constraints, and Liu et al. (2018) used parse trees to prune irrelevant information from the sentences.",
"Combining Direct and Distant Supervision.",
"Despite the substantial amount of work on both directly and distantly supervised relation extraction, the question of how to combine both signals has not received the same attention.",
"Pershina et al. (2014) trained MIML-RE from (Sur-deanu et al., 2012) on both types of supervision by locking the latent variables on the sentences to the supervised labels.",
"Angeli et al. (2014) and Liu et al. (2016) presented active learning models that select sentences to annotate and incorporate in the same manner.",
"Pershina et al. (2014) and Liu et al. (2016) also tried simple baseline of including the labeled sentences as singleton bags.",
"Pershina et al. (2014) did not find this beneficial, which agrees with our results in Section 4.2, while Liu et al. (2016) found the addition of singleton bags to work well.",
"Our work is addressing the same problem, but combining both signals in a state-of-the-art neural network model, and we do not require the two datasets to have the same set of relation types.",
"We improve neural network models for relation extraction by combining distant and direct supervision data.",
"Our network uses attention to attend to relevant sentences, and we use the direct supervision to improve attention weights, thus improving the model's ability to find sentences that are likely to express a relation.",
"We also found that sigmoidal attention weights with max pooling achieves better performance than the commonly used weighted average attention.",
"Our model combining both forms of supervision achieves a new state-of-the-art result on the FB-NYT dataset with a 4.4% relative AUC increase than our baseline without the additional supervision.",
"Acknowledgments All experiments were performed on beaker.",
"Computations on beaker.org were supported in part by credits from Google Cloud.",
"Yangfeng Ji and Noah A. Smith.",
"2017.",
"Neural discourse structure for text categorization.",
"Xiaotian Jiang, Quan Wang, Peng Li, and Bin Wang.",
"2016.",
"Relation extraction with multi-instance multilabel convolutional neural networks.",
"In COLING .",
"Mitchell Koch, John Gilmer, Stephen Soderland, and Daniel S. Weld.",
"2014.",
"Type-aware distantly supervised relation extraction with linked arguments.",
"In EMNLP .",
"Yankai Lin, Shiqi Shen, Zhiyuan Liu, Huanbo Luan, and Maosong Sun.",
"2016.",
"Neural relation extraction with selective attention over instances.",
"In ACL .",
"Angli Liu, Stephen Soderland, Jonathan Bragg, Christopher H. Lin, Xiao Ling, and Daniel S. Weld.",
"2016.",
"Effective crowd annotation for relation extraction.",
"In NAACL .",
"Tian Yu Liu, Kexiang Wang, Baobao Chang, and Zhi-fang Sui.",
"2017.",
"A soft-label method for noise-tolerant distantly supervised relation extraction.",
"In EMNLP .",
"Tianyi Liu, Xinsong Zhang, Wanhao Zhou, and Wei-jia Jia.",
"2018.",
"Neural relation extraction via inner-sentence noise reduction and transfer learning.",
"In EMNLP .",
"Bingfeng Luo, Yansong Feng, Zheng Wang, Zhanxing Zhu, Songfang Huang, Rui Yan, and Dongyan Zhao.",
"2017.",
"Learning with noise: Enhance distantly supervised relation extraction with dynamic transition matrix.",
"In ACL .",
"Bonan Min, Ralph Grishman, Li Wan, Chang Wang, and David Gondek.",
"2013.",
"Distant supervision for relation extraction with an incomplete knowledge base.",
"In HLT-NAACL .",
"Mike Mintz, Steven Bills, Rion Snow, and Daniel Ju-rafsky.",
"2009.",
"Distant supervision for relation extraction without labeled data.",
"In ACL .",
"Thien Huu Nguyen and Ralph Grishman.",
"2015.",
"Relation extraction: Perspective from convolutional neural networks.",
"In VS@HLT-NAACL .",
"Jeffrey Pennington, Richard Socher, and Christopher D. Manning.",
"2014.",
"Glove: Global vectors for word representation.",
"In EMNLP .",
"Maria Pershina, Bonan Min, Wei Xu, and Ralph Gr-ishman.",
"2014.",
"Infusion of labeled data into distant supervision for relation extraction.",
"In ACL .",
"Marek Rei and Anders Sgaard.",
"2018.",
"Zero-shot sequence labeling: Transferring knowledge from sentences to tokens.",
"In NAACL-HLT .",
"Sebastian Riedel, Limin Yao, and Andrew D McCallum.",
"2010.",
"Modeling relations and their mentions without labeled text.",
"In ECML/PKDD .",
"Sebastian Riedel, Limin Yao, Andrew D McCallum, and Benjamin M. Marlin.",
"2013.",
"Relation extraction with matrix factorization and universal schemas.",
"In HLT-NAACL .",
"org ."
] | [
"abstain",
"abstain",
"result",
"result",
"result",
"objective",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"method",
"result",
"objective",
"objective",
"result",
"abstain",
"result",
"method",
"abstain",
"method",
"other",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"other",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"other",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"result",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain"
] |
[
"Pooling is an important technique for learning text representations in many neural NLP models.",
"In conventional pooling methods such as average, max and attentive pooling, text representations are weighted summations of the L 1 or L norm of input features.",
"However, their pooling norms are always fixed and may not be optimal for learning accurate text representations in different tasks.",
"In addition, in many popular pooling methods such as max and attentive pooling some features may be over-emphasized, while other useful ones are not fully exploited.",
"In this paper, we propose an A ttentive P ooling with L earnable N orms (APLN) approach for text representation.",
"Different from existing pooling methods that use a fixed pooling norm, we propose to learn the norm in an end-to-end manner to automatically find the optimal ones for text representation in different tasks.",
"In addition, we propose two methods to ensure the numerical stability of the model training.",
"The first one is scale limiting, which re-scales the input to ensure non-negativity and alleviate the risk of exponential explosion.",
"The second one is re-formulation, which decomposes the exponent operation to avoid computing the real-valued powers of the input and further accelerate the pooling operation.",
"Experimental results on four benchmark datasets show that our approach can effectively improve the performance of attentive pooling.",
"In recent years, neural network based methods are widely used in the natural language processing (NLP) field to learn text representations (Yang et al., 2016; Peters et al., 2018).",
"In these methods, pooling is a core technique to build the text representation vector from a collection of input feature vectors by summarizing their information (Lai et al., 2015).",
"Thus, an effective pooling method Sentiment Classification Average Pooling The movie is good, but not to my taste Max Pooling The movie is good, but not to my taste Attentive Pooling The movie is good, but not to my taste News Topic Classification Average Pooling Fire on Queensland Island Takes Heavy Toll on Wildlife Max Pooling Fire on Queensland Island Takes Heavy Toll on Wildlife Attentive Pooling Fire on Queensland Island Takes Heavy Toll on Wildlife Figure 1: The pooling weights of several different pooling methods on the representations produced by an LSTM network.",
"that can select salient features accurately will facilitate many NLP methods (Ma et al., 2017).",
"Among existing pooling methods, average pooling is a representative one which takes the average of the L 1 norm of input features (Tang et al., 2014, 2015a,b).",
"However, average pooling equally regards the input representation vector at each position and ignores their different informativeness for learning text representation, which may not be optimal (Johnson and Zhang, 2015).",
"Thus, other pooling methods such as max pooling (Col-lobert et al., 2011; Kim, 2014) and attentive pooling (Yang et al., 2016; Zhou et al., 2016; Cui et al., 2017; Devlin et al., 2019; Wu et al., 2019b) are widely used in neural NLP models.",
"For example, Kim (2014) proposed to apply max pooling to the contextual word representations learned by CNN networks to build the representations of the entire sentence.",
"Yang et al. (2016) proposed to use attentive pooling at both word and sentence levels to learn informative sentence and document representations by selecting important words and sentences.",
"However, these pooling methods use fixed average norms, i.e., L 1 norm for average and attentive pooling and L norm for max pooling, to build text representations, which may not be optimal when handling different tasks.",
"Our work is motivated by the following observations.",
"First, different contexts usually have different informativeness for learning text representations.",
"For example, in Fig. 1 1 , the word but is very important for inferring the sentiment polarity of this sentence, while The is uninformative.",
"Thus, modeling the different informativeness of contexts and attending to them differently may help learn more informative text representations.",
"Second, different tasks and even different datasets have different characteristics.",
"For example, in Fig. 1, sentiment and negation words may be the key clues for inferring the sentiment polarity of the first sentence, while the global contexts may be useful for understanding the topic of the second sentence.",
"Thus, using a fixed pooling norm for universal text representation learning is probably not optimal.",
"Third, in popular pooling methods such as max pooling and attentive pooling, some contexts may be over-emphasized, and other useful contextual information is not fully-respected.",
"For example, as shown in Fig. 1, the sentiment word good is highlighted, but other useful clues such as but and not do not gain sufficient attentions, which may not be optimal for learning accurate text representations.",
"Thus, a dynamically learnable degree of hard or soft for pooling may benefit text representation learning.",
"In this paper, we propose an Attentive Pooling with Learnable Norms (APLN) approach to enhance the learning of text representations 2 .",
"Instead of manually setting a fixed pooling norm, we propose to automatically learn it in a unified framework, which can find the optimal values to learn text representations for different tasks in an end-to-end manner.",
"In addition, since the learning of pooling norm may be numerically unstable in some cases due to the exponent operation, we propose two methods to improve its computational stability.",
"The first one is limiting the scale of input features, which aims to ensure their non-negativity and avoid exponential explosion.",
"The second one is a re-formulation method, which aims to avoid computing the real-valued power of input features by decomposing the exponent operation into three safe and fast atomic operations.",
"We conducted experiments on four benchmark datasets, and the results show that our approach can effectively improve the learning of text representation.",
"Neural networks are widely used to learn text representations from contexts (Peng et al., 2018).",
"Pooling is usually an essential step in these methods to build contextual representations by summarizing the information of input features (LeCun et al., 2015).",
"The simplest pooling method is average pooling, which is used in many approaches to construct text representations (Tang et al., 2014, 2015a,b).",
"For example, Tang et al. (2015a) proposed to apply average pooling to the output of CNN filters to capture global contexts in a sentence.",
"In addition, they also proposed to average the sentence representations learned by parallel CNN networks with different window sizes.",
"In their another work (Tang et al., 2015b), they proposed to apply average pooling to the sequence of sentence representations to build the representations of an entire document.",
"Although average pooling is computationally efficient, it cannot distinguish important contexts from unimportant ones, which may not be optimal for learning accurate text representations.",
"There are also other popular pooling methods that can select salient features to learn more informative text representations, such as max pooling (Kim, 2014; Zhang et al., 2015) and attentive pooling (Yang et al., 2016), which are employed by many neural NLP methods (Collobert et al., 2011; Kim, 2014; Huang et al., 2012; Yang et al., 2016; Chen et al., 2016; Zhou et al., 2016; Du et al., 2017; Li et al., 2018; Wu et al., 2019a; Tao et al., 2019; Devlin et al., 2019; Wu et al., 2019b).",
"For example, Collobert et al. (2011) proposed to learn representations of contexts within each window using feed forward neural networks, and used max pooling to build final text representations.",
"Kim (2014) proposed to apply max pooling over time to the contextual word representations learned by multiple CNN filters.",
"Huang et al. (2012) proposed to build representations of the entire document using the summation of word representations weighted by their TF-IDF scores.",
"Yang et al. (2016) proposed a hierarchical attention network to first learn sentence representations from words and then learn document representations from sentences.",
"They proposed to apply attentive pooling at both word and sentence levels to select informative words and sentences for more informative representation learning.",
"Wu et al. (2019b) proposed a hierarchical user and 1 2 1 1 2 Max-over-time ... 1 2 ... 1 2 ... 1 2 ... ... 1/ 1 2 ...",
"1 2 1 1 2 Max-over-time ... 1 2 ... 1 2 ... 1 2 ... ... 1/ 1 2 ...",
"item representation model with three-tier attention, which applies attentive pooling to simultaneously select important words, sentences and reviews.",
"However, the pooling norms of max and attentive pooling are always fixed, which may not be optimal for universal text representation learning since the characteristics of different tasks may be different.",
"In addition, both pooling methods may over-emphasize the most salient features, and other useful contextual information is not fully exploited, which may also be sub-optimal.",
"There are a few methods to adapt the pooling norms in different tasks.",
"For example, Gulcehre et al. (2014) explored the influence of selecting different pooling norms on the performance of different image classification tasks.",
"However, the norms in their method are manually tuned, which are usually very time-consuming and may not be optimal.",
"Different from all aforementioned methods, our approach can automatically optimize pooling norms in an end-to-end manner, and can effectively select important contexts to learn informative text representations.",
"Extensive experiments on four datasets with different characteristics validate the effectiveness of our approach.",
"In this section, we will first present a brief introduction to several popular pooling methods, i.e., average, max and attentive pooling.",
"To make it easier to understand, we present an intuitive comparison of the mechanisms of these different pooling methods in Fig. 2. Average Pooling.",
"Average pooling is used to build contextual representations by taking the arithmetic mean of input features, as shown in Fig.",
"2(a).",
"It uses the L 1 norm of the input.",
"Denote the input sequence of hidden representations as [ h 1 , h 2 , ..., h N ] , where N is the sequence length.",
"Max Pooling.",
"Max pooling aims to build contextual representations by selecting the most salient features via max-over-time operations, as shown in Fig.",
"2(b).",
"It utilizes the L norm at the time dimension of input features.",
"Denote r j as the j -th value in the vector r , which is computed as: r j = max( h j 1 , h j 2 , ..., h jN ) , (2) where h ji represents the j -th value in the feature vector h i .",
"Attentive Pooling.",
"As shown in Fig.",
"2(c), attentive pooling usually builds contextual representations by selecting important input features, which can also be regarded as a kind of L 1 norm average.",
"It computes an attention weight i for the input at each position to indicate its informativeness, which is formulated as follows: i = exp[ q T f ( h i )] (cid:80) Nj =1 exp[ q T f ( h j )] , (3) where f ( ) is a non-linear function, q is the attention query vector.",
"Following Yang et al. (2016), we apply the tanh operation to the linear transformation of h i to form the function f ( ) .",
"The final contextual representation r is the summation of input representation vectors weighted by their attention weight as follows: r = N (cid:88) i =1 i h i .",
"In this section, we will introduce the details of our Attentive Pooling with Learnable Norms (APLN) approach.",
"In the aforementioned pooling methods, the pooling norm is always fixed (i.e., L 1 or L ).",
"However, the characteristics of different NLP tasks and even different datasets should have some differences, and it may not be optimal to use a fixed pooling norm for universal text representation learning.",
"In addition, tuning the pooling norm manually is usually very time-consuming, and it may also be sub-optimal.",
"Thus, it is an intuitive idea to automatically learn the pooling norm in an end-to-end manner to alleviate the efforts on hy-perparameter searching and learn more informative text representations.",
"The architecture of our APLN approach is shown in Fig. 3. We will introduce its details as follows.",
"Since different contexts usually have different importance, modeling their informativeness may help learn more informative text representations.",
"Thus, similar to the vanilla attentive pooling, in our APLN approach, we also compute an attention score for the input at each position.",
"However, instead of using the simple weighted summation to build the contextual representation r , we propose to compute the L p norm 3 average of the input feature vectors weighted their attention weights, which is formulated as follows: r = [ 1 (cid:80) Ni =1 pi N (cid:88) i =1 ( i h i ) p ] 1 p , (5) 3 It should be noticed that when p < 1 , this definition is not a norm since it does not obey the triangle inequality.",
"But we still call it norm for consistency.",
"where p is a learnable parameter.",
"In this way, our model will automatically find appropriate values of pooling norms for learning text representations in different tasks.",
"To show the influence of p on the inputs of the APLN module, we vary the value of p and illustrate the shape of the function y = x p in Fig. 4. According to Fig. 4, we can see when p is larger, the attention of APLN is sharper and sparser since small values of i h i will be suppressed, which indicates the attentive pooling is harder.",
"In contrast, if p is smaller, the attentions are more distributed, which indicates the attentive pooling is softer.",
"Thus, in this manner, our APLN model can automatically explore how hard/soft the attention should be when constructing text representations, which may help recognize important contexts and avoid the problem of over-emphasizing some features and not fully respecting other useful ones, both of which are important for learning accurate text representations.",
"Unfortunately, in most cases the training of APLN is unstable if we directly use it for pooling.",
"Thus, we propose two methods to ensure the numerical stability of the model training.",
"The first one is scale limiting , which is used to limit the range of the elements of i h i .",
"The second one is Re-formulation , which is used to avoid the direct computation of the real-valued powers of the input features and accelerate the pooling operation.",
"We will introduce the two methods as follows.",
"According to Eq.",
"(5), to ensure the values of r are real, the elements of i h i must be non-negative.",
"Thus, we apply a ReLU function to i h i to keep i h i 0 .",
"However, there are still some risks if there exist elements with i h ji > 1 , since the gradients may explode when p > 1 due to the ampli-fication of the exponent, which are also observed in our practice.",
"To solve this problem, we propose to clip the values of i h i as follows: 0 i h ji 1 .",
"In this way, the input features is re-scaled to a safe range.",
"We also explored other kinds of re-scaling methods such as normalization, but we find there are no significant differences in the model performance.",
"Thus, we simply use the clipping operation for its efficiency.",
"However, there are still some problems in our approach.",
"We find the training of our approach is not numerically stable (e.g., NAN problem) when implemented by several popular deep learning frameworks such as Tensorflow.",
"In addition, computing the real-value powers of input features is quite time-consuming.",
"Thus, we propose a reformulation strategy by converting the exponent computation in Eq.",
"(5).",
"For instance, the exponent x p is re-formulated as follows: x p = e log( x p ) = e p log( x ) e p log( x + (cid:15) ) , (7) where (cid:15) = 10 7 is a protection value.",
"In this way, the computation of the power of x is divided into three atomic operations, i.e., logarithm, multiplication and exponent, all of them are fast 4 and numerically stable in our approach.",
"Thus, using the re-formulation strategy can enhance the numerical stability and accelerate the pooling operation.",
"Our experiments are widely conducted on four benchmark datasets with different characteristics.",
"The first one is AG's News 5 , which is a news topic classification dataset.",
"Following (Zhang et al., 2015), we only use the title and description fields in this dataset.",
"The second one is IMDB 6 (Diao et al., 2014), which is a dataset with movie reviews and ratings.",
"The third one is Amazon Electronics 4 In experiments on a machine with a GTX1080ti GPU, the computation of x p is accelerated by more than 10 times.",
"(denoted as Amazon ) (He and McAuley, 2016), which contains reviews on electronics.",
"The fourth one is Yelp 2015 (denoted as Yelp ), which is a restaurant review dataset.",
"The latter three datasets are all for sentiment classification.",
"Since the original Amazon and Yelp datasets are too large, we sampled 50,000 reviews to form each dataset.",
"The detailed statistics are shown in Table 1.",
"The class distributions of the AG's News and Yelp are balanced, but are imbalanced on IMDB and Amazon , as shown in Fig. 5.",
"In addition, AG's News is a sentence-level classification dataset, while the others are document-level.",
"Since the AG's News dataset only contains the training and test sets, we randomly sampled 10% of news in the training set for validation.",
"For the other three datasets, we used 80% of samples for training, 10% for validation and the rest 10% for test.",
"In our experiments, the word embeddings were 300-dimensional and initialized by Glove (Pen-nington et al., 2014) 7 .",
"In our comparative experiments, the CNN networks had 400 filters, and their window size was 3. The dimension of LSTM hidden states was 200.",
"The attention query vectors were 200-dimensional.",
"The initial pooling norm p was set to 1, which is consistent with the vanilla attentive pooling.",
"Adam (Kingma and Ba, 2014) was used as the optimizer, and the 7 We do not use language models such as ELMo and BERT since our work focuses on facilitating the pooling technique rather than boosting the performance of our approach against the state-of-the-art methods.",
"batch size was 64.",
"We applied dropout (Srivas-tava et al., 2014) techniques to the word embeddings, CNN networks or LSTMs to mitigate over-fitting, and the dropout ratio was 0.2.",
"These hy-perparameters were tuned on the validation set.",
"In classification tasks the metrics were accuracy and macro-F scores, and in regression tasks the performance was evaluated by rooted mean squared error (RMSE).",
"We reported the average results of 10 independently repeated experiments.",
"We compare the performance of different neural text classification models with different pooling methods to evaluate the performance of our approach.",
"The methods to be compared include: (1) CNN-Avg (Tang et al., 2015b), applying average pooling to the representations learned by CNN to build contextual text representations; (2) CNN-Max (Kim, 2014), using a combination of CNN and max pooling; (3) CNN-Att (Gong and Zhang, 2016), using a combination of CNN and vanilla attentive pooling; (4) CNN-APLN , combining CNN with our APLN approach; (5) LSTM-Last (Hochreiter and Schmidhuber, 1997), using the last hidden state in an LSTM network; (6) LSTM-Avg (Zhao et al., 2016), using average pooling after LSTM; (7) LSTM-Max (Johnson and Zhang, 2016), using max pooling after LSTM; (8) LSTM-Att (Zhou et al., 2016), using attentive pooling after LSTM; (9) LSTM-APLN , combining LSTM with APLN ; (10) HAN (Yang et al., 2016), a hierarchical LSTM network with both word-level and sentence-level attentive pooling; (11) HAN-APLN , using APLN at both word and sentence levels.",
"In methods based on LSTM, we used two parallel LSTMs to scan the input in both directions.",
"The results of these methods are summarized in Table 2, which reveal several findings.",
"First, the methods based on average pooling are usually inferior to those using other pooling methods in our experiments.",
"This is probably because average pooling equally regards different features and cannot distinguish their informativeness.",
"Thus, modeling the importance of different features has the potential to improve text representation learning.",
"Second, the methods based on attentive pooling outperform their variants based on max pooling.",
"This may be because attentive pooling can model the informativeness of contexts for text representation, while max pooling only selects the most salient features, which may be sub-optimal.",
"Third, our APLN approach can consistently outperform other pooling methods, and further hypothesis test results show that the improvement brought by our approach is significant ( p < 0 . 01 ).",
"This may be because vanilla max pooling and attentive pooling methods use a fixed pooling norm for universal text representation learning, and the differences in the characteristics of different tasks and datasets are not considered, which may also be sub-optimal.",
"Our approach can dynamically adapt the pooling norm in different scenarios, which may facilitate text representation learning.",
"In addition, we find the advantage in Macro-F score of our approach over other methods is more significant on the datasets with imbalanced class distributions.",
"This may be because our approach can build text representation in a softer manner, which may help neural models avoid focusing on the clues of major classes only and alleviate their dominance.",
"Fourth, we find hierarchical models ( HAN and HAN-APLN ) outperform flatten models (e.g., LSTM-APLN ) for doc-Methods IMDB Amazon Yelp CNN-Avg 1.388 0.920 0.847 CNN-Max 1.322 0.908 0.834 CNN-Att 1.292 0.899 0.824 CNN-APLN 1.271 0.886 0.801 LSTM-Last 1.316 0.896 0.822 LSTM-Avg 1.343 0.911 0.830 LSTM-Max 1.269 0.890 0.815 LSTM-Att 1.257 0.878 0.799 LSTM-APLN 1.233 0.865 0.784 HAN 1.230 0.866 0.789 HAN-APLN 1.214 0.858 0.776 Table 3: The performance of different methods on rating regression.",
"ument representation learning.",
"This may be because modeling documents in a hierarchical manner can better utilize the structure of documents.",
"In addition, since our approach can be applied at both word and sentence levels in HAN , text representation may be learned more accurately.",
"These results validate the effectiveness of our approach.",
"To further validate the generality of our approach in regression tasks 8 , we also conduct experiments on the IMDB , Amazon and Yelp datasets by formulating the task as a rating regression problem, and the results in terms of RMSE are shown in Table 3. From the results, we find our APLN approach can also bring consistent improvements to many existing methods in the regression task.",
"In this section, we will explore the influence of the scale limiting and re-formulation techniques on the stability and relative pooling speed of our approach.",
"The results are summarized in Table 4. From these results, if the limitation of nonnegativity is removed, the model training is usually unstable, which is intuitive.",
"In addition, if the scale limitation ( 1 ) is removed, our model occasionally does not converge.",
"This may be because when p > 1 , our model has the risk of gradient explosion.",
"Thus, the scale of input features should be limited.",
"Besides, the re-formulation method also has critical impacts on our approach.",
"This is probably because directly computing the real-valued 8 We find that the regression labels need to be normalized, or the performance may be sub-optimal.",
"exponents of input features may be numerically unstable.",
"In our approach we decompose the exponents into three stable operations, which is robust to numerical errors.",
"In addition, the pooling speed can be effectively improved, since the computational costs of these atomic operations are usually small.",
"These results validate the effectiveness of our approach.",
"In this section, we study the influence of a small but very important step, i.e., the initialization of the trainable pooling norm p , on the performance of our approach.",
"We compare the performance of LSTM-APLN by varying the initialized values of p .",
"The results are shown in Fig. 6. From Fig. 6, we find the performance of our approach increases when the initialized value of p increases.",
"This is intuitive because when p is too small, the attention network may not be capable of recognizing important contexts effectively, which is not optimal for learning accurate text representations.",
"In addition, when p is initialized with a too large value, the performance will start to decline.",
"This is probably because a large value of p will lead to sharp attentions on critical contexts, and other useful information is not fully exploited.",
"Thus, the performance is also not optimal.",
"These results show that a moderate value (e.g., 1.0) is the most appropriate for initializing the pooling norm p , which is also consistent with standard attentive pooling.",
"In this section, we analyze a critical parameter learned by our model, i.e., the pooling norm p in the APLN module.",
"The evolution of the values of p learned by LSTM-APLN on the four benchmark datasets during model training is portrayed in Fig. 7. From the results, we have several interesting observations.",
"First, the pooling norms learned by our model are consistently less than (cid:3)(cid:57)(cid:68)(cid:79)(cid:88)(cid:72)(cid:3)(cid:82)(cid:73) Figure 6: The influence of the initialization of the pooling norm p on our approach.",
"1, which indicates that our norm-wise attention is softer than vanilla attention.",
"This may be because L1 norm is not optimal for attentive pooling, and a softer attention manner may be more suitable for learning accurate text representations.",
"Second, we find it is interesting that the norm p consistently decreases when the training epoch increases.",
"This may be because the model may tend to take the global contexts into consideration rather than focus on important ones.",
"Third, a moderate norm p is more appropriate for our approach.",
"This may be because when p is too large, the attentions may be too sparse and useful contextual information is not fully exploited.",
"When p is too small, the attention networks cannot effectively distinguish informative contexts from uninformative ones, which may also be sub-optimal for learning text representations.",
"Fourth, we observe that the norm p learned on datasets with imbalanced class distributions is lower than those with balanced distributions.",
"This may be because on imbalanced dataset, if p is too large, the clues I really liked the case , at first .",
"(b) Attention weights in HAN-APLN .",
"Predicted rating is 3. Figure 8: Visualization of the word-level and sentence-level attention weights in HAN and HAN-APLN on a randomly selected review in the Amazon dataset, whose gold rating score is 3. Darker colors indicate higher attention weights.",
"of the majority classes may be over-emphasized, and other useful information is not fully respected.",
"Thus, the performance of our APLN approach is better when it learns a moderate pooling norm.",
"In this section, we conducted several case studies to further explore the effectiveness of our APLN approach.",
"We visualize the word-level and sentence-level attention weights in HAN and HAN-APLN of a randomly selected review to compare their differences, and the results are portrayed in Fig. 8.",
"According to the results, we have several observations.",
"First, both HAN and HAN-APLN can recognize important words and sentences.",
"For example, the word liked and the sentence I really liked the case, at first. are highlighted since they are important for modeling the opinions condensed by this review.",
"Second, the attentions of HAN are sparse, which indicates that HAN tends to focus more on some contexts in a review such as the first and the third sentence, and pays little attentions to the useful information in other contexts such as the fourth and fifth sentences.",
"In addition, HAN wrongly classifies the rating of this review.",
"This is probably because the rating of a review is usually a synthesis of all opinions conveyed by it.",
"Thus, it may not be optimal for learning accurate text representations if only salient contexts are considered.",
"Third, different from HAN , the attentions of HAN-APLN are smoother.",
"This is probably because the pooling norm learned by our approach is less than 1, which encourages our model to attend to important contexts in a softer manner.",
"In addition, HAN-APLN can classify this review correctly.",
"This is probably because our approach can effectively take global contextual information into consideration, and does not over-emphasize critical contexts.",
"Thus, our APLN approach can learn more accurate text representations than the methods based on vanilla attentive pooling.",
"These results show the effectiveness of our approach.",
"In this paper, we propose an Attentive Pooling with Learnable Norms (APLN) approach for text representation.",
"Instead of using a fixed pooling norm for universal text representation learning, we propose to learn the norm in an end-to-end framework to automatically find the optimal ones for learning text representations in different tasks.",
"In addition, we propose two methods to ensure the numerical stability of the model training.",
"The first one is scale limiting, which limits the scale of input representations to ensure their non-negativity and avoid potential exponential explosion.",
"The second one is re-formulation, which decomposes the exponent operation into several safe atomic operations to avoid computing the real-valued powers of input features with less computational cost.",
"Extensive experiments on four benchmark datasets validate the effectiveness of our approach.",
"In our future work, we will explore several potential directions.",
"First, we plan to explore why the model prefers soft attentions rather than hard ones, which is different from the findings in several prior works based on hard attention.",
"Second, we plan to study how to model the differences on the characteristics of different samples and use different pooling norms, which may have the potential to further improve our approach.",
"Third, we will explore how to generalize our approach to other modalities, such as images, audios and videos, to see whether it can facilitate more attention-based methods.",
"This work was supported by the National Key Research and Development Program of China under Grant number 2018YFC1604002, the National Natural Science Foundation of China under Grant numbers U1936208, U1936216, U1836204, and U1705261."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"result",
"objective",
"other"
] |
[
"Distance based knowledge graph embedding methods show promising results on link prediction task, on which two topics have been widely studied: one is the ability to handle complex relations, such as N-to-1, 1-to-N and N-to-N, the other is to encode various relation patterns, such as symmetry/antisymmetry.",
"However, the existing methods fail to solve these two problems at the same time, which leads to unsatisfactory results.",
"To mitigate this problem, we propose PairRE, a model with paired vectors for each relation representation.",
"The paired vectors enable an adaptive adjustment of the margin in loss function to fit for complex relations.",
"Besides, PairRE is capable of encoding three important relation patterns, symmetry/antisymmetry, inverse and composition.",
"Given simple constraints on relation representations, PairRE can encode subrelation further.",
"Experiments on link prediction benchmarks demonstrate the proposed key capabilities of PairRE.",
"Moreover, We set a new state-of-the-art on two knowledge graph datasets of the challenging Open Graph Benchmark.",
"Knowledge graphs store huge amounts of structured data in the form of triples, with projects such as WordNet (Miller, 1995), Freebase (Bollacker et al., 2008), YAGO (Suchanek et al., 2007) and DBpedia (Lehmann et al., 2015).",
"They have gained widespread attraction from their successful use in tasks such as question answering (Bordes et al., 2014), semantic parsing (Berant et al., 2013), and named entity disambiguation (Zheng et al., 2012) and so on.",
"Since most knowledge graphs suffer from incompleteness, predicting missing links between entities has been a fundamental problem.",
"This problem is named as link prediction or knowledge graph completion.",
"Knowledge graph embedding methods, which embed all entities and relations into a low dimensional space, have been proposed for this problem.",
"Distance based embedding methods from TransE (Bordes et al., 2013) to the recent state-of-the-art RotatE (Sun et al., 2019) have shown substantial improvements on knowledge graph completion task.",
"Two major problems have been widely studied.",
"The first one refers to handling of 1-to-N, N-to-1, and N-to-N complex relations (Bordes et al., 2013; Lin et al., 2015).",
"In case of the 1-to-N relations, given triples like ( StevenSpielberg , DirectorOf , ? ), distance based models should make all the corresponding entities about film name like Jaws and JurassicP ark have closer distance to entity StevenSpielberg after transformation via relation DirectorOf .",
"The difficulty is that all these entities should have different representations.",
"Same issue happens in cases of N-to-N and N-to-1 relations.",
"The latter is learning and inferring relation patterns according to observed triples, as the success of knowledge graph completion heavily relies on this ability (Bordes et al., 2013; Sun et al., 2019).",
"There are various types of relation patterns: symmetry (e.g., IsSimilarT o ), antisymmetry (e.g., F atherOf ), inverse (e.g., P eopleBornHere and P laceOfBirth ), composition (e.g., my mother's father is my grandpa) and so on.",
"Previous methods solve these two problems separately.",
"TransH (Wang et al., 2014), TransR (Lin et al., 2015), TransD (Ji et al., 2015) all focus on ways to solve complex relations.",
"However, these methods can only encode symme-try/antisymmetry relations.",
"The recent state-of-the-art RotatE shows promising results to encode symmetry/antisymmetry, inverse and composition relations.",
"However, complex relations remain challenging to predict.",
"Here we present PairRE, an embedding method that is capable of encoding complex relations and multiple relation patterns simultaneously.",
"The proposed model uses two vectors for relation representation.",
"These vectors project the corresponding head and tail entities to Euclidean space, where the distance between the projected vectors is minimized.",
"This provides three important benefits: The paired relation representations enable an adaptive adjustment of the margin in loss function to fit for different complex relations; Semantic connection among relation vectors can be well captured, which enables the model to encode three important relation patterns, symmetry/antisymmetry, inverse and composition; Adding simple constraints on relation representations, PairRE can encode subrelation further.",
"Besides, PairRE is a highly efficient model, which contributes to large scale datasets.",
"We evaluate PairRE on six standard knowledge graph benchmarks.",
"The experiment results show PairRE can achieve either state-of-the-art or highly competitive performance.",
"Further analysis also proves that PairRE can better handle complex relations and encode symmetry/antisymmetry, inverse, composition and subrelation relations.",
"Given a knowledge graph that is represented as a list of fact triples, knowledge graph embedding methods define scoring function to measure the plausibility of these triples.",
"We denote a triple by ( h, r, t ) , where h represents head entity, r represents relation and t represents tail entity.",
"The column vectors of entities and relations are represented by bold lower case letters, which belong to set E and R respectively.",
"We denote the set of all triples that are true in a world as T .",
"f r ( h, t ) represents the scoring function.",
"We take the definition of complex relations from (Wang et al., 2014).",
"For each relation r , we compute average number of tails per head (tphr) and average number of heads per tail (hptr).",
"If tphr < 1.5 and hptr < 1.5, r is treated as 1-to-1; if tphr > 1.5 and hptr > 1.5, r is treated as a N-to-N; if tphr > 1.5 and hptr < 1.5, r is treated as 1-to-N.",
"We focus on four important relation patterns, which includes: (1) Symmetry/antisymmetry .",
"A relation r is symmetric if e 1 , e 2 E , ( e 1 , r, e 2 ) T ( e 2 , r, e 1 ) T and is antisymmetric if ( e 1 , r, e 2 ) T ( e 2 , r, e 1 ) / T ; (2) Inverse .",
"If e 1 , e 2 E , ( e 1 , r 1 , e 2 ) T ( e 2 , r 2 , e 1 ) T , then r 1 and r 2 are inverse relations; (3) Composition .",
"If e 1 , e 2 , e 3 E , ( e 1 , r 1 , e 2 ) T ( e 2 , r 2 , e 3 ) T ( e 1 , r 3 , e 3 ) T , then r 3 can be seen as the composition of r 1 and r 2 ; (4) Subrelation (Qu and Tang, 2019).",
"If e 1 , e 2 E , ( e 1 , r 1 , e 2 ) T ( e 1 , r 2 , e 2 ) T , then r 2 can be seen as a subrelation of r 1 .",
"Distance based models .",
"Distance based models measure plausibility of fact triples as distance between entities.",
"TransE interprets relation as a translation vector r so that entities can be connected, i.e., h + r t .",
"TransE is efficient, though cannot model symmetry relations and have difficulty in modeling complex relations.",
"Several models are proposed for improving TransE to deal with complex relations, including TransH, TransR, TransD, TranSparse (Ji et al., 2016) and so on.",
"All these methods project the entities to relation specific hy-perplanes or spaces first, then translate projected entities with relation vectors.",
"By projecting entities to different spaces or hyperplanes, the ability to handle complex relations is improved.",
"However, with the added projection parameters, these models are unable to encode inverse and composition relations.",
"The recent state-of-the-art, RotatE, which can encode symmetry/antisymmetry, inverse and composition relation patterns, utilizes rotation based translational method in a complex space.",
"Although expressiveness for different relation patterns, complex relations remain challenging.",
"GC-OTE (Tang et al., 2020) proposes to improve complex relation modeling ability of RotatE by introducing graph context to entity embedding.",
"However, the calculation of graph contexts for head and tail entities is time consuming, which is inefficient for large scale knowledge graphs, e.g. ogbl-wikikg (Hu et al., 2020).",
"Another related work is SE (Bordes et al., 2011), which utilizes two separate relation matrices to project head and tail entities.",
"As pointed out by (Sun et al., 2019), this model is not able to encode symmetry/antisymmetry, inverse and composition Method Score Function Performance of complex relations Relation Patterns Sym Asym Inv Comp Sub TransE || h + r t || Low (cid:55) (cid:51) (cid:51) (cid:51) (cid:55) TransR || M r h + r M r t || High (cid:51) (cid:51) (cid:55) (cid:55) (cid:55) RotatE || h r t || Low (cid:51) (cid:51) (cid:51) (cid:51) (cid:55) PairRE || h r H t r T || High (cid:51) (cid:51) (cid:51) (cid:51) (cid:51) * Table 1: Comparison between PairRE and some distance based embedding methods.",
"Table 1 shows comparison between our method and some representative distance based methods.",
"As the table shows, our model is the most expressive one, with the ability to handle complex relations and encode four key relation patterns.",
"Semantic matching models .",
"Semantic matching models exploit similarity-based scoring functions, which can be divided into bilinear models and neural network based models.",
"As the models have been developed, such as RESCAL (Nickel et al., 2011), DistMult (Yang et al., 2014), HolE (Nickel et al., 2016), ComplEx (Trouillon et al., 2016) and QuatE (Zhang et al., 2019), the key relation encoding abilities are enriched.",
"However, all these models have the flaw in encoding composition relations (Sun et al., 2019).",
"RESCAL, ComplEx and SimplE (Kazemi and Poole, 2018) are all proved to be fully expressive when embedding dimensions fulfill some requirements (Wang et al., 2018; Trouillon et al., 2016; Kazemi and Poole, 2018).",
"The fully expressiveness means these models can express all the ground truth which exists in the data, including complex relations.",
"However, these requirements are hardly fulfilled in practical use.",
"It is proved by (Wang et al., 2018) that, to achieve complete expressiveness, the embedding dimension should be greater than N /32, where N is the number of entities in dataset.",
"Neural networks based methods, e.g., convolution neural networks (Dettmers et al., 2018), graph convolutional networks (Schlichtkrull et al., 2018) show promising performances.",
"However, they are difficult to analyze as they work as a black box.",
"Encoding Subrelation .",
"Existing methods encode subrelation by utilizing first order logic rules.",
"One way is to augment knowledge graphs with grounding of rules, including subrelation rules (Guo et al., 2018; Qu and Tang, 2019).",
"The other way is adding constraints on entity and relation representations, e.g., ComplEx-NNE-AER and SimplE + .",
"The second way enriches the models' expressiveness with relatively low cost.",
"In this paper, we show that PairRE can encode subrelation with constraints on relation representations while keeping the ability to encode symmetry/antisymmetry, inverse and composition relations.",
"To overcome the problem of modeling 1-to-N/N-to-1/N-to-N complex relations and enrich the capabilities for different relation patterns, we propose a model with paired vectors for each relation.",
"Given a training triple ( h , r , t ), our model learns vector embeddings of entities and relation in real space.",
"Specially, PairRE takes relation embedding as paired vectors, which is represented as [ r H , r T ] .",
"r H and r T project head entity h and tail entity t to Euclidean space respectively.",
"The projection operation is the Hadamard product 1 between these two vectors.",
"PairRE then computes distance of the two projected vectors as plausibility of the triple .",
"We want that h r H t r T when ( h, r, t ) holds, while h r H should be far away from t r T otherwise.",
"In this paper, we take the L 1 -norm to measure the distance.",
"In order to remove scaling freedoms, we also add constraint on embeddings similar to previous distance based models (Bordes et al., 2013; Wang et al., 2014; Lin et al., 2015).",
"And the constraint is only added on entity embeddings.",
"We want relation embeddings to capture semantic connection among relation vectors (e.g., P eopleBornHere and P laceOfBirth ) and complex characteristic (e.g., 1-N) easily and sufficiently.",
"For entity embedding, the L 2 -norm is set to be 1. The scoring function is defined as follows: f r ( h , t ) = || h r H t r T || , (1) where h , r H , r T , t R d and || h || 2 = || t || 2 = 1 .",
"embed-1 Hadamard product means entry-wise product.",
"dings, { e j } E j =1 and all the relations' embeddings, { r j } R j =1 .",
"Illustration of the proposed PairRE is shown in Figure 1. Compared to TransE/RotatE, PairRE enables an entity to have distributed representations when involved in different relations.",
"We also find the paired relation vectors enable an adaptive adjustment of the margin in loss function, which alleviates the modeling problem for complex relations.",
"Let's take a 1-to-N relation as an example.",
"We set the embedding dimension to one and remove the constraint on entity embeddings for better illustration.",
"Given triples ( h, r, ?) , where the correct tail entities belong to set S = { t 1 , t 2 , ..., t N } , PairRE predicts tail entities by letting || h r H t i r T || < , where is a fixed margin for distance based embedding models and t i S .",
"The value of t i should stay in the following range: t i (( h r H ) / r T , ( h r H + ) / r T ) , if r T > 0 , (( h r H + ) / r T , ( h r H ) / r T ) , if r T < 0 , ( , + ) , otherwise .",
"The above analysis shows PairRE can adjust the value of r T to fit the entities in S .",
"The larger the size of S , the smaller the absolute value r T .",
"While models like TransE or RotatE have a fixed margin for all complex relation types.",
"When the size of S is large enough, these models will be difficult to fit the data.",
"For N-to-1 relations, PairRE can also adjust the value of r H adaptively to fit the data.",
"Meanwhile, not adding a relation specific translational vector enables the model to encode several key relation patterns.",
"We show these capabilities below.",
"Proposition 1. PairRE can encode symme-try/antisymmetry relation pattern.",
"Proof.",
"If ( e 1 , r 1 , e 2 ) T and ( e 2 , r 1 , e 1 ) T , we have e 1 r H 1 = e 2 r T 1 e 2 r H 1 = e 1 r T 1 r H 1 2 = r T 1 2 (2) if ( e 1 , r 1 , e 2 ) T and ( e 2 , r 1 , e 1 ) / T , we have e 1 r H 1 = e 2 r T 1 e 2 r H 1 (cid:54) = e 1 r T 1 r H 1 2 (cid:54) = r T 1 2 (3) Proposition 2. PairRE can encode inverse relation pattern.",
"Proof.",
"If ( e 1 , r 1 , e 2 ) T and ( e 2 , r 2 , e 1 ) T , we have e 1 r H 1 = e 2 r T 1 e 2 r H 2 = e 1 r T 2 r H 1 r H 2 = r T 1 r T 2 (4) Proposition 3. PairRE can encode composition relation pattern.",
"Proof.",
"If ( e 1 , r 1 , e 2 ) T , ( e 2 , r 2 , e 3 ) T and ( e 1 , r 3 , e 3 ) T , we have e 1 r H 1 = e 2 r T 1 e 2 r H 2 = e 3 r T 2 e 1 r H 3 = e 3 r T 3 r T 1 r T 2 r H 3 = r H 1 r H 2 r T 3 (5) Moreover, with some constraint, PairRE can also encode subrelations.",
"For a subrelation pair, h, t E : ( h, r 1 , t ) ( h, r 2 , t ) , it suggests triple ( h, r 2 , t ) should be always more plausible than triple ( h, r 1 , t ) .",
"In order to encode this pattern, PairRE should have the capability to enforce f r 2 ( h, r 2 , t ) f r 1 ( h, r 1 , t ) .",
"Proof.",
"Assume a subrelation pair r 1 and r 2 that h, t E : ( h, r 1 , t ) ( h, r 2 , t ) .",
"We impose the following constraints: r H 2 ,i r H 1 ,i = r T 2 ,i r T 1 ,i = i , | i | 1 , (6) where R d .",
"Then we can get f r 2 ( h, t ) f r 1 ( h, t ) = || h r H 1 t r T 1 || || h r H 2 t r T 2 || = || h r H 1 t r T 1 || || ( h r H 1 t r T 1 ) || 0 .",
"When the constraints are satisfied, PairRE forces triple ( h, r 2 , t ) to be more plausible than triple ( h, r 1 , t ) .",
"Optimization .",
"To optimize the model, we utilize the self-adversarial negative sampling loss (Sun et al., 2019) as objective for training: L = log ( f r ( h , t )) n (cid:88) i =1 p ( h (cid:48) i , r, t (cid:48) i ) log ( f r ( h (cid:48) i , t (cid:48) i ) ) , (8) where is a fixed margin and is the sigmoid function.",
"( h (cid:48) i , r , t (cid:48) i ) is the i th negative triple and p ( h (cid:48) i , r, t (cid:48) i ) represents the weight of this negative sample.",
"p ( h (cid:48) i , r, t (cid:48) i ) is defined as follows: p (( h (cid:48) i , r, t (cid:48) i ) | ( h, r, t )) = exp f r ( h (cid:48) i , t (cid:48) i ) (cid:80) j exp f r ( h (cid:48) j , t (cid:48) j ) .",
"We evaluate the proposed method on link prediction tasks.",
"At first, we validate the ability to deal with complex relations and symmetry/antisymmetry, inverse and composition relations on four benchmarks.",
"Then we validate our model on two subrelation specific benchmarks.",
"Statistics of these benchmarks are shown in Table 2. ogbl-wikikg2 2 (Hu et al., 2020) is extracted from Wikidata knowledge base (Vrandecic and Krotzsch, 2014).",
"One of the main challenges for this dataset is complex relations.",
"ogbl-biokg 2 ogbl-wikikg2 fixes a bug in test/validation negative samples from original ogbl-wikikg.",
"(Hu et al., 2020) contains data from a large number of biomedical data repositories.",
"One of the main challenges for this dataset is symmetry relations.",
"FB15k (Bordes et al., 2013) contains triples from Freebase.",
"The main relation patterns are inverse and symmetry/antisymmetry.",
"FB15k-237 (Toutanova and Chen, 2015) is a subset of FB15k, with inverse relations removed.",
"The main relation patterns are antisymmetry and composition.",
"DB100k (Ding et al., 2018) is a subset of DBpedia.",
"The main relation patterns are composition, inverse and subrelation.",
"Sports (Wang et al., 2015) is a subset of NELL (Mitchell et al., 2018).",
"The main relation patterns are antisymmetry and subrelation.",
"Evaluation protocol .",
"Following the state-of-the-art methods, we measure the quality of the ranking of each test triple among all possible head entity and tail entity substitutions: ( h (cid:48) , r , t ) and ( h , r , t (cid:48) ), h (cid:48) , t (cid:48) E .",
"Three evaluation metrics, including Mean Rank(MR), Mean Reciprocal Rank (MRR) and Hit ratio with cut-off values n = 1, 3, 10, are utilized.",
"MR measures the average rank of all correct entities.",
"MRR is the average inverse rank for correct entities with higher value representing better performance.",
"Hit@ n measures the percentage of correct entities in the top n predictions.",
"The rankings of triples are computed after removing all the other observed triples that appear in either training, validation or test set.",
"For experiments on ogbl-wikikg2 and ogbl-biokg, we follow the evaluation protocol of these two benchmarks (Hu et al., 2020).",
"Implementation .",
"We utilize the official implementations of benchmarks ogbl-wikikg2 and ogbl-biokg (Hu et al., 2020) for the corresponding experiments 3 .",
"Only the hypeparameter and embedding dimension are tuned.",
"The other settings are kept the same with baselines.",
"For the rest experiments, we implement our models based on the implementation of RotatE (Sun et al., 2019).",
"All hypeparam-3 Our code is available at: https://github.com/alipay/KnowledgeGraphEmbeddingsViaPairedRelationVectors PairRE -ogbl-wikikg2 ogbl-biokg Model # Dim Test MRR Valid MRR # Dim Test MRR Valid MRR TransE 100 0 .",
"eters except and embedding dimension are kept the same with RotatE.",
"Comparisons for ogbl-wikikg2 and ogbl-biokg are shown in Table 3. On these two large scale datasets, PairRE achieves state-of-the-art performances.",
"For ogbl-wikikg2 dataset, PairRE performs best on both limited embedding dimension and increased embedding dimension.",
"With the same number of parameters to ComplEx (dimension 100), PairRE Model MRR Hit@10 Hit@3 Hit@1 TransE 0.111 0.270 0.164 0.016 DistMult 0.233 0.448 0.301 0.115 HolE 0.260 0.411 0.309 0.182 ComplEx 0.242 0.440 0.312 0.126 SeeK 0.338 0.467 0.370 0.268 ComplEx-NNE 0.298 0.426 0.330 0.229 ComplEx-NNE-AER 0.306 0.418 0.334 0.244 PairRE 0.412 0.600 0.472 0.309 0 .",
"improves Test MRR close to 10%.",
"With increased dimension, all models are able to achieve higher MRR on validation and test sets.",
"Due to the limitation of hardware, we only increase embedding dimension to 200 for PairRE.",
"PairRE also outperforms all baselines and improves Test MRR 8.7%.",
"Based on performances of baselines, the performance of PairRE may be improved further if embedding dimension is increased to 500 .",
"Under the same experiment setting and the same number of parameters, PairRE also outperforms all baselines on ogbl-biokg dataset.",
"It improves Test MRR by 0.69%, which proves the superior ability to encode symmetry relations.",
"Comparisons for FB15k and FB15k-237 datasets are shown in Table 4. Since our model shares the same hyper-parameter settings and implementation with RotatE, comparing with this state-of-the-art model is fair to show the advantage and disadvantage of the proposed model.",
"Besides, the comparisons also include several leading methods, such as TransE (Bordes et al., 2013), DistMult (Yang et al., 2014), HolE (Nickel et al., 2016), ConvE (Dettmers et al., 2018), ComplEx (Trouillon et al., 2016), SimplE (Kazemi and Poole, 2018), SeeK (Xu et al., 2020) and OTE (Tang et al., 2020).",
"Compared with RotatE, PairRE shows clear improvements on FB15k and FB15k-237 for all evaluation metrics.",
"For MRR metric, the improvements are 1.4% and 1.3% respectively.",
"Compared with the other leading methods, PairRE also shows highly competitive performances.",
"All these comparisons prove the effectiveness of PairRE to encode inverse and composition relations.",
"We further compare our method with two of the leading methods ComplEx-NNE-AER and SimplE + , which focus on encoding subrelation.",
"These two methods add subrelation rules to semantic matching models.",
"We utilize these rules as constraints on relation representations for PairRE.",
"Two ways are validated.",
"We first test the performance of weight tying for subrelation rules on Sports dataset.",
"The rules ( r 1 r 2 ) are added as follows: r H 2 = r H 1 cosine ( ) , r T 2 = r T 1 cosine ( ) , (10) where R d .",
"The added rules are shown in Table 5. The experiments results in Table 6 show effectiveness of the proposed method.",
"Weight tying on relation representation is a way to incorporate hard rules.",
"The soft rules can also be incorporated into PairRE by approximate entailment constraints on relation representations.",
"In this section, we add the same rules from ComplEx-NNE-AER, which includes subrelation and inverse rules.",
"We denote by r 1 r 2 the approximate entailment between relations r 1 and r 2 , with con-fidence level .",
"The objective for training is then changed to: L rule = L + (cid:88) subrelation 1 T ( r H 1 r T 2 r T 1 r H 2 ) 2 + (cid:88) inverse 1 T ( r H 1 r H 2 r T 1 r T 2 ) 2 , (11) where L is calculated from Equation 8, is loss weight for added constraints, subrelation and inverse are the sets of subrelation rules and inverse rules respectively.",
"Following (Ding et al., 2018), we take the corresponding two relations from subrelation rules as equivalence.",
"Because subrelation contains both rule r 1 r 2 and rule r 2 r 1 .",
"We validate our method on DB100k dataset.",
"The results are shown in Table 7. We can see PairRE outperforms the recent state-of-the-art SeeK and ComplEx based models with large margins on all evaluation metrics.",
"With added constraints, the performance of PairRE is improved further.",
"The improvements for the added rules are 0.7%, 1.2% for MRR and Hit@1 metrics respectively.",
"We analyze the performances of PairRE for complex relations.",
"The results of PairRE on different relation categories on FB15k and ogbl-wikikg2 are summarized into Table 8. We can see PairRE performs quite well on N-to-N and N-to-1 relations.",
"It has a significant lead over baselines.",
"We also notice that performance of 1-to-N relations on ogbl-wikikg2 dataset is not as strong as the other relation categories.",
"One of the reasons is that only 2.2% of test triples belong to the 1-to-N relation category.",
"In order to further test the performance of paired relation vectors, we change the relation vector in RotatE to paired vectors.",
"In the modified RotatE model, both head and tail entities are rotated with different angles based on the paired Figure 2: Performance comparison between RotatE and RotatE+PairRelation on ogbl-wikikg2 dataset.",
"relation vectors.",
"This model can also be seen as complex value based PairRE.",
"We name this model as RotatE+PairRelation.",
"The experiment results are shown in Figure 2. With the same embedding dimension (50 in the experiments), Ro-tatE+PairRelation improves performance of RotatE with 20.8%, 27.5%, 14.4% and 39.1% on 1-to-1, 1-to-N, N-to-1 and N-to-N relation categories respectively.",
"These significant improvements prove the superior ability of paired relation vectors to handle complex relations.",
"To further verify the learned relation patterns, we visualize some examples.",
"Histograms of the learned relation embeddings are shown in Figure 3 .",
"Symmetry/AntiSymmetry .",
"Figure 3a shows a symmetry relation spouse from DB100k.",
"The embedding dimension is 500.",
"For PairRE, symmetry relation pattern can be encoded when embedding r satisfies r H 2 = r T 2 .",
"Figure 3b shows most of the paired elements in r H and r T have the same absolute value.",
"Figure 3c shows a antisymmetry relation tv station owner , where most of the paired elements do not have the same absolute value as shown in Figure 3d.",
"Antoine Bordes, Jason Weston, and Nicolas Usunier.",
"2014.",
"Open question answering with weakly supervised embedding models.",
"In Joint European conference on machine learning and knowledge discovery in databases , pages 165180.",
"Springer.",
"Tim Dettmers, Pasquale Minervini, Pontus Stenetorp, and Sebastian Riedel.",
"2018.",
"Convolutional 2d knowledge graph embeddings.",
"In Thirty-Second AAAI Conference on Artificial Intelligence .",
"In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) , pages 110121, Melbourne, Australia.",
"Bahare Fatemi, Siamak Ravanbakhsh, and David Poole.",
"2019.",
"Improved knowledge graph embedding using background taxonomic information.",
"In Proceedings of the AAAI Conference on Artificial Intelligence , volume 33, pages 35263533.",
"Shu Guo, Quan Wang, Lihong Wang, Bin Wang, and Li Guo.",
"2018.",
"Knowledge graph embedding with iterative guidance from soft rules.",
"In Thirty-Second AAAI Conference on Artificial Intelligence .",
"Shizhu He, Kang Liu, Guoliang Ji, and Jun Zhao.",
"2015.",
"Learning to represent knowledge graphs with gaussian embedding.",
"In Proceedings of the 24th ACM International on Conference on Information and Knowledge Management , pages 623632.",
"Weihua Hu, Matthias Fey, Marinka Zitnik, Yuxiao Dong, Hongyu Ren, Bowen Liu, Michele Catasta, and Jure Leskovec.",
"2020.",
"Open graph benchmark: Datasets for machine learning on graphs.",
"arXiv preprint arXiv:2005.00687 .",
"Guoliang Ji, Kang Liu, Shizhu He, and Jun Zhao.",
"2016.",
"Knowledge graph completion with adaptive sparse transfer matrix.",
"In AAAI , volume 16, pages 985 991.",
"Rudolf Kadlec, Ondrej Bajgar, and Jan Kleindienst.",
"2017.",
"Knowledge base completion: Baselines strike back.",
"In Proceedings of the 2nd Workshop on Representation Learning for NLP , pages 6974, Vancouver, Canada.",
"Association for Computational Linguistics.",
"Seyed Mehran Kazemi and David Poole.",
"2018.",
"Simple embedding for link prediction in knowledge graphs.",
"In Advances in Neural Information Processing Systems , pages 42844295.",
"Inverse .",
"Figure 3c and Figure 3e show an example of inverse relations from FB15k.",
"As the histogram in Figure 3f shows these two inverse relations tv station owner ( r 2 ) and tv station owner tv stations ( r 3 ) close to satisfy r H 3 r H 2 = r T 3 r T 2 .",
"Composition .",
"Figures 3g, 3h, 3i show an example of composition relation pattern from FB15k, where the third relation r 6 can be seen as the composition of the first relation r 4 and the second relation r 5 .",
"As Figure 3j shows these three relations close to satisfy r H 4 r H 5 r T 6 r T 4 r T 5 r H 6 .",
"To better handle complex relations and tackle more relation patterns, we proposed PairRE, which represents each relation with paired vectors.",
"With a slight increase in complexity, PairRE can solve the aforementioned two problems efficiently.",
"Beyond the symmetry/antisymmetry, inverse and composition relations, PairRE can further encode subrelation with simple constraint on relation representations.",
"On large scale benchmark ogbl-wikikg2 an ogbl-biokg, PairRE outperforms all the state-of-the-art baselines.",
"Experiments on other well designed benchmarks also demonstrate the effectiveness of the focused key abilities."
] | [
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"other",
"abstain",
"other",
"method",
"method",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"other",
"abstain",
"other",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain"
] |
[
"Due to large number of entities in biomedical knowledge bases, only a small fraction of entities have corresponding labelled training data.",
"This necessitates entity linking models which are able to link mentions of unseen entities using learned representations of entities.",
"Previous approaches link each mention independently, ignoring the relationships within and across documents between the entity mentions.",
"These relations can be very useful for linking mentions in biomedical text where linking decisions are often difficult due mentions having a generic or a highly specialized form.",
"In this paper, we introduce a model in which linking decisions can be made not merely by linking to a knowledge base entity but also by grouping multiple mentions together via clustering and jointly making linking predictions.",
"In experiments we improve the state-of-the-art entity linking accuracy on two biomedical entity linking datasets including on the largest publicly available dataset.",
"Ambiguity is inherent in the way entities are mentioned in natural language text.",
"Grounding such ambiguous mentions to their corresponding entities, the task of entity linking , is critical to many applications: automated knowledge base construction and completion (Riedel et al., 2013; Surdeanu et al., 2012), information retrieval (Meij et al., 2014), smart assistants (Balog and Kenter, 2019), question answering (Dhingra et al., 2020), text mining (Leaman and Lu, 2016; Murty et al., 2018).",
"Consider the excerpt of text from a biomedical research paper in Figure 1, the three highlighted mentions ( expression , facial expressions , and facially expressive ) all link to the same entity, namely C0517243 Facial Expresson in the leading biomedical KB, Unified Medical Language System (UMLS) (Bodenreider, 2004).",
"The mention expression is highly ambiguous and easily confused with the more prevalent en-When emotion and expression diverge : The social costs of Parkinson 's disease Patients with Parkinson 's disease are perceived more negatively than their healthy peers , yet it remains unclear what factors contribute to this negative social perception . Based on a cohort of 17 Parkinson 's disease patients and 20 healthy controls , we assessed how nave raters judge the emotion and emotional intensity displayed in dynamic facial expressions as adults with and without Parkinson 's disease watched emotionally evocative films ( Experiment 1 ) , and how age matched peers nave to patients ' disease status judge their social desirability along various dimensions from audiovisual stimuli ( interview excerpts ) recorded after certain films ( Experiment 2 ) .",
"tity, Gene expression .",
"This linking decision may become easier with sufficient training examples (or sufficiently rich structured information in the knowledge-base) .",
"However, in biomedical (Mo-han and Li, 2019) and other specialized domains (Logeswaran et al., 2019), it is often the case that the knowledge-base information is largely incomplete.",
"Furthermore, the scarcity of training data leads to a setting in which most entities have not been observed at training.",
"State-of-the-art entity linking methods which are able to link entities unseen at training time make predictions for each mention independently (Lo-geswaran et al., 2019; Wu et al., 2019).",
"In this way, the methods may have difficulty linking mentions which, as in the example above, have little lexical similarity with the entities in the knowledge-base, as well as mentions for which the context is highly ambiguous.",
"These mentions cannot directly use information from one mention (or its linking decision) to inform the prediction of another mention.",
"On the other hand, entity linking methods that do jointly consider entity linking decisions (Ganea and Hofmann, 2017; Le and Titov, 2018) are designed for cases in which all of the entities in the knowledge-base to have example mentions or meta-data at training time (Logeswaran et al., 2019).",
"In this paper, we propose an entity linking model in which entity mentions are either (1) linked directly to an entity in the knowledge-base or (2) join a cluster of other mentions and link as a cluster to an entity in the knowledge-base.",
"Some mentions may be difficult to link directly to their referent ground truth entity, but may have very clear coreference relationships to other mentions.",
"So long as one mention among the group of mentions clustered together links to the correct entity the entire cluster can be correctly classified.",
"This provides for a joint, tranductive-like, inference procedure for linking.",
"We describe both the inference procedure as well as training objective for optimizing the model's inference procedure, based on recent work on supervised clustering (Yadav et al., 2019).",
"It is important to note that our approach does not aim to do joint coreference and linking, but rather makes joint linking predictions by clustering together mentions that are difficult to link directly to the knowledge-base.",
"For instance, in Figure 1, the mention expression may be difficult to link to the ground truth Facial expression entity in the knowledge-base because the mention can refer to a large number of entities.",
"However, the local syntactic and semantic information of the paragraph give strong signals that expression is coreferent with facial expression , which is easily linked to the correct entity.",
"We perform experiments on two biomedical entity linking datsets: MedMentions (Mohan and Li, 2019), the largest publicly available dataset as well as the benchmark BC5CDR (Li et al., 2016).",
"We find that our approach improves over our strongest baseline by 2.3 points of accuracy on MedMentions and 0.8 points of accuracy on BC5CDR over the baseline method (Logeswaran et al., 2019).",
"We further analyze the performance of our approach and observe that (1) our method better handles ambiguous mention surface forms (as in the example shown in Figure 1) and (2) our method can correctly link mentions even when the candidate generation step fails to provide the correct entity as a candidate.",
"Each document D D , has a set of mentions M ( D ) = { m ( D ) 1 , m ( D ) 2 , . . . , m ( D ) N } .",
"We denote the set of all mentions across all documents as plainly M .",
"The task of entity linking is to classify each mention m i as referent to a single entity e i from a KB of entities.",
"We use E ( m i ) to refer to the ground truth entity of mention m i and e i to refer to the predicted entity.",
"Knowledge-bases .",
"We assume that we are given a knowledge-base corresponding to a closed world of entities.",
"These KBs are typically massive: English Wikipedia contains just over 6M entities 1 and the 2020 release of the UMLS contains 4.28M entities 2 .",
"We describe in Sections 5.1 & 5.2 the details of the KBs used in each of the experiments.",
"Candidate Generation .",
"Given the massive number of entities to which a mention may refer, previous work (Logeswaran et al., 2019, inter alia) uses a candidate generation step to reduce the restrict the number of entities considered for a given mention, m , to a candidate set ( m ) .",
"The recall of this step is critical to the overall performance of entity linking models.",
"In this section, we describe our clustering-based approach for jointly making entity linking predictions for a set of mentions.",
"Our proposed inference method builds a graph where the nodes are the union of all of the mentions and entities and the edges have weights denoting the affinities between the endpoints.",
"To make linking decisions, we cluster the nodes of the graph such that each cluster contains exactly one entity, following which each mention is assigned to the entity in its cluster.",
"Let : M M R and : M E R be parameterized functions which compute symmetric mention-mention and mention-entity affinities, respectively.",
"The exact parameterizations of these functions are detailed in Section 3.2.",
"M { ( m, e ) : e ( m ) } .",
"The weight of each edge, w ( v i , v j ) for v i , v j V , is determined by or depending on the vertices of the edge: w ( m i , m j ) = ( m i , m j ) and w ( m i , e l ) = ( m i , e l ) .",
"Linking decisions for each mention are determined by clustering the vertices of G under the constraint that every entity must appear in exactly one cluster.",
"Given the graph G , we start with every node in their own individual cluster.",
"We define affinity between a pair of clusters as the strongest cross-cluster edge between nodes in the two clusters.",
"Iteratively, we greedily merge clusters by choosing a pair of clusters with largest affinity between them under the constraint that we cannot merge two clusters which both contain an entity.",
"When every cluster contains exactly one entity, this process can no longer merge any clusters, and thus terminates 3 .",
"Each mention is linked to the entity present in its cluster at the end of inference.",
"Algorithm 1 describes this process of constructing the graph and clustering nodes to make linking decisions more formally.",
"Figure 2 shows the proposed inference in action 3 This process is equivalent to single-linkage hierarchical agglomerative clustering with the constraint that two entities cannot be in the same cluster.",
"on five entities and six mentions.",
"Initially, every mention and entity start in a singleton cluster.",
"In the first round, clusters { m 1 } and { m 2 } are merged, followed by merger of { e 3 } and { m 6 } in the second round, and so on.",
"Note that in fifth round, clusters c 1 = { m 4 , e 2 } has higher affinity with c 2 = { m 1 , m 2 , m 3 , e 1 } than with c 3 = { m 5 } , yet c 1 and c 3 are merged instead of c 1 and c 2 due to the constraint that we cannot merge two clusters which both contain an entity.",
"At the end, every mention is clustered together with exactly one entity, and there could be entities present as singleton clusters such as { e 4 } and { e 5 } .",
"Note that m 3 correctly links to its gold entity e 1 as a result of being clustered with mentions m 1 , m 2 even though it has higher affinity with entity e 3 : w ( m 3 , e 3 ) > w ( m 3 , e 1 ) .",
"We parameterize ( , ) and ( , ) using two separate deep transformer encoders (Vaswani et al., 2017) for our mention-mention affinity model and mention-entity affinity model specifically we use the BERT architecture (Devlin et al., 2019) initialized using the weights from BioBERT (Lee et al., 2019).",
"The mention-mention model is also a cross-encoder, taking as input a pair of mention in context and producing a single scalar affinity for every pair.",
"The input tokens take the form: [CLS] < m i > [SEP] < m j > [SEP] where < m i > := c l [START] m i [END] c r where m i is the mention tokens and c l and c r are the left and right context of the mention in the text, respectively.",
"The [START] and [END] tokens are special tokens fine-tuned to signify the start and end of the mention in context, respectively.",
"We restrict the length of each input sequence to have a maximum of 256 tokens.",
"A representations of each mention is computed using the average of the encoder's output representations corresponding to the mention's input tokens.",
"The affinity for a mention pair is computed by concatenating their mention representations and passing it through a linear layer with a sigmoid activation.",
"We make this affinity symmetric by averaging the two possible orderings of a pair of mentions in the cross-encoder input sequence.",
"The mention-entity affinity model is a cross-encoder model (Vig and Ramea, 2019; Wolf et al., 2019; Humeau et al., 2019, inter alia) and takes as input the concatenation of the mention in context with the entity description.",
"The input tokens take the form: [CLS] c l [START] m [END] c r [SEP] e [SEP] where the mention in context is the same as in the mention-mention model and e is the description of the entity.",
"We restrict the length of this input sequence to 256 tokens.",
"After passing the input sequence through BERT, we transform the output representation corresponding to the [CLS] token with a linear layer with one output unit.",
"This value is finally passed through the sigmoid function to output affinity between the mention and the entity.",
"In this section, we explain the training procedure for the affinity models ( , ) and ( , ) used by the clustering inference procedure.",
"We train the mention-mention and mention-entity models independently in a way that allows the affinities to be comparable when performing inference.",
"We use triplet max-margin based training objectives to train both models.",
"The most important aspect of our procedure is how we pick negatives during training.",
"For the mention-entity model, we restrict our negatives to be from the candidate set.",
"For the mention-mention model, we restrict our negatives to come from mentions within the same document.",
"From these sets of possible negatives we choose the topk most offending ones according the instantaneous state of the model i.e. the negatives with highest predicted affinities according to the model at that point during training.",
"The following sections detail the training procedures for both models.",
"To train the mention-mention affinity model we use a variant of the maximum spanning tree (MST) supervised single linkage clustering algorithm presented in Yadav et al. (2019).",
"Let M ( D ) e l = { m M ( D ) | E ( m ) = e l } be the set of mentions referring to entity e l in any one document and the set of ground truth clusters be represented by C = {M ( D ) e l | e l E} .",
"Let P be the set of positive training edges: the edges of the MST of the complete graph on the cluster C C .",
"Let N ( m ) be the k -nearest within document negatives to the anchor point m C according to the current state of the model during training.",
"The objective of this training procedure is to minimize the following triplet max-margin loss 4 with margin for each cluster C C : L ( ; C )= (cid:88) m ,m + P (cid:88) m N ( m ) (cid:96) , ( m , m + , m ) , where (cid:96) , ( a, p, n ) = [ ( a, n ) ( a, p ) + ] + .",
"For the mention-entity model, we use a triplet max-margin based objective with margin where the anchor is a mention m in the training set, the positive is the ground truth entity e + = E ( m ) , and the negatives are chosen from the candidate set ( m ) .",
"Denote the k most offending negatives according to the current state of the model during training as N ( m ) ( m ) \\ {E ( m ) } .",
"Formally, the loss is L ( ; M )= (cid:88) m,e + (cid:88) e N ( m ) (cid:96) , ( m, e + , e ) , where (cid:96) , ( a, p, n ) = [ ( a, n ) ( a, p ) + ] + .",
"We evaluate on biomedical entity linking using the MedMentions (Mohan and Li, 2019) and BC5CDR (Li et al., 2016) datasets.",
"We compare to state-of-the-art methods.",
"We then analyze the performance of our method in more detail and provide qualitative examples demonstrating our approaches' ability to use mention-mention relationships to improve candidate generation and linking.",
"MedMentions is a publicly available 5 dataset consisting of the titles and abstracts of 4,392 PubMed articles.",
"The dataset is hand-labeled by annotators and contains labeled mention spans and entities linked to the 2017AA full version of UMLS.",
"Following the suggestion of Mohan and Li (2019), we use the ST21PV subset, which restricts the entities linked in documents to a set of 21 entity types 4 Define [ x ] + = max( x, 0) 5 https://github.com/chanzuckerberg/ MedMentions MedMentions BC5CDR Train Dev Test Train Dev Test |M| 120K 40K 40K 18K 934 10K |E ( M ) | 19K 9K 8K 2K 281 1K % seen 100 57.7 57.5 100 80.1 64.8 Table 1: Linking Datasets .",
"that were deemed most important for building scientific knowledge-bases.",
"We refer the readers to Mohan and Li (2019) for a complete analysis of the dataset and provide a few important summary statistics here.",
"The train/dev/test split partitions the PubMed articles into three non-overlapping groups.",
"This means that some entities seen at training time will appear in dev/test and other entities will appear in dev/test but not at training time.",
"In fact, a large number of entities that appear in dev/test time are unseen at training, about 42% of entities.",
"See Table 1 for split details and statistics.",
"Previous work has evaluated on MedMentions using unfairly optimistic candidate generation settings such as using only 10 candidates including the ground truth (Zhu et al., 2019) or restricting candidates to entities appearing somewhere in the MedMentions corpus (Murty et al., 2018).",
"We instead work in a much more general setting where all entities in UMLS are considered at candidate generation time and the generated candidates might not include the ground truth entity.",
"BC5CDR (Li et al., 2016) is another entity linking benchmark in the biomedical domain.",
"The dataset consists of 1,500 PubMed articles annotated with labeled disease and chemical entities.",
"Unlike MedMentions, which contains 21 types of entities, this dataset contains just two types.",
"These chemical and disease mentions are labeled with entities from MeSH 6 , a much smaller biomedical KB than UMLS.",
"See Table 1 for split details and statistics.",
"The MedMentions ST21PV corpus is processed as follows:",
"(i) Abbreviations defined in the text of each paper are identified using AB3P (Sohn et al., 2008).",
"Each definition and abbreviation instance is then replaced with the expanded form.",
"(ii) The text of each paper in the corpus is tokenized and 6 https://www.nlm.nih.gov/mesh PubMed Abstract Independent Predictions Cluster-based Predictions Entity KB Impact of alcohol use on EEG dynamics of response inhibition : a cotwin control analysis Research indicates that alcohol misuse is associated with behavioral disinhibition , but the neurophysiological mechanisms governing this relationship remain largely unknown .",
"split into sentences using CoreNLP (Manning et al., 2014).",
"(iii) Overlapping mentions are resolved by preferring longer mentions that begin earlier in each sentence, and mentions are truncated at sentence boundaries.",
"This results in 379 mentions to be dropped from the total of 203,282.",
"(iv) Finally, the corpus is saved into the IOB2 tag format.",
"The same preprocessing steps are used for BC5CDR, except overlapping mentions are not dropped.",
"For both datasets, we use a character n -gram TF-IDF model to produce candidates for all of the mentions in all splits.",
"The candidate generator utilizes the 200k most frequent character n -grams, n { 2 . . . 5 } and the 200k most frequent words in the names in E to produce sparse vectors for all of the mentions and entity descriptions (which in our case is the canonical name, the type, and a list of known aliases and synonyms).",
"Table 5 provides candidate generation results for each dataset.",
"The results report the average recall@ K at different numbers of candidates ( K ), i.e., whether or not the gold entity is top K candidates for a given mention.",
"Our model contains 220M parameters, the majority of which are contained within the two separate BERT-based models.",
"We optimize both the models with mini-batch stochastic gradient descent using the Adam optimizer (Kingma and Ba, 2014) with recommended learning rate of 5e-5 (Devlin et al., 2019) with no warm-up steps.",
"We accumulate gradients over all of the triples for a batch size of 16 within document clusters.",
"We compute the topk most offending negatives on-the-fly for each batch by running the model in inference mode proceeding each training step.",
"Training and inference are done on a single machine with 8 NVIDIA 1080 Ti GPUs.",
"We train our model on MedMentions for two epochs and BC5CDR for four epochs.",
"Training takes approximately three days for MedMentions and one day for BC5CDR.",
"Clustering-based inference takes about three hours for MedMentions and one hour for BC5CDR.",
"Code and data to reproduce experiments will be made available.",
"We compare our clustering-based inference procedure, which we refer to our approach as CLUSTERING-BASED , to a state-of-the-art independent inference procedure, INDEPENDENT , which is the zero-shot architecture proposed by Logeswaran et al..",
"This same model is used as the mention-entity affinity model used in our approach.",
"We also compare to to an n-gram tf-idf model (our candidate generation model), TAGGERONE (Leaman and Lu, 2016), BIOSYN (Sung et al., 2020), and SAPBERT (Liu et al., 2020) on both MedMentions and BC5CDR.",
"Table 2 shows performance of the baseline models, INDEPENDENT , and CLUSTERING-BASED inference procedure on MedMentions and BC5CDR.",
"We report results using the gold mention segmen-MedMentions BC5CDR Overall Acc.",
"Due to TAGGERONE 's joint entity recognition, typing, and linking architecture, we cannot make predictions for gold mention boundaries without also using their gold types.",
"And so to have a fair comparison to TaggerOne, we provide the gold mention boundaries and types to each system and report these results as well.",
"We use seen and unseen to refer to the sets of mentions whose ground truth entities are seen and unseen at training, respectively.",
"Note that even if a mention is in the subset of mentions referred to as seen , it does not mean that we have seen the particular surface form before in the training set, merely that we have seen other mentions of that particular entity.",
"On MedMentions, when the models are provided with only the gold mention span, CLUSTERINGBASED inference procedure outperforms INDEPENDENT by 1.3 points of accuracy, and we see improvements in accuracy for both seen and unseen entities.",
"When the models are additionally provided with the gold type, we see substantial improvements in accuracy for both INDEPENDENT and CLUSTERING-BASED over TAGGERONE , namely 3.0 and 5.3 points of improvement, respectively.",
"CLUSTERINGBASED inference procedure outperforms INDEPENDENT by 0.4 points of accuracy, and we see improvements in accuracy for both seen and unseen entities.",
"When the models are additionally provided with the gold type, we see improvements in accuracy for both INDEPENDENT and CLUSTERINGBASED over TAGGERONE , namely 0.8 and 1.6 points of improvement, respectively.",
"Observe that the candidate generation results are drastically different for the two datasets (Table 5).",
"We posit that the ability to generate correct candidates correlates with the relative difficultly of the linking task on each dataset, respectively.",
"We hypothesize that our clustering-based inference procedure would allow for better performance on mentions for which candidate generation is difficult.",
"Observe that while the performance of the independent model is upper bounded by the recall of candidate generation, this is not an upper bound for the clustering-based model.",
"The clustering-based model can allow mentions that have no suitable candidates to link to other mentions in the same document.",
"We report the accuracy of both systems with respect to whether or not the ground truth entity is in each mentions' list of candidates.",
"approach offers a large number of mentions a correct resolution, when the independent model could not link them correctly due to the ground truth entity being missing from the candidate list.",
"Additionally, it can be seen that CLUSTERING-BASED does sacrifice some performance in comparison to INDEPENDENT , but more than makes up for it in the case where the ground truth entity is not in the candidate set.",
"We also hypothesize that for mentions which are highly ambiguous and could refer to many different entities, such as common nouns like virus , disease , etc, the clustering-based inference should offer improvements.",
"Table 4 shows that our approach is able to correctly link more ambiguous mentions compared to independent model 7 .",
"Figure 3 shows two examples from this subset where CLUSTERING-BASED inference is able to make the correct linking decision and INDEPENDENT is not.",
"Entity linking is widely studied and often focused on linking mentions to Wikipedia entities (also known as Wikification) (Mihalcea and Csomai, 2007; Cucerzan, 2007; Milne and Witten, 2008; Hoffart et al., 2011; Ratinov et al., 2011; Cheng and Roth, 2013).",
"Entity linking is often done independently for each mention in the document (Rati-nov et al., 2011; Raiman and Raiman, 2018) or by modeling dependencies between predictions of entities in a document (Cheng and Roth, 2013; Ganea and Hofmann, 2017; Le and Titov, 2018).",
"In the biomedical domain, Unified Medical Language System (UMLS) is often used as a knowledge-base for entities (Mohan and Li, 2019; Leaman and Lu, 2016).",
"While UMLS is a rich ontology of concepts and relationships between them, this domain is low resource compared to Wikipedia with respect to number of labeled training data for each entity mention.",
"This leads to a zero-shot setting in datasets such as MedMentions (Mohan and Li, 2019) where new entities are seen at test time.",
"Previous work has addressed this zero-shot setting using models of the type hierarchy (Murty et al., 2018; Zhu et al., 2019).",
"This previous work (Murty et al., 2018; Zhu et al., 2019) uses an unrealistic 7 These are: activation, activity, a, b, cardiac, cells, clinical, compounds, cr, development, disease, function, fusion, inhibition, injuries, injury, liver, management, methods, mice, model, pa, production, protein, regulation, report, responses, response, r, screening, stress, studies, study, treatment candidate generation setting where the true positive candidate is within the candidate set and/or entities are limited to those in the dataset rather than those in the knowledge-base.",
"Mention-mention relationships are also explored in (Le and Titov, 2018) which extends the pairwise CRF model (Ganea and Hofmann, 2017) to use mention-level relationships in addition to entity relationships.",
"These works use attention in a way to build the context representation of the mentions.",
"However, as mentioned by Logeswaran et al. (2019) is not well suited for zero-shot linking.",
"Coreference (both within and across documents) has also been explored by past work (Dutta and Weikum, 2015).",
"This work uses an iterative procedure that performs hard clustering for the sake of aggregating the contexts of entity mentions.",
"Durrett and Klein (2014) presents a CRF-based model for joint NER, within-document coreference, and linking.",
"They show that jointly modeling these three tasks improves performance over the independent baselines.",
"This differs from our work since we do not require coreference decisions to be correct in order to make correct linking decisions.",
"Other work performs joint entity and event coreference (Barhom et al., 2019) without linking.",
"In this work, we presented a novel clustering-based inference procedure which enables joint entity linking predictions.",
"We evaluate the effectiveness of our approach on the two biomedical entity linking datasets, including the largest publicly available dataset.",
"We show through analysis that our approach is better suited to link mentions with ambiguous surface forms and link mentions where the ground truth entity is not in the candidate set.",
"Entity linking is a task with the intention of providing useful information when building a semantic index of documents.",
"This semantic index is a core component of systems which allow users to search, retrieve, and analyze text documents.",
"In our specific case, we are interested in building semantic indexes of scientific documents where the end user would be scientists and researchers.",
"The goal is to help them navigate the vast amount of literature and accelerate science.",
"This being said, users need to take the outputs of such a system as suggestions and with the potential that the information is incorrect.",
"Researchers must be aware that the sys-MedMentions BC5CDR E ( m ) ( m ) E ( m ) (cid:54) ( m ) E ( m ) ( m ) E ( m ) (cid:54) ( m ) INDEPENDENT 85.3 0.0 95.5 0.0 CLUSTERING-BASED 84.5 13.9 95.3 14.9 w/ Gold Types INDEPENDENT 90.0 0.0 95.7 0.0 CLUSTERING-BASED 89.3 19.3 95.4 15.9 Table 3: Performance when Candidate Generation Fails .",
"tem is not perfect and they should not jump to any conclusions especially about important decisions.",
"Additionally, the researcher can always verify the decisions being made by the system.",
"While this paper focuses on biomedical entity linking, this technique could be extended to other domains.",
"In such other domains, users might not have as much expertise, but the user is still responsible for making decisions on their own, since the system is not perfect.",
"In addition, the system developers and designers need to be aware of their particular application to ensure to mitigate harm which could come from such a system.",
"For example, in any application that deals with personalized data, we need to be wary of the potential outcomes which could come from an entity linking based system or semantic index, such as privacy or other potential malicious behaviour or unforeseen consequences due to the decisions being made by the system.",
"We thank members of UMass IESL and NLP groups for helpful discussion and feedback.",
"This work is funded in part by the Center for Data Science and the Center for Intelligent Information Retrieval, and in part by the National Science Foundation under Grants No. 1763618, and in part by the Chan Zuckerberg Initiative under the project Scientific Knowledge Base Construction.",
"The work reported here was performed in part by the Center for Data Science and the Center for Intelligent Information Retrieval, and in part using high performance computing equipment obtained under a grant from the Collaborative R&D Fund managed by the Massachusetts Technology Collaborative.",
"Rico Angell is supported by the National Science Foundation Graduate Research Fellowship under Grant No. 1938059.",
"Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect those of the sponsor."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"method",
"result",
"result",
"other",
"abstain",
"other",
"method",
"other",
"method",
"other",
"method",
"other",
"other",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"result",
"result",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"objective",
"other",
"objective",
"method",
"result",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other"
] |
[
"This paper discusses the importance of uncovering uncertainty in end-to-end dialog tasks and presents our experimental results on uncertainty classification on the processed Ubuntu Dialog Corpus 1 .",
"We show that instead of retraining models for this specific purpose, we can capture the original retrieval model's underlying confidence concerning the best prediction using trivial additional computation.",
"Uncertainty modeling is a widely explored problem in dialog research.",
"Stochastic models like deep Q-networks (Tegho et al., 2017), Gaussian processes (Gai and Young, 2014), and partially observable Markov decision process (Roy et al., 2000) are often used in spoken dialog systems to optimize dialog management by explicitly estimating uncertainty in policy assignments.",
"However, these approaches are either computationally intensive (Gal and Ghahramani, 2015) or require significant work on refining policy representations (Gai and Young, 2014).",
"Moreover, most current uncertainty studies in dialog focus on the dialog management component.",
"End-to-end (E2E) dialog retrieval models jointly encode a dialog and a candidate response (Wu et al., 2016; Zhou et al., 2018), assuming the ground truth is always present in the candidate set, which is not the case in production.",
"Larson et al. (2019) recently showed that classifiers that perform well on in-scope intent classification for task-oriented dialog systems struggle to identify out-of-scope queries.",
"The response selection task in the most recent Dialog System Technology Challenge (Chulaka Gunasekara and Lasecki, 2019) also explicitly mentions that none 1 Our datasets for the NOTA task are released at https://github.com/yfeng21/nota prediction of the proposed utterances is a good candidate should be a valid option.",
"The goal of this paper is to set a new direction for future task-oriented dialog system research: while retrieving the best candidate is crucial, it should be equally important to identify when the correct response (i.e. ground truth) is not present in the candidate set.",
"In this paper, we measure the E2E retrieval model's capability to capture uncertainty by inserting an additional none of the above (NOTA) candidate into the proposed response set at inference time.",
"The contributions of this paper include: (1) demonstrating that it is crucial to learn the relationship amongst the candidates as a set instead of looking at point-wise matching to solve the NOTA detection task.",
"As a result, the logistic regression (LogReg) approach proposed here consistently achieves the best performance compared to several strong baselines.",
"(2) extensive experiments show that the raw output score ( logits ) is more informative in terms of representing model confidence than normalized probabilities after the Softmax layer.",
"Our use of NOTA to measure uncertainty in dialog response is motivated by the design of student performance assessment in psychology studies.",
"Test creators often include NOTA candidates in multiple-choice design questions, both as correct answers and as distractors.",
"How the use of NOTA affects the difficulty and discrimination of a question has been discussed widely (Gross, 1994; Pachai et al., 2015).",
"For assessment purposes, a common finding is that using NOTA as the correct response increases question difficulty, and also lures highand low-performing students toward distractors (Pachai et al., 2015).",
"Returning a NOTA-like response is a common practice in dialog production systems (IBM).",
"The idea of adding the NOTA option to a candidate set is also widely used in other language technology fields like speaker verification (Pathak and Raj, 2013).",
"However, the effect of adding NOTA is rarely introduced in dialog retrieval research problems.",
"To the best of our knowledge, we are the first to scientifically evaluate a variety of conventional approaches for retrieving NOTA in the dialog field.",
"All of the experiments herein use the Ubuntu (Lowe et al., 2015) Dialog Corpus, which contains multiturn, goal-oriented chat logs on the Ubuntu fo-rum.",
"For next utterance retrieval purposes, we use the training data version that was preprocessed by Mehri and Eskenazi (2019), where all negative training samples (500,127) were removed, and, for each context, 9 distractor responses were randomly chosen from the dataset to form the candidate response set, together with the ground truth response.",
"For the uncertainty task, we use a special token NOTA to represent the none of the above choice, as in multiple-choice questions.",
"More details on this NOTA setup can be found in Sections 4.1 and 4.2.",
"The modified training dataset has 499,873 dialog contexts, and each has 10 candidate responses.",
"The validation and test sets remain unchanged, with 19,561 validation samples and 18,921 test samples.",
"The LSTM dual encoder model consists of two single-layer, uni-directional encoders, one to encode the embedding ( c ) of the context and one to encode the embedding ( r ) of the response.",
"The output function is computed as a dot product of the two, f ( r, c ) = c T r .",
"This model architecture has already been shown to perform well for the Ubuntu dataset (Lowe et al., 2015; Kadlec et al., 2015).",
"We carry out experiments with the following variants of the vanilla model for training: Binary This is the most common training method for next utterance ranking on the Ubuntu corpus.",
"With training data prepared in the format of [CON-TEXT] [RESPONSE] [LABEL], the model performs binary classification on each sample, predicting whether a given response is the ground truth.",
"The binary cross-entropy between the label and ( f ( r, c )) following a sigmoid layer is used as the loss function.",
"Selection As the validation and test datasets are both in the format of [CONTEXT] [RESPONSE]*x , where x is usually 10, we train the selection model in the same way.",
"For this model, following a softmax layer, the loss is calculated by the negative log likelihood function: L = log (cid:18) exp ( f ( r ground truth , c ) (cid:80) xi =1 exp ( f ( r i , c )) (cid:19) (1) Dropout Gal and Ghahramani (2015) found that dropout layers can be used in neural networks as a Bayesian approximation to the Gaussian process, and thus have the ability to represent model uncertainty in deep learning.",
"Inspired by this work, we add a dropout layer after each encoder's hidden layer at training time.",
"At inference, we have the dropout layer activated and pass each sample through n times, and then make the final prediction by taking a majority vote among the n predictions.",
"Unlike the other models, the NOTA binary classification decision is not based on the output score itself, but rather is calculated on the score variance of each response.",
"LSTM For the LSTM models, unless otherwise specified, the word embeddings are initialized randomly with a dimension of 300, and a hidden size of 512.",
"The vocabulary is constructed of the 10000 most common words in the training dataset, plus the UNK and PAD special tokens.",
"We use the Adam algorithm (Kingma and Ba, 2014) for optimization with a learning rate of 0.005.",
"The gradients are clipped to 5.0.",
"With a batch size of 128, we train the model for 20 epochs, and select the best checkout based on its performance on the validation set.",
"In the dropout model, we use a dropout probability of 50% .",
"LogReg For the logistic regression model, we train on the validation set's LSTM outputs with the same hyperparameter (where applicable to LogReg) setup as in the corresponding LSTM model.",
"For the direct prediction experiment, we randomly choose 50% of the response sets and replace the ground truth responses with the NOTA special token (we label this subset as isNOTA ).",
"For the other 50% samples, we replace the first distractor with the NOTA token (we label this subset as notNOTA ).",
"By using this setup, we ensure that a NOTA token is always present in the candidate set.",
"Although making decisions based on logits ( Directlogits ) or probability ( DirectProb) yields the same argmax prediction, we collect both output scores for the following LogReg model (details in Section 4.3).",
"Concretely, the final output y (cid:48) of a direct prediction model is: y (cid:48) = argmax r A (cid:83) { NOTA } f ( r, c ) (2) 4.2 Threshold Another common approach toward returning NOTA is to reject a candidate utterance based on confidence score thresholds.",
"Therefore, in the threshold experiments, with the same preprocessed data as in Section 4.1, we remove all NOTA tokens at the inference model's batch preparation stage, leaving 9 candidates, thus 50% of the response sets (the isNOTA set) with no ground truth present.",
"After the model outputs scores for each candidate response, with the predefined threshold, it further decides whether to accept the prediction with the highest score as its final response, or to reject the prediction and give NOTA instead.",
"We investigate the performance of setting the threshold based on probability ( ThresholdProb ) and logits ( ThresholdLogits ) respectively.",
"Concretely, the final output y (cid:48) is given by: y (cid:48) = (cid:40) NOTA if f ( r, c ) < threshold argmax r A f ( r, c ) (3) 4.3 Logistic Regression We feed the output scores of the LSTM models for all candidate answers as input features to the LogReg model consisting of a single linear layer and a logistic output layer.",
"Separate LogReg models are trained for different numbers of candidates.",
"The probability output indicates whether the previous model's prediction is ground truth or just the best-scoring distractor.",
"Since LogReg can see output scores from all candidate responses, it is trained to model the relationship amongst all the candidates, making it categorically different from the binary estimation mentioned in Section 4.1 and 4.2.",
"Note that at inference time, LogReg works essentially as a threshold method.",
"The final output is determined by: y (cid:48) = (cid:40) NOTA if LogReg ( { f ( r i , c ) } ) < 0 .",
"where input to the LogReg model f ( r i , c ) is the output of LSTM models, either in logits or normalized form, as previously defined in subsection 3.2.",
"Dialog retrieval tasks often use recall out of k ( R x @ k ) as a key metric, measuring out of x candidates how often the answer is in top-k.",
"In this paper, we focus on the top-1 accuracy R x @1 ( R x for short) with a candidate set size of x , where x { 2 , 5 , 10 , 20 , 40 , 60 , 80 , 100 } .",
"The recall metric is modified for uncertainty measurement purposes, and is further extended to calculate the NOTA accuracy out of x ( N x ), and F1 scores for each class ( NF 1 x , GF 1 x ).",
"Let D = { c, y } and D n = { c, isNOTA } be the two subparts of data that correspond to samples that are notNOTA and isNOTA respectively, the above metrics are computed by: R x = (cid:80) y D ( y (cid:48) = y ) | D | (5) N x = (cid:80) y D n ( y (cid:48) = y ) + (cid:80) y D ( y (cid:48) (cid:54) = NOTA ) | D | + | D n | (6) In Equation (6), the numerator represents correctly predicted (same as in Equation (5)) plus other true negative isNOTA predictions, where the model correctly predicts notNOTA , but fails to choose the ground truth.",
"The positive class in NF 1 x is the isNOTA class, and the positive class in GF 1 x is the notNOTA class.",
"In real-world problems, retrieval response sets usually have many more than 10 candidates.",
"Therefore, we further test the selection and binary models on a bigger reconstructed test set.",
"For each context, we randomly select 90 more distractors from other samples' candidate responses, producing a candidate response set of size 100 for each context.",
"Table 1 summarizes the experimental results.",
"Due to space limitation, this table only displays results on 10 candidates.",
"Complete results on other numbers of candidates, which have similar performance patterns as 10, are found in the Appendix.",
"The thresholds and hyperparameters are tuned on the validation set according to the highest average F1 score.",
"For the selection model, in addition to the original dataset, we also train the model on a modified training dataset, containing NOTA choices as in inference datasets, with the same set of hyperparameters.",
"As expected, since there are now fewer real distractor responses, training including NOTA improves the model's NOTA classification performance, but sacrifices recall scores, which is not desirable.",
"In all the models, regardless of the training dataset used and the model architecture, adding a logistic regression on top of the LSTM output significantly improves average F1 scores.",
"Specifically, the highest F1 scores are always achieved with logits scores as LogReg input features.",
"These results show that, though setting a threshold is a common heuristic to balance true and false acceptance rates (Larson et al., 2019), its NOTA predic-R 10 N 10 NF 1 10 GF 1 10 AverageF1 SelectionModel(originaldata)DirectPredict 56.12 61.48 52.82 67.46 60.14 +LogReg(Logits) 55.98 87.81 86.96 88.56 87.76 +LogReg(Softmax) 50.94 74.30 74.46 74.15 74.31 LogitsThreshold(=0.5) 50.10 64.28 62.84 65.61 64.22 +LogReg 62.81 80.45 80.49 80.42 80.45 SoftmaxThreshold(=0.55) 48.76 60.10 59.69 60.50 60.09 +LogReg 63.64 78.50 80.17 76.52 78.34 SelectionModel( NOTA )DirectPredict 55.43 63.07 54.28 69.03 61.66 +LogReg(Logits) 40.66 78.19 78.80 77.53 78.16 +LogReg(Softmax) 51.63 77.94 78.21 77.67 77.94 LogitsThreshold(=2.0) 48.44 61.32 57.75 64.32 61.03 +LogReg 60.73 79.22 79.11 79.33 79.22 SoftmaxThtrshold(=0.5) 48.18 59.06 57.32 60.67 59.00 +LogReg 61.08 78.01 79.75 75.94 77.84 BinaryModel DirectPredict 35.73 61.72 63.54 59.72 61.63 +LogReg(Logits) 35.64 94.08 93.72 94.40 94.06 +LogReg(Softmax) 25.42 85.06 85.41 84.69 85.05 LogitsThreshold(=1.0) 41.64 61.50 57.77 64.62 61.20 +LogReg 51.58 77.15 76.74 77.55 77.14 SoftmaxThreshold(=0.4) 39.70 54.96 51.83 57.70 54.77 +LogReg 52.00 74.40 76.43 71.99 74.21 DropoutModel DirectPredict 28.57 50.13 1.48 66.61 34.05 +LogReg(Logits) 19.21 66.89 61.87 70.74 66.30 +LogReg(Softmax) 21.73 50.49 56.37 42.79 49.58 LogitsVarianceThreshold(=0.1) 13.73 51.89 57.15 45.15 51.15 +LogReg 20.87 56.13 40.18 65.37 52.78 SoftmaxVarianceThreshold(=0.001) 22.22 50.03 38.98 57.69 48.33 +LogReg 23.84 57.21 60.87 52.81 56.84 Table 1: Results on 10 candidates.",
"tion performance is not comparable to the LogReg approach, even after an exhaustive grid-search of best thresholds.",
"This finding is underlined by receiver operating characteristic (ROC) curves on the validation set Figure 1: Merged ROC curves for LSTM outputs with the original selection model.",
"Figure 2 shows ROC plots for predicting NOTA with LogReg in the same order as Figure 1, where a separate LogReg model is trained for each score setting.",
"In both figures, the areas under curve (AUC) indicate that logits serves as a more discriminative confidence score compared to the normalized softmax score.",
"Comparing the top right plots in both Figures, we can see that with the same set of logits scores as threshold criteria, AUC is boosted from 0.71 to 0.91 with the additional LogReg model, providing further evidence that LogReg significantly outperforms the LSTM models in this NOTA classification task.",
"We see that there are apparent differences between isNOTA ' and notNOTA 's best score distributions.",
"This is an encouraging observation because it suggests that current retrieval models can already distinguish good versus wrong responses to some extent.",
"Note that as the NOTA token is not included in training, for direct prediction tasks, the NOTA token is encoded as an UNK token at inference time.",
"The tails of the isNOTA plot in both the DirectLogits and DirectProb graphs suggest that the model will, very rarely, pick the unknown token as the best response.",
"Figure 4 shows the average F1 score trends with the original selection model on the test set with 100 distractors.",
"The plot shows the trend that with more distractors, the LSTM model struggles to determine the presence of ground truth, while the LogReg model performs consistently well.",
"The complete results of this extended test set are in the Appendix.",
"With NOTA options in the training data, the models learn to sometimes predict NOTA as the best response, resulting in more false-positive isNOTA predictions at inference time.",
"Also, by replacing various ground truths and strong distractors with NOTA, the model has fewer samples to help it learn to distinguish between different ground truths and strong distractors/ Thus it performs less well on borderline predictions (scores close to the threshold).",
"This behavior results in some selection methods trained on the dataset containing NOTA tokens performing worse than when they are trained on the original dataset.",
"This motivates us to advocate the proposed LogReg approach instead of the conventional add a NOTA choice method.",
"Another prominent advantage of the LogReg approach is that it does not require dataor model-dependent input like embedding vectors or hidden layer output.",
"Instead, it takes logits or normalized scores, both of which can be output from any models.",
"This feature makes our approach insensitive to the underlying architecture.",
"We have created a new NOTA task on the Ubuntu Dialog Corpus, and have proposed to solve the problem by learning the response set representation with a binary classification model.",
"We hope the dataset we release will be used to benchmark future dialog system uncertainty research."
] | [
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"other",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method"
] |
[
"Speech translation (ST) aims to learn transformations from speech in the source language to the text in the target language.",
"Previous works show that multitask learning improves the ST performance, in which the recognition decoder generates the text of the source language, and the translation decoder obtains the final translations based on the output of the recognition decoder.",
"Because whether the output of the recognition decoder has the correct semantics is more critical than its accuracy, we propose to improve the multitask ST model by utilizing word embedding as the intermediate.",
"Speech translation (ST) increasingly receives attention from the machine translation (MT) community recently.",
"To learn the transformation between speech in the source language and the text in the target language, conventional models pipeline automatic speech recognition (ASR) and text-to-text MT model (Berard et al., 2016).",
"However, such pipeline systems suffer from error propagation.",
"Previous works show that deep end-to-end models can outperform conventional pipeline systems with sufficient training data (Weiss et al., 2017; In-aguma et al., 2019; Sperber et al., 2019).",
"Nevertheless, well-annotated bilingual data is expensive and hard to collect (Bansal et al., 2018a,b; Duong et al., 2016).",
"Multitask learning plays an essential role in leveraging a large amount of monolingual data to improve representation in ST. Multitask ST models have two jointly learned decoding parts, namely the recognition and translation part.",
"The recognition part firstly decodes the speech of source language into the text of source language, and then based on the output of the recognition part, the translation part generates the text in the target language.",
"Variant multitask models have been explored (Anas-tasopoulos and Chiang, 2018), which shows the improvement in low-resource scenario.",
"Although applying the text of source language as the intermediate information in multitask end-to-end ST empirically yielded improvement, we argue whether this is the optimal solution.",
"Even though the recognition part does not correctly transcribe the input speech into text, the final translation result would be correct if the output of the recognition part preserves sufficient semantic information for translation.",
"Therefore, we explore to leverage word embedding as the intermediate level instead of text.",
"In this paper, we apply pre-trained word embedding as the intermediate level in the multitask ST model.",
"We propose to constrain the hidden states of the decoder of the recognition part to be close to the pre-trained word embedding.",
"Prior works on word embedding regression show improved results on MT (Jauregi Unanue et al., 2019; Kumar and Tsvetkov, 2018).",
"Experimental results show that the proposed approach obtains improvement to the ST model.",
"Further analysis also shows that constrained hidden states are approximately isospectral to word embedding space, indicating that the decoder achieves speech-to-semantic mappings.",
"Our method is based on the multitask learning for ST (Anastasopoulos and Chiang, 2018), including speech recognition in the source language and translation in the target language, as shown in Fig.",
"1(a).",
"The input audio feature sequence is first encoded into the encoder hidden state sequence h = h 1 , h 2 , . . . , h T with length T by the pyramid encoder (Chan et al., 2015).",
"To present speech recognition in the source language, the attention mechanism and a decoder is employed to produce source decoder sequence s = s 1 , s 2 , . . . , s M , where M is the number of decoding steps in the source language.",
"For each decoding step m , the probability P ( y m ) of predicting the token y m in the source language vocabulary can be computed based on the corresponding decoder state s m .",
"To perform speech translation in the target language, both the source language decoder state sequence s and the encoder state sequence h will be attended and treated as the target language de-coder's input.",
"The hidden state of target language decoder can then be used to derived the probability P ( y q ) of predicting token y q in the target language vocabulary for every decoding step q .",
"Given the ground truth sequence in the source language y = y 1 , y 2 , . . . , y M and the target language y = y 1 , y 2 , . . . , y Q with length Q , multitask ST can be trained with maximizing log likelihood in both domains.",
"Formally, the objective function of multitask ST can be written as: LST = ML src + QL tgt = M (cid:88) m log P ( y m ) + Q (cid:88) q log P ( y q ) , (1) where and are the trade-off factors to balance between the two tasks.",
"We propose two ways to help the multitask end-to-end ST model capture the semantic relation between word tokens by leveraging the source language word embedding as intermediate level.",
"E = { e 1 , e 2 , ... e | V | } , where V is the vocabulary set and e v RD is the embedding vector with dimension D for any word v V , in the recognition task.",
"We choose the source language decoder state (embedding) s to reinforce since it is later used in the translation task.",
"To be more specific, we argue that the embedding generated by the source language decoder should be more semantically correct in order to benefit the translation task.",
"Given the pre-trained source language word embedding E , we proposed to constrain the source decoder state s m at step m to be close to its corresponding word embedding e y m with the two approaches detailed in the following sections.",
"Since semantic-related words would be close in terms of cosine distance (Mikolov et al., 2018), a simple idea is to minimize the cosine distance (CD) between the source language decoder hidden state s m and the corresponding word embedding e y m for every decode step m ,",
"LCD = (cid:88) m 1 cos( f ( s m ) , e y m ) = (cid:88) m 1 f ( s m ) e y m (cid:107) f ( s m ) (cid:107)(cid:107) e y m (cid:107) , (2)",
"where f ( ) is a learnable linear projection to match the dimensionality of word embedding and decoder state.",
"With this design, the network architecture of the target language decoder would not be limited by the dimension of word embedding.",
"Fig.",
"1(b) illustrates this approach.",
"By replacing L src in Eq.",
"(1) with LCD , semantic learning from word embedding for source language recognition can be achieved.",
"Ideally, using word embedding as the learning target via minimizing CD can effectively train the decoder to model the semantic relation existing in the embedding space.",
"However, such an approach suffers from the hubness problem (Faruqui et al., 2016) of word embedding in practice (as we later discuss in Sec. 4.5).",
"To address this problem, we introduce cosine softmax (CS) function (Liu et al., 2017a,b) to learn speech-to-semantic embedding mappings.",
"Given the decoder hidden state s m and the word embedding E , the probability of the target word y m is defined as PCS ( y m ) = exp(cos( f ( s m ) , e y m ) / ) (cid:80) e v E exp(cos( f ( s m ) , e v ) / ) , (3) where cos( ) and f ( ) are from Eq.",
"(2), and is the temperature of softmax function.",
"Note that since the temperature re-scales cosine similarity, the hubness problem can be mitigated by selecting a proper value for .",
"Fig.",
"1(c) illustrates the approach.",
"With the probability derived from cosine softmax in Eq.",
"(3), the objective function for source language decoder can be written as LCS = (cid:88) m log PCS ( y m ) .",
"We used Fisher Spanish corpus (Graff et al., 2010) to perform Spanish speech to English text translation.",
"And we followed previous works (Inaguma et al., 2019) for pre-processing steps, and 40/160 hours of train set, standard dev test are used for the experiments.",
"Byte-pair-encoding (BPE) (Kudo and Richardson, 2018) was applied to the target transcriptions to form 10K subwords as the target of the translation part.",
"Spanish word embeddings were obtained from FastText pre-trained on Wikipedia (Bojanowski et al., 2016), and 8000 Spanish words were used in the recognition part.",
"The encoder is a 3-layer 512-dimensional bidirectional LSTM with additional convolution layers, yielding 8 down-sampling in time.",
"The decoders are 1024-dimensional LSTM, and we used one layer in the recognition part and two layers in the translation part.",
"The models were optimized using Adadelta with 10 6 as the weight decay rate.",
"Scheduled sampling with probability 0.8 was applied to the decoder in the translation part.",
"Experiments ran 1.5M steps, and models were selected by the highest BLEU on four transcriptions per speech in dev set.",
"Baseline : We firstly built the single-task end-to-end model ( SE ) to set a baseline for multitask learning, which resulted in 34.5/34.51 BLEU on dev and test set respectively, which showed comparable results to Salesky et al. (2019).",
"Multitask end-to-end model ( ME ) mentioned in Sec. 2 is another baseline.",
"By applying multitask learning in addition,",
"High-resource : Column",
"(a) in Table 1 showed the results trained on 160 hours of data.",
"CD and CS represent the proposed methods mentioned in Sec. 3.1 and 3.2 respectively.",
"We got mixed results on further applying pre-trained word embedding on ME .",
"CD degraded the performance, which is even worse than SE , but CS performed the best.",
"Results showed that directly learn word embedding via cosine distance is not a good strategy in the high-resource setting, but integrating similarity with cosine softmax function can significantly improve performance.",
"We leave the discussion in Sec. 4.5.",
"Low-resource : We also experimented on 40 hours subset data for training, as shown in column",
"(b) in Table 1.",
"We could see that ME , CD and CS overwhelmed SE in low-resource setting.",
"Although CD resulted in degrading performance in high-resource setting, it showed improvements in low-resource scenario.",
"CS consistently outperformed ME and CD on different data size, showing it is robust on improving ST task.",
"In this section, we analyzed hidden states s by existing methods.",
"For each word v in corpus, we denoted its word embedding e v as pre-trained embedding , and e v as predicted embedding .",
"Note that because a single word v could be mapped by multiple audio segments, we took the average of all its predicted embedding.",
"We obtained the top 500 frequent words in the whole Fisher Spanish corpus, and tested on the sentences containing only these words in test set.",
"Eigenvector Similarity : To verify our proposed methods can constrain hidden states in the word embedding space, we computed eigenvector similarity between predicted embedding and pre-trained embedding space.",
"The metric derives from Laplacian eigenvalues and represents how similar be-160 hours 40 hours dev test dev test ME 16.50 18.58 13.80 15.09 CD 2.60 3.44 3.95 3.63 CS 11.55 13.76 8.62 9.80 Table 2: Eigenvector similarity.",
"tween two spaces, the lower value on the metric, the more approximately isospectral between the two spaces.",
"Previous works showed that the metric is correlated to the performance of translation task (Sgaard et al., 2018; Chung et al., 2019).",
"As shown in Table 2, predicted embedding is more similar to pre-trained embedding when models trained on sufficient data (160 v.s 40 hours).",
"CD is the most similar case among the three cases, and ME is the most different case.",
"Results indicated that our proposals constrain hidden states in pre-trained embedding space.",
"Semantic Alignment : To further verify if predicted embedding is semantically aligned to pre-trained embedding , we applied Procrustes alignment (Conneau et al., 2017; Lample et al., 2017) method to learn the mapping between predicted embedding and pre-trained embedding .",
"Top 50 frequent words were selected to be the training dictionary, and we evaluated on the remaining 450 words with cross-domain similarity local scaling (CSLS) method.",
"Precision@k (P@k, k=1,5) were reported as measurements.",
"As shown in Table 3, CD performed the best, and ME was the worst one.",
"This experiment reinforced that our proposals can constrain hidden states to the similar structure of word embedding space.",
"We further analyzed the results of speech recognition for ME and CS .",
"To obtain the recognition results from Eq (3), simply take arg max v PCS ( v ) .",
"The word error rate (WER) of the source language recognition was reported in Table 4. Combining the results shown in Table 1, we could see that CS 160 hours 40 hours dev test dev test ME 43.13 38.57 53.42 54.70 CS 50.15 44.43 57.63 57.21 Table 4: Word error rate (%) trained on different size of data.",
"has worse WER, but higher BLEU compared with ME .",
"We concluded that although leveraging word embedding at the intermediate level instead of text results in worse performance in speech recognition (this indicates that the WER of the recognition part does not fully determine the translation perfor-mance), the semantic information could somewhat help multitask models generate better translation in terms of BLEU.",
"We do not include the WER of CD in Table 1 because its WER is poor ( > 100%), but interestingly, the BLEU of CD is still reasonable, which is another evidence that WER of the intermediate level is not the key of translation performance.",
"Based on experimental results, we found that proposals are possible to map speech to semantic space.",
"With optimizing CS , BLEU consistently outperformed ME , which shows that utilizing semantic information truly helps on ST. Directly minimizing cosine distance made the predicted embedding space closest to pre-trained embedding space, but performed inconsistently on BLEU in different data sizes.",
"We inferred that the imbalance word frequency training and hubness problem (Faruqui et al., 2016) in word embedding space made hidden states not discriminated enough for the target language decoder while optimizing CS can alleviate this issue.",
"5 Conclusions Our proposals showed that utilizing word embedding as intermediate helps with the ST task, and it is possible to map speech to the semantic space.",
"We also observed that lower WER in source language recognition not imply higher BLEU in target language translation.",
"This work is the first attempt to utilize word embedding in the ST task, and further techniques can be applied upon this idea.",
"For example, cross-lingual word embedding mapping methods can be considered within the ST model to shorten the distance between MT and ST tasks."
] | [
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain"
] |
[
"Text generation with generative adversarial networks (GANs) can be divided into the text-based and code-based categories according to the type of signals used for discrimination.",
"In this work, we introduce a novel text-based approach called Soft-GAN to effectively exploit GAN setup for text generation.",
"We demonstrate how autoencoders (AEs) can be used for providing a continuous representation of sentences, which we will refer to as soft-text.",
"This soft representation will be used in GAN discrimination to synthesize similar soft-texts.",
"We also propose hybrid latent code and text-based GAN (LATEXT-GAN ) approaches with one or more discriminators, in which a combination of the latent code and the soft-text is used for GAN discriminations.",
"We perform a number of subjective and objective experiments on two well-known datasets (SNLI and Image COCO) to validate our techniques.",
"We discuss the results using several evaluation metrics and show that the proposed techniques outperform the traditional GAN-based text-generation methods.",
"Text generation is an active research area and has many real-world applications, including, but not limited to, machine translation (Bahdanau et al., 2014), AI chat bots (Li et al., 2017), image captioning (Xu et al., 2014), question answering and information retrieval (Wang et al., 2017).",
"Recurrent neural network language models (RNNLMs) (Mikolov et al., 2010) is the most popular approach for text generation which rely on maximum likelihood estimation (MLE) solutions such as teacher forcing (Williams and Zipser, 1989) (i.e. the model is trained to predict the next word given all the previous predicted words); however, it is well-known in the literature that MLE is a simplistic objective for this complex NLP task (Li et al., 2017).",
"It is reported that MLE-based methods suffer from exposure bias (Huszr, 2015), which means that at training time the model is exposed to gold data only, but at test time it observes its own predictions.",
"Hence, wrong predictions quickly accumulate and result in poor text generation quality.",
"However, generative adversarial networks (GANs) (Goodfellow et al., 2014) which are based on an adversarial loss function suffers less from the mentioned problems of the MLE solutions.",
"The great success of GANs in image generation framework (Salimans et al., 2016) motivated researchers to apply its framework to NLP applications as well.",
"GANs have been extensively used recently in various NLP applications such as machine translation (Wu et al., 2017; Yang et al., 2017a), dialogue models (Li et al., 2017), question answering (Yang et al., 2017b), and natural language generation (Gulrajani et al., 2017; Rajeswar et al., 2017; Press et al., 2017; Kim et al., 2017; Zhang et al., 2017; Cifka et al., 2018; Spinks and Moens, 2018; Haidar and Reza-gholizadeh, 2019; Gagnon-Marchand et al., 2019; Rashid et al., 2019).",
"However, applying GAN in NLP is challenging due to the discrete nature of the text.",
"Consequently, back-propagation would not be feasible for discrete outputs and it is not straightforward to pass the gradients through the discrete output words of the generator.",
"Traditional methods for GAN-based text generation can be categorized according to the type of the signal used for discrimination into two categories: text-based and code-based techniques.",
"Code-based methods such as adversarial autoencoder (AAE) (Makhzani et al., 2015) and adversarially regularized AE (ARAE) (Kim et al., 2017) derive a latent space representation of the text using an AE and attempt to learn data manifold of that latent space (Kim et al., 2017) instead of modeling text directly.",
"Text-based solutions, such as reinforcement learning (RL) based methods or approaches based on continuous approximation of discrete sampling, focus on generating text directly from the generator.",
"RL-based methods treat the distribution of GAN generator as a stochastic policy and hold the discriminator responsible for providing proper reward for the generator's actions.",
"However, the RL-based methods often need pre-training and are computationally more expensive compared to the methods of the other two categories.",
"In the continuous approximation approach for generating text with GANs, the goal is to find a continuous approximation of the discrete sampling by using the Gumbel Softmax technique (Kusner and Hernandez-Lobato, 2016) or approximating the non-differentiable argmax operator with a continuous function (Zhang et al., 2017; Gulrajani et al., 2017).",
"In this paper, we introduce Soft-GAN as a new solution for the main bottleneck of using GAN for text generation.",
"Our solution is based on an AE to derive a soft representation of the real text (i.e. soft-text).",
"This soft-text is fed to the GAN discriminator instead of the conventional one-hot representation used in (Gulrajani et al., 2017).",
"Furthermore, we propose hybrid latent code and text-based GAN (LATEXT-GAN) approaches and show that how we can improve code-based and text-based text generation techniques by considering both signals in the GAN framework.",
"We summarize the main contributions of this paper as: We introduce a new text-based solution Soft-GAN using the above soft-text discrimination.",
"We also demonstrate the rationale behind this approach.",
"We introduce LATEXT-GAN approaches for GAN-based text generation using both latent code and soft-text discrimination.",
"To the best of our knowledge, this is the first time where a GAN-based text generation framework uses both code and text-based discrimination.",
"We evaluate our methods using subjective and objective evaluation metrics.",
"We show that our proposed approaches outperform the conventional GAN-based text generation techniques that do not need pre-training.",
"Generative adversarial networks include two separate deep networks: a generator and a discriminator.",
"The generator G takes in a random variable, z following a distribution P z ( z ) and attempt to map it to the real data distribution P x ( x ) .",
"The output distribution of the generator is expected to converge to the real data distribution during the training.",
"On the other hand, the discriminator f w is expected to discern real samples from generated ones by outputting zeros and ones, respectively.",
"During training, the generator and discriminator generate samples and classify them, respectively by adversarially affecting the performance of each other.",
"In this regard, an adversarial loss function is employed for training (Goodfellow et al., 2014): min max w ( E x P x ( x ) f w ( x )+ E z P z ( z ) (1 f w ( G ( z )))) (1) As stated, using GANs for text generation is challenging because of the discrete nature of text.",
"The main bottleneck is the argmax operator which is not differentiable and blocks the gradient flow from the discriminator to the generator.",
"min E z P z ( z ) (1 f w ( argmax ( G ( z )))) (2) 3 Related Work Text-based Solutions Generating text using pure GANs was inspired by improved Wasserstein GAN (IWGAN) work (Gulrajani et al., 2017).",
"In IWGAN, a character level language model was developed based on adversarial training of a generator and a discriminator.",
"Their generator is a convolution neural network (CNN) generating fixed-length texts.",
"The discriminator is another CNN receiving 3D tensors as input sentences.",
"The real sentences and the generated ones are represented using one-hot and softmax representations, respectively.",
"A similar approach to IWGAN was proposed in (Rajeswar et al., 2017) with a recurrent neural network (RNN) based generator.",
"In (Press et al., 2017), RNN is trained to generate text with GAN using curriculum learning (Bengio et al., 2009).",
"The TextGAN (Zhang et al., 2017) method was proposed to alleviate the mode-collapsing problem by matching the high-dimensional latent feature distributions of real and synthetic sentences (Salimans et al., 2016).",
"Morever, several versions of the RL-based techniques using GAN have been introduced in the literature including Seq-GAN (Yu et al., 2017), RankGAN (Lin et al., 2017), MaliGAN (Che et al., 2017), LeakGAN (Guo et al., 2017), and MaskGAN (Fedus et al., 2018).",
"Code-based Solutions AEs have been exploited along with GANs in different architectures for computer vision application such as AAE (Makhzani et al., 2015), ALI (Dumoulin et al., 2016), and HALI (Belghazi et al., 2018).",
"Similarly, AEs can be used with GANs for generating text.",
"For instance, ARAE was proposed in (Kim et al., 2017) where it employs a discrete auto-encoder to learn continuous codes based on discrete inputs with a WGAN objective to learn an implicit probabilistic model over these codes.",
"ARAE aims at exploiting GANs ability to push the generator to follow a continuous code space corresponding to the encoded real text in the AE.",
"The generated code is then used by the decoder to generate the synthetic texts.",
"A different version of the ARAE method was also introduced in (Spinks and Moens, 2018).",
"In (Subramanian et al., 2018), sentence embeddings were learned by a generator to model a pre-trained sentence representation obtained by a general-purpose sentence encoder.",
"A temperature sweeping framework was discussed in (Caccia et al., 2018) to evaluate the text generation models over whole quality-diversity spectrum and pointed out the fundamental flaws of quality-only evaluation.",
"The Variational autoencoders (VAEs) (Kingma and Welling, 2013) were also applied for text generation in (Bowman et al., 2015; Hu et al., 2017).",
"In this section, we introduce a new text-based solution by discriminating the reconstructed output of an AE (i.e., soft-text) with the synthesized generated text and we call it as Soft-GAN.",
"Here, we use the decoder of the AE as the generator to generate the synthesized texts from random noise data z .",
"The model is described in Figure 1a.",
"We demonstrate the rationale behind this soft-GAN approach, which is to make the discrimination task of the discriminator between the real and synthetic texts more difficult and consequently providing a richer signal to the generator.",
"We also introduce three LATEXT-GAN approaches where both code and text-based signals will be used in the GAN framework.",
"We introduce LATEXT-GAN I approach on top of the AAE method.",
"LATEXT-GAN II and III approaches will be proposed based on the ARAE approach.",
"In the LATEXT-GAN I and II techniques, we use separate discriminators for the code and text discriminations.",
"In the LATEXT-GAN III approach, the concatenation of the synthetic code and the synthetic text tries to mimic the concatenation of the latent code and the soft-text using a discriminator.",
"The schematic diagram of the LATEXT-GAN I, II, and III approaches are described in Figures 1b, 1c, and 1d respectively.",
"We share the parameters of the decoder of the AE to generate the synthesized text x .",
"In order to x One-hot representation of the training text x Soft-text: Reconstructed output of the AE x Synthesized generated text x [ x P x ] [ x P x ] + (1 ) [ x P x ] z N Random data drawn from a normal distribution c Latent code representation of the training text c Synthesized generated code c [ c P c ] [ c P c ] + (1 ) [ c P c ] cz [ cz P cz ] [ c P c ] + (1 ) [ z P z ] w t + c Parameters of the combination of text and code-based discriminator w t Parameters of the text-based discriminator w c Parameters of the code-based discriminator Parameters of the encoder Parameters of the decoder Parameters of the generator Gradient penalty co-efficient describes gradient Table 1: Notations that are used in this paper train these approaches, we train the auto-encoder and the GAN alternatingly by minimizing their loss functions using the WGAN-gradient penalty (WGAN-GP) approach (Gulrajani et al., 2017).",
"In each iteration, the first step is the AE training in all of these techniques followed by the GAN loss functions.",
"The autoencoder can be trained by using a cross-entropy or mean-squared loss functions.",
"The input x to the AE is mapped to a latent code representation c which is decoded to soft-text x .",
"In our experiments, we train the autoencoder using mean-squared loss min ( , ) || x x || 2 .",
"We describe the training details of the Soft-GAN, LATEXT-GAN I, II, and III methods in the following subsections where the term critic and discriminator used interchangeably.",
"The notations that are used in this paper are described in Table 1.",
"As stated, in conventional text-based discrimination approach IWGAN (Gulrajani et al., 2017), the real and the synthesized generated text are described by the one-hot and the softmax represe-nation respectively.",
"A disadvantage of this technique is that the discriminator is able to tell apart the one-hot input from the softmax input very easily.",
"One way to avoid this issue is to derive a continuous representation of words rather than their one-hot and train the discriminator to differentiate between the continuous representations.",
"We use a conventional AE to replace the one-hot representation with softmax reconstructed output ( x ), which we refer to as soft-text.",
"This soft-text representation is used as the real input to the discriminator.",
"The synthetic generated text x is obtained by inputting the random noise data z to the shared decoder.",
"We define the proposed method as Soft-GAN.",
"In each iteration, the model is trained using the following steps after the AE training step: Train the text-based discriminator f w t for k times and the decoder once using the loss L critic t to maximize the ability of the f w t to discriminate between x and x : L critic t = min ( w t , ) ( E x P x [ f w t ( x )] + E x P x [ f w t ( x )] + E x P x [( || x f w t ( x ) || 2 1) 2 ]) (3) Train the decoder based on the loss L gen to fool the discriminator with improving the representation x : L gen = min ( E x P x [ f w t ( x )] + E x P x [ f w t ( x )]) (4) Figure 2: Locus of the input vectors to the discriminator f w t for a two-word language; Left panel: IWGAN, Right panel: Soft-GAN 4.1.1 Rationale: Why Soft-GAN should Work Better than IWGAN?",
"Suppose we have a language of vocabulary size of two words: x 1 and x 2 .",
"In the IWGAN approach, the one-hot representation of these two words (as two points in the Cartesian coordinates) and the span of the generated softmax outputs (as a line segment connecting them) is depicted in the left panel of Figure 2. As evident graphically, the task of the critic is to discriminate the points from the line connecting them, which is a rather simple very easy task, which makes it more prone to vanishing gradient.",
"On the other hand, the output locus of the soft-GAN decoder would be two red line segments as depicted in Figure 2 (Right panel) instead of two points (in the one-hot case).",
"The two line segments lie on the output locus of the generator, which will make the generator more successful in fooling the critic.",
"In the LATEXT-GAN I approach (Figure 1b), we deploy two critics: one for the soft-text discrimination and the other for the latent code discrimination.",
"The text-based discriminator f w t is used to discriminate the soft-text output x with the synthesized text x which is obtained by inputting the random noise data to the shared decoder.",
"The code-based discriminator f w c is used to discriminate the random noise data z with the latent code c in the AAE setting which was explored for image generation.",
"In the AAE setting (Makhzani et al., 2015), the encoder enhances its representation to a prior distribution z .",
"It can be seen that the LATEXT-GAN I can be obtained by adding the above code-based discriminator f w c into the Soft-GAN.",
"In each iteration, the model is trained using the following steps after the AE training step and the Equation 3 step of the Soft-GAN in section 4.1: Train the code-based discriminator f w c for k times using the loss L critic c to maximize the ability of the f w c to discriminate between c and z : L critic c = min w c ( E c P c [ f w c ( c )] E z P z [ f w c ( z )]+ E cz P cz [( || cz f w c ( cz ) || 2 1) 2 ]) (5) Train the encoder and the decoder neural networks once with the loss L gen to fool the discriminators f w c and f w t with improving the representations c and x respectively: L gen = min ( , ) ( E x P x [ f w t ( x )] + E x P x [ f w t ( x )] + E z P z [ f w c ( z )] E c P c [ f w c ( c )]) (6) 4.3 LATEXT-GAN II The LATEXT-GAN II approach (Figure 1c) is similar to the LATEXT-GAN I approach except the training of the code-based discriminator f w c is done as the ARAE training.",
"The critic f w c is used to discriminate the synthetic code c with the latent code c .",
"Here, the synthetic code is formed by using the ARAE method (Spinks and Moens, 2018).",
"For each iteration in the model training, the AE training step and the Equation 3 step of the Soft-GAN in section 4.1 are carried out first.",
"Then the following two steps are performed: Train the code-based discriminator f w c for k times and the encoder once using the loss L critic c to maximize the ability of the f w c to discriminate between c and c : L critic c = min ( w c , ) ( E c P c [ f w c ( c )] + E c P c [ f w c ( c )] + E c P c [( || c f w c ( c ) || 2 1) 2 ]) (7) Train the generator and the decoder neural networks once using the loss L gen to fool the discriminators f w t and f w c with improving the representations x and c respectively: L gen = min ( , ) ( E x P x [ f w t ( x )] + E x P x [ f w t ( x )] E c P c [ f w c ( c )] + E c P c [ f w c ( c )]) (8) 4.4 LATEXT-GAN III In the third approach (Figure 1d), the combination of latent code c generated by ARAE (Spinks and Moens, 2018) and the soft-text output x of an AE is used to signal the discriminator.",
"We performed this combination by getting inspiration from an Adversarially Learned Inference (ALI) paper (Du-moulin et al., 2016) introduced for image generation.",
"We call it as LATEXT-GAN III.",
"Here, the discriminator f w t + c tries to determine which combination of the samples derive from the latent code and the soft-text, ( x , c ), and which ( x, c ) are generated from the noise z .",
"After the AE training step min ( , ) || x x || 2 , the LATEXT-GAN III model is trained using the next two steps in each iteration: Train the discriminator f w t + c for k times, the encoder and the decoder once using the loss L critic t + c to maximize the ability of the discriminator network to discriminate between ( x , c ) and ( x, c ): L critic t + c = min ( w t + c ,, ) ( E ( x,c ) P x ,P c [ f w t + c ( x, c )] + E ( x, c ) P x ,P c [ f w t + c ( x, c )]+ E ( x, c ) P x ,P c [( || ( x, c ) f w t + c ( x, c ) || 2 1) 2 ]) (9) Train the generator and the decoder once based on L gen to fool the discriminator f w t + c with improving the representation ( x, c ): L gen = min ( , ) ( E ( x, c ) P x ,P c [ f w t + c ( x, c )] + E ( x,c ) P x ,P c [ f w t + c ( x, c )]) (10) 5 Experiments 5.1 Dataset and Experimental Procedures We do our experiments on two different datasets: the Stanford Natural Language Inference (SNLI) corpus 1 , which contains 714,667 sentences for 1 https://github.com/aboev/arae-tf/tree/master/data snli training and 13323 sentences for testing, and the Image COCO 2 dataset's image caption annotations, where we sample 3 10,000 sentences as training set and another 10,000 as test set (Zhu et al., 2018).",
"We perform word-based experiments.",
"For the SNLI dataset, we use a vocabulary size of 10000 words and use the maximum sentence length of size 15.",
"For the COCO dataset, we use a vocabulary size of 5000 and perform experiments using the maximum sentence length of sizes 15 and 20.",
"We train a simple AE using one layer with 512 LSTM cells (Hochreiter and Schmidhu-ber, 1997) for both the encoder and the decoder.",
"For decoding, the output from the previous time step is used as the input to the next time step.",
"We use the hidden code c from the last time step of the encoder and applied as an additional input at each time step of decoding.",
"We normalize the code and then added an exponentially decaying noise before decoding.",
"The greedy search approach is applied to get the best output.",
"We train the autoencoder using Adam (Diederik and Jimmy, 2014) optimizer with learning rate = 0.001, 1 = 0.9, and 2 = 0.999.",
"We use CNN-based generator and discriminator with residual blocks (Gulrajani et al., 2017).",
"The tanh function is applied on the output of the ARAE generator (Kim et al., 2017).",
"We train the generator and the discriminator using Adam optimizer with learning rate = 0.0001, 1 = 0.5, and 2 = 0.9.",
"We do not apply any kind of attention mechanisms (Bahdanau et al., 2014; Zhang et al., 2018) and pre-training (Zhu et al., 2018) in our experiments.",
"We use the WGAN-GP (Gul-rajani et al., 2017) approach with 5 discriminator updates for every generator update and a gradient penalty co-efficient of =10 unlike a setup in (Zhu et al., 2018).",
"For the AAE-based experiments, we normalize the data drawn from a prior distribution z .",
"We train the models for 200000 iterations where in each iteration we sample a random batch and train the networks of the models.",
"We use the frequently used BLEU metric (Pap-ineni et al., 2002) to evaluate the word similarity between sentences and the perplexity to evaluate our techniques.",
"We calculate BLEU-n scores for n-grams without a brevity penalty (Zhu et al., 2018).",
"The results with the best BLEU-n scores in the synthesized generated texts are reported.",
"To calculate the BLEU-n scores, we generate ten batches of sentences as candidate texts, i.e. 640 sentences and use the entire test set as reference texts.",
"As the GAN-based models usually suffer from mode collapse (i.e., generating same samples over and over again), evaluating models by only BLEU metric is not appropriate.",
"So, we also calculate recently proposed self-BLEU scores for the COCO dataset using maximum sentence length of size 20 and 10k synthetic sentences to evaluate the diversity (Zhu et al., 2018).",
"Using one synthetic sentence as hypothesis and others as reference, the BLEU is calculated for every synthetic sentence, and define the average BLEU score as the self-BLEU (Zhu et al., 2018).",
"A higher self-BLEU score describe less diversity.",
"For the perplexity evaluations, we generate 100k and 10k sentences for the SNLI and the COCO datasets respectively using the models of the last iteration.",
"The BLEU score results for the n-grams of the synthesized texts are depicted in Table 2 and 3 with maximum sentence length of 15 for the SNLI and the COCO datasets respectively.",
"We also report experimental results with a longer maximum sentence length of 20 using the COCO dataset to differentiate the effectiveness of code and text-based solutions (in Table 4).",
"Furthermore, we report the BLEU and self-BLEU score results of our proposed approaches in Table 5 and 6 respectively for the COCO dataset to compare with the results of the existing approaches reported in (Zhu et al., 2018).",
"dataset and 640 synthetic sentences From tables 2, 3, and 4, we can see that our proposed approaches outperform the standalone code (AAE or ARAE) and text-based (IWGAN) solutions.",
"For the maximum sentence length of size 15 experiments, the LATEXT-GAN I is better than LATEXT-GAN II and III for shorter length text (e.g., 2,3-grams).",
"The performance of the LATEXT-GAN II and III degrades with increasing maximum sentence length to 20.",
"This is be-Model B-2 B-3 B-4 B-5 AAE 0.733 0.477 0.284 0.156 ARAE 0.676 .457 0.287 0.172 IWGAN 0.636 0.417 0.258 0.155 Soft-GAN 0.781 0.492 0.296 0.155 LATEXT-GAN I 0.758 0.496 0.294 0.155 LATEXT-GAN II 0.707 0.489 0.316 0.198 LATEXT-GAN III 0.701 0.483 0.311 0.193 Table 3: BLEU-n (B-n) scores results for COCO dataset with maximum sentence length of size 15 and 640 synthetic sentences Model B-2 B-3 B-4 B-5 AAE 0.751 0.475 0.287 0.167 ARAE 0.665 0.447 0.279 0.162 IWGAN 0.669 0.454 0.294 0.178 Soft-GAN 0.799 0.520 0.317 0.190 LATEXT-GAN I 0.77 0.503 0.314 0.185 LATEXT-GAN II 0.687 0.456 0.283 0.174 LATEXT-GAN III 0.680 0.466 0.292 0.178 Table 4: BLEU-n (B-n) scores results for COCO dataset with maximum sentence length of size 20 and over 640 synthetic sentences cause for longer sequence length experiments, the hidden code of the last time step might not be able to keep all the information from the earlier time steps.",
"On the other hand, the LATEXT-GAN I and the Soft-GAN improve their performance with increasing maximum sentence length to 20.",
"This might be because of the encoder enhances its representation better to the prior distribution, z from which the text is generated.",
"Furthermore, the Soft-GAN outperforms all the proposed LATEXT GAN approaches.",
"We also compare our proposed approaches with TextGAN (Zhang et al., 2017), some RL-based approaches (SeqGAN (Yu et al., 2017), RankGAN (Lin et al., 2017), MaliGAN (Che et al., 2017)) and MLE approach described in a benchmark platform (Zhu et al., 2018) where they apply pre-training before applying adversarial training.",
"We evaluate the BLEU and Self-BLEU score results on 10k synthetic sentences using the maximum sentence length of size 20 for the COCO dataset with a vocabulary of size 5000 as in (Zhu et al., 2018).",
"The BLEU and the self-BLEU score results are reported in Table 5 and 6 respectively.",
"From Table 5, it can be noted that our proposed approaches show comparable results to the RL-based solutions for the BLEU score results.",
"We can also see that our proposed LATEXT-GAN III approach gives lower self-BLEU scores in Table 6. From the above experimental results, we can note that LATEXT-GAN III can generate real-like and more diverse sentences compare to some approaches reported in (Zhu et al., 2018) and our other proposed approaches.",
"The forward and reverse perplexities of the LMs trained with maximum sentence length of 15 and 20 using the SNLI and the COCO datasets respectively are described in Table 7. The forward perplexities (F-PPL) are calculated by training an RNN language model (Zaremba et al., 2015) on real training data and evaluated on the synthetic samples.",
"This measure describe the fluency of the synthetic samples.",
"We also calculate the reverse perplexities (R-PPL) by training an RNNLM on the synthetic samples and evaluated on the real test data.",
"We can easily compare the performance of the LMs by using the forward perplexities while it is not possible by using the reverse perplexities as the models are trained using the synthetic samples with different vocabulary sizes.",
"The perplexities of the LMs using real data are 16.01 and 67.05 for the SNLI and the COCO datasets respectively reported in F-PPL column.",
"From the tables, we can note the models with lower forward perplexities (higher fluency) for the synthetic samples tend to have higher reverse perplexities except the AAE-based models (Cifka et al., 2018) and/or the IWGAN.",
"The forward perplexity for the IWGAN is the worst which means that the synthetic sentences of the IWGAN model are not fluent or real-like sentences.",
"For the SNLI dataset, we can note that the LATEXT-GAN II and III approaches can generate more fluent sentences than the other approaches.",
"For the COCO dataset, it can be seen that the forward perplexity of the LATEXT-GAN I (51.39) is far lower than the real data (67.05) which means the model suffers from mode-collapse.",
"The Soft-GAN, the LATEXT-GAN II and III approaches suffer less from the mode-collapse.",
"The subjective judgments of the synthetic sentences of the models trained using the COCO dataset with maximum sentence length of size 20 is reported in Table 8. We used 20 different random synthetic sentences generated by using the last iteration of each model and gave them to a group of 5 people.",
"We asked them to rate the sentences based on a 5-point Likert scale according to their fluency.",
"The raters are asked to score 1 which corresponds to gibberish, 3 corresponds to understandable but ungrammatical, and 5 correspond to naturally constructed and understandable sentences (Cifka et al., 2018).",
"From Table 8, we can note that the proposed LATEXT-GAN III approach get the higher rate compare to the other approaches.",
"From all the above different evaluations, we can note that the synthetic sentences by using the LATEXT-GAN II and III approaches are more balanced in diversity and fluency compare to the other approaches.",
"We also depicted some examples of the synthetic sentences for the COCO and the SNLI datasets in Table 9 and 10 respectively.",
"In this paper, we introduced Soft-GAN as a new solution for the main bottleneck of using GAN for generating text, which is the discontinuity of text.",
"This is based on applying soft-text to the GAN discriminator instead of the one-hot representation in the traditional approach.",
"We also introduced three LATEXT-GAN approaches by combining the reconstructed output (soft-text) of an auto-encoder to the latent code-based GAN approaches (AAE and ARAE) for text generation.",
"LATEXT-GAN I is formed on top of AAE method.",
"LATEXT-GAN II and III approaches were formed based on ARAE.",
"The LATEXT-GAN I and II approaches used separate discriminators for the synthetic text and the synthetic code discriminations.",
"The LATEXT-GAN III used the combination of the soft-text and the latent code to compare with the concatenation of the synthetic text and the synthetic code by using a single discriminator.",
"We evaluated the proposed approaches over the SNLI and the COCO datasets using subjective and objective evaluation metrics.",
"The results of the experiments are consistent with different evaluation metrics.",
"We showed the superiority of our proposed techniques, especially the LATEXT-GAN III method over other conventional GAN-based techniques which does not require pre-training.",
"Finally, we summarize our plan for future work in the following: 1. We trained the GAN using WGAN-GP approach.",
"Spectral normalization technique (Miyato et al., 2018) can be applied to stabilize the GAN training which could generate more diverse text in our settings.",
"2. The proposed approaches are the pure GAN-based techniques for text generation and they are not very powerful in generating long sentences.",
"RL or self-attention (Zhang et al., 2018) techniques can be used as a tool to ac-ARAE IWGAN Soft-GAN a motorcycle parked outside of a counter street .",
"3. We used the hidden code from the last time step of the encoder.",
"Attention mechanism can be applied for decoding.",
"Furthermore, the powerful transformer (Vaswani et al., 2017) can be applied for the auto-encoder which could improve the performance of the proposed approaches.",
"Pre-train the auto-encoder before adversarial training can also improve the performance.",
"We would like to thank Professor Qun Liu, Chief Scientist of Speech and Language Computing, Huawei Noah's Ark Lab for his valuable comments on the paper."
] | [
"abstain",
"objective",
"objective",
"abstain",
"objective",
"objective",
"objective",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"method",
"objective",
"objective",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"other",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"objective",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other"
] |
[
"This position paper investigates the problem of automated text anonymisation , which is a prerequisite for secure sharing of documents containing sensitive information about individuals.",
"We summarise the key concepts behind text anonymisation and provide a review of current approaches.",
"Anonymisation methods have so far been developed in two fields with little mutual interaction, namely natural language processing and privacy-preserving data publishing .",
"Based on a case study, we outline the benefits and limitations of these approaches and discuss a number of open challenges, such as (1) how to account for multiple types of semantic inferences, (2) how to strike a balance between disclosure risk and data utility and (3) how to evaluate the quality of the resulting anonymisation.",
"We lay out a case for moving beyond sequence labelling models and incorporate explicit measures of disclosure risk into the text anonymisation process.",
"Privacy is a fundamental human right (Art. 12 of the Universal Declaration of Human Rights) and a critical component of any free society, among others to protect citizens against social control, stigmatisation, and threats to political expression.",
"Privacy is also protected by multiple national and international legal frameworks, such as the General Data Protection Regulation (GDPR) introduced in Europe in 2018.",
"This right to privacy imposes constraints on the usage and distribution of data including personal information, such as emails, court cases or patient records.",
"In particular, personal data cannot be distributed to third parties (or even used for secondary purposes) without legal ground, such as the explicit and informed consent of the individuals to whom the data refers.",
"As informed consent is often difficult to obtain in practice, an alternative is to rely on anonymisation techniques that render personal data no longer personal.",
"Access to anonymised data is a prerequisite for research advances in many scientific fields, notably in medicine and the social sciences.",
"By facilitating open data initiatives, anonymised data can also help empower citizens and support democratic participation.",
"For structured databases, anonymisation can be enforced through well-established privacy models such as k -anonymity (Samarati, 2001; Samarati and Sweeney, 1998) or differential privacy (Dwork et al., 2006).",
"These privacy models and their implementations are, however, difficult to apply to unstructured data such as texts.",
"In fact, text anonymisation has been traditionally enforced manually, a process that is costly, time-consuming and prone to errors (Bier et al., 2009).",
"These limitations led to the development of various computational frameworks designed to extend automated or semi-automated anonymisation to the text domain (Meystre et al., 2010; Sanchez and Batet, 2016; Dernoncourt et al., 2017).",
"In this paper, we review the core concepts underlying text anonymisation, and survey the approaches put forward to solve this task.",
"These can be divided into two independent research directions.",
"On the one hand, NLP approaches rely on sequence labelling to detect and remove predefined categories of entities that are considered sensitive or of personal nature (such as names, phone numbers or medical conditions).",
"On the other hand, privacy-preserving data publishing (PPDP) approaches take the notion of disclosure risk as starting point and anonymise text by enforcing a privacy model.",
"Anonymisation consists of a sequence of transformations (such as removal or generalisation) on the document to ensure the requirements derived from the privacy model are fulfilled.",
"This position paper makes the case that none of these approaches provide a fully satisfactory account of the text anonymisation problem.",
"We illustrate their merits and shortcomings on a case study and discuss three open challenges:",
"1. How to ensure that anonymisation is robust against multiple types of semantic inferences, based on background knowledge assumed to be available to an adversary",
"; 2. How to transform the text in order to minimise the risk of disclosing personal data, yet retain as much semantic content as possible",
"; 3. How to empirically evaluate the quality (in terms of disclosure risk and utility preservation) of the resulting anonymisation.",
"We argue in this paper that NLP and PPDP approaches should be viewed as complementary (one focusing on linguistic patterns, the other on disclosure risk) and that future anonymisation approaches for text should seek to reconcile these two views.",
"In particular, we contend that text anonymisation models should combine a data-driven editor model (which selects masking operations on the document) with an adversary seeking to infer confidential attributes from edited documents.",
"The most common definition of privacy amounts to self-determination, which is the ability of individuals, groups or organisations to seclude information about themselves selectively (Westin, 1967).",
"Information related to an identified or identifiable person is known as personal data, or more precisely personally identifiable information (PII).",
"Datasets with PII cannot be released without control as this would impair the privacy of the data subjects.",
"Various legal frameworks regulate how PII can be collected and processed.",
"In particular, the General Data Protection Regulation introduced in Europe (GDPR, 2016) states that data owners must have a legal basis for processing PII, the most important one being the explicit consent of the data subjects.Alternatively, data owners may choose to anonymise the data to ensure it can no longer be attributed to specific individuals.",
"Anonymised data is no longer regulated by the GDPR and can therefore be freely released.",
"Table 1 defines some of the key terms related to data anonymisation (Elliot et al., 2016).",
"This terminology is, however, not always applied consistently, as several authors seem to use e.g. the Direct Identifier : A (set of) variable(s) unique for an individual (a name, address, phone number or bank account) that may be used to directly identify the subject.",
"Quasi Identifier : Information (such as gender, nationality, or city of residence) that in isolation does not enable re-identification, but may do so when combined with other quasi-identifiers and background knowledge.",
"Confidential Attribute : Private personal information that should not be disclosed (such as a medical condition).",
"Identity Disclosure : Unequivocal association of a record/document with a subject's identity.",
"Attribute disclosure : Unequivocal inference of a confidential attribute about a subject.",
"De-identification : Process of removing specific, predefined direct identifiers from a dataset.",
"Anonymisation : Complete and irreversible removal from a dataset of any information that, directly or indirectly, may lead to a subject's data being identified.",
"Pseudonymisation : Process of replacing direct identifiers with pseudonyms or coded values (such John Doe Patient 3).",
"The mapping between coded values and the original identifiers is then stored separately.",
"terms anonymisation and de-identification interchangeably (Chevrier et al., 2019).",
"GDPR-compliant anonymisation is the complete and irreversible process of removing personal identifiers, both direct and indirect, that may lead to an individual being identified.",
"Direct identifiers correspond to values such as names or social security numbers that directly disclose the identity of the individual.",
"However, removing direct identifiers is not sufficient to eliminate all disclosure risks, as individuals may also be re-identified by combining several pieces of information together with some background knowledge.",
"For instance, the combination of gender, birth date and postal code can be exploited to identify between 63 and 87% of the U.S. population, due to the public availability of US Census Data (Golle, 2006).",
"These types of personal identifiers are called quasi-identifiers and encompass a large variety of data types such as demographic and geospatial data.",
"Anonymisation therefore necessitates both the removal of direct identifiers and the masking of quasi-identifiers.",
"Other legal frameworks have adopted a different approach.",
"In the US, the Health Insurance Portability and Accountability Act (HIPAA) (HIPAA, 2004) lists 18 data types, such as patient's name, address or social security number, which qualify as protected health information (PHI) and should be removed from the data prior to release.",
"This process of removing predefined categories of identifiers is called de-identification 1 .",
"In other words, while HIPAA-based de-identification is limited to specific categories of direct identifiers, the anonymisation process defined by GDPR requires us to consider any direct or indirect information that, combined with background knowledge, may lead to re-identifying an individual.",
"The California Consumer Privacy Act (CCPA) introduced in 2018 adopts a position relatively similar to GDPR regarding anonymisation and asserts that any data that can be linked directly or indirectly to a consumer must be considered as personal information.",
"We highlight these legal differences as they have important implications on how anonymisation tools should be designed and evaluated (Rothstein, 2010; Hintze, 2017).",
"In particular, GDPRor CCPA-compliant anonymisation cannot be restricted to the detection of predefined classes of entities but must consider how any textual element may contribute to the disclosure risk, either directly or through semantic inferences using the background knowledge assumed to be available to an adversary.",
"Legal regulations for privacy and data protection (such as GDPR and HIPAA) typically focus on identity disclosure .",
"However, personal information may also be disclosed without re-identification.",
"In particular, attribute disclosure occurs when the value of a confidential attribute (e.g., a medical condition) can be inferred from the released data, for instance when all records sharing some characteristics (e.g. age) have the same confidential value (e.g. suffering from AIDS).",
"Identity disclosure can be seen as a special case of attribute disclosure when the confidential attribute corresponds to the person identity.",
"Data anonymisation should prevent identity disclosure but, in most cases, attribute 1 GDPR also introduces the equivalent concept of pseudonymisation , which is a useful privacy-enhancing measure, but it does not qualify as full anonymisation.",
"disclosure, which is usually more harmful from a privacy perspective, should also be avoided.",
"The removal of personal information necessarily entails some data utility loss.",
"Because the ultimate purpose behind data releases is to produce usable data, the best anonymisation methods are those that optimise the trade-off between minimising the disclosure risk and preserving the data utility.",
"NLP research on text anonymisation has focused to a large extent on the tasks of de-identification, and, to a lesser extent, pseudonymisation.",
"Deidentification is generally modelled as a sequence labelling task, similar to Named Entity Recognition (NER) (Chiu and Nichols, 2016; Lample et al., 2016).",
"Most work to date has been performed in the area of clinical NLP, where the goal is to detect Protected Health Information (PHI) in clinical texts (Meystre et al., 2010; Aberdeen et al., 2010).",
"Several shared tasks have contributed to increased activity within this area, in particular through the release of datasets manually annotated with PHIs.",
"The 2014 i2b2/UTHealth shared task (Stubbs and Uzuner, 2015) includes diabetic patient medical records annotated for an extended set of PHI categories.",
"Another influential dataset stems from the 2016 CEGS N-GRID shared task (Stubbs et al., 2017) based on psychiatric intake records, which are particularly challenging to de-identify due to a higher density of PHIs.",
"Early approaches to this task were based on rule-based and machine learning-based methods, either alone or in combination (Yogarajan et al., 2018).",
"Dernoncourt et al. (2017) and Liu et al. (2017) present the first neural models for de-identification using recurrent neural networks with character-level embeddings, achieving state-of-the-art performance on the i2b2 2014 dataset.",
"A central challenge in clinical de-identification is the availability of annotated data and the lack of universal annotation standards for PHI, making it difficult to transfer data across domains.",
"Hartman et al. (2020) examine how to adapt de-identification systems across clinical sub-domains.",
"They compare the use of labelled or unlabelled data for domain adaptation with in-domain testing and off-the-shelf de-identification tools, and show that manual labelling of even small amounts of PHI examples yields performance above existing tools.",
"Further, embeddings trained on larger amounts of in-domain, unlabelled data can be employed to adapt models to a new domain (Yang et al., 2019).",
"Finally, Friedrich et al. (2019) present an adversarial approach for learning privacy-preserving text representations, thereby allowing data to be more easily shared to train de-identification tools.",
"Outside of the clinical domain, Medlock (2006) presents a dataset of e-mails annotated with both direct identifiers (person names, transactional codes, etc.) and quasi-identifiers (organisations, course names, etc.).",
"Some annotation efforts are also geared towards de-identification for languages other than English.",
"Eder et al. (2020) present a deidentification dataset consisting of German e-mails.",
"For Swedish, Velupillai et al. (2009); Alfalahi et al. (2012) present efforts to collect and standardise annotated clinical notes, while Megyesi et al. (2018) present a pseudonymised learner language corpus.",
"For Spanish, a recently held shared task on clinical de-identification released a synthetic Spanish-language dataset (Marimon et al., 2019).",
"The problem of replacing identifiers with surrogate values is rarely addressed in NLP.",
"Most approaches simply replace detected identifiers with dummy values such as X , although some models attempt to preserve the gender of person names and provide dedicated rules for e.g. dates and addresses (Sweeney, 1996; Alfalahi et al., 2012; Eder et al., 2019; Chen et al., 2019) or to a somewhat broader range of identifiers (Volodina et al., 2020).",
"A few studies have analysed the re-identification risk of de-identified or pseudonymised texts (Car-rell et al., 2013; Meystre et al., 2014b).",
"The data utility of de-identified texts is analysed in Meystre et al. (2014a), concluding that the impact of deidentification is small, but non-negligible.",
"Beyond de-identification, several research efforts have looked at detecting and obfuscating social media texts based on quasi-identifying categories such as gender (Reddy and Knight, 2016) or race (Blod-gett et al., 2016).",
"A number of recent approaches have sought to transform latent representations of texts to protect confidential attributes, using adversarial learning (Elazar and Goldberg, 2018), reinforcement learning (Mosallanezhad et al., 2019) or encryption (Huang et al., 2020).",
"However, those methods operate at the level of latent vector representations and do not modify the texts themselves.",
"NLP approaches to anonymisation suffer from a number of shortcomings.",
"Most importantly, they are limited to predefined categories of entities and ignore how less conspicuous text elements may also play a role in re-identifying the individual.",
"For instance, the family status or physical appearance of a person may lead to re-identification but will rarely be considered as categories to detect.",
"On the other hand, those methods may also end up removing too much information, as they will systematically remove all occurrences of a given category without examining their impact on the disclosure risk or on the utility of the remaining text.",
"Privacy-preserving data publishing (PPDP) develops computational techniques for releasing data without violating privacy (Chen et al., 2009).",
"The PPDP approach to anonymisation is privacy-first: a privacy model specifying an ex ante privacy condition is enforced through one or several data masking methods, such as noise addition or generalisation of values (Domingo-Ferrer et al., 2016).",
"The first widely-accepted privacy model is k -anonymity (Samarati, 2001): a dataset satisfies k anonymity if each combination of values of quasi-identifier attributes is shared by at least k records.",
"With k > 1 , no unequivocal re-identifications are possible, thereby preventing identity disclosure.",
"Most of the attention of the PPDP community has been on structured databases.",
"Privacy models such as k -anonymity assume that datasets consist of records, each one detailing the attributes of a single individual, and that attributes have been classified beforehand into identifiers, quasi-identifiers and confidential attributes.",
"Moreover, most masking methods employed to enforce privacy models have been designed with numerical data in mind, and barely (and poorly) manage categorical or nominal attributes (Rodrguez-Garca et al., 2019).",
"Solutions for anonymising unstructured text are scarce and mostly theoretical.",
"The first approaches adapted k -anonymity for collections of documents.",
"In (Chakaravarthy et al., 2008), the authors presented the notion of K -safety.",
"They assume a collection of entities e to be protected against disclosure, each one characterised by a set of terms C ( e ) that represent their contexts (i.e. words co-occurring with e and that may be known to an attacker).",
"Then, a document D containing an entity e is said to be K -safe if the terms appearing in D also belong to the contexts of, at least, K 1 entities other than e .",
"Terms not fulfilling the property are redacted before release.",
"The privacy guarantee offered by this approach is sound because the probability of disclosing the protected entity is reduced to 1 /K .",
"However, it requires exhaustive collections of contexts for all entities to be protected, which is unfeasible.",
"It also assumes that the detection of sensitive terms is already performed.",
"This approach is only feasible for very constrained domains and non-dynamic sets of entities, such as collections of sensitive diseases, and documents with homogeneous contents.",
"Another approach built on k -anonymity is Cumby and Ghani (2011), where a multi-class classifier is trained to map input documents to (prede-fined) sensitive entities.",
"This aims at reproducing the inferences that a potential attacker may perform to disclose sensitive entities.",
"A document x referring to a sensitive entity y is then said to be K -confusable if the classifier outputs at least k classes other than y .",
"Documents are redacted via term removal or generalisation until the property is fulfilled.",
"To be applicable, sensitive entities should be static and the documents to be protected should match that of the corpus used for training.",
"Anandan et al. (2012) present a privacy model for document protection named t -plausibility.",
"They seek to generalise terms identified as sensitive according to the t -plausibility property: a protected document is said to fulfil t -plausibility if, at least, t different plausible documents can be derived by specialising the generalised terms.",
"In other words, Even though the privacy guarantee is intuitive, one can hardly predict the results for a certain t , because they depend on the document length, the number of sensitive entities and the granularity of the knowledge base employed to obtain term gen-eralisations.",
"Assuming that sensitive entities have already been detected also circumvents the most challenging task of document protection.",
"Sanchez and Batet (2016, 2017) tackles the",
"anonymisation problem from a different perspective.",
"Instead of expressing privacy guarantees in terms of probability of disclosure, it defines risk as an information theoretic characterisation of disclosed semantics.",
"The proposed privacy model, C -sanitise, states that given a document d , background knowledge K available to potential attackers, and a set of entities to protect C , d (cid:48) is the C -sanitised version of d if d (cid:48) does not contain any term t that, individually or in aggregate, unequivocally disclose the semantics encompassed by any entity in C by exploiting K .",
"The semantic disclosure incurred by t on any entity in C is quantified as their pointwise mutual information (Anandan and Clifton, 2011) measured from their probability of (co-)occurrence in the Web, which is assumed to represent the most comprehensive knowledge source ( K ) available to attackers (Chow et al., 2008).",
"This approach is able to automatically detect terms that may cause disclosure and can encompass dynamic collections of entities to protect.",
"Obtaining accurate probabilities of co-occurrence from large corpora is, however, costly.",
"Differential privacy (DP) is a privacy model that defines anonymisation in terms of randomised algorithms for computing statistics from the data (Dwork et al., 2006).",
"DP provides guarantees that the statistics cannot be used to learn anything substantial about any individual.",
"However, the goal of DP is to produce randomised responses to controlled queries, and applying it to data publishing leads in poor data utility (Domingo-Ferrer et al., 2021).",
"DP cannot be directly employed to edit out personal information from text while preserving the content of the rest of the document, and is thus outside the scope of this paper.",
"However, DP can be employed for other privacy-related tasks such as in producing synthetic texts (Fernandes et al., 2018; Bommasani et al., 2019), deriving differentially-private word representations (Feyisetan et al., 2019) or learning machine learning models with privacy guarantees (McMahan et al., 2017).",
"Compared to NLP approaches, proposals built around privacy models allow defining what should be protected and how.",
"This not only allows enforcing privacy requirements, but also makes it possible to tailor the trade-off between data protection and utility preservation.",
"On the negative side, PPDP methods are hampered by practical constraints, either because of their unfeasible assumptions, their cost or their dependency on external resources, such as large knowledge repositories, training corpora or social-scale probabilities.",
"To the exception of C -sanitise, PPDP methods also assume that sensitive entities have already been detected in a preprocessing step.",
"Furthermore, PPDP approaches typically reduce documents to flat collections of terms, which facilitates the formalisation of the data semantics for each document, but also ignores how terms are influenced by their context of occurrence (which is important to resolve potential ambiguities) and are interconnected through multiple layers of linguistic structures.",
"To investigate the performance of NLP and PPDP methods, we carried out a case study where 5 annotators annotated 8 English Wikipedia page extracts.",
"The extracts were all biographies from the 20th century scientists category, with a length between 300 and 500 characters.",
"Wikipedia articles are generic enough not to require expert domain knowledge and are commonly adopted for the evaluation of PPDP approaches (Chow et al., 2008; Sanchez and Batet, 2016).",
"Their informativeness and density make them particularly challenging to anonymise.",
"The annotation task 2 consisted of tagging text spans that could re-identify a person either directly or in combination with publicly available knowledge.",
"The annotators were instructed to prevent identity disclosure but otherwise seek to preserve as much semantic content as possible.",
"The five annotators were researchers without previous experience in text anonymisation.",
"The guidelines were left intentionally general to examine how annotators interpret and carry out the complex task of anonymisation and not only de-identification where multiple correct solutions are possible.",
"The task is challenging since these biographies relate to publicly known scientists for which extensive background material can be found online.",
"Inter-rater agreement between the five annotators for the binary masking decisions was low: 0.68 average observed agreement and Krippendorff's = 0 .",
"36 .",
"This low agreement illustrates that, contrary to traditional sequence labelling, several 2 The guidelines and annotated data are publicly available: https://github.com/IldikoPilan/anonymisation_ACL2021 solutions may exist for a given anonymisation problem.",
"Direct identifiers were generally agreed on, while quasi-identifiers such as professions and roles (e.g. founder ) triggered mixed decisions.",
"To shed further light on the anonymisation problem, we go on to compare the performance of existing tools with the manual annotations: A neural NER model (Honnibal and Montani, 2017) trained on the OntoNotes corpus with 18 entity types (Weischedel et al., 2011).",
"All detected entities were masked.",
"3 Presidio 4 , a data protection & anonymisation API developed by Microsoft and relying on a combination of template-based and machine learning models to detect and mask PII.",
"The C -sanitise privacy model (Sanchez and Batet, 2016) described in Section 4, where the required probabilities of (co-)occurrence of terms were gathered from Google.",
"To account for the multiple ways to anonymise a document, we measured the performance of the three tools above with micro-averaged scores over all annotators and texts.",
"Note that, while micro-averages are typically used in NLP to aggregate measures over output classes, we are here computing an average over multiple ground truths .",
"For each annotator q Q and document d D , let Y qd correspond to token indices masked by q in d , and Y d to the token indices masked by the anonymisation tool.",
"Precision and recall are then computed as: P = (cid:80) d D (cid:80) q Q | Y d Y qd | | Q | (cid:80) d D | Y d | (1) R = (cid:80) d D (cid:80) q Q | Y d Y qd | (cid:80) d D (cid:80) q Q | Y qd | (2) An anonymisation tool will thus obtain a perfect micro-averaged recall if it detects all tokens masked by at least one annotator.",
"The metric implicitly assigns a higher weight to tokens masked by several annotators in other words, if all five annotators mask a given token, not detecting it will have a 3 Although NERs do not specifically focus on data protection, they are often used to de-identify generic texts (except clinical notes, for which domain-specific tools are available).",
"4 https://github.com/microsoft/presidio P R F 1 NER IOB-Exact 0.5 0.49 0.47 IOB-Partial 0.61 0.48 0.54 Binary 0.64 0.51 0.57 Presidio IOB-Exact 0.63 0.22 0.33 IOB-Partial 0.74 0.24 0.36 Binary 0.76 0.25 0.38 C -sanitise IOB-Exact 0.51 0.66 0.57 IOB-Partial 0.57 0.68 0.62 Binary 0.58 0.69 0.63 Table 2: Micro-averaged scores for NER, C -sanitise and Presidio over all texts for annotators a1, a4, a5.",
"larger impact on the recall than a token masked by a single annotator.",
"Recall expresses the level of privacy protection while precision is related to the degree of utility preservation.",
"The most consistent manual annotations (a1, a4, a5) were compared to system outputs at token level both as binary labels ( keep or mask ) and as IOB tags expressing annotation spans 5 .",
"To go beyond token-level comparisons, we also computed a partial match score for IOB tags, by assigning a weight of 0.5 to partial true positives (i.e. I instead of B tags and vice versa), as in the SemEval 2013 evaluation scheme (Diab et al., 2013).",
"Table 2 presents the micro-averaged precision, recall and F 1 scores obtained for the three systems.",
"C -sanitise provided the best performance in terms of recall and F 1 score, while precision was higher for NER and Presidio.",
"Figure 1 illustrates the average observed agreement for all annotators and tools on the binary, token-level masking decisions.",
"Observed agreement with annotators was, on average, approximately the same for NER and C sanitise, ca.",
"75% and ca.",
"77% for Presidio.",
"We can distinguish two subgroups among the annotators in terms of mutual agreement, namely (a2, a3) and (a1, a4, a5) with 79% and 83% agreement respectively.",
"Divergent choices in entity segmentation e.g. splitting a consecutive mention of department and university or not was found to play an important role in the differences among annotators, and between annotators and systems.",
"5 B(eginning) represents the first token of a span, I(nside) the subsequent tokens, and O(ut) is the label assigned to all tokens that are not part of a span.",
"The proportion of masked tokens was around 50% for a1, a2 and C -sanitise, < 30% for a3, a4, a5 and NER and 11% for Presidio.",
"We conducted a detailed error analysis to gain a better understanding about the advantages and shortcoming of the three anonymisation tools described above.",
"The NER tool masked generic entities such as Second World War , although this term was not masked by any annotator or by C -sanitise.",
"In the phrase a Christian charity dedicated to helping the people of Cambodia , most annotators did not mask any tokens, while NER masked both Christian and Cambodia , and C -sanitise Christian charity .",
"On the other hand, NER ignored terms that were highly correlated with the individual and should have been masked, such as book titles authored by the person.",
"Another interesting error can be found in the sentence In 1964 and 1965 he was a Visiting Professor at the University of WisconsinMadison on a Fulbright Program fellowship where the university was masked by most annotators but left untouched by C -sanitise (as the university does not frequently co-occur with this person in web documents).",
"Presidio had the lowest recall and ignored the majority of quasi-identifiers (including organisations).",
"Consequently, Presidio's masking should be considered a de-identification process rather than full anonymisation.",
"See Appendix A for an annotated example document.",
"The case study illustrates a number of issues facing current methods for text anonymisation.",
"We discuss below three overarching challenges: the need to protect against several types of semantic inferences , the formalisation of possible masking operations to apply on documents, and, last but not least, the design of evaluation metrics to empirically assess the anonymisation performance.",
"Most works on PPDP address anonymisation from a statistical perspective (Batet and Sanchez, 2018).",
"Their main focus is on the statistical properties of (numerical) data and how these may allow attackers to re-identify an individual or uncover confidential data.",
"However, the most harmful inferences in text documents are semantic in nature that is, they are based on the actual meaning expressed in the texts instead of their statistical distributions.",
"NLP approaches do not explicitly account for semantic inferences, and simply mask all text spans belonging to predefined categories irrespective of their impact on the disclosure risk.",
"In many PPDP approaches (Chakaravarthy et al., 2008; Cumby and Ghani, 2011; Anandan et al., 2012), the adversary is assumed to know sets of attributes associated with each entity, and semantic inferences thus correspond to combinations of attributes enabling the adversary to single out the entity to protect.",
"However, in most practical settings, human adversaries do not have access to the original documents.",
"They do, however, make extensive use of external background knowledge available, e.g., on the web.",
"Such external background knowledge is captured in Sanchez and Batet (2016, 2017) using (co-)occurrence counts of terms on the web.",
"Other types of semantic inferences may be taken into account, such as lexical and taxonomic relations (synonyms, antonyms, hypernyms, hy-ponyms) between words or entities.",
"For instance, the word AIDS will lead to the disclosure of the confidential attribute immune system disease.",
"In Sanchez and Batet (2017), those relations are taken into account by enforcing consistency between known taxonomic relations and the information content of each term.",
"Semantic relations can, however, extend beyond individual terms and exploit various syntactic patterns, as shown in e.g. textual entailment (Dagan et al., 2013).",
"Semantic inferences can also be drawn from structured data sources such as census data or medical knowledge bases.",
"In the Wisconsin-Madison example above, the search for Fullbright recipients at that university in 1964-65 would likely allow the individual to be re-identified.",
"Such logical inferences require specifying which background knowledge may be available to a potential intruder and would be relevant for a given text domain.",
"Although semantic inferences have been studied in isolation in previous work, how to integrate and chain together those inferential mechanisms into a single framework remains an open question.",
"Formally, assuming a document d transformed into d (cid:48) by an anonymisation tool in charge of protecting a set of entities C , one can design an adversary model adv ( c, d (cid:48) , K ) seeking to predict, based on document d (cid:48) and background knowledge K , whether the entity c was part of the original document d or not.",
"Ideally, this adversary model should allow for multiple types of semantic inferences based on domain-relevant background knowledge (word co-occurrences in text corpora, taxonomic relations, knowledge bases, etc.).",
"NLP approaches to text anonymisation essentially focus on detecting personal identifiers and rarely discuss what to do with the detected text spans, generally assuming that those should be either redacted or replaced with coded values.",
"This approach may, however, lead to unnecessary loss of data utility, as it is often possible to replace quasi-identifiers by more generic (but still informative) entries.",
"How to transform a dataset to balance disclosure risk and data utility is a central research question in privacy-preserving data publishing.",
"Various transformations have been put forward: one can remove values altogether, generalise them into less detailed categories, or perturb the values by adding noise or swapping them (Domingo-Ferrer et al., 2016).",
"In the text domain, several PPDP approaches have shown how to generalise terms using ontologies (Anandan et al., 2012; Sanchez and Batet, 2016).",
"However, these approaches are intrinsically limited to entities present in such ontologies, and are difficult to extend to more generic text entries.",
"Another possible transformation is to introduce noise into the text.",
"The perturbation of data points through noise is a common type of transformation in data privacy (McSherry and Talwar, 2007).",
"This idea of perturbation has notably been applied to word embeddings (Feyisetan et al., 2019), but it produces perturbed word distributions rather than readable documents.",
"Semantic noise has also been defined to perturb nominal values (Rodrguez-Garca et al., 2017).",
"Formally, one can define an editor model edit ( d ) taking a document d and outputting an edited document d (cid:48) after applying a sequence of masking operations.",
"This model can be e.g. expressed as a neural text editing model (Mallinson et al., 2020).",
"Its optimisation objective should include both minimising the risk of letting an adversary disclose at least some of the protected entities C through semantic inferences (as described in the previous section) and minimising the number of masking operations necessary to map d to d (cid:48) .",
"Let D be a set of documents transformed into D by an anonymisation tool.",
"How can we empirically evaluate the quality of the anonymisation?",
"The most common method is to rely on human annotators to manually mark identifiers in each document d D , and then compare the system output with those human-annotated identifiers using IR-based metrics such as precision, recall and F 1 score.",
"The recall can be seen as reflecting the degree of protection of the confidential information, while the precision is correlated with the remaining data utility of the documents D (cid:48) .",
"This evaluation procedure has a number of shortcomings.",
"As observed in our case study, there may be several equally valid solutions to a given anonymisation problem.",
"Furthermore, IR-based metrics typically associate uniform weights to all identifiers, without taking into account the fact that some identifiers may have a much larger influence on the disclosure risk than others.",
"For instance, failing to detect a full person name is more harmful than failing to detect a quasi-identifier.",
"Finally, such type of evaluation procedure is limited to the detection of direct and indirect identifiers, but ignore the subsequent step of transforming the textual content.",
"Evaluating the quality of masking operations is tightly coupled with the problem of evaluating how data utility is preserved through the anonymisation process (Sanchez and Batet, 2016; Rodrguez-Garca et al., 2019).",
"However, how to empirically measure this data utility remains an open question.",
"An alternative which has so far received little attention is to conduct so-called privacy attacks on the edited documents D (cid:48) .",
"This can be achieved by e.g. providing the documents D (cid:48) to human experts and instruct them to re-identify those documents with the help of any information source at their disposal.",
"Such human evaluations can help uncover weaknesses in the anonymisation model (such as semantic inferences that had been over-looked).",
"However, they are also costly and time-consuming, as they must be repeated for each version of the anonymisation model.",
"This position paper discussed a number of unresolved challenges in text anonymisation.",
"Text anonymisation is defined as the removal or masking of any information that, directly or indirectly, may lead to an individual being identified (given some assumptions about the available background knowledge).",
"As illustrated in our case study, text anonymisation is a difficult task (also for human annotators), which goes beyond the mere detection of predefined categories of entities and may allow for several solutions.",
"How to properly anonymise text data is a problem of great practical importance.",
"In particular, access to high-quality data is a key ingredient for most scientific research, and the lack of good anonymisation methods for text documents (allowing data to be shared without compromising privacy) is a limiting factor in fields such as medicine, social sciences, psychology and law.",
"We surveyed two families of approaches with complementary strengths and weaknesses: NLP models are well-suited to capture textual patterns but lack any consideration of disclosure risk, while PPDP approaches provide principled accounts of privacy requirements, but view documents as bag-of-terms void of linguistic structure.",
"As outlined in the last section, a promising approach is to couple a neural editor model (apply-ing transformations to the text) with an adversary model (capturing possible semantic inferences to uncover confidential entities).",
"These two models can be optimised jointly using adversarial training, taking into account the necessary balance between disclosure risk and utility preservation.",
"Finally, we lay out a case for designing evaluation metrics that go beyond traditional IR-based measures, and account in particular for the fact that some identifiers and quasi-identifiers are more important than others in terms of their influence on the disclosure risk.",
"We acknowledge support from the Norwegian Research Council (CLEANUP project 6 , grant nr. 308904), the Government of Catalonia (ICREA Acad ` emia Prize to D. S anchez and grant 2017 SGR 705) and the Spanish Government (project TIN2016-80250-R Sec-MCloud)."
] | [
"abstain",
"method",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"other"
] |
[
"Abstract",
"Traditional approaches to semantic parsing (SP) work by training individual models for each available parallel dataset of text-meaning pairs.",
"In this paper, we explore the idea of polyglot semantic translation, or learning semantic parsing models that are trained on multiple datasets and natural languages.",
"In particular, we focus on translating text to code signature representations using the software component datasets of Richardson and Kuhn (2017a,b).",
"The advantage of such models is that they can be used for parsing a wide variety of input natural languages and output programming languages, or mixed input languages, using a single unified model.",
"To facilitate modeling of this type, we develop a novel graph-based decoding framework that achieves state-of-the-art performance on the above datasets, and apply this method to two other benchmark SP tasks.",
"Recent work by Richardson and Kuhn (2017a,b); Miceli Barone and Sennrich (2017) considers the problem of translating source code documentation to lower-level code template representations as part of an effort to model the meaning of such documentation.",
"Example documentation for a number of programming languages is shown in Figure 1, where each docstring description in red describes a given function (blue) in the library.",
"While capturing the semantics of docstrings is in general a difficult task, learning the translation from descriptions to formal code representations (e.g., formal representations of functions) is proposed as a reasonable first step towards learning more general natural language understanding models in the software domain.",
"Under this approach, one can view a software library, or API, as a kind of parallel translation corpus for studying text code or code text translation.",
"Richardson and Kuhn (2017b) extracted the standard library documentation for 10 popular programming languages across a number of natural languages to study the problem of text to function signature translation.",
"Initially, these datasets were proposed as a resource for studying semantic parser induction (Mooney, 2007), or for building models that learn to translate text to formal meaning representations from parallel data.",
"In followup work (Richardson and Kuhn, 2017a), they proposed using the resulting models to do automated question-answering (QA) and code retrieval on target APIs, and experimented with an additional set of software datasets built from 27 open-source Python projects.",
"As traditionally done in SP (Zettlemoyer and Collins, 2012), their approach involves learning individual models for each parallel dataset or language pair, e.g., ( en , Java ), ( de , PHP ), and ( en , Haskell ).",
"Looking again at Figure 1, we notice that while programming languages differ in terms of representation conventions, there is often overlap between the functionality implemented and naming in these different languages (e.g., the max 720 function), and redundancy in the associated linguistic descriptions.",
"In addition, each English description (Figure 1.1-1.3) describes max differently using the synonyms greater, maximum, largest .",
"In this case, it would seem that training models on multiple datasets, as opposed to single language pairs, might make learning more robust, and help to capture various linguistic alternatives.",
"With the software QA application in mind, an additional limitation is that their approach does not allow one to freely translate a given description to multiple output languages, which would be useful for comparing how different programming languages represent the same functionality.",
"The model also cannot translate between natural languages and programming languages that are not observed during training.",
"While software documentation is easy to find in bulk, if a particular API is not already documented in a language other than English (e.g., Haskell in de ), it is unlikely that such a translation will appear without considerable effort by experienced translators.",
"Similarly, many individual APIs may be too small or poorly documented to build individual models or QA applications, and will in some way need to bootstrap off of more general models or resources.",
"To deal with these issues, we aim to learn more general text-to-code translation models that are trained on multiple datasets simultaneously.",
"Our ultimate goal is to build polyglot translation models (cf.",
"Johnson et al. (2016)), or models with shared representations that can translate any input text to any output programming language, regardless of whether such language pairs were encountered explicitly during training.",
"Inherent in this task is the challenge of building an efficient polyglot decoder, or a translation mechanism that allows such crossing between input and output languages.",
"A key challenge is ensuring that such a decoder generates well-formed code representations, which is not guaranteed when one simply applies standard decoding strategies from SMT and neural MT (cf.",
"Cheng et al. (2017)).",
"Given our ultimate interest in API QA, such a decoder must also facilitate monolingual translation, or being able to translate to specific output languages as needed.",
"To solve the decoding problem, we introduce a new graph-based decoding and representation framework that reduces to solving shortest path problems in directed graphs.",
"We investigate several translation models that work within this framework, including traditional SMT models and models based on neural networks, and report state-of-the-art results on the technical documentation task of Richardson and Kuhn (2017b,a).",
"To show the applicability of our approach to more conventional SP tasks, we apply our methods to the GeoQuery domain (Zelle and Mooney, 1996) and the Sportscaster corpus (Chen et al., 2010).",
"These experiments also provide insight into the main technical documentation task and highlight the strengths and weaknesses of the various translation models being investigated.",
"Our approach builds on the baseline models introduced in Richardson and Kuhn (2017b) (see also Deng and Chrupaa (2014)).",
"Their work is positioned within the broader SP literature, where traditionally SMT (Wong and Mooney, 2006a) and parsing (Zettlemoyer and Collins, 2009) methods are used to study the problem of translating text to formal meaning representations, usually centering around QA applications (Berant et al., 2013).",
"More recently, there has been interest in using neural network approaches either in place of (Dong and Lapata, 2016; Kocisky et al., 2016) or in combination with (Misra and Artzi, 2016; Jia and Liang, 2016; Cheng et al., 2017) these traditional models, the latter idea we look at in this paper.",
"Work in NLP on software documentation has accelerated in recent years due in large part to the availability of new data resources through web-sites such as StackOverflow and Github (cf.",
"Al-lamanis et al. (2017)).",
"Most of this recent work focuses on processing large amounts of API data in bulk (Gu et al., 2016; Miceli Barone and Sennrich, 2017), either for learning longer executable programs from text (Yin and Neubig, 2017; Rabi-novich et al., 2017), or solving the inverse problem of code to text generation (Iyer et al., 2016; Richardson et al., 2017).",
"In contrast to our work, these studies do not look explicitly at translating to target APIs, or at non-English documentation.",
"The idea of polyglot modeling has gained some traction in recent years for a variety of problems (Tsvetkov et al., 2016) and has appeared within work in SP under the heading of multilingual SP (Jie and Lu, 2014; Duong et al., 2017).",
"A related topic is learning from multiple knowledge sources or domains (Herzig and Berant, 2017), which is related to our idea of learning from multiple APIs.",
"Problem Formulation Throughout the paper, we refer to target code representations as API components .",
"In all cases, components will consist of formal representations of functions, or function signatures (e.g., long max(int a, int b) ), which include a function name ( max ), a sequence of arguments ( int a, int b ), and other information such as a return value ( long ) and namespace (for more details, see Richardson (2018)).",
"For a given API dataset D = { ( x i , z i ) } ni =1 of size n , the goal is to learn a model that can generate exactly a correct component sequence z = ( z 1 , .., z | z | ) , within a finite space C of signatures (i.e., the space of all defined functions), for each input text sequence x = ( x 1 , ..., x | x | ) .",
"This involves learning a probability distribution p ( z | x ) .",
"As such, one can think of this underlying problem as a constrained MT task.",
"In this section, we describe the baseline approach of Richardson and Kuhn (2017b).",
"Technically, their approach has two components: a simple word-based translation model and task specific decoder, which is used to generate a k -best list of candidate component representations for a given input x .",
"They then use a discriminative model to rerank the translation output using additional non-world level features.",
"The goal in this section is to provide the technical details of their translation approach, which we improve in Section",
"4. 3.1 Word-based Translation Model The translation models investigated in Richardson and Kuhn (2017b) use a noisy-channel formulation where p ( z | x ) p ( x | z ) p ( z ) via Bayes rule.",
"By assuming a uniform prior on output components, p ( z ) , the model therefore involves estimating p ( x | z ) , which under a word-translation model is computed using the following formula: p ( x | z ) = P a A p ( x , a | z ) , where the summation ranges over the set of all many-to-one word alignments A from x z , with |A| equal to ( | z | + 1) | x | .",
"They investigate various types of sequence-based alignment models (Och and Ney, 2003), and find that the classic IBM Model 1 outperforms more complex word models.",
"This model factors in the following way and assumes an independent word generation process: p ( x | z ) = 1 |A| | x | Y j =1 | z | X i =0 p t ( x j | z i ) (1) where each p t defines a multinomial distribution over a given component term z for all words x .",
"The decoding problem for the above translation model involves finding the most likely output z , which requires solving an arg max z over Equation",
"1. In the general case, this problem is known to be N P -complete for the models under consideration (Knight, 1999) largely due to the large space of possible predictions z .",
"Richardson and Kuhn (2017b) avoid these issues by exploiting the finiteness of the target component search space (an idea we also pursue here and discuss more be-low), and describe a constrained decoding algorithm that runs in time O ( |C| log |C| ) .",
"While this works well for small APIs, it becomes less feasible when dealing with large sets of APIs, as in the polyglot case, or with more complex semantic languages typically used in SP (Liang, 2013).",
"To improve the baseline translation approach used previously (Section 3.1), we pursue a graph based approach.",
"Given the formulation above and the finiteness of our prediction space C , our approach exploits the fact that we can represent the complete component search space for any set of APIs as a directed acyclic finite-state automaton ( DAFSA ), such as the one shown graphically in Figure",
"2. The underlying graph is constructed by concatenating all of the component representations for each API of interest and applying standard finite-state construction and minimization techniques (Mohri, 1996).",
"Each path in the resulting compact automaton is therefore a well-formed component representation.",
"Using an idea from Johnson et al. (2016), we add to each component representation an artificial token that identifies the output programming language or library.",
"For example, the two edges from the initial state 0 in Figure 2 are labeled as 2C and 2Clojure , which identify the C and Clojure programming languages respectively.",
"All paths starting from the right of these edges are therefore valid paths in each respective programming language.",
"The paths starting from the initial state 0 , in contrast, correspond to all valid component representations in all languages.",
"Decoding reduces to the problem of finding a path for a given text input x .",
"For example, given the input the ceiling of a number , we would want to find the paths corresponding to the component translations numeric math ceil arg (in C) and algo math ceil x (in Clojure) in the graph shown in Figure",
"2. Using the trick above, our setup facilitates both monolingual decoding, i.e., generating components specific to a particular output language (e.g., the C language via the path shown in bold), and polyglot decoding, i.e., generating any output language by starting at the initial state 0 (e.g., C and Clojure).",
"We formulate the decoding problem using a variant of the well-known single source shortest path (SSSP) algorithm for directed acyclic graphs ( DAG s) (Johnson (1977)).",
"This involves a graph G = ( V, E ) (nodes V and labeled edges E , see graph in Figure 2), and taking an off-line topological sort of the graph's vertices.",
"Using a data structure d R | V | (initialized as | V | , as shown in Figure 2), the standard SSSP algorithm (which is the forward update variant of the Viterbi algorithm (Huang, 2008)) works by searching forward through the graph in sorted order and finding for each node v an incoming labeled edge u , with label z , that solves the following recurrence: d ( v ) = min ( u,z ):( u,v,z ) E n d ( u ) + w ( u, v, z ) o (2) where d ( u ) is shortest path score from a unique source node b to the incoming node u (computed recursively) and w ( u, v, z ) is the weight of the particular labeled edge.",
"The weight of the resulting shortest path is commonly taken to be the sum of the path edge weights as given by w , and the output translation is the sequence of labels associated with each edge.",
"This algorithm runs in linear time over the size of the graph's adjacency matrix ( Adj ) and can be extended to find k SSSPs.",
"In the standard case, a weighting function w is pro-Algorithm 1 Lexical Shortest Path Search Input: Input x of size n , DAG G = ( V, E ) , lexical translation function p t , source node b with initial score o .",
"vided by assuming a static weighted graph.",
"In our translation context, we replace w with a translation model, which is used to dynamically generate edge weights during the SSSP search for each input x by scoring the translation between x and each edge label z encountered.",
"Given this general framework, many different translation models can be used for scoring.",
"In what follows, we describe two types of decoders based on lexical translation (or unigram) and neural sequence models.",
"Technically, each decoding algorithm involves modifying the standard SSSP search procedure by adding an additional data structure s to each node (see Figure 2), which is used to store information about translations (e.g., running lexical translation scores, RNN state information) associated with particular shortest paths.",
"By using these two very different models, we can get insight into the challenges associated with the technical documentation translation task.",
"As we show in Section 6, each model achieves varying levels of success when subjected to a wider range of SP tasks, which reveals differences between our task and other SP tasks.",
"the weighting function, which can be learned ef-ficiently off-line using the EM algorithm.",
"When attempting to use the SSSP procedure to compute this equation for a given source input x , we immediately have the problem that such a computation requires a complete component representation z (Knight and Al-Onaizan, 1998).",
"We use an approximation 1 that involves ignoring the normalizer |A| and exploiting the word independence as-sumption of the model, which allows us to incrementally compute translation scores for individual source words given output translations corresponding to shortest paths during the SSSP search.",
"The full decoding algorithm in shown in Algorithm 1, where the red highlights the adjustments made to the standard SSSP search as presented in Cormen et al. (2009).",
"The main modification involves adding a data structure s R | V || x | (ini-tialized as 0 . 0 | V || x | at line 2) that stores a running sum of source word scores given the best translations at each node, which can be used for computing the inner sum in Equation",
"1. For example, given an input utterance ceiling function , s 6 in Figure 2 contains the independent translation scores for words ceiling and function given the edge label numeric and p t .",
"Later on in the search, these scores are used to compute s 7 , which will provide translation scores for each word given the edge sequence numeric math .",
"Taking the product over any given s j (as done in line 7 to get score ) will give the probability of the shortest path translation at the particular point j .",
"Here, the transformation into log space is used to find the minimum incoming path.",
"Standardly, the data structure can be used to retrieve the shortest path back to the source node b (done via the FINDPATH method).",
"Our second set of models use neural networks to compute the weighting function in Equation",
"2. We use an encoder-decoder model with global attention (Bahdanau et al., 2014; Luong et al., 2015), which has the following two components: Encoder Model The first is an encoder network, which uses a bi-directional recurrent neural network architecture with LSTM units (Hochre-iter and Schmidhuber, 1997) to compute a sequence of forward annotations or hidden states ( h 1 , ..., h | x | ) and a sequence of backward hid-1 Details about the approx.",
"are provided as supp.",
"material.",
"den states ( h , ..., h | x | ) for the input sequence ( x 1 , ..., x | x | ) .",
"Standardly, each word is then represented as the concatenation of its forward and backward states: h j = [ h j , h j ] .",
"Decoder Model The second component is a decoder network, which directly computes the conditional distribution p ( z | x ) as follows: p ( z | x ) = | z | X i =1 log p ( z i | z <i , x ) (3) p ( z i | z <i , x ) softmax ( f ( , z <i , x )) (4) where f is a non-linear function that encodes information about the sequence z <i and the input x given the model parameters .",
"We can think of this model as an ordinary recurrent language model that is additionally conditioned on the input x using information from our encoder.",
"We implement the function f in the following way: f ( , z <i , x ) = W o i + b o (5) i = MLP ( c i , g i ) (6) g i = LSTM dec ( g i 1 , E outz i 1 , c i ) (7) where MLP is a multi-layer perceptron model with a single hidden layer, E out R | dec | e is a randomly initialized embedding matrix, g i is the decoder's hidden state at step i , and c i is a context-vector that encodes information about the input x and the encoder annotations.",
"Each context vector c i in turn is a weighted sum of each annotation h j against an attention vector i,j , or c i = P | x | j =1 i,j h j , which is jointly learned using an additional single layered multi-layer perceptron defined in the following way: i,j exp( e i,j ); e i,j = MLP ( g i 1 , h j ) (8) Lexical Bias and Copying In contrast to standard MT tasks, we are dealing with a relatively low-resource setting where the sparseness of the target vocabulary is an issue.",
"For this reason, we experimented with integrating lexical translation scores using a biasing technique from Arthur et al. (2016).",
"Their method is based on the following computation for each token z i : bias i = p t 0 ( z 1 | x 1 ) . . . p t 0 ( z 1 | x | x | ) ... ... ... p t 0 ( z | dec | | x 1 ) . . . p t 0 ( z | dec | | x | x | ) i, 1 ... i, | x | 724 Algorithm 2 Neural Shortest Path Search Input: Input x , DAG G , neural parameters and non-linear function f , beam size l , source node b with init.",
"The first matrix uses the inverse ( p t 0 ) of the lexical translation function p t already introduced to compute the probability of each word in the target vocabulary dec (the columns) with each word in the input x (the rows), which is then weighted by the attention vector from Equation 8.",
"bias i is then used to modify Equation 5 in the following way: f bias ( , z <i , x ) = W o i + b o + log( bias i + (cid:15) ) where (cid:15) is a hyper-parameter that helps to preserve numerical stability and biases more heavily on the lexical model when set lower.",
"We also experiment with the copying mechanism from Jia and Liang (2016), which works by allowing the decoder to choose from a set of latent actions, a j , that includes writing target words according to Equation 5, as done standardly, or copying source words from x , or copy [ x i ] according to the attention scores in Equation 8.",
"A distribution is then computed over these actions using a softmax function and particular actions are cho-sen accordingly during training and decoding.",
"Decoding and Learning The full decoding procedure is shown in Algorithm 2, where the differences with the standard SSSP are again shown in red.",
"We change the data structure s to contain the decoder's RNN state at each node.",
"We also modify the scoring (line 7, which uses Equation 4) to consider only the top l edges or translations at that point, as opposed to imposing a full search.",
"When l is set to 1, for example, the procedure does a greedy search through the graph, whereas when l is large the procedure is closer to a full search.",
"works like an ordinary neural decoder with the difference that each decision (i.e., new target-side word translation) is constrained (in line 7) by the transitions allowed in the underlying graph in order to ensure wellformedness of each component output.",
"Standardly, we optimize these models using stochastic gradient descent with the objective of finding parameters that minimize the negative conditional log-likelihood of the training dataset.",
"Our framework facilitates both monolingual and polyglot decoding.",
"In the first case, the decoder requires a graph associated with the output semantic language (more details in next section) and a trained translation model.",
"The latter case requires taking the union of all datasets and graphs (with artificial identifier tokens) for a collection of target datasets and training a single model over this global dataset.",
"In this setting, we can then decode to a particular language using the language identifiers or decode without specifying the output language.",
"The main focus in this paper is investigating polyglot decoding, and in particular the effect of training models on multiple datasets when translating to individuals APIs or SP datasets.",
"When evaluating our models and building QA applications, it is important to be able to generate the k best translations.",
"This can easily be done in our framework by applying standard k SSSP algorithms (Brander and Sinclair, 1995).",
"We use an implementation of the algorithm of Yen (1971), which works on top of the SSSP algorithms introduced above by iteratively finding deviating or branching paths from an initial SSSP (more details provided in supplementary materials).",
"We experimented with two main types of resources: 45 API documentation datasets and two multilingual benchmark SP datasets.",
"In the former case, our main objective is to test whether training polyglot models (shown as polyglot in Tables 1-2) on multiple datasets leads to an improvement when compared to training individual monolingual models (shown as monolingual in Tables 1-2).",
"Experiments involving the latter datasets are meant to test the applicability of our general graph and polyglot method to related SP tasks, and are also used for comparison against our main technical documentation task.",
"Technical API Docs The first dataset includes the Stdlib and Py27 datasets of Richardson and Kuhn (2017b,a), which are publicly available via Richardson (2017).",
"Stdlib consists of short description and function signature pairs for 10 programming languages in 7 languages, and Py27 contains the same type of data for 27 popular Python projects in English mined from Github.",
"We also built new datasets from the Japanese translation of the Python 2.7 standard library, as well as the Lua stdlib documentation in a mixture of Russian, Portuguese, German, Spanish and English.",
"Taken together, these resources consist of 79,885 training pairs, and we experiment with training models on Stdlib and Py27 separately as well as together (shown as + more in Table 1).",
"We use a BPE subword encoding (Sennrich et al., 2015) of both input and output words to make the representations more similar and transliterated all datasets (excluding Japanese datasets) to an 8-bit latin encoding.",
"Graphs were built by concatenating all function representations into a single word list and compiling this list into a minimized DAFSA .",
"For our global polyglot dataset, this resulted in a graph with 218,505 nodes, 313,288 edges, and 112,107 paths or component representations over an output vocabulary of 9,324 words.",
"Mixed GeoQuery and Sportscaster We run experiments on the GeoQuery 880 corpus using the splits from Andreas et al. (2013), which includes geography queries for English, Greek, Thai, and German paired with formal database queries, as well as a seed lexicon or NP list for each language.",
"In addition to training models on each individual dataset, we also learn polyglot models trained on all datasets concatenated together.",
"We also created a new mixed language test set that was built by replacing NPs in 803 test examples with one or more NPs from a different language using the NP lists mentioned above (see examples in Figure 4).",
"The goal in the last case is to test our model's ability to handle mixed language input.",
"We also ran monolingual experiments on the English Sportscaster corpus, which contains human generated soccer commentary paired with symbolic meaning representation produced by a simulation of four games.",
"For GeoQuery graph construction, we built a single graph for all languages by extracting general rule templates from all representations in the dataset, and exploited additional information and patterns using the Geobase database and the semantic grammars used in (Wong and Mooney, 2006b).",
"This resulted in a graph with 2,419 nodes, 4,936 edges and 39,482 paths over an output vocabulary of 164.",
"For Sportscaster, we directly translated the semantic grammar provided in Chen and Mooney (2008) to a DAFSA , which resulted in a graph with 98 nodes, 86 edges and 830 paths.",
"For the technical datasets, the goal is to see if our model generates correct signature representations from unobserved descriptions using exact match.",
"We follow exactly the experimental setup and data splits from Richardson and Kuhn (2017b), and measure the accuracy at 1 ( Acc@1 ), accuracy in top 10 ( Acc@10 ), and MRR .",
"For the GeoQuery and Sportscaster experiments, the goal is to see if our models can generate correct meaning representations for unseen input.",
"For GeoQuery, we follow Andreas et al. (2013) in evaluating extrinsically by checking that each representation evaluates to the same answer as the gold representation when executed against the Geobase database.",
"For Sportscaster, we evaluate by exact match to a gold representation.",
"We use the Foma finite-state toolkit of Hulden (2009) to construct all graphs used in our experiments.",
"We also use the Cython version of Dynet (Neubig et al., 2017) to implement all the neural models (see supp. materials for more details).",
"In the results tables, we refer to the lexical and neural models introduced in Section 4 as Lexical Shortest Path and Neural Shortest Path , where models that use copying ( + copy ) and lexical biasing ( + bias ) are marked accordingly.",
"We also experimented with adding a discriminative reranker to our lexical models (+ rerank ), using the approach from Richardson and Kuhn (2017b), which uses additional lexical (e.g., word match and alignment) features and other phrase-level and syntax features.",
"The goal here is to see if these additional (mostly non-word level) features help improve on the baseline lexical models.",
"Technical Documentation Results Table 1 shows the results for Stdlib and Py27.",
"In the monolingual case, we compare against the best performing models in Richardson and Kuhn (2017b,a).",
"As summarized in Figure 3, our experiments show that training polyglot models on multiple datasets can lead to large improvements over training individual models, especially on the Py27 datasets where using a polyglot model resulted in a nearly 9% average increase in accuracy @1.",
"In both cases, however, the best performing lexical models are those trained only on the datasets they are evaluated on, as opposed to training on all datasets (i.e., + more ).",
"This is surprising given that training on all datasets doubles the size of the training data, and shows that adding more data does not necessarily boost performance when the additional data is from another distribution.",
"The neural models are strongly outperformed by all other models both in the monolingual and polyglot case (only the latter results shown), even when lexical biasing is applied.",
"While surprising, this is consistent with other studies on low-resource neural MT (Zoph et al., 2016; Ostling and Tiedemann, 2017), where datasets of comparable size to ours (e.g., 1 million tokens or less) typically fail against classical SMT models.",
"This result has also been found in relation to neural AMR semantic parsing, where similar issues of sparsity are encountered (Peng et al., 2017).",
"Even by doubling the amount of training data by training on all datasets (results not shown), this did not improve the accuracy, suggesting that much more data is needed (more discussion below).",
"Beyond increases in accuracy, our polyglot models support zero-shot translation as shown in Figure 4, which can be used for translating between unobserved language pairs (e.g., ( es , Clojure ), ( ru , Haskell ) as shown in 1-2), or for finding related functionality across different software projects (as shown in 3).",
"These results were obtained by running our decoder model without specifying the output language.",
"We note, however, that the decoder can be constrained to selectively translate to any specific programming language or project (e.g., in a QA setting).",
"Future work will further investigate the decoder's polyglot capabilities, which is currently hard to evaluate since we do not have an annotated set of function equivalences between different APIs.",
"Semantic Parsing Results SP results are summarized in Table",
"2. In contrast, the neural models, especially those with biasing and copying, strongly outperform all other models and are competitive with related work.",
"In the GeoQuery case, we compare against two classic grammar-based models, UBL and TreeTrans, as well as a feature rich, neural hybrid tree model (nHT).",
"We also see that the polyglot Geo achieves the best performance, demonstrating that training on multiple datasets helps in this domain as well.",
"In the Sportscaster case we compare against two PCFG learning approaches, where the second model (wo-PCFG) involves a grammar with complex word-order constraints.",
"The advantage of training a polyglot model is shown on the results related to mixed language parsing (i.e., the middle set of results).",
"Here we compared against the best performing monolingual English model ( Best Mono. Model ), which does not have a way to deal with multilingual NPs.",
"We also find the neural model to be more robust than the lexical models with reranking.",
"While the lexical models overall perform poorly on both tasks, the weakness of this model is particularly acute in the Sportscaster case.",
"We found that mistakes are largely related to the ordering of arguments, which these lexical (unigram) models are blind to.",
"That these models still perform reasonably well on the Geo task shows that such ordering issues are less of a factor in this domain.",
"Discussion Having results across related SP tasks allows us to reflect on the nature of the main technical documentation task.",
"Consistent with recent findings (Dong and Lapata, 2016), we show that relatively simple neural sequence models are competitive with, and in some cases outperform, traditional grammar-based SP methods on benchmark SP tasks.",
"However, this result is not observed in our technical documentation task, in part because this problem is much harder for neural learners given the sparseness of the target data and lack of redundancy.",
"For this reason, we believe our datasets provide new challenges for neural-based SP, and serve as a cautionary tale about the scal-ability and applicability of commonly used neural models to lower-resource SP problems.",
"In general, we believe that focusing on polyglot and mixed language decoding is not only of interest to applications (e.g, mixed language API QA) but also allows for new forms of SP evaluation that are more revealing than only translation accuracy.",
"When comparing the accuracy of the best monolingual Geo model and the worst performing neural polyglot model, one could mistakingly think that these models have equal abilities, though the polyglot model is much more robust and general.",
"Moving forward, we hope that our work helps to motivate more diverse evaluations of this type.",
"We look at learning from multiple API libraries and datasets in the context of learning to translate text to code representations and other SP tasks.",
"To support polyglot modeling of this type, we developed a novel graph based decoding method and experimented with various SMT and neural MT models that work in this framework.",
"We report a mixture of positive results specific to each task and set of models, some of which reveal interesting limitations of different approaches to SP.",
"We also introduced new API and mixed language datasets to facilitate further work on polyglot SP.",
"This work was supported by the German Research Foundation (DFG) in project D2 of SFB 732."
] | [
"abstain",
"abstain",
"objective",
"method",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"result",
"abstain",
"abstain",
"other",
"method",
"other",
"other",
"other",
"objective",
"other",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"method",
"other",
"method",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"method",
"method",
"method",
"method",
"method",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"objective",
"other"
] |
[
"Modeling hypernymy, such as poodle is-a dog , is an important generalization aid to many NLP tasks, such as entailment, coreference, relation extraction, and question answering.",
"Supervised learning from labeled hypernym sources, such as WordNet, limits the coverage of these models, which can be addressed by learning hypernyms from unlabeled text.",
"Existing unsupervised methods either do not scale to large vocabularies or yield unacceptably poor accuracy.",
"This paper introduces distributional inclusion vector embedding (DIVE) , a simple-to-implement unsupervised method of hypernym discovery via per-word non-negative vector embeddings which preserve the inclusion property of word contexts in a low-dimensional and interpretable space.",
"In experimental evaluations more comprehensive than any previous literature of which we are awareevaluating on 11 datasets using multiple existing as well as newly proposed scoring functionswe find that our method provides up to double the precision of previous unsupervised embeddings, and the highest average performance, using a much more compact word representation, and yielding many new state-of-the-art results.",
"Numerous applications benefit from compactly representing context distributions, which assign meaning to objects under the rubric of distributional semantics .",
"In natural language processing, distributional semantics has long been used to assign meanings to words (that is, to lexemes in the dictionary, not individual instances of word tokens).",
"The meaning of a word in the distributional sense is often taken to be the set of textual contexts (nearby tokens) in which that word appears, represented as a large sparse bag of words (SBOW).",
"Without any supervision, Word2Vec (Mikolov et al., 2013), among other approaches based on matrix factorization (Levy et al., 2015a), successfully compress the SBOW into a much lower dimensional embedding space, increasing the scalability and applicability of the embeddings while preserving (or even improving) the correlation of geometric embedding similarities with human word similarity judgments.",
"While embedding models have achieved impressive results, context distributions capture more semantic information than just word similarity.",
"The distributional inclusion hypothesis (DIH) (Weeds and Weir, 2003; Geffet and Dagan, 2005; Cimiano et al., 2005) posits that the context set of a word tends to be a subset of the contexts of its hypernyms.",
"For a concrete example, most adjectives that can be applied to poodle can also be applied to dog , because dog is a hypernym of poodle (e.g. both can be obedient ).",
"However, the converse is not necessarily true a dog can be straight-haired but a poodle cannot.",
"Therefore, dog tends to have a broader context set than poodle .",
"Many asymmetric scoring functions comparing SBOW features based on DIH have been developed for hypernymy detection (Weeds and Weir, 2003; Geffet and Dagan, 2005; Shwartz et al., 2017).",
"Hypernymy detection plays a key role in many challenging NLP tasks, such as textual entailment (Sammons et al., 2011), coreference (Ponzetto and Strube, 2006), relation extraction (Demeester et al., 2016) and question answering (Huang et al., 2008).",
"Leveraging the variety of contexts and inclusion properties in context distributions can greatly increase the ability to discover taxonomic structure among words (Shwartz et al., 2017).",
"The inability to preserve these features limits the semantic representation power and downstream applicability of some popular unsupervised learning approaches such as Word2Vec.",
"Several recently proposed methods aim to en-485 code hypernym relations between words in dense embeddings, such as Gaussian embedding (Vil-nis and McCallum, 2015; Athiwaratkun and Wilson, 2017), Boolean Distributional Semantic Model (Kruszewski et al., 2015), order embedding (Vendrov et al., 2016), H-feature detector (Roller and Erk, 2016), HyperVec (Nguyen et al., 2017), dual tensor (Glavas and Ponzetto, 2017), Poincare embedding (Nickel and Kiela, 2017), and LEAR (Vulic and Mrksic, 2017).",
"However, the methods focus on supervised or semi-supervised settings where a massive amount of hypernym annotations are available (Vendrov et al., 2016; Roller and Erk, 2016; Nguyen et al., 2017; Glavas and Ponzetto, 2017; Vulic and Mrksic, 2017), do not learn from raw text (Nickel and Kiela, 2017) or lack comprehensive experiments on the hypernym detection task (Vilnis and McCallum, 2015; Athiwaratkun and Wilson, 2017).",
"Recent studies (Levy et al., 2015b; Shwartz et al., 2017) have underscored the difficulty of generalizing supervised hypernymy annotations to unseen pairs classifiers often effectively memorize prototypical hypernyms (general' words) and ignore relations between words.",
"These findings motivate us to develop more accurate and scalable unsupervised embeddings to detect hypernymy and propose several scoring functions to analyze the embeddings from different perspectives.",
"A novel unsupervised low-dimensional embedding method via performing non-negative matrix factorization (NMF) on a weighted PMI matrix, which can be efficiently optimized using modified skip-grams.",
"Theoretical and qualitative analysis illustrate that the proposed embedding can intuitively and interpretably preserve inclusion relations among word contexts.",
"Extensive experiments on 11 hypernym detection datasets demonstrate that the learned embeddings dominate previous low-dimensional unsupervised embedding approaches, achieving similar or better performance than SBOW, on both existing and newly proposed asymmetric scoring functions, while requiring much less memory and compute.",
"The distributional inclusion hypothesis (DIH) suggests that the context set of a hypernym tends to contain the context set of its hyponyms.",
"When representing a word as the counts of contextual co-occurrences, the count in every dimension of hypernym y tends to be larger than or equal to the corresponding count of its hyponym x : x (cid:22) y c V, #( x, c ) #( y, c ) , (1) where x (cid:22) y means y is a hypernym of x , V is the set of vocabulary, and #( x, c ) indicates the number of times that word x and its context word c co-occur in a small window with size | W | in the corpus of interest D .",
"Notice that the concept of DIH could be applied to different context word representations.",
"For example, Geffet and Dagan (2005) represent each word by the set of its co-occurred context words while discarding their counts.",
"In this study, we define the inclusion property based on counts of context words in (1) because the counts are an effective and noise-robust feature for the hypernymy detection using only the context distribution of words (Clarke, 2009; Vulic et al., 2016; Shwartz et al., 2017).",
"Our goal is to produce lower-dimensional embeddings preserving the inclusion property that the embedding of hypernym y is larger than or equal to the embedding of its hyponym x in every dimension.",
"Formally, the desired property can be written as x (cid:22) y x [ i ] y [ i ] , i { 1 , ..., L } , (2) where L is number of dimensions in the embedding space.",
"We add additional non-negativity constraints, i.e. x [ i ] 0 , y [ i ] 0 , i , in order to increase the interpretability of the embeddings (the reason will be explained later in this section).",
"This is a challenging task.",
"In reality, there are a lot of noise and systematic biases that cause the violation of DIH in Equation (1) (i.e. #( x, c ) > #( y, c ) for some neighboring word c ), but the general trend can be discovered by processing thousands of neighboring words in SBOW together (Shwartz et al., 2017).",
"After the compression, the same trend has to be estimated in a much smaller embedding space which discards most of the information in SBOW, so it is not surprising to see most of the unsupervised hypernymy detection studies focus on SBOW (Shwartz et al., 2017) 486 and the existing unsupervised embedding methods like Gaussian embedding have degraded accuracy (Vulic et al., 2016).",
"Popular methods of unsupervised word embedding are usually based on matrix factorization (Levy et al., 2015a).",
"The approaches first compute a co-occurrence statistic between the w th word and the c th context word as the ( w, c ) th element of the matrix M [ w, c ] .",
"Next, the matrix M is factorized such that M [ w, c ] w T c , where w is the low dimension embedding of w th word and c is the c th context embedding.",
"The statistic in M [ w, c ] is usually related to pointwise mutual information (Levy et al., 2015a): P MI ( w, c ) = log( P ( w,c ) P ( w ) P ( c ) ) , where P ( w, c ) = #( w,c ) | D | , | D | = P w VP c V #( w, c ) is number of co-occurrence word pairs in the corpus, P ( w ) = #( w ) | D | , #( w ) = P c V #( w, c ) is the frequency of the word w times the window size | W | , and similarly for P ( c ) .",
"For example, M [ w, c ] could be set as positive PMI (PPMI), max( P MI ( w, c ) , 0) , or shifted PMI, P MI ( w, c ) log( k 0 ) , which (Levy and Goldberg, 2014) demonstrate is connected to skip-grams with negative sampling (SGNS).",
"Intuitively, since M [ w, c ] w T c , larger embedding values of w at every dimension seems to imply larger w T c , larger M [ w, c ] , larger P MI ( w, c ) , and thus larger co-occurrence count #( w, c ) .",
"However, the derivation has two flaws: (1) c could contain negative values and (2) lower #( w, c ) could still lead to larger P MI ( w, c ) as long as the #( w ) is small enough.",
"To preserve DIH, we propose a novel word embedding method, distributional inclusion vector embedding (DIVE) , which fixes the two flaws by performing non-negative factorization (NMF) (Lee and Seung, 2001) on the matrix M , where M [ w, c ] = log( P ( w, c ) P ( w ) P ( c ) #( w ) k I Z ) = log(#( w, c ) | V | #( c ) k I ) , (3) where k I is a constant which shifts PMI value like SGNS, Z = | D | | V | is the average word frequency, and | V | is the vocabulary size.",
"We call this weighting term #( w ) Z inclusion shift .",
"(i.e. Equation (2)) implies that Equation (1) (DIH) holds if the matrix is reconstructed perfectly.",
"The derivation is simple: If the embedding of hypernym y is greater than or equal to the embedding of its hyponym x in every dimension ( x [ i ] y [ i ] , i ), x T c y T c since context vector c is nonnegative.",
"Then, M [ x, c ] M [ y, c ] tends to be true because w T c M [ w, c ] .",
"This leads to #( x, c ) #( y, c ) because M [ w, c ] = log( #( w,c ) | V | #( c ) k I ) and only #( w, c ) changes with w .",
"Due to its appealing scalability properties during training time (Levy et al., 2015a), we optimize our embedding based on the skip-gram with negative sampling (SGNS) (Mikolov et al., 2013).",
"The objective function of SGNS is l SGNS = X w VX c V #( w, c ) log ( w T c ) + X w V k 0 X c V #( w, c ) E c N PD [log ( w T c N )] , (4) where w R , c R , c N R , is the logistic sigmoid function, and k 0 is a constant hyper-parameter indicating the ratio between positive and negative samples.",
"Levy and Goldberg (2014) demonstrate SGNS is equivalent to factorizing a shifted PMI matrix M 0 , where M 0 [ w, c ] = log( P ( w,c ) P ( w ) P ( c ) 1 k 0 ) .",
"By setting k 0 = k I Z #( w ) and applying non-negativity constraints to the embeddings, DIVE can be optimized using the similar objective function: l DIV E = X w VX c V #( w, c ) log ( w T c ) + k IX w VZ #( w ) X c V #( w, c ) E c N PD [log ( w T c N )] , (5) where w 0 , c 0 , c N 0 , and k I is a constant hyper-parameter.",
"PD is the distribution of negative samples, which we set to be the corpus word frequency distribution (not reducing the probability of drawing frequent words like SGNS) in this paper.",
"Equation (5) is optimized by ADAM (Kingma and Ba, 2015), a variant of stochastic gradient descent (SGD).",
"The non-negativity constraint is implemented by projection (Polyak, 1969) (i.e. clipping any embedding which crosses the zero boundary after an update).",
"dl DIV E d w = X c V #( w, c )(1 ( w T c )) c k IX c N V #( c N ) | V | ( w T c N ) c N .",
"(6) Assume hyponym x and hypernym y satisfy DIH in Equation (1) and the embeddings x and y are the same at some point during the gradient ascent.",
"At this point, the gradients coming from negative sampling (the second term) decrease the same amount of embedding values for both x and y .",
"However, the embedding of hypernym y would get higher or equal positive gradients from the first term than x in every dimension because #( x, c ) #( y, c ) .",
"This means Equation (1) tends to imply Equation (2) because the hypernym has larger gradients everywhere in the embedding space.",
"Combining the analysis from the matrix factorization viewpoint, DIH in Equation (1) is approximately equivalent to the inclusion property in DIVE (i.e. Equation (2)).",
"For a frequent target word, there must be many neighboring words that incidentally appear near the target word without being semantically meaningful, especially when a large context window size is used.",
"The unrelated context words cause noise in both the word vector and the context vector of DIVE.",
"We address this issue by filtering out context words c for each target word w when the PMI of the co-occurring words is too small (i.e. log( P ( w,c ) P ( w ) P ( c ) ) < log( k f ) ).",
"That is, we set #( w, c ) = 0 in the objective function.",
"This preprocessing step is similar to computing PPMI in SBOW (Bullinaria and Levy, 2007), where low PMI co-occurrences are removed from SBOW.",
"After applying the non-negativity constraint, we observe that each latent factor in the embedding is interpretable as previous findings suggest (Pauca et al., 2004; Murphy et al., 2012) (i.e. each dimension roughly corresponds to a topic).",
"Furthermore, DIH suggests that a general word appears in more diverse contexts/topics.",
"By preserving DIH using inclusion shift, the embedding of a general word (i.e. hypernym of many other words) tends to have larger values in these dimensions (topics).",
"This gives rise to a natural and intuitive interpretation of our word embeddings: the word embeddings can be seen as unnormalized probability distributions over topics.",
"In Figure 1, we visualize the unnormalized topical distribution of two words, rodent and mammal , as an example.",
"Since rodent is a kind of mammal , the embedding (i.e. unnormalized topical distribution) of mammal includes the embedding of rodent when DIH holds.",
"More examples are illustrated in our supplementary materials.",
"In this section, we compare DIVE with other unsupervised hypernym detection methods.",
"In this paper, unsupervised approaches refer to the methods that only train on plaintext corpus without using any hypernymy or lexicon annotation.",
"The embeddings are tested on 11 datasets.",
"The first 4 datasets come from the recent review of Shwartz et al. (2017) 1 : BLESS (Ba-roni and Lenci, 2011), EVALution (Santus et al., 2015), Lenci/Benotto (Benotto, 2015), and Weeds (Weeds et al., 2014).",
"The next 4 datasets are downloaded from the code repository of the H-feature detector (Roller and Erk, 2016) 2 : Medical (i.e., Levy 2014) (Levy et al., 2014), LEDS (also referred to as ENTAILMENT or Baroni 2012) (Baroni et al., 2012), TM14 (i.e., Turney 2014) (Turney and Mohammad, 2015), and Kotlerman 2010 (Kotlerman et al., 2010).",
"In addition, the performance on the test set of HypeNet (Shwartz et al., 2016) (using the random train/test split), the test set of WordNet (Vendrov et al., 2016), and all pairs in HyperLex (Vulic et al., 2016) are also evaluated.",
"The F1 and accuracy measurements are sometimes very similar even though the quality of prediction varies, so we adopted average precision, AP@all (Zhu, 2004) (equivalent to the area under the precision-recall curve when the constant interpolation is used), as the main evaluation metric.",
"The HyperLex dataset has a continuous score on each candidate word pair, so we adopt Spearman rank coefficient (Fieller et al., 1957) as suggested by the review study of Vulic et al. (2016).",
"Any OOV (out-of-vocabulary) word encountered in the testing data is pushed to the bottom of the prediction list (effectively assuming the word pair does not have hypernym relation).",
"We trained all methods on the first 51.2 million tokens of WaCkypedia corpus (Baroni et al., 2009) because DIH holds more often in this subset (i.e. SBOW works better) compared with that in the whole WaCkypedia corpus.",
"The window size | W | of DIVE and Gaussian embedding are set as 20 (left 10 words and right 10 words).",
"The number of embedding dimensions in DIVE L is set to be 100.",
"The other hyper-parameters of DIVE and Gaussian embedding are determined by the training set of HypeNet.",
"Other experimental details are described in our supplementary materials.",
"If a pair of words has hypernym relation, the words tend to be similar (sharing some context words) and the hypernym should be more general than the hyponym.",
"Section 2.4 has shown that the embedding could be viewed as an unnormalized topic distribution of its context, so the embedding of hypernym should be similar to the embedding of its hyponym but having larger magnitude.",
"As in HyperVec (Nguyen et al., 2017), we score the hypernym candidates by multiplying two factors corresponding to these properties.",
"The C S (i.e. the cosine similarity multiply the difference of summation) scoring function is defined as C S ( w q w p ) = w Tq w p || w q || 2 || w p || 2 ( k w p k 1 k w q k 1 ) , (7) where w p is the embedding of hypernym and w q is the embedding of hyponym.",
"As far as we know, Gaussian embedding (GE) (Vilnis and McCallum, 2015) is the state-of-the-art unsupervised embedding method which can capture the asymmetric relations between a hypernym and its hyponyms.",
"Gaussian embedding 489 encodes the context distribution of each word as a multivariate Gaussian distribution, where the embeddings of hypernyms tend to have higher variance and overlap with the embedding of their hy-ponyms.",
"In Table 1, we compare DIVE with Gaussian embedding 3 using the code implemented by Athiwaratkun and Wilson (2017) 4 and with word cosine similarity using skip-grams.",
"The performances of random scores are also presented for reference.",
"As we can see, DIVE is usually significantly better than other unsupervised embedding.",
"Unlike Word2Vec, which only tries to preserve the similarity signal, the goals of DIVE cover preserving the capability of measuring not only the similarity but also whether one context distribution includes the other (inclusion signal) or being more general than the other (generality signal).",
"In this experiment, we perform a comprehensive comparison between SBOW and DIVE using multiple scoring functions to detect the hypernym relation between words based on different types of signal.",
"The window size | W | of SBOW is also set as 20, and experiment setups are the same as that described in Section 3.1.",
"Notice that the comparison is inherently unfair because most of the information would be lost during the aggressive compression process of DIVE, and we would like to evaluate how well DIVE can preserve signals of interest using the number of dimensions which is several orders of magnitude less than that of SBOW.",
"After trying many existing and newly proposed functions which score a pair of words to detect hypernym relation between them, we find that good scoring functions for SBOW are also good scoring functions for DIVE.",
"Thus, in addition to C S used in Section 3.2, we also present 4 other best performing or representative scoring functions in the experiment (see our supplementary materials for more details): 3 Note that higher AP is reported for some models in previous literature: 80 (Vilnis and McCallum, 2015) in LEDS, 74.2 (Athiwaratkun and Wilson, 2017) in LEDS, and 20.6 (Vulic et al., 2016) in HyperLex.",
"The difference could be caused by different train/test setup (e.g. How the hyper-parameters are tuned, different training corpus, etc.).",
"However, DIVE beats even these results.",
"4 https://github.com/benathi/word2gm Inclusion : CDE (Clarke, 2009) computes the summation of element-wise minimum over the magnitude of hyponym embedding (i.e. || min( w p , w q ) || 1 || w q || 1 ).",
"CDE measures the degree of violation of equation (1).",
"Equation (1) holds if and only if CDE is 1.",
"Due to noise in SBOW, CDE is rarely exactly 1, but hypernym pairs usually have higher CDE.",
"Despite its effectiveness, the good performance could mostly come from the magnitude of embeddings/features instead of inclusion properties among context distributions.",
"To measure the inclusion properties between context distributions d p and d q ( w p and w q after normalization, respectively), we use negative asymmetric L1 distance ( AL 1 ) 5 as one of our scoring function, where AL 1 = min a X c w 0 max( a d q [ c ] d p [ c ] , 0)+ max( d p [ c ] a d q [ c ] , 0) , (8) and w 0 is a constant hyper-parameter.",
"Generality : When the inclusion property in (2) holds, || y || 1 = P i y [ i ] P i x [ i ] = || x || 1 .",
"Thus, we use summation difference ( || w p || 1 || w q || 1 ) as our score to measure generality signal ( S).",
"Similarity plus generality : Computing cosine similarity on skip-grams (i.e. Word2Vec + C in Table",
"1) is a popular way to measure the similarity of two words, so we multiply the Word2Vec similarity with summation difference of DIVE or SBOW (W S) as an alternative of C S. 4.2 Baselines SBOW Freq: A word is represented by the frequency of its neighboring words.",
"Applying PMI filter (set context feature to be 0 if its value is lower than log( k f ) ) to SBOW Freq only makes its performances closer to (but still much worse than) SBOW PPMI, so we omit the baseline.",
"SBOW PPMI: SBOW which uses PPMI of its neighboring words as the features (Bulli-naria and Levy, 2007).",
"Applying PMI filter to SBOW PPMI usually makes the performances worse, especially when k f is large.",
"Similarly, a constant log( k 0 ) shifting to SBOW PPMI (i.e. max( P MI log( k 0 ) , 0) ) is not helpful, so we set both k f and k 0 to be 1.",
"5 The meaning and efficient implementation of AL 1 are illustrated in our supplementary materials 490 AP@all (%) BLESS EVALution Lenci/Benotto CDE AL 1 S W S C S CDE AL 1 S W S C S CDE AL 1 S W S C S SBOW Freq 6.3 7.3 5.6 11.0 5.9 35.3 32.6 36.2 33.0 36.3 51.8 47.6 51.0 51.8 51.1 PPMI 13.6 5.1 5.6 17.2 15.3 30.4 27.7 34.1 31.9 34.3 47.2 39.7 50.8 51.1 52.0 PPMI w/ IS 6.2 5.0 5.5 12.4 5.8 36.0 27.5 36.3 32.9 36.4 52.0 43.1 50.9 51.9 50.7 All wiki 12.1 5.2 6.9 12.5 13.4 28.5 27.1 30.3 29.9 31.0 47.1 39.9 48.5 48.7 51.1 DIVE Full 9.3 7.6 6.0 18.6 16.3 30.0 27.5 34.9 32.3 33.0 46.7 43.2 51.3 51.5 50.4 w/o PMI 7.8 6.9 5.6 16.7 7.1 32.8 32.2 35.7 32.5 35.4 47.6 44.9 50.9 51.6 49.7 w/o IS 9.0 6.2 7.3 6.2 7.3 24.3 25.0 22.9 23.5 23.9 38.8 38.1 38.2 38.2 38.4 Kmean (Freq NMF) 6.5 7.3 5.6 10.9 5.8 33.7 27.2 36.2 33.0 36.2 49.6 42.5 51.0 51.8 51.2 AP@all (%) Weeds Micro Average (4 datasets) Medical CDE AL 1 S W S C S CDE AL 1 S W S C S CDE AL 1 S W S C S SBOW Freq 69.5 58.0 68.8 68.2 68.4 23.1 21.8 22.9 25.0 23.0 19.4 19.2 14.1 18.4 15.3 PPMI 61.0 50.3 70.3 69.2 69.3 24.7 17.9 22.3 28.1 27.8 23.4 8.7 13.2 20.1 24.4 PPMI w/ IS 67.6 52.2 69.4 68.7 67.7 23.2 18.2 22.9 25.8 22.9 22.8 10.6 13.7 18.6 17.0 All wiki 61.3 48.6 70.0 68.5 70.4 23.4 17.7 21.7 24.6 25.8 22.3 8.9 12.2 17.6 21.1 DIVE Full 59.2 55.0 69.7 68.6 65.5 22.1 19.8 22.8 28.9 27.6 11.7 9.3 13.7 21.4 19.2 w/o PMI 60.4 56.4 69.3 68.6 64.8 22.2 21.0 22.7 28.0 23.1 10.7 8.4 13.3 19.8 16.2 w/o IS 49.2 47.3 45.1 45.1 44.9 18.9 17.3 17.2 16.8 17.5 10.9 9.8 7.4 7.6 7.7 Kmean (Freq NMF) 69.4 51.1 68.8 68.2 68.9 22.5 19.3 22.9 24.9 23.0 12.6 10.9 14.0 18.1 14.6 AP@all (%) LEDS TM14 Kotlerman 2010 CDE AL 1 S W S C S CDE AL 1 S W S C S CDE AL 1 S W S C S SBOW Freq 82.7 70.4 70.7 83.3 73.3 55.6 53.2 54.9 55.7 55.0 35.9 40.5 34.5 37.0 35.4 PPMI 84.4 50.2 72.2 86.5 84.5 56.2 52.3 54.4 57.0 57.6 39.1 30.9 33.0 37.0 36.3 PPMI w/ IS 81.6 54.5 71.0 84.7 73.1 57.1 51.5 55.1 56.2 55.4 37.4 31.0 34.4 37.8 35.9 All wiki 83.1 49.7 67.9 82.9 81.4 54.7 50.5 52.6 55.1 54.9 38.5 31.2 32.2 35.4 35.3 DIVE Full 83.3 74.7 72.7 86.4 83.5 55.3 52.6 55.2 57.3 57.2 35.3 31.6 33.6 37.4 36.6 w/o PMI 79.3 74.8 72.0 85.5 78.7 54.7 53.9 54.9 56.5 55.4 35.4 38.9 33.8 37.8 36.7 w/o IS 64.6 55.4 43.2 44.3 46.1 51.9 51.2 50.4 52.0 51.8 32.9 33.4 28.1 30.2 29.7 Kmean (Freq NMF) 80.3 64.5 70.7 83.0 73.0 54.8 49.0 54.8 55.6 54.8 32.1 37.0 34.5 36.9 34.8 AP@all (%) HypeNet WordNet Micro Average (10 datasets) CDE AL 1 S W S C S CDE AL 1 S W S C S CDE AL 1 S W S C S SBOW Freq 37.5 28.3 46.9 35.9 43.4 56.6 55.2 55.5 56.2 55.6 31.1 28.2 31.5 31.6 31.2 PPMI 23.8 24.0 47.0 32.5 33.1 57.7 53.9 55.6 56.8 57.2 30.1 23.0 31.1 32.9 33.5 PPMI w/ IS 38.5 26.7 47.2 35.5 37.6 57.0 54.1 55.7 56.6 55.7 31.8 24.1 31.5 32.1 30.3 All wiki 23.0 24.5 40.5 30.5 29.7 57.4 53.1 56.0 56.4 57.3 29.0 23.1 29.2 30.2 31.1 DIVE Full 25.3 24.2 49.3 33.6 32.0 60.2 58.9 58.4 61.1 60.9 27.6 25.3 32.1 34.1 32.7 w/o PMI 31.3 27.0 46.9 33.8 34.0 59.2 60.1 58.2 61.1 59.1 28.5 26.7 31.5 33.4 30.1 w/o IS 20.1 21.7 20.3 21.8 22.0 61.0 56.3 51.3 55.7 54.7 22.3 20.7 19.1 19.6 19.9 Kmean (Freq NMF) 33.7 22.0 46.0 35.6 45.2 58.4 60.2 57.7 60.1 57.9 29.1 24.7 31.5 31.8 31.5 Table 2: AP@all (%) of 10 datasets.",
"SBOW PPMI w/ IS (with additional inclusion shift): The matrix reconstructed by DIVE when k I = 1 .",
"Specifically, w [ c ] = max(log( P ( w,c ) P ( w ) P ( c ) Z #( w ) ) , 0) .",
"SBOW all wiki: SBOW using PPMI features trained on the whole WaCkypedia.",
"NMF on shifted PMI: Non-negative matrix factorization (NMF) on the shifted PMI without inclusion shift for DIVE (DIVE w/o IS).",
"This is the same as applying the non-negative constraint on the skip-gram model.",
"K-means (Freq NMF): The method first uses Mini-batch k-means (Sculley, 2010) to cluster words in skip-gram embedding space into 100 topics, and hashes each frequency count in SBOW into the corresponding topic.",
"If running k-means on skip-grams is viewed as an approximation of clustering the SBOW context vectors, the method can be viewed as a kind of NMF (Ding et al., 2005).",
"DIVE performs non-negative matrix factorization on PMI matrix after applying inclusion shift and PMI filtering.",
"To demonstrate the effectiveness of each step, we show the performances of DIVE after removing PMI filtering (DIVE w/o PMI), removing inclusion shift (DIVE w/o IS), and removing matrix factorization (SBOW PPMI w/ IS, SBOW PPMI, and SBOW all wiki).",
"The methods based on frequency matrix are also tested (SBOW Freq and Freq NMF).",
"In Table 2, we first confirm the finding of the previous review study of Shwartz et al. (2017): there is no single hypernymy scoring function which always outperforms others.",
"One of the main reasons is that different datasets collect negative samples differently.",
"For example, if negative samples come from random word pairs (e.g. WordNet dataset), a symmetric similarity measure is a good scoring function.",
"On the other hand, negative samples come from related or similar words in HypeNet, EVALution, Lenci/Benotto, and Weeds, so only estimating generality difference leads to the best (or close to the best) performance.",
"The negative samples in many datasets are composed of both random samples and similar words (such as BLESS), so the combination of similarity and generality difference yields the most stable results.",
"DIVE performs similar or better on most of the scoring functions compared with SBOW consistently across all datasets in Table 2 and Table 3, while using many fewer dimensions (see Table 4).",
"This leads to 2-3 order of magnitude savings on both memory consumption and testing time.",
"Furthermore, the low dimensional embedding makes the computational complexity independent of the vocabulary size, which drastically boosts the scalability of unsupervised hypernym detection especially with the help of GPU.",
"It is surprising that we can achieve such aggressive compression while preserving the similarity, generality, and inclusion signal in various datasets with different types of negative samples.",
"Its results on C S and W S outperform SBOW Freq.",
"Meanwhile, its results on AL 1 outperform SBOW PPMI.",
"The fact that W S or C S usually outperform generality functions suggests that only memorizing general words is not sufficient.",
"The best average performance on 4 and 10 datasets are both produced by W S on DIVE.",
"SBOW PPMI improves the W S and C S from SBOW Freq but sacrifices AP on the inclusion functions.",
"It generally hurts performance to directly include inclusion shift in PPMI (PPMI w/ IS) or compute SBOW PPMI on the whole WaCkypedia (all wiki) instead of the first 51.2 million tokens.",
"The similar trend can also be seen in Table 3.",
"Note that AL 1 completely fails in the HyperLex dataset using SBOW PPMI, which suggests that PPMI might not necessarily preserve the distributional inclusion property, even though it can have good performance on scoring functions combining similarity and generality signals.",
"Removing the PMI filter from DIVE slightly drops the overall precision while removing inclusion shift on shifted PMI (w/o IS) leads to poor performances.",
"K-means (Freq NMF) produces similar AP compared with SBOW Freq but has worse AL 1 scores.",
"Its best AP scores on different datasets are also significantly worse than the best AP of DIVE.",
"This means that only making Word2Vec (skip-grams) non-negative or naively accumulating topic distribution in contexts cannot lead to satisfactory embeddings.",
"Most previous unsupervised approaches focus on designing better hypernymy scoring functions for sparse bag of word (SBOW) features.",
"They are well summarized in the recent study (Shwartz et al., 2017).",
"Shwartz et al. (2017) also evaluate the influence of different contexts, such as changing the window size of contexts or incorporating dependency parsing information, but neglect scalability issues inherent to SBOW methods.",
"A notable exception is the Gaussian embedding model (Vilnis and McCallum, 2015), which represents each word as a Gaussian distribution.",
"However, since a Gaussian distribution is normalized, it is difficult to retain frequency information during the embedding process, and experiments on HyperLex (Vulic et al., 2016) demonstrate that a sim-492 ple baseline only relying on word frequency can achieve good results.",
"Follow-up work models contexts by a mixture of Gaussians (Athiwaratkun and Wilson, 2017) relaxing the unimodality assump-tion but achieves little improvement on hypernym detection tasks.",
"Kiela et al. (2015) show that images retrieved by a search engine can be a useful source of information to determine the generality of lexicons, but the resources (e.g. pre-trained image classifier for the words of interest) might not be available in many domains.",
"Order embedding (Vendrov et al., 2016) is a supervised approach to encode many annotated hypernym pairs (e.g. all of the whole WordNet (Miller, 1995)) into a compact embedding space, where the embedding of a hypernym should be smaller than the embedding of its hyponym in every dimension.",
"Our method learns embedding from raw text, where a hypernym embedding should be larger than the embedding of its hyponym in every dimension.",
"Thus, DIVE can be viewed as an unsupervised and reversed form of order embedding.",
"Non-negative matrix factorization (NMF) has a long history in NLP, for example in the construction of topic models (Pauca et al., 2004).",
"Non-negative sparse embedding (NNSE) (Murphy et al., 2012) and Faruqui et al. (2015) indicate that non-negativity can make embeddings more interpretable and improve word similarity evaluations.",
"The sparse NMF is also shown to be effective in cross-lingual lexical entailment tasks but does not necessarily improve monolingual hypernymy detection (Vyas and Carpuat, 2016).",
"In our study, we show that performing NMF on PMI matrix with inclusion shift can preserve DIH in SBOW, and the comprehensive experimental analysis demonstrates its state-of-the-art performances on unsupervised hypernymy detection.",
"Although large SBOW vectors consistently show the best all-around performance in unsupervised hypernym detection, it is challenging to compress them into a compact representation which preserves inclusion, generality, and similarity signals for this task.",
"Our experiments suggest that the existing approaches and simple baselines such as Gaussian embedding, accumulating K-mean clusters, and non-negative skip-grams do not lead to satisfactory performance.",
"To achieve this goal, we propose an interpretable and scalable embedding method called distributional inclusion vector embedding (DIVE) by performing non-negative matrix factorization (NMF) on a weighted PMI matrix.",
"We demonstrate that scoring functions which measure inclusion and generality properties in SBOW can also be applied to DIVE to detect hypernymy, and DIVE performs the best on average, slightly better than SBOW while using many fewer dimensions.",
"Our experiments also indicate that unsupervised scoring functions which combine similarity and generality measurements work the best in general, but no one scoring function dominates across all datasets.",
"A combination of unsupervised DIVE with the proposed scoring functions produces new state-of-the-art performances on many datasets in the unsupervised regime.",
"This work was supported in part by the Center for Data Science and the Center for Intelligent Information Retrieval, in part by DARPA under agreement number FA8750-13-2-0020, in part by Defense Advanced Research Agency (DARPA) contract number HR0011-15-2-0036, in part by the National Science Foundation (NSF) grant numbers DMR-1534431 and IIS-1514053 and in part by the Chan Zuckerberg Initiative under the project Scientific Knowledge Base Construction.",
"The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon.",
"The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of DARPA, or the U.S. Government, or the other sponsors."
] | [
"abstain",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"abstain",
"objective",
"objective",
"objective",
"objective",
"abstain",
"other",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"other",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"method",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"objective",
"abstain",
"method",
"objective",
"objective",
"method",
"abstain",
"other",
"other",
"other"
] |
[
"Representation learning approaches for knowledge graphs have been mostly designed for static data.",
"However, many knowledge graphs involve evolving data, e.g., the fact (The President of the United States is Barack Obama) is valid only from 2009 to 2017.",
"This introduces important challenges for knowledge representation learning since the knowledge graphs change over time.",
"In this paper, we present a novel time-aware knowledge graph em-bebdding approach, TeLM , which performs 4th-order tensor factorization of a Te mporal knowledge graph using a L inear temporal regularizer and M ultivector embeddings.",
"Moreover, we investigate the effect of the temporal dataset's time granularity on temporal knowledge graph completion.",
"Experimental results demonstrate that our proposed models trained with the linear temporal regularizer achieve the state-of-the-art performances on link prediction over four well-established temporal knowledge graph completion benchmarks.",
"Numerous large-scale knowledge graphs (KGs) including DBpedia (Auer et al., 2007), FreeBase (Bollacker et al., 2008) and WordNet (Miller, 1995) have been established in recent years.",
"Such KGs abstract knowledge from the real world into a complex network graph consisting of billions of triples.",
"Each triple is denoted as ( s, r, o ) , where s is the subject entity, o is the object entity, and r is the relation between the entities.",
"Knowledge graph completion is one of the main challenges in the KG field since most KGs are incomplete.",
"To tackle this problem, knowledge graph embedding (KGE) approaches embed entities and relations into a low-dimensional embedding space and measure the plausibility of triples by inputting embeddings of the entities and their relation to a score function (Wang et al., 2017).",
"For instance, ComplEx (Trouillon et al., 2016) has been proven to be a highly effective KGE model, where entities and relations are represented as complex embeddings, and the score of a triple ( s, r, o ) is computed with the asymmetric Hermitian dot product.",
"Some KGs involve temporal facts, e.g., the triple ( Barack Obama , presidentOf , USA ) is only valid in a specific time period [ 2009 , 2017 ].",
"Temporal KGs like Wikidata (Erxleben et al., 2014), YAGO3 (Mahdisoltani et al., 2013) and ICEWS (Lautenschlager et al., 2015) incorporate time information into triples.",
"Triples attached with time information are represented as quadruples, shaped like ( s, r, o, T ) , where T denotes the timestamp.",
"Traditional KGE models disregard time information, leading to an ineffectiveness of performing link prediction on TKGs involving temporary relations, e.g., ( ? , presidentOf , USA , 2010 ).",
"Recent researches show that the temporal knowledge graph embedding (TKGE) models, which encode time information in their embeddings, have better performances on link prediction over TKGs than traditional KGE models (Dasgupta et al., 2018; Garca-Durn et al., 2018; Xu et al., 2019; Goel et al., 2020; Lacroix et al., 2020).",
"In this paper, we present a novel temporal KG embedding approach TeLM.",
"We move beyond the complex-valued representations and introduce more expressive multivector embeddings from 2-grade geometric algebras to model entities, relations, and timestamps for TKGE.",
"At a high level, our approach performs 4th-order tensor factorization of a temporal KG, using the asymmetric geometric product.",
"The geometric product provides a greater extent of expressiveness compared to the complex Hermitian operator.",
"Specially, each relation is represented as a pair of dual multivector embeddings used to handle the beginning and the end of the relation.",
"In this way, TeLM can adapt well to datasets where time annotations are represented in the various forms: time points, begin or end time, time intervals.",
"Moreover, we develop a new linear temporal regularization function for time representation learning which introduces a bias component in the temporal smoothing function and empirically study the effect of the time granularity for a TKG dataset on the performance of our models.",
"Experimental results on four well-established TKG datasets show that our approach outperforms the state-of-the-art TKGE models, and the linear temporal regularization function improves the performance of our model compared to three common temporal regularization functions.",
"Tensor decomposition-based KGE approaches have led to good results in static KG completion.",
"Such approaches (Yang et al., 2014; Trouillon et al., 2016; Kazemi and Poole, 2018; Zhang et al., 2019; Xu et al., 2020b) model a static KG as a low-dimensional 3rd-order tensor and consider knowledge graph completion as a tensor decomposition problem.",
"A typical tensor decomposition model ComplEx (Trouillon et al., 2016) has been proven to be fully expressive with complex embeddings.",
"Apart from tensor decomposition approaches, distance-based KGE models are also commonly used for KG completion.",
"However, distance-based KGE models like TransE (Bordes et al., 2013) and its variants (Wang et al., 2014; Lin et al., 2015; Nayyeri et al., 2019, 2020) have been proven to have limitations in modeling various relation patterns which does not lead to the state-of-the-art results on the current benchmarks.",
"The above KGE approaches achieve satisfactory results on link prediction over static KGs.",
"Recent research on TKG completion shows that the inclusion of time information can improve the performances of KGE models on TKGs.",
"TTransE (Leblay and Chekol, 2018), HyTE (Das-gupta et al., 2018), ATiSE and TeRo (Xu et al., 2019, 2020a) propose scoring functions which incorporate time representations into a distance-based score function in different ways.",
"Furthermore, RTGE (Xu et al., 2020c) introduces the concept of temporal smoothness to optimize and learn the hyperplanes of adjacent time intervals jointly on the basis of HyTE.",
"Garca-Durn et al. (2018) utilize recurrent neural networks to learn time-aware representations of relations and use standard scoring functions from the existing KG embedding model, e.g. TransE and DistMult.",
"DE-SimplE (Goel et al., 2020) uses diachronic entity embeddings to represent entities at different time steps and exploit the same score function as SimplE to score the plausibility of a quadruple.",
"TIME-PLEX (Jain et al., 2020) and TComplEx (Lacroix et al., 2020) extend the time-agnostic ComplEx model in different ways.",
"Among them, TComplEx performs a 4th-order tensor decomposition of a TKG using the quadranomial Hermitian product which involves the embedding of timestamp T .",
"Similarly to RTGE, TComplEx also uses the temporal smoothness to improve its performance.",
"Thanks to the strong expressiveness provide by the complex embeddings and the 4th-order tensor decomposition, TComplEx achieves state-of-the-art results on TKG completion.",
"In this section, we provides a brief introduction to the 2-grade Geometric Algebra G 2 .",
"The contents are sufficient to understand the rest of the work.",
"Members of G 2 are called 2-grade multivectors.",
"The multivector space G 2 is build with vectors from the vector space R 2 .",
"Let { e 1 , e 2 } be an orthonormal basis of R 2 .",
"The multivector space G 2 is based on two rules: e 1 e 1 = e 2 e 2 = 1 and e 1 e 2 = e 2 e 1 .",
"The multivector space G 2 is 4-dimensional with basis: 1 spans 0-vectors, scalars, { e 1 , e 2 } spans 1-vectors, vectors, and { e 1 e 2 } spans 2-vectors, bivectors.",
"A 2-grade multivector M G 2 can be written as M = a 0 + a 1 e 1 + a 2 e 2 + a 12 e 1 e 2 .",
"Noteworthly, the unit bivectors from G 2 has similar algebraic properties as the imaginary unit i , i.e., ( e 1 e 2 ) 2 = e 1 e 1 e 2 e 2 = 1 = i 2 .",
"Thus, the complex numbers C C can be embedded into a subalgebra of G 2 which are formed with scalars and bivectors.",
"In other words, a 2-grade multivector M = a 0 + a 12 e 1 e 2 consisting of a scalar plus a bivector is isomorphic to a complex number C = a 0 + a 12 i .",
"The norm of a multivector M G 2 is equal to the root of the square sum of real values of its all elements.",
"Taking a 2-grade multivector as an example, its norm is defined as: || M || = (cid:112) a 20 + a 21 + a 22 + a 212 .",
"Geometric algebra also introduces a new product geometric product denoted as n where n is the grade of multivectors, as well as three multivector involutions, space inversion , reversion and Clifford conjugation .",
"The geometric product of two 2-grade multivectors comprises of multiplications between scalars, vectors and bivectors.",
"The product of two 2-grade multivectors M a = a 0 + a 1 e 1 + a 2 e 2 + a 12 e 1 e 2 and M b = b 0 + b 1 e 1 + b 2 e 2 + b 12 e 1 e 2 from G 2 is equal to M a 2 M b = a 0 b 0 + a 1 b 1 + a 2 b 2 a 12 b 12 + ( a 0 b 1 + a 1 b 0 a 2 b 12 + a 12 b 2 ) e 1 + ( a 0 b 2 + a 1 b 12 + a 2 b 0 a 12 b 1 ) e 2 + ( a 0 b 12 + a 1 b 2 a 2 b 1 + a 12 b 0 ) e 1 e 2 , (1) The Clifford conjugation of a 2-grade multivector M is a subsequent composition of space inversion M and reversion M as M = M , where space inversion M is obtained by changing e i to e i and reversion is obtained by reversing the order of all products i.e. changing e 1 e 2 to e 2 e 1 .",
"Thus, the Clifford conjugation of an 2-grade multivector M a = a 0 + a 1 e 1 + a 2 e 2 + a 12 e 1 e 2 is computed as M a = a 0 a 1 e 1 a 2 e 2 a 12 e 1 e 2 .",
"Note that the product of a multivector M a and its conjugation M 2 is a scalar, i.e., given a 2-grade multivector M a = a 0 + a 1 e 1 + a 2 e 2 + a 12 e 1 e 2 , we have M a 2 M a = a 20 a 21 a 22 + a 212 , (2) producing a real number.",
"Let E denote the set of entities, R denote the set of relations.",
"A TKG denoted as is a collection of numerous quadruples shaped like ( s, r, o, T ) where s, o E , r R and T denotes the timestamp.",
"The timestamp T can be represented as various forms, e.g., a time interval [ t b , t e ] , a begin time [ t s , ] or an end time [ , t e ] and a time point t .",
"A time point t can be denoted as a special time interval [ t b , t e ] where t = t b = t e .",
"We extend the relation set R of a TKG to a pair of dual relation sets, R b and R e .",
"A relation r b R b is used to handle the begin of relation r , meanwhile a relation r e R e is used to handle the end of relation r .",
"By doing this, we score a fact ( s , r , o , [ t b , t e ]) as the mean value of scores of two quadruples, ( s , r b , o , t b ) and ( s , r e , o , t e ) which represent the begin and the end of this fact respectively, i.e., f ( s, r, o, [ t b , t e ]) = 12 ( f ( s, r b , o, t b ) + f ( s, r e , o, t e )) .",
"For a fact missing the begin time or the end time, e.g., ( s , r , o , [ t b , ]) or ( s , r , o , [ , t e ]), the score of this fact is equal to the score of the quadruple involving the known time, i.e., f ( s, r, o, [ t b , ]) = f ( s, r b , o, t b ) , f ( s, r, o, [ , t e ]) = f ( s, r e , o, t e ) .",
"We construct a set of time steps T for a TKG.",
"For any time t appearing in the TKG, we can find a time step T to represent t .",
"The time set T changes with time granularity of the TKG.",
"Our approach TeLM embeds a TKG in a multiple-dimensional 2-grade multivector space G 2 k where k is the dimension of embeddings, and score a quadruple with an element-wise geometric product.",
"TeLM embeds each entity, relation and time step as a k -dimensional 2-grade multivector embedding M where each component is a multivector, i.e., M = [ M 1 , . . . , M k ] , i = 1 , . . . , k, M i G 2 .",
"We can define the score function of TeLM as f ( s, r, o, t ) = (cid:104) Sc ( M s 2 M r 2 M o ) , 1 (cid:105) , (3) where is the time step corresponding to time t , M r = M r 2 M , M s , M r , M o and M denote the k -dimensional multivector embeddings of s , r , o and respectively.",
"2 denotes the element-wise geometric product between 2-grade multivector embeddings, e.g., M r 2 M = [ M r 1 2 M 1 , , M r k 2 M k ] .",
"Sc ( ) denotes the real-valued vector of the scalar component of a multivector embedding, 1 denotes a k 1 vector having all k elements equal to one, M denotes the element-wise conjugation of multivectors i.e. M = [ M 1 , . . . , M k ] .",
"and (cid:104) a, b (cid:105) := (cid:80) k a k b k is the dot product.",
"In our approach, the total number of parameters increases linearly with embedding dimension k , i.e., the space complexity of a TeLM model is O ( k ) .",
"Since the score is computed with an asymmetric quadranomial geometric product between k -dimensional multivector embeddings, the time complexity is also equal to O ( k ) , which are the same as some common KGE models, e.g., TransE and DistMult.",
"Using full multiclass log-softmax loss function and N3 regularization has been proven to be helpful in boosting the performances of tensor decomposition-based (T)KGE models (Lacroix et al., 2018; Xu et al., 2020b; Lacroix et al., 2020; Jain et al., 2020).",
"In this work, we follow such setting for TeLM and utilize the reciprocal learning for simplifying the training process.",
"For each relation r , we create an inverse relation r 1 and create a quadruple ( o, r 1 , s, t ) for each training quadruple ( s, r, o, t ) .",
"At the evaluation phase, queries of the form (? , r, o, t ) are answered as ( o, r 1 , ? , t ) .",
"By doing this, the multiclass log-loss of a training quadruple = ( s, r, o, t ) can be defined as follows, L = log ( exp ( f ( s, r, o, t )) (cid:80) s (cid:48) E exp ( f ( s (cid:48) , r, o, t ))) log ( exp ( f ( o, r 1 , s, t )) (cid:80) o (cid:48) E exp ( f ( o (cid:48) , r 1 , s, t ))) (4) + (cid:88) k i =1 ( || M s i || 33 + || M r i || 33 + || M o i || 33 ) , where denotes the N3 regularization weight.",
"A common approach to leverage the temporal aspect of temporal graphs is to use time as a regularizer to impose a smoothness constraint on time embeddings.",
"RTGE (Xu et al., 2020c) and TComplEx (Lacroix et al., 2020) introduce the temporal smoothness between hyperplanes and embeddings of adjacent time steps, respectively, based on the assumption that the neighboring time steps should have close representations.",
"The smoothing temporal regularizer is defined as, LT = n 1 (cid:88) i =1 || M i +1 M i || pp , (5) where n is the number of time steps and p = 3 in this work since we use N3 regularization.",
"Apart from the basic temporal smoothness, various temporal regularization methods are used for learning temporal embeddings.",
"Singer et al. (2019) add a rotation projection to align the neighboring temporal embeddings.",
"The loss of such projective temporal regularization can be defined as, LT = n 1 (cid:88) i =1 || M i +1 M w 2 M i || pp , (6) where M w is the rotation embedding.",
"Yu et al. (2016) propose an autoregressive temporal regularizer based on the assumption that the change of temporal embeddings fits an AR model.",
"This autoregressive temporal regularizer is defined as, LT = n m (cid:88) i =1 || M i + m m 1 (cid:88) j =0 M j 2 M i + j || pp , (7) where m = 3 is the order of the AR model used in our work, and M j denote the weight of the embeddings of previous time steps which are learned during the training process.",
"In this work, we develop a novel linear temporal regularizer by adding a bias component between the neighboring temporal embeddings, which can be defined as, LT = n 1 (cid:88) i =1 || M i +1 M i M b || pp .",
"where M b denotes the bias embedding which are randomly initialized and then learned from the training process.",
"This linear regularizer promotes that the difference between embeddings of two adjacent time steps is smaller than the difference between embeddings of two distant time steps, i.e., || M i + m M i || > || M i +1 M i || when m (cid:29) 1 .",
"This formulation can be helpful for effectively clustering and ordering time embeddings M i .",
"The total loss L b of a training batch b is the sum of the quadruple loss and the temporal regularization term, i.e., L b = 1 b (cid:88) b L + TLT .",
"where T denotes the coefficient of the temporal regularizer.",
"In this work, we use the linear temporal regularizer for TeLMand compare its performance with other three temporal regularizers.",
"To compare our model with baselines, we used the following three datasets, namely ICEWS14, ICEWS05-15, and YAGO11k, released by (Das-gupta et al., 2018) and (Garca-Durn et al., 2018).",
"ICEWS14 and ICEWS05-15 are the two most common TKG benchmarks extracted from the large-scale event-based database, Integrated Crisis Early Warning System (ICEWS) (Lautenschlager Dataset #Entities #Relations Period(year) #Train #Valid #Test ICEWS14 6,869 230 201 72,826 8,941 8,963 ICEWS05-15 10,094 251 2005-2015 368,962 46,275 46,092 YAGO11k 10,623 10 -431-2844 16,406 2,050 2,051 Wikidata12k 12,554 24 19-2020 32,497 4,062 4,062 Table 1: Statistics of datasets.",
"et al., 2015).",
"ICEWS is a repository that contains political events with specific time annotations, e.g. ( Barack Obama , Make a visit , Ukraine , 2014-07-08 ).",
"It is noteworthy that time annotations in ICEWS are all time points.",
"ICEWS14 contains events in 2014, and ICEWS05-15 contains events occurring between 2005-2015.",
"These two datasets are filtered by only selecting the most frequently occurring entities in the graph.",
"YAGO3 (Mahdisoltani et al., 2013) and Wikidata (Erxleben et al., 2014) are two temporal KGs where time annotations are represented in various forms, i.e., time points like [ 2003-01-01 , 2003-01-01 ], beginning or end time like [ 2003 , ## ], and time intervals like [ 2003 , 2005 ].",
"YAGO15k, Wiki-data11k (Garca-Durn et al., 2018), YAGO11k and Wikidata12k (Dasgupta et al., 2018) are subsets of YAGO3 and Wikidata.",
"In YAGO15k and Wikidata11k, time information is represented as either begin time or end time of each fact and some facts do not include time annotations.",
"In this paper, we focus on performing link prediction on time-aware facts where time annotations are represented as various forms.",
"Based on this consideration, we use YAGO11k and Wikidata12k as datasets, where all of facts involve time annotations.",
"In the previous work (Garca-Durn et al., 2018; Goel et al., 2020; Lacroix et al., 2020), the time granularity of ICEWS14 and ICEWS05-15 was set as 1 day.",
"For YAGO11k and Wikidata12k, Dasgupta et.al (2018) and Xu et.al (2019) dropped the month and day information.",
"They took care of the unbalance that might occur in terms of number of facts in a particular interval by clubbing neighboring years which are less frequently mentioned into the same time step and applying a minimum threshold of 300 facts per interval during construction.",
"To illustrate, in Wikidata12k, there were time steps like [ 1596-1777 ], [ 1791-1815 ] with a large span as the facts occurring on those years were relatively less in KG.",
"The years like 2013 being highly frequent were self-contained.",
"This setting was used to alleviate the effect of the long-tail property of time data in YAGO11k and Wikidata12k.",
"As shown in Figure 1, the time distribution of facts in ICEWS14 is relatively uniform, while the frequency distribution of time data in YAGO11k has a long tail.",
"In this work, we study the effect of time granularity on TKG completion.",
"For ICEWS datasets, we test our model with different time units, denoted as u , in a range of {1, 2, 3, 7, 14, 30, 90 and 365} days.",
"Dasgupta et al. (2018) and Xu et al. (2019) applied a minimum threshold of 300 triples per interval during construction for YAGO11k and Wikidata12k.",
"We follow their time-division approaches for these two datasets and test different minimum thresholds, denoted as tr , amongst {1, 10, 100, 1000, 10000} for grouping years into different time steps.",
"The change of time granularity will reconstruct the set of time steps T .",
"To illustrate, the total number of time steps in ICEWS14 is 365 with u = 1 .",
"When the time unit u changes from 1 to 2, the set of time steps T will be reconstructed and include 188 different time steps.",
"In YAGO11k, there are totally 388 different time steps when tr = 1 .",
"Years like -453 , 100 and 2008 are taken as independent time steps.",
"When tr for YAGO11k rises to 100, the number of time steps drops to 118 and years between -431 and 100 are clubbed into a same time step.",
"We evaluate our models on link prediction over the above-mentioned TKG benchmarks.",
"To perform ICEWS14 ICEWS05-15 Metrics MRR Hits@1 Hits@3 Hits@10 MRR Hits@1 Hits@3 Hits@10 ComplEx-N3 (cid:5) .47 .35 .54 .71 .49 .37 .55 .73 TTransE* .255 .047 -.601 .271 .084 -.616",
"a time-aware link prediction query ( s, r, ? , T ) , we first generate the candidate list C = { ( s, r, o (cid:48) , T ) : o (cid:48) E} .",
"Following the time-wise filtered setting used in most previous TKGE-related work, e.g., TComplEx (Lacroix et al., 2020), we then remove the candidate quadruples appearing in the train train , valid train and test set train from the candidate list.",
"The filtered candidate list is denoted as C = { : C , / train valid test } .",
"We get the rank of test quadruple ( s, r, o, T ) among the candidate quadruples C by sorting their scores.",
"We use Mean Reciprocal Rank (MRR) and Hits@N as evaluation metrics.",
"The Mean Reciprocal Rank (MRR) is the average of the reciprocal values of all computed ranks.",
"The percentage of testing quadruples which are ranked lower than N is considered as Hits@N.",
"We compare our models with the state-of-the-art KGE model, ComplEx-N3 (Lacroix et al., 2018) and several existing TKGE approaches including TTransE (Leblay and Chekol, 2018), HyTE (Dasgupta et al., 2018), TA-TransE, TA-DistMult (Garca-Durn et al., 2018), ATiSE (Xu et al., 2019), TeRo (Xu et al., 2020a), DE-SimplE (Goel et al., 2020), TIME-PLEX(base) (Jain et al., 2020) and TComplEx (Lacroix et al., 2020).",
"We do not use the complete TIME-PLEX model and the TNTCom-plEx model as baselines since the former incorporates additional temporal constraints for some specific relations and the latter is designed for modelling a KG where some facts involve time information and others do not.",
"Among the existing TKGE approaches, TComplEx achieves state-of-the-art results on TKG completion.",
"We implement our proposed model TeLM in Py-Torch.",
"We use the Adagrad optimizer with a learning rate of 0.1 to train both models.",
"The batch size b is fixed as 1000.",
"The regularization weights and T are tuned in a range of {0, 0.001, 0.0025, 0.005, 0.0075, 0.01, . . . , 0.1}.",
"To avoid too much memory consumption, we follow the setting in (Lacroix et al., 2020) to make the maximum embedding no more than 2000.",
"The above experimental setup is also used for evaluating TComplEx on YAGO11k and Wikidata12k.",
"Notably, the time granularity parameters u and tr are also regraded as hyperparameters for TeLM as mentioned in the previous section.",
"The optimal hyperparameters for TeLM are as follows: = 0 .",
"0075 , T = 0 .",
"01 , u = 1 on ICEWS14; = 0 .",
"0025 , T = 0 .",
"1 , u = 1 on ICEWS05-15; = 0 .",
"025 , T = 0 .",
"001 , tr = 100 on YAGO11k; = 0 .",
"025 , T = 0 .",
"0025 , tr = 1 on Wikidata12k.",
"The optimal embedding dimension is k = 2000 in all cases.",
"The training processes of a TeLM model with k = 2000 on ICEWS14, YAGO11K and Wikidata12k all cost less than half an hour with a GeForce RTX 2080 GPU.",
"On ICEWS05-15, It takes about 2 hours to train a 2000-dimensional TeLM model.",
"datasets.",
"As shown in Table 2, TeLM surpasses all baseline models on ICEWS datasets regarding all metrics.",
"Compared to TComplEx, TeLM obtains the improvements of 1.5 MRR points on ICEWS14 and 1.8 MRR points on ICEWS05-15.",
"TA-TransE is not included in Table 3 since there is no literature reporting the results of TA-TransE on YAGO11k and Wikidata12k and the performances of TA-TransE are worse than most baseline models on other TKG datasets.",
"The results of DE-SimplE on YAGO11k and Wikidata12k can not be obtained since DE-SimplE mainly focuses on event-based datasets and cannot model time intervals or time annotations missing moth and day information which are common in YAGO and Wikidata.",
"On YAGO11k, TeLM outperforms all baseline models other than TA-TransE and DE-SimplE regarding MRR, Hits@1 and Hits@10, though performs slightly worse than TeRo on Hits@3.",
"Additionally, TeLM also achieves the state-of-the-art results except the the Hits@1 of TComplEx is 0.1 point higher than TeLM.",
"We compare the performances of the TeLM model trained with various temporal regularizers mentioned before, e.g., the smoothing temporal regularizer, the projective temporal regularizer, the 3-order autoregressive temporal regularizer, and our proposed linear temporal regularizer.",
"As shown in Figure 2, the TeLM model trained with the linear temporal regularizer outperforms the TeLM model trained with other temporal regularizer on ICEWS14.",
"Compared to the smoothing temporal regularizer, the linear temporal regularizer improves MRR by 0.2 point and Hits@1 by 0.3 point.",
"And the linear temporal regularizer is also less sensitive to the temporal regularization weight T amongst the range of {0.001, ..., 0.1} since its bias component is learned during the training process and thus can be partly adaptive to different T .",
"In Figure 3, In we show 2-d PCA projections of the 2000-dimensional time embeddings of TeLM models trained with/without a linear temporal regularizer.",
"Adjacent time embeddings of TeLM trained without the temporal regularization naturally come together.",
"However, the time embeddings representing time points in different months are not well divided.",
"By contrast, time embeddings of TeLM trained with the linear temporal regularizer are forming good clusters in chronological order.",
"Overall, the linear temporal regularizer provides good geometric meanings of time embeddings by effectively retaining the time sequence information in temporal KGs and thus improves the performances Figure 3: The figure illustrates 2-d PCA projection of the 2000 dimensional time embeddings which are obtained after training TeLM on ICEWS14 with a smoothing temporal regularizer and a linear temporal regularizer.",
"In this work, we analyze the effect of the change of the time granularity on the performance of our model.",
"As mentioned in the previous section, we adopt two different time-division approaches for event-based datasets, i.e., ICEWS datasets, and time-wise KGs involving time intervals, i.e., YAGO11k as well as Wikidata12k.",
"As shown in Figure",
"4(a), on ICEWS14 where time distribution of facts is relatively uniform, the performance of TeLM decreases with the time unit u increasing, since representing time with a small time granularity can provide more abundant time information.",
"On the other hand, Figure",
"4(b) illustrates that using the smallest time granularity is non-optimal for YAGO11k due to the long-tail property of time data.",
"An appropriate minimum threshold used for generating time steps, e.g., tr = 100 , can improve the link prediction results of TeLM by alleviating the effect of the long-tail property of time data and decrease the memory usage with fewer time steps.",
"Meanwhile, using overly coarse-grained time units always leads to low performances since the time information is not fully expressed in these cases.",
"Figure",
"4(c) and",
"(d) show that the performances on ICEWS14 and YAGO11k of TGe-omE2 improve with the increasing of the embedding dimension in a range of k = { 20 , 50 , 100 , 200 , 500 , 1000 , 2000 } .",
"TeLM with k = 500 has fewer adjustable parameters than TComplEx with k = 1740 used in (Lacroix et al., 2020) but performs closely (0.612 vs 0.61 on MRR).",
"It will still be interesting to explore the performances of TeLM models with higher-dimensional embeddings, e.g., Ebisu et al. (2018) use 10000-dimensional embeddings for TorusE, although it would bring more memory pressure.",
"We propose a new time-aware approach for TKG completion, TeLM, which performs 4th-order tensor factorization of a temporal knowledge graph using multivector embeddings for knowledge graph representation and a linear temporal regularizer for learning time embeddings.",
"Compared to real-valued and complex-valued embeddings, multivector embeddings provides better generalization capacity and richer expressiveness with higher degree of freedom for TKGE.",
"Moreover, the linear temporal regularizer provides better geometric meanings for time embeddings and improves the performances of TeLM compared to the temporal smoothness.",
"Additionally, two time division methods are used for different types of TKG datasets to study the effect of the time granularity on TKG completion.",
"Our proposed models trained with the linear temporal regularizer achieve the state-of-the-art results on time-wise link prediction over four wellknown datasets involving various forms of time information, e.g., time points, begin or end time, and time intervals.",
"Experimental results also show that choosing a reasonable time division method with an appropriate time granularity is helpful for TKG completion.",
"This work is supported by the EC Horizon 2020 grant LAMBDA (GA no. 809965), the CLEOPA-TRA project (GA no. 812997) and the China Scholarship Council (CSC)."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"other",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"other"
] |
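The linear temporal regularizer described in the sentences above pulls the difference between adjacent time embeddings toward a shared, learned bias vector, rather than toward zero as in temporal smoothing. Below is a minimal PyTorch sketch of that idea, not the authors' code: the class name, the norm order `p`, and the normalization by the number of gaps are assumptions.

```python
import torch
import torch.nn as nn

class LinearTemporalRegularizer(nn.Module):
    """Penalize deviations of adjacent time-embedding differences from a
    shared, learned drift vector (the 'bias component' in the text)."""

    def __init__(self, dim: int, p: int = 2):  # p = 2 is an assumed norm order
        super().__init__()
        self.bias = nn.Parameter(torch.zeros(dim))  # learned jointly with the model
        self.p = p

    def forward(self, time_emb: torch.Tensor) -> torch.Tensor:
        # time_emb: (n_timesteps, dim), rows ordered chronologically
        residual = time_emb[1:] - time_emb[:-1] - self.bias
        return residual.abs().pow(self.p).sum() / (time_emb.shape[0] - 1)

# Hypothetical usage: loss = task_loss + lambda_T * reg(time_embeddings)
```

Because the bias is learned, the penalty can partly absorb a change in the regularization weight, which is consistent with the reduced sensitivity to the weight T reported above.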
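The Figure 3 analysis, 2-d PCA projections of the trained time embeddings colored chronologically, can be reproduced with standard tooling. A sketch follows; scikit-learn and matplotlib are assumed as the plotting stack, and the function name is hypothetical.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

def plot_time_embeddings(time_emb: np.ndarray, title: str) -> None:
    # time_emb: (n_timesteps, dim) trained time embeddings, rows ordered
    # chronologically (e.g., 365 x 2000 for daily ICEWS14 embeddings)
    proj = PCA(n_components=2).fit_transform(time_emb)
    plt.scatter(proj[:, 0], proj[:, 1], c=np.arange(len(proj)), cmap="viridis")
    plt.colorbar(label="time step (chronological)")
    plt.title(title)
    plt.show()
```

Chronological clustering in this projection is what the text describes for the model trained with the linear temporal regularizer.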
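The minimum-threshold time-step generation used for the interval-valued datasets (e.g., tr = 100) admits one natural reading: walk the sorted timestamps and open a new time step once the current step covers at least tr facts, so rare timestamps in the long tail share a step with their neighbors. This is a sketch under that assumption only; the function name and exact grouping rule are hypothetical.

```python
from collections import Counter

def make_time_steps(timestamps, thr=100):
    """Map each timestamp (e.g., a year) to a time-step id such that every
    step covers at least `thr` facts; long-tail timestamps get merged."""
    counts = sorted(Counter(timestamps).items())
    step_of, step, covered = {}, 0, 0
    for ts, freq in counts:
        step_of[ts] = step
        covered += freq
        if covered >= thr:  # current step is 'full': open a new one
            step, covered = step + 1, 0
    return step_of
```

Fewer, denser time steps reduce both the long-tail sparsity and the memory footprint of the time-embedding table, matching the effect reported above.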