sentences | labels
sequence | sequence
---|---
[
"Distributed representations of sentences have become ubiquitous in natural language processing tasks.",
"In this paper, we consider a continual learning scenario for sentence representations: Given a sequence of corpora, we aim to optimize the sentence encoder with respect to the new corpus while maintaining its accuracy on the old corpora.",
"To address this problem, we propose to initialize sentence encoders with the help of corpus-independent features, and then sequentially update sentence encoders using Boolean operations of conceptor matrices to learn corpus-dependent features.",
"We evaluate our approach on semantic textual similarity tasks and show that our proposed sentence encoder can continually learn features from new corpora while retaining its competence on previously encountered corpora.",
"Distributed representations of sentences are essential for a wide variety of natural language processing (NLP) tasks.",
"Although recently proposed sentence encoders have achieved remarkable results (e.g., (Yin and Schutze, 2015; Arora et al., 2017; Cer et al., 2018; Pagliardini et al., 2018)), most, if not all, of them are trained on a priori fixed corpora.",
"However, in open-domain NLP systems such as conversational agents, oftentimes we are facing a dynamic environment, where training data are accumulated sequentially over time and the distributions of training data vary with respect to external input (Lee, 2017; Mathur and Singh, 2018).",
"To effectively use sentence encoders in such systems, we propose to consider the following continual sentence representation learning task : Given a sequence of corpora, we aim to train sentence encoders such that they can continually learn features from new corpora while retaining strong performance on previously encountered corpora.",
"Toward addressing the continual sentence representation learning task, we propose a simple sentence encoder that is based on the summation and linear transform of a sequence of word vectors aided by matrix conceptors.",
"Conceptors have their origin in reservoir computing (Jaeger, 2014) and recently have been used to perform continual learning in deep neural networks (He and Jaeger, 2018).",
"Here we employ Boolean operations of conceptor matrices to update sentence encoders over time to meet the following desiderata:",
"1. Zero-shot learning .",
"The initialized sentence encoder (no training corpus used) can effectively produce sentence embeddings.",
"2. Resistant to catastrophic forgetting .",
"When the sentence encoder is adapted on a new training corpus, it retains strong performances on old ones.",
"The rest of the paper is organized as follows.",
"We first briefly review a family of linear sentence encoders.",
"Then we explain how to build upon such sentence encoders for continual sentence representation learning tasks, which lead to our proposed algorithm.",
"Finally, we demonstrate the effectiveness of the proposed method using semantic textual similarity tasks.",
"1 Notation We assume each word w from a vocabulary set V has a real-valued word vector v w R n .",
"Let p ( w ) be the monogram probability of a word w .",
"A corpus D is a collection of sentences, where each sentence s D is a multiset of words (word order is ignored here).",
"For a collection of vectors Y = { y i } i I , where y i R l 1 Our codes are available on GitHub https:// github.com/liutianlin0121/contSentEmbed for i in an index set I with cardinality | I | , we let [ y i ] i I R l | I | be a matrix whose columns are vectors y 1 , , y | I | .",
"An identity matrix is denoted by I .",
"We briefly overview linear sentence encoders that are based on linear algebraic operations over a sequence of word vectors.",
"Among different linear sentence encoders, the smoothed inverse frequency (SIF) approach (Arora et al., 2017) is a prominent example it outperforms many neural-network based sentence encoders on a battery of NLP tasks (Arora et al., 2017).",
"Derived from a generative model for sentences, the SIF encoder (presented in Algorithm 1) transforms a sequence of word vectors into a sentence vector with three steps.",
"First, for each sentence in the training corpus, SIF computes a weighted average of word vectors (line 1-3 of Algorithm 1); next, it estimates a common discourse direc-tion of the training corpus (line 4 of Algorithm 1); thirdly, for each sentence in the testing corpus, it calculates the weighted average of the word vectors and projects the averaged result away from the learned common discourse direction (line 5-8 of Algorithm 1).",
"Note that this 3-step paradigm is slightly more general than the original one presented in (Arora et al., 2017), where the training and the testing corpus is assumed to be the same.",
"Building upon SIF, recent studies have proposed further improved sentence encoders (Kho-dak et al., 2018; Pagliardini et al., 2018; Yang et al., 2018).",
"These algorithms roughly share the core procedures of SIF, albeit using more refined methods (e.g., softly remove more than one common discourse direction).",
"In this section, we consider how to design a linear sentence encoder for continual sentence representation learning.",
"We observe that common discourse directions used by SIF-like encoders are estimated from the training corpus.",
"However, incrementally estimating common discourse directions in continual sentence representation learning tasks might not be optimal.",
"For example, consider that we are sequentially given training corpora of tweets and news article .",
"When the first tweets corpus is presented, we can train a SIF sentence encoder using tweets .",
"When the second news article corpus is given, however, we will face a problem on how to exploit the newly given corpus for improving the trained sentence encoder.",
"A straightforward solution is to first combine the tweets and news article corpora and then train a new encoder from scratch using the combined corpus.",
"However, this paradigm is not efficient or effective.",
"It is not efficient in the sense that we will need to re-train the encoder from scratch every time a new corpus is added.",
"Furthermore, it is not effective in the sense that the common direction estimated from scratch reflects a compromise between tweets and news articles, which might not be optimal for either of the stand-alone corpus.",
"Indeed, it is possible that larger corpora will swamp smaller ones.",
"To make the common discourse learned from one corpus more generalizable to another, we propose to use the conceptor matrix (Jaeger, 2017) to characterize and update the common discourse features in a sequence of training corpora.",
"In this section, we briefly introduce matrix conceptors, drawing heavily on (Jaeger, 2017; He and Jaeger, 2018; Liu et al., 2019).",
"Consider a set of vectors { x 1 , , x n } , x i RN for all i { 1 , , n } .",
"A conceptor matrix is a regularized identity map that minimizes 1 n n (cid:88) i =1 (cid:107) x i Cx i (cid:107) 22 + 2 (cid:107) C (cid:107) 2 F .",
"where (cid:107) (cid:107) F is the Frobenius norm and 2 is a scalar parameter called aperture .",
"It can be shown that C has a closed form solution: C = 1 nXX (cid:62) ( 1 nXX (cid:62) + 2 I ) 1 , (2) where X = [ x i ] i { 1 , ,n } is a data collection matrix whose columns are vectors from { x 1 , , x n } .",
"In intuitive terms, C is a soft projection matrix on the linear subspace where the typical components of x i samples lie.",
"For convenience in notation, we may write C ( X, ) to stress the dependence on X and .",
"Conceptors are subject to most laws of Boolean logic such as NOT , AND and OR .",
"For two conceptors C and B , we define the following operations: C := I C, (3) C B :=( C 1 + B 1 I ) 1 (4) C B := ( C B ) (5) Among these Boolean operations, the OR operation is particularly relevant for our continual sentence representation learning task.",
"It can be shown that C B is the conceptor computed from the union of the two sets of sample points from which C and B are computed.",
"Note that, however, to calculate C B , we only need to know two matrices C and B and do not have to access to the two sets of sample points from which C and B are computed.",
"We now show how to sequentially characterize and update the common discourse of corpora using the Boolean operation of conceptors.",
"Suppose that we are sequentially given M training corpora D 1 , , DM , presented one after another.",
"Without using any training corpus, we first initialize a conceptor which characterizes the corpus-independent common discourse features.",
"More concretely, we compute C 0 := C ([ v w ] w Z , ) , where [ v w ] w Z is a matrix of column-wisely stacked word vectors of words from a stop word list Z and is a hyper-parameter.",
"After initialization, for each new training corpus D i ( i = 1 , , M ) coming in, we compute a new conceptor C temp : = C ([ q s ] s D i , ) to characterize the common discourse features of corpus D i , where those q s are defined in the SIF Algorithm",
"1. We can then use Boolean operations of conceptors to compute C i := C temp C i 1 , which characterizes common discourse features from the new corpus as well as the old corpora.",
"After all M corpora are presented, we follow the SIF paradigm and use CM to remove common discourse features from (potentially unseen) sentences.",
"The above outlined conceptor-aided (CA) continual sentence representation learning method is presented in Algorithm",
"2. Algorithm 2: CA sentence encoder.",
"GA simple modification of Algorithm 2 yields a zero-shot sentence encoder that requires only pre-trained word embeddings and no training corpus: we can simply skip those corpus-dependent steps (line 2-8) and use C 0 in place of CM in line 11 in Algorithm 2 to embed sentences.",
"This method will be referred to as zero-shot CA. 4 Experiment We evaluated our approach for continual sentence representation learning using semantic textual similarity (STS) datasets (Agirre et al., 2012, 2013, 2014, 2015, 2016).",
"The evaluation criterion for such datasets is the Pearson correlation coefficient (PCC) between the predicted sentence similarities and the ground-truth sentence similarities.",
"We split these datasets into five corpora by their genre: news, captions, wordnet, forums, tweets (for details see appendix).",
"Throughout this section, we use publicly available 300-1 2 3 4 5 first n training corpora used 63 .",
"dimensional GloVe vectors (trained on the 840 billion token Common Crawl) (Pennington et al., 2014).",
"Additional experiments with Word2Vec (Mikolov et al., 2013), Fasttext (Bojanowski et al., 2017), Paragram-SL-999 (Wieting et al., 2015) are in the appendix.",
"We use a standard continual learning experiment setup (cf.",
"(Zenke et al., 2017, section 5.1)) as follows.",
"We sequentially present the five training datasets in the order 2 of news, captions, wordnet, forums, and tweets, to train sentence encoders.",
"Whenever a new training corpus is presented, we train a SIF encoder from scratch 3 (by combining all available training corpora which have been already presented) and then test it on each corpus.",
"At the same time, we incrementally adapt a CA encoder 4 using the newly presented corpus and test it on each corpus.",
"The lines of each panel of Figure 1 show the test results of SIF and CA on each testing corpus (specified as the panel subtitle) as a function of the number of training corpora used (the first n corpora of news, captions, wordnet, forums, and tweets for this experiment).",
"To give a concrete example, consider the blue line in the first 2 The order can be arbitrary.",
"Here we ordered the corpora from the one with the largest size (news) to the smallest size (tweets).",
"The results from reversely ordered corpora are reported in the appendix.",
"3 We use a = 0 .",
"001 as in (Arora et al., 2017).",
"The word frequencies are available at the GitHub repository of SIF.",
"4 We used hyper-parameter = 1 .",
"Other parameters are set to be the same as SIF.",
"panel of Figure",
"1. This line shows the test PCC scores ( y -axis) of SIF encoder on the news corpus when the number of training corpora increases ( x -axis).",
"Specifically, the left-most blue dot indicates the test result of SIF encoder on news corpus when trained on news corpus itself (that is, the first training corpus is used); the second point indicates the test results of SIF encoder on news corpus when trained on news and captions corpora (i.e., the first two training corpora are used); the third point indicates the test results of SIF encoder on news corpus when trained on news, captions, and wordnet corpora (that is, the first three training corpora are used), so on and so forth.",
"The dash-lines in panels show the results of a corpus-specialized SIF, which is trained and tested on the same corpus, i.e., as done in (Arora et al., 2017, section 4.1).",
"We see that the PCC results of CA are better and more forgetting-resistant than train-from-scratch SIF throughout the time course where more training data are incorporated.",
"Consider, for example, the test result of news corpus (first panel) again.",
"As more and more training corpora are used, the performance of train-from-scratch SIF drops with a noticeable slope; by contrast, the performance CA drops only slightly.",
"As remarked in the section 3.2, with a simple modification of CA, we can perform zero-shot sentence representation learning without using any training corpus.",
"The zero-shot learning results are presented in Table 1, together with the time-course averaged results of CA and train-from-scratch SIF (i.e., the averaged values of those CA or SIF scores in each panel of Figure 1).",
"We see that the averaged results of our CA method performs the best among these three methods.",
"Somewhat surprisingly, the results yielded by zero-shot CA are better than the averaged results of train-from-scratch SIF in most of the cases.",
"We defer additional experiments to the appendix, where we compared CA against more baseline methods and use different word vectors other than GloVe to carry out the experiments.",
"In this paper, we formulated a continual sentence representation learning task: Given a consecutive sequence of corpora presented in a time-course manner, how can we extract useful sentence-level features from new corpora while retaining those from previously seen corpora?",
"We identified that the existing linear sentence encoders usually fall short at solving this task as they leverage on common discourse statistics estimated based on a priori fixed corpora.",
"We proposed two sentence encoders (CA encoder and zero-shot CA encoder) and demonstrate their the effectiveness at the continual sentence representation learning task using STS datasets.",
"As the first paper considering continual sentence representation learning task, this work has been limited in a few ways it remains for future work to address these limitations.",
"First, it is worthwhile to incorporate more benchmarks such as GLUE (Wang et al., 2019) and SentEval (Con-neau and Kiela, 2018) into the continual sentence representation task.",
"Second, this work only considers the case of linear sentence encoder, but future research can attempt to devise (potentially more powerful) non-linear sentence encoders to address the same task.",
"Thirdly, the proposed CA encoder operates at a corpus level, which might be a limitation if boundaries of training corpora are ill-defined.",
"As a future direction, we expect to lift this assumption, for example, by updating the common direction statistics at a sentence level using Autoconceptors (Jaeger, 2014, section 3.14).",
"Finally, the continual learning based sentence encoders should be applied to downstream applications in areas such as open domain NLP systems.",
"The authors thank anonymous reviewers for their helpful feedback.",
"This work was partially supported by Joao Sedoc's Microsoft Research Dissertation Grant."
] | [
"abstain",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"method",
"objective",
"objective",
"objective",
"objective",
"abstain",
"objective",
"objective",
"objective",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"objective",
"objective",
"objective",
"abstain",
"method",
"abstain",
"method",
"abstain",
"other",
"other"
] |
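The conceptor algebra quoted in the sentences above (the closed form of Eq. (2), the Boolean operations of Eqs. (3)-(5), and the sequential update C_i := C_temp ∨ C_{i−1}) can be sketched in a few lines of NumPy. This is an illustrative reconstruction under stated assumptions, not the authors' released code; the function names and the random stand-in data are ours:

```python
import numpy as np

def conceptor(X, alpha):
    """Closed-form conceptor (Eq. 2): C = R (R + alpha^-2 I)^-1 with R = X X^T / n.

    X is N x n; its columns are the n sample vectors."""
    N, n = X.shape
    R = X @ X.T / n
    return R @ np.linalg.inv(R + alpha ** -2 * np.eye(N))

def NOT(C):
    return np.eye(C.shape[0]) - C                      # Eq. 3

def AND(C, B):
    # Eq. 4; pinv tolerates singular conceptors
    return np.linalg.inv(np.linalg.pinv(C) + np.linalg.pinv(B) - np.eye(C.shape[0]))

def OR(C, B):
    return NOT(AND(NOT(C), NOT(B)))                    # Eq. 5

# Sequential update over a stream of "corpora" (random matrices standing in
# for the stacked stop-word vectors and the [q_s] matrices of Algorithm 1):
rng = np.random.default_rng(0)
alpha = 1.0
C = conceptor(rng.standard_normal((4, 40)), alpha)     # C_0 (corpus-independent)
for _ in range(3):
    C_temp = conceptor(rng.standard_normal((4, 40)), alpha)
    C = OR(C_temp, C)                                  # C_i := C_temp OR C_{i-1}
```

The key property stated in the text — that C ∨ B equals the conceptor computed from the pooled sample points behind C and B — can be checked numerically: OR(C1, C2) coincides with the conceptor built from the summed correlation matrices of the two sample sets.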
[
"Task variance regularization, which can be used to improve the generalization of Multitask Learning (MTL) models, remains unexplored in multi-task text classification.",
"Accordingly, to fill this gap, this paper investigates how the task might be effectively regularized, and consequently proposes a multi-task learning method based on adversarial multiarmed bandit.",
"The proposed method, named BanditMTL, regularizes the task variance by means of a mirror gradient ascent-descent algorithm.",
"Adopting BanditMTL in the multitask text classification context is found to achieve state-of-the-art performance.",
"The results of extensive experiments back up our theoretical analysis and validate the superiority of our proposals.",
"Multi-task Learning (MTL), which involves the simultaneous learning of multiple tasks, can achieve better performance than learning each task independently (Caruana, 1993; Ando and Zhang, 2005).",
"It has achieved great success in various applications, ranging from summary quality estimation (Kriz et al., 2020) to text classification (Liu et al., 2017).",
"In the multi-task text classification context, MTL simultaneously learns the tasks by minimizing their empirical losses together; for example, by minimizing the mean of the empirical losses for the included tasks.",
"However, it is common for these tasks to be competing.",
"Minimizing the losses of some tasks increases the losses of others, which accordingly increases the task variance (variance between the task-specific loss).",
"Large task variance can lead to over-fitting in some tasks and under-fitting in others, which degenerates the generalization performance of an MTL model.",
"To illustrate this issue, *Corresponding author.",
"it is instructive to consider a case of two-task learning, where task 1 and task 2 are conflicting binary classification tasks.",
"When the task variance is uncontrolled, it is possible that the empirical loss of task 1 will converge to 0 , while the empirical loss of task 2 will converge to 0 .",
"5 .",
"In such a case, although the mean of the empirical losses is decreasing, task 1 overfits and task 2 underfits, which leads to poor generalization performance.",
"To address the problem caused by uncontrolled task variance, it is necessary to implement task variance regularization, which regularizes the variance between the task-specific losses during training.",
"However, existing deep MTL methods, including both adaptive weighting sum methods (Kendall et al., 2018; Chen et al., 2018; Liu et al., 2017) and multi-objective optimization-based methods (Sener and Koltun, 2018; Mao et al., 2020b), ignore the task variance.",
"Overlooking task variance degenerates an MTL model's generalization ability.",
"To fill this gap and further improve the generalization ability of MTL models, this paper proposes a novel MTL method, dubbed BanditMTL, which jointly minimizes the empirical losses and regularizes the task variance.",
"BanditMTL is proposed based on linear adversarial multi-armed bandit and implemented with a mirror gradient ascent-descent algorithm.",
"Our proposed approach can improve the performance of multi-task text classification.",
"Moreover, to verify our theoretical analysis and validate the superiority of BanditMTL in the text classification context, we conduct experiments on two classical text classification problems: sentiment analysis (on reviews) and topic classification (on news).",
"The results demonstrate that applying variance regularization can improve the performance of a MTL model; moreover, BanditMTL is found to outperform several state-of-the-art multitask text classification methods.",
"Multi-task Learning methods jointly minimize task-specific empirical loss based on multi-objective optimization (Sener and Koltun, 2018; Lin et al., 2019; Mao et al., 2020a) or optimizing the weighted sum of the task-specific loss (Liu et al., 2017; Kendall et al., 2018; Chen et al., 2018).",
"The multi-objective optimization based MTL can converge to an arbitrary Pareto stationary point, the task variance of which is also arbitrary.",
"While the weighted sum methods focus on minimizing the weighted average of the task-specific empirical loss, they do not consider the task variance.",
"To fill the gap in existing methods, this paper proposes to regularize the task variance, which will significantly impact the generalization performance of MTL models.",
"Variance-based regularization has been used previously in Single-task Learning to balance the tradeoff between approximation and estimation error (Bartlett et al., 2006; Koltchinskii et al., 2006; Namkoong and Duchi, 2017).",
"In the Single-task Learning setting, the goal of variance-based regularization is to regularize the variance between the loss of training samples (Namkoong and Duchi, 2016; Duchi and Namkoong, 2019).",
"While these variance-based regularization methods can improve the generalization ability of Single-task Learning models, they do not fit the Multi-task Learning setting.",
"This paper thus first proposes a novel variance-based regularization method for Multi-task Learning to improve MTL models' generalization ability by regularizing the between-task loss variance.",
"Consider a multi-task learning problem with T tasks over an input space X and a collection of task spaces {Y t } Tt =1 .",
"For each task, we have a set of i.i.d. training samples D t = ( X t , Y t ) and ( X t , Y t ) = { x ti , y ti } n t i =1 , where n t is the number of training samples of task t .",
"In this paper, we focus on the neural network-based multi-task learning setting, in which the tasks are jointly learned by sharing some parameters (hidden layers).",
"Let h ( , ) : {X } Tt =1 {Y t } Tt =1 be the multitask learning model, where is the vector of the model parameters.",
"= ( sh , 1 , ..., T ) consists of sh (the parameters shared between tasks) and t (the task-specific parameters).",
"We denote h t ( , sh , t ) : X Y t as the task-specific map.",
"The task-specific loss function is denoted as l t ( , ) : Y t Y t [0 , 1] T .",
"The empirical loss of the task t is defined as L t ( sh , t )= 1 n t (cid:2) n t i =1 l t ( h ( x ti , sh , t ) , y ti ) .",
"The transpose of the vector/matrix is represented by the superscript (cid:4) , and the logarithms to base e are denoted by log.",
"Under the Empirical Risk Minimization paradigm, multi-task learning aims to optimize the vector of task-specific empirical losses.",
"The learning objective of multi-task learning is formulated as a vector optimization objective, as in equation (1).",
"In order to optimize the learning objective, existing multi-task learning methods tend to adopt either global criterion optimization strategies (Liu et al., 2017; Kendall et al., 2018; Chen et al., 2018; Mao et al., 2020b) or multiple gradient descent strategies (Sener and Koltun, 2018; Lin et al., 2019; Debabrata Mahapatra, 2020).",
"In this paper, we choose to adopt the typical linear-combination strategy, which can achieve proper Pareto Optimality (Mietti-nen, 2012) and is widely used in the multi-task text classification context (Liu et al., 2017; Yadav et al., 2018; Xiao et al., 2018).",
"The linear-combination strategy is defined in (2): min 1 TT (cid:3) t =1 L t ( sh , t ) , (2) 3.2 Adversarial Multi-armed Bandit Adversarial multi-armed bandit, a case in which a player and an adversary simultaneously address the trade-off between exploration and exploitation, is one of the fundamental multi-armed bandit problems (Bubeck and Cesa-Bianchi, 2012).",
"In this paper, we consider the linear multi-armed bandit, which is a generalized adversarial multi-armed bandit.",
"In our linear multi-armed bandit setting, the set of arms is a compact set A RT .",
"At each time step k = 1 , 2 , ..., K the player chooses an arm from A while; simultaneously, the adversary chooses a loss vector from [0 , 1] T .",
"For linear multi-armed bandit, the Online Mirror Descent (OMD) algorithm is a powerful technology that can be used to achieve proper regret (Srebro et al., 2011).",
"cision problems.",
"Rather than taking gradient steps in the primal space, the mirror descent approach involves taking gradient steps in the dual space.",
"The bijection and its inverse are used to map back and forth between primal and dual points.",
"To obtain a good regret bound, must be a Legendre function (Definition 1).",
"Assume that we update u k with gradient g k using OMD.",
"The OMD algorithm consists of three steps: (1) select a Legendre function ; (2) perform a gradient descent step in the dual space v k +1 = ( ( u k ) g k ) , where and are as defined in Definition 2 and is the step length; (3) project back to the primal space according to the Bregman divergence (Definition 3): u k +1 = arg min u D ( u, v k +1 ) .",
"Definition 1 (Legendre Function) .",
"Let O RT be an open convex set, and let O be the closure of O .",
"A continuous function : O R is Legendre if:",
"Definition 2 (Fenchel Conjugate) .",
"The Fenchel conjugate of is ( u ) = sup v {(cid:9) u, v (cid:10) + ( v ) } , and ( u ) = arg max v {(cid:9) u, v (cid:10) + ( v ) } .",
"Definition 3 (Bregman Divergence) .",
"The Bregman divergence D : O O R associated with a Legendre function is defined by D ( u, v ) = ( u ) ( v ) ( u v ) (cid:3) ( v ) .",
"This paper adopts the most prevalent and efficient hard parameter-sharing MTL model (Kendall et al., 2018; Chen et al., 2018; Sener and Koltun, 2018; Mao et al., 2020b) to perform multi-task text classification.",
"As shown in Figure 1, the hard parameter-sharing MTL model learns multiple related tasks simultaneously by sharing the hidden layers (fea-ture extractor) across all tasks while retaining task-specific output layers for each task.",
"In multitask text classification, the feature extractor can be LSTM (Hochreiter and Schmidhuber, 1997), TextCNN (Kim, 2014), and so on.",
"The task-specific layers are typically formulated by fully connected layers, ending with a softmax function.",
"To avoid uncontrolled task variance, we need to develop a learning method that regularizes the task variance during training.",
"Regularized Loss Minimization (RLM) is a learning method that jointly minimizes the empirical risk and a regularization function, and is thus a natural choice.",
"While RLM is widely used in Single-task Learning, it cannot be directly used in Multi-task Learning to regularize the task variance.",
"In this section, we propose a surrogate for RLM in MTL and accordingly develop a novel MTL method, namely BanditMTL.",
"RLM is a natural choice for regularizing the task variance.",
"RLM for task-variance-regularized MTL can be formulated as in equation (3): min 1 TT (cid:3) t =1 L t ( sh , t ) + (cid:4) V ar ( L t ( sh , t )) , (3) where V ar ( L t ( sh , t )) = 1 T (cid:2) Tt =1 ( L t ( sh , t ) 1 T (cid:2) Tt =1 L t ( sh , t )) 2 is the empirical variance between the task-specific losses.",
"However, formulation (3) is generally non-convex and associated NP-hardness.",
"To handle the non-convexity, we select a convex surrogate for (3) based on its equivalent formulation (4) (Ben-Tal et al., 2013; Bertsimas et al., 2018).",
"sup p P , T 1 T (cid:2) Tt =1 p t L t ( sh , t ) is convex and can be used as a convex surrogate for (3).",
"This paper proposes to perform task-variance-regularized multi-task-learning with the following learning objective: min sup p P , T 1 TT (cid:3) t =1 p t L t ( sh , t ) (5) Optimizing (5) is equivalent to optimizing (3).",
"In the proposed learning objective (5), is the regularization parameter that controls the trade-off between the mean empirical loss and the task variance.",
"Experimental analysis on the influence of is presented in Section 5.6.",
"To learn an MTL model via learning objective (5), we formulate the learning problem as an adversarial multi-armed bandit problem in Section 4.2 and further propose the BanditMTL algorithm in Section 4.3.",
"In deep multi-task learning, an MTL model is typically learnt by iteratively optimizing the learning objective.",
"To iteratively optimize the proposed learning objective (5), we formulate it as an adversarial multi-armed bandit problem in which the player chooses an arm from P , T and the adversary assigns a loss vector L ( ) = ( L 1 ( sh , 1 ) , ..., LT ( sh , T )) (cid:3) to each arm.",
"In each learning iteration, the player chooses an arm from P , T to increase the weighted sum loss, while the adversary aims to decrease the loss by updating the learning model.",
"Moreover, both the player and the adversary aim to find a trade-off between exploration and exploitation to achieve proper regret.",
"When l t ( , ) is convex and is compact, the adversarial multi-armed bandit problem can achieve a saddle point ( , p ) (Boyd and Vandenberghe, 2014).",
"The saddle point sat-isfies L p sup p (cid:3) L ( ) L inf , where L p sup = sup { p (cid:3) L ( ) | p P , T } and L inf = inf { p (cid:3) L ( ) | } .",
"To achieve a proper regret and saddle point, we adopts mirror gradient ascent for the player and mirror gradient descent for the adversary.",
"The mirror gradient ascent-descent algorithm for MTL, namely BanditMTL, is proposed in the next section.",
"In this paper, the task-variance-regularized multitask learning is formulated as a linear adversarial multi-armed bandit problem.",
"For a problem of this kind, mirror gradient descent (ascent) is a powerful technique for the adversary and the player to achieve proper regret (Bubeck and Cesa-Bianchi, 2012; Namkoong and Duchi, 2016).",
"Moreover, based on the mirror gradient ascent-descent, we can reach the saddle point of the minimax optimization problem when the task-specific loss functions are convex and the parameter space is compact (Boyd and Vandenberghe, 2014).",
"return k with best validation performance.",
"In this paper, we propose a task-variance-regularized multi-task learning algorithm based on mirror gradient ascent-descent, dubbed BanditMTL.",
"The proposed method is presented in algorithmic form in Algorithm",
"1. We assume that the training procedure has K learning iterations.",
"In each learning iteration 1 k < K , the player and the adversary update via mirror gradient ascent and descent.",
"For the player, considering the constraint in P ,T , we choose the Legendre function p ( p ) = (cid:2) Tt =1 p t log p t .",
"Based on the Legendre function, we propose the update rule of p in (6) (see the 5510 Appendix for derivations of the update rule).",
"where p is the step size for the player.",
"Moreover, is the solution of equation, where f ( ) is defined in (7).",
"f ( ) is non-increasing and 0 .",
"f ( ) = (cid:2) Tt =1 (log q t ) q t 11+ (cid:2) Tt =1 (1 + ) q t 11+ log T (cid:3) t =1 q t 11+ + log T , (7) where q t = e (log p kt + p L t ( ksh , kt )) .",
"To solve $f(\mu) = 0$, we propose a bisection search-based algorithm, as outlined in Algorithm 2.",
"4.3.2 Mirror Gradient Descent for the Adversary: For the adversary, to simplify calculation, we choose the Legendre function $\Phi_\lambda(\lambda) = \frac{1}{2} \|\lambda\|_2^2$.",
"By using this Legendre function, the update rule of mirror gradient descent (presented in (8)) is the same as that of common gradient descent.",
"(see the Appendix for derivations of the update rule).",
"where $\eta_a$ is the learning rate for the adversary.",
"In this section, we perform experimental studies on sentiment analysis and topic classification respectively to evaluate the performance of our proposed BanditMTL and verify our theoretical analysis.",
"The implementation is based on PyTorch (Paszke et al., 2019).",
"The code is attached in the supplementary materials.",
"Sentiment Analysis.",
"We evaluate our algorithm on product reviews from Amazon.",
"The dataset (Blitzer et al., 2007) contains product reviews from 14 domains, including books, DVDs, electronics, kitchen appliances and so on.",
"We consider each domain as a binary classification task.",
"Reviews with rating > 3 were labeled positive, those with rating < 3 were labeled negative, and reviews with rating = 3 were discarded as the sentiments were ambiguous and hard to predict (dataset: https://www.cs.jhu.edu/mdredze/datasets/sentiment/).",
"Topic Classification.",
"We select 16 newsgroups from the 20 Newsgroups dataset, a collection of approximately 20,000 newsgroup documents partitioned (nearly) evenly across 20 different newsgroups, and formulate them into four 4-class classification tasks (as shown in Table 1) to evaluate the performance of our algorithm on topic classification.",
"We compare BanditMTL with the following baselines.",
"Single-Task Learning: learning each task independently.",
"Uniform Scaling: learning the MTL model with learning objective (2), the uniformly weighted sum of task-specific empirical loss.",
"Uncertainty: using the uncertainty weighting method proposed by Kendall et al. (2018).",
"GradNorm: using the gradient normalization method proposed by Chen et al. (2018).",
"MGDA: using the MGDA-UB method proposed by Sener and Koltun (2018).",
"AdvMTL: using the adversarial multi-task learning method proposed by Liu et al. (2017).",
"Tchebycheff: using the Tchebycheff procedure proposed by Mao et al. (2020b).",
"We adopt the hard parameter-sharing MTL model shown in Fig. 1.",
"The shared feature extractor is a TextCNN structured with three parallel convolutional layers with kernel sizes of 3, 5, and 7 respectively.",
"The task-specific module is formulated by means of one fully connected layer ending with a softmax function.",
"To ensure consistency with the state-of-the-art multi-task classification methods (Liu et al., 2017; Mao et al., 2020b) and to ensure a fair comparison (dataset: http://qwone.com/jason/20Newsgroups/) [Figure 2: Classification accuracy of Single Task Learning, Uniform Scaling, AdvMTL, MGDA, Tchebycheff, GradNorm, Uncertainty, and BanditMTL on the sentiment analysis dataset.], we adopt pre-trained",
"GloVe (Pennington et al., 2014) word embeddings in our experimental analysis.",
"We train the deep MTL network model in line with Algorithm 1.",
"The learning rate for the adversary is 1e-3 for both sentiment analysis and topic classification.",
"We use the Adam optimizer (Kingma and Ba, 2015) and train over 3000 epochs for both sentiment analysis and topic classification.",
"The batch size is 256.",
"We use dropout with a probability of 0.5 for all task-specific modules.",
"We compare the proposed BanditMTL with the baselines and report the results over 10 runs by plotting the classification accuracy of each task for both sentiment analysis and topic classification.",
"The results are shown in Fig. 2 and 3.",
"[Figure 4: Evolution of task variance during training of baseline methods and BanditMTL on the sentiment analysis and topic classification datasets.]",
"All experimental results show that our proposed BanditMTL significantly outperforms Uniform Scaling, which demonstrates that adopting task variance regularization can boost the performance of MTL models.",
"Moreover, BanditMTL can be seen to outperform all baselines and achieve state-of-the-art performance.",
"In this section, we experimentally investigate how BanditMTL regularizes the task variance during training and compare the task variance of BanditMTL with the baselines.",
"The results are plotted in Fig. 4.",
"As the figure shows, all MTL methods have lower task variance than single task learning during training.",
"Moreover, BanditMTL has lower task variance and smoother evolution during training than other MTL methods.",
"After considering the results obtained in Section 5.4, we conclude that task variance has a significant impact on multi-task text classification performance.",
"In BanditMTL, $\rho$ is the regularization parameter.",
"In this section, we experimentally investigate the impact of $\rho$ on task variance and average classification accuracy over the tasks of interest.",
"Fig. 5 plots how the task variance evolves during training w.r.t. different values of $\rho$.",
"The task variance decreases as $\rho$ increases.",
"This reveals that we can control the task variance by adjusting $\rho$.",
"The change in BanditMTL's average classification accuracy w.r.t. different values of $\rho$ is illustrated in Fig. 6.",
"In this figure, as $\rho$ increases, the average accuracy of BanditMTL first increases and then decreases.",
"This reveals that $\rho$ significantly impacts the performance of multi-task text classification.",
"As $\rho$ controls the trade-off between the empirical loss and the task variance, we can conclude that this trade-off significantly impacts multi-task text classification performance.",
"Thus, in multi-task text classification, it is necessary to find a proper trade-off between the empirical loss and the task variance rather than focusing only on the empirical loss.",
"These results verify the necessity of task variance regularization.",
"In BanditMTL, $\eta_p$ is a hyper-parameter.",
"To determine whether the performance of BanditMTL is sensitive to $\eta_p$, we conduct experiments on the classification performance of BanditMTL w.r.t. different values of $\eta_p$.",
"The results of these experiments are presented in Fig. 7.",
"As the figure shows, the performance of our proposed method is not very sensitive to $\eta_p$ when $\eta_p$ is within the range of 0.3 [Figure 8: Comparison of task weight adaption processes between BanditMTL, Uncertainty, Gradnorm, and MGDA for topic classification.]",
"to 0.9 for both sentiment analysis and topic classification.",
"Setting $\eta_p$ to between 0.3 and 0.9 can generally provide satisfactory results.",
"In this section, we observe the changes in $p_t$ during training and compare these changes with the task weight adaption processes of three weight-adaptive MTL methods (i.e., Uncertainty, Gradnorm, and MGDA).",
"The results for topic classification are reported in Fig. 9.",
"Due to space limitations, the sentiment analysis results are presented in the appendix.",
"From the results, we can see that the weight adaption process of BanditMTL is more stable than that of Uncertainty, Gradnorm, and MGDA.",
"This paper proposes a novel Multi-task Learning algorithm, dubbed BanditMTL.",
"It fills the task variance regularization gap in the field of MTL and achieves state-of-the-art performance in real-world text classification applications.",
"Moreover, our proposed BanditMTL is model-agnostic; thus, it could potentially be used in other natural language processing applications, such as Multi-task Named Entity Recognition.",
"This work is supported by the National Natural Science Foundation of China under Grants 61976161 and 61976162."
] | [
"abstain",
"objective",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"result",
"abstain",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"objective",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"objective",
"abstain",
"objective",
"other"
] |
[
"Mental health conditions remain underdiagnosed even in countries with common access to advanced medical care.",
"The ability to accurately and efficiently predict mood from easily collectible data has several important implications for the early detection, intervention, and treatment of mental health disorders.",
"One promising data source to help monitor human behavior is daily smartphone usage.",
"However, care must be taken to summarize behaviors without identifying the user through personal (e.g., personally identifiable information) or protected (e.g., race, gender) attributes.",
"In this paper, we study behavioral markers of daily mood using a recent dataset of mobile behaviors from adolescent populations at high risk of suicidal behaviors.",
"Using computational models, we find that language and multimodal representations of mobile typed text (spanning typed characters, words, keystroke timings, and app usage) are predictive of daily mood.",
"However, we find that models trained to predict mood often also capture private user identities in their intermediate representations.",
"To tackle this problem, we evaluate approaches that obfuscate user identity while remaining predictive.",
"By combining multimodal representations with privacy-preserving learning, we are able to push forward the performance-privacy frontier.",
"Mental illnesses can have a damaging permanent impact on communities, societies, and economies all over the world (World Health Organization, 2003).",
"Individuals often do not realize they are at risk of mental disorders even when they have symptoms.",
"As a result, many are late in seeking professional help and treatment (Thornicroft et al., 2016), particularly among adolescents where suicide is the second leading cause of death (Curtin",
"and Heron, 2019). [Footnote: first two authors contributed equally.]",
"In addition to deaths, 16% of high school students report having serious suicidal thoughts each year, and 8% of them make one or more suicide attempts (CDC, 2015).",
"This problem is particularly exacerbated as an echo pandemic of mental health problems has arisen in the wake of the COVID-19 pandemic (Inkster et al., 2021; Saha et al., 2020).",
"Intensive monitoring of behaviors via adolescents' natural use of smartphones may help identify real-time predictors of mood in high-risk youth as a proxy for suicide risk (Nahum-Shani et al., 2018).",
"While there are inherent limitations in the mismatch between mood prediction and ultimately developing real-time intervention against imminent suicide risk (Coppersmith et al., 2018; Ophir et al., 2020), we believe that the former is a reasonable starting point to tackle similar machine learning problems surrounding affective computing and privacy-preserving learning.",
"Studying mood in this high-risk population is a valuable goal given that suicide attempts are often decided within a short time-lapse and just-in-time assessments of mood changes can be a stepping stone in this direction (Rizk et al., 2019; Oquendo et al., 2020).",
"Technologies for mood prediction can also be a valuable component of decision support for clinicians and healthcare providers during their assessments (Mann et al., 2006; Cho et al., 2019).",
"Recent work in affective computing has begun to explore the potential in predicting mood from mobile data.",
"Studies have found that typing patterns (Cao et al., 2017; Ghosh et al., 2017a; Huang et al., 2018; Zulueta et al., 2018), self-reporting apps (Suhara et al., 2017), and wearable sensors (Ghosh et al., 2017b; Sano et al., 2018) are particularly predictive.",
"In addition, multimodal modeling of multiple sensors (e.g., wearable sensors and smartphone apps) was shown to further improve performance (Jaques et al., 2017; Taylor et al., 2017).",
"While current work primarily relies on self-report apps for long-term mood assessments (Glenn and Nock, 2014), our work investigates mobile behaviors from a high-risk teenage population as a predictive signal for daily mood (Franklin et al., 2017; Large et al., 2017).",
"Prior work has also shown that private information is predictable from digital records of human behavior (Kosinski et al., 2013), which is dangerous especially when sensitive user data is involved.",
"As a result, in parallel to improving predictive performance, a recent focus has been on improving privacy through techniques such as differential privacy (Dankar and El Emam, 2012, 2013; Dankar et al., 2012) and federated learning (McMa-han et al., 2016; Geyer et al., 2017; Liang et al., 2020b), especially for healthcare data (e.g., electronic health records (Xu and Wang, 2019)) and wearable devices (Chen et al., 2020).",
"In this paper , as a step towards using multimodal privacy-preserving mood prediction as fine-grained signals to aid in mental health assessment, we analyze a recent dataset of mobile behaviors collected from adolescent populations at high suicidal risk.",
"With consent from participating groups, the dataset collects fine-grained features spanning online communication, keystroke patterns, and application usage.",
"Participants are administered daily questions probing for mood scores.",
"By collecting and working on ground-truth data for this population, we are able to benchmark on a more accurate indicator of mood rather than proxy data such as mood signals inferred from social media content or behavior (Ernala et al., 2019).",
"This unique dataset presents an opportunity to investigate a different medium of natural language processing: typed text, which presents new challenges beyond conventionally studied written (Marcus et al., 1993) and spoken (Marslen-Wilson and Tyler, 1980) text.",
"We propose multimodal models that contextualize text with their typing speeds and app usage.",
"However, these models often capture private user identities in their intermediate representations when predicting mood.",
"As a step towards privacy-preserving learning, we also propose approaches that obfuscate user identity while remaining predictive of daily mood.",
"By combining multimodal contextualization with privacy-preserving learning, we are able to push forward the performance-privacy frontier.",
"Finally, we conclude with several observations regarding the uniqueness of typed text as an opportunity for NLP on mobile data.",
"Intensive monitoring of behaviors via adolescents' frequent use of smartphones may shed new light on the early risk of suicidal thoughts and ideations (Nahum-Shani et al., 2018).",
"Smartphones provide a valuable and natural data source with rich behavioral markers spanning online communication, keystroke patterns, and application usage.",
"Learning these markers requires large datasets with diversity in participants, variety in features, and accuracy in annotations.",
"As a step towards this goal, we recently collected a dataset of mobile behaviors from high-risk adolescent populations with consent from participating groups.",
"We begin with a brief review of the data collection process.",
"This data monitors adolescents spanning",
"(a) recent suicide attempters (past 6 months) with current suicidal ideation,",
"(b) suicide ideators with no past suicide attempts, and",
"(c) psychiatric controls with no history of suicide ideation or attempts.",
"Passive sensing data is collected from each participant's smartphone across a duration of 6 months.",
"Participants are administered clinical interviews probing for suicidal thoughts and behaviors (STBs), and self-report instruments regarding symptoms and acute events (e.g., suicide attempts, psychiatric hospitalizations) are tracked weekly via a questionnaire.",
"All users have given consent for their mobile data to be collected and shared with us for research purposes.",
"This study has been carefully reviewed and approved by an IRB.",
"We follow the NIH guidelines, with a central IRB (single IRB) linked to secondary sites.",
"We have IRB approval for the central institution and all secondary sites.",
"Every day at 8 am, users are asked to respond to the following question: 'In general, how have you been feeling over the last day?' with an integer score between 0 and 100, where 0 means very negative and 100 means very positive.",
"To construct our prediction task, we discretized these scores into the following three bins: negative (0-33), neutral (34-66), and positive (67-100), which follow a class distribution of 12.43%,",
"43.63%,",
"and 43.94%",
"respectively.",
"For our 3 -way classification task, participants with fewer than 50 daily self-reports were removed since these participants do not provide enough data to train an effective model.",
"In total, our dataset consists of 1641 samples, consisting of data coming from 17 unique participants.",
"We focused on keyboard data, which includes the time of data capture, the mobile application used, and the text entered by the user.",
"For each daily score response at 8 am, we use information collected between 5 am on the previous day to 5 am on the current day.",
"We chose this 5 am-5 am window by looking at mobile activity and finding the lowest activity point when most people ended their day: 5 am.",
"Since users report the previous day's mood (when prompted at 8 am), we decided to use this 5 am-5 am time period to summarize the previous day's activities.",
"Through prototyping, this prompt time and frequency were found to give reliable indicators of the previous day's mood.",
"From this window, we extracted the following features to characterize and contextualize typed text.",
"Text: After removing stop-words, we collected the top 1000 words (out of approximately 3.2 million) used across all users in our dataset and created a bag-of-words feature that contains the daily number of occurrences of each word.",
"Keystrokes: We also extracted keystroke features that record the exact timing at which each character was typed on a mobile keyboard (including alphanumeric characters, special characters, spaces, backspace, enter, and autocorrect).",
"By taking the increase in recorded timing after each keystroke, we obtain the duration that each key was pressed in a sequence of keystrokes during the day.",
"When extracting keystrokes, we removed all small timings under $10^{-2}$ seconds.",
"App usage: We count the number of mobile applications used per day, creating a bag-of-apps feature for each day.",
"We discard applications that are used by less than 10% of the participants so that our features are generalizable to more than just a single user in the dataset, resulting in 137 total apps (out of the original 640 ).",
"In a preliminary analysis, we observed that predictive models performed well when binarizing our feature vectors into boolean vectors, which signify whether a word or app was used on a given day (i.e., mapping values greater than 0 to 1 ).",
"Our final feature vectors consist of a concatenation of a normalized and a binarized feature vector, resulting in 2000- and 274-dimensional vectors for text and app features respectively.",
"For keystrokes, we found that summarizing the sequence of timings using a histogram (i.e., defining a set of timing buckets and creating a bag-of-timings feature) for each day performed well.",
"We chose 100 fine-grained buckets, resulting in a 100-dimensional keystroke vector.",
"Please refer to Appendix B for additional details about the dataset and extracted features.",
"In this paper, we focus on studying approaches for learning privacy-preserving representations from mobile data for mood prediction.",
"Our processed data comes in the form of $\{(x_{t,i}, x_{k,i}, x_{a,i}, y_i)\}_{i=1}^{n}$, with $x_t \in \mathbb{N}^{|V_t| = 2000}$ denoting the bag-of-words features, $x_k \in \mathbb{N}^{|V_k| = 100}$ denoting the bag-of-timings features, and $x_a \in \mathbb{N}^{|V_a| = 274}$ denoting the bag-of-apps features.",
"y denotes the label which takes on one of our 3 mood categories: negative, neutral, and positive.",
"In parallel, we also have data representing the corresponding (one-hot) user identity $x_{\text{id}}$, which will be useful when learning privacy-preserving representations that do not encode information about user identity $x_{\text{id}}$ and when evaluating privacy performance.",
"1. Support Vector Machines (SVMs) project training examples to a chosen kernel space and find the optimal hyperplane that maximally separates each class of instances.",
"We apply an SVM classifier on input data $x_{\text{uni}} \in \{x_t, x_k, x_a\}$ and use supervised learning to predict daily mood labels $y$. [Figure 2: Diagram of the NI-MLP algorithm learned via the (1) pretrain, (2) selection, and (3) addition phases.]",
"2. Multilayer Perceptrons (MLPs) have seen widespread success in supervised prediction tasks due to their ability to model complex nonlinear relationships.",
"Because of the small size of our dataset, we choose a simple multilayer perceptron with two hidden layers.",
"Similarly, we apply an MLP classifier on input data $x_{\text{uni}} \in \{x_t, x_k, x_a\}$ to predict daily mood labels $y$.",
"We extend both SVM and MLP classifiers using early fusion (Baltruaitis et al., 2018) of text and app usage to model multimodal interactions.",
"Specifically, we align the input by concatenating the bag-of-words, bag-of-keystrokes, and bag-of-apps features for each day, resulting in an input vector $x_{\text{multi}} = x_t \oplus x_k \oplus x_a$, before using an SVM/MLP classifier for prediction.",
"While classifiers trained with traditional supervised learning can learn useful representations for mood prediction, they carry the risk of memorizing the identity of the user along with their sensitive mobile usage and baseline mood scores, and possibly revealing these identities to adversarial third-parties (Abadi et al., 2016).",
"Therefore, it is crucial to perform mood prediction while also protecting the privacy of personal identities.",
"We adapt the Selective-Additive Learning (SAL) framework (Wang et al., 2017) for the purpose of privacy-preserving learning.",
"While SAL was originally developed with a very different goal in mind (improving model generalization), we extend SAL to a very important problem in healthcare: preserving privacy.",
"We adapted SAL to learn disentangled representations separated into identity-dependent private information and identity-independent population-level information using three phases: (1) Pretrain phase: The input is a set of (multimodal) features x that are likely to contain both identity-dependent and independent information.",
"The intermediate representation $z_{\text{feat}} = f_{\text{feat}}(x; \theta_{\text{feat}})$ is obtained from an MLP classifier pretrained for mood prediction.",
"$f_{\text{feat}}$ denotes the classifier with pretrained parameters $\theta_{\text{feat}}$.",
"(2) Selection phase: Our goal is to now disentangle the identity-dependent and identity-independent information within $z_{\text{feat}}$.",
"We hypothesize that dependent and independent information are encoded in separate subspaces of the feature vector $z_{\text{feat}}$.",
"This allows us to disentangle them by training a separate classifier to predict $z_{\text{feat}}$ as much as possible given only the user identity: $\theta_{\text{id}}^* = \arg\min_{\theta_{\text{id}}} (z_{\text{feat}} - f_{\text{id}}(x_{\text{id}}; \theta_{\text{id}}))^2 + \lambda \| z_{\text{id}} \|_1$, (1) where $x_{\text{id}}$ denotes a one-hot encoding of user identity as input, $f_{\text{id}}$ denotes the identity encoder with parameters $\theta_{\text{id}}$, and $\lambda$ denotes a hyperparameter that controls the weight of the $\ell_1$ regularizer.",
"$f_{\text{id}}$ projects the user identity encodings to the feature space learned by $f_{\text{feat}}$.",
"By minimizing the objective in equation (1) for each $(x, x_{\text{id}})$ pair, $f_{\text{id}}$ learns to encode user identity into a sparse vector $z_{\text{id}} = f_{\text{id}}(x_{\text{id}}; \theta_{\text{id}})$ representing identity-dependent features: the nonzero values of $z_{\text{id}}$ represent dimensions of the identity-dependent subspace in $z_{\text{feat}}$, while the remaining dimensions belong to the identity-independent subspace. [Table 1: Comparison of mood prediction performance across different modalities.]",
"(3) Addition phase: Given the two factors $z_{\text{feat}}$ and $z_{\text{id}}$, to ensure that our prediction model does not capture identity-related information $z_{\text{id}}$, we add multiplicative Gaussian noise to remove information from the identity-related subspace $z_{\text{id}}$ while repeatedly optimizing for mood prediction with a final MLP classification layer $g(z_{\text{feat}}, z_{\text{id}}; \theta)$.",
"This resulting model should only retain identity-independent features for mood prediction: $\hat{y} = g(z_{\text{feat}} + \epsilon \odot z_{\text{id}})$, (2) where $\epsilon \sim \mathcal{N}(0, \sigma^2)$ is repeatedly sampled across batches and training epochs.",
"We call this approach NOISYIDENTITYMLP, or NI-MLP for short, and summarize the final algorithm in Figure 2. Controlling the tradeoff between performance and privacy: There is often a tradeoff between privacy and prediction performance.",
"To control this tradeoff, we vary the parameter $\sigma^2$, which is the variance of the noise added to the identity-dependent subspace across batches and training epochs.",
"$\sigma = 0$ recovers a standard MLP with good performance but reveals user identities, while large $\sigma$ effectively protects user identities but at the possible expense of mood prediction performance.",
"In practice, the optimal tradeoff between privacy and performance varies depending on the problem.",
"For our purposes, we automatically perform model selection using the performance-privacy ratio $R$ computed on the validation set, where $R = \frac{s_{\text{MLP}} - s_{\text{NI-MLP}}}{t_{\text{MLP}} - t_{\text{NI-MLP}}}$ (3) is defined as the improvement in privacy per unit of performance lost.",
"Here, s is defined as the accuracy in user prediction and t is defined as the F1 score on mood prediction.",
"We perform experiments to test the utility of text, keystroke, and app features in predicting daily mood while keeping user privacy in mind.",
"Data splits: Given that our data is longitudinal, we split our data into 10 partitions ordered chronologically by users.",
"We do so in order to maintain independence between the train, validation, and test splits in the case where there is some form of time-level dependency within our labels.",
"Evaluation: For each model, we run nested k-fold cross-validation (i.e., we perform 9-fold validation within 10-fold testing).",
"For each test fold, we identify the optimal parameter set as the one that achieves the highest mean validation score over the validation folds.",
"To evaluate NI-MLP, we use the best performing MLP model for each test fold as our base classifier before performing privacy-preserving learning.",
"For all experiments, we report the test accuracy and macro F1 score because our classes are imbalanced.",
"Given the low number of cross-validation folds, we use the Wilcoxon signed-rank test (Wilcoxon, 1992) at 5% significance level for all statistical comparisons (see Appendix C for more experimental details).",
"We make the following observations regarding the learned language and multimodal representations for mood prediction:",
"Observation 1: Text, keystroke, and app usage features are individually predictive of mood.",
"To evaluate how predictive our extracted text, keystroke timings, and app usage features are, we first run experiments using SVM, MLP, and NI-MLP on each individual feature separately.",
"Since we have unbalanced classes, we chose a majority classifier (i.e., the most common class in the training set) as our baseline. [Table 2: Mood prediction from text using extended pretrained LM encoders. We find that these models struggle on extremely long contexts of typed text. Models (F1 score / accuracy): BoW 56.27 / 60.61; BERT 51.42 / 58.06; XLNet 19.85 / 42.40; LongFormer 19.85 / 42.40.]",
"From Table 1, we observe that using these three feature types individually outperforms the baseline with respect to accuracy and F1 score.",
"Using the Wilcoxon signed-rank test (Wilcoxon, 1992) at 5% significance level, we found that these improvements over the baseline in both F1 score and accuracy are statistically significant (p-value << 0.05).",
"Observation 2: Pretrained sentence encoders struggle on this task.",
"We also applied pretrained sentence encoders such as BERT (Devlin et al., 2019) on the language modality for mood prediction.",
"Surprisingly, we found that none of these approaches performed stronger than a simple bag-of-words (see Table 2).",
"We provide two possible explanations for this phenomenon: 1. BERT is suitable for written text on the web (Wikipedia, BookCorpus, carefully human-annotated datasets) which may not generalize to informal typed text that contains emojis, typos, and abbreviations (see Section 4.4 for a qualitative analysis regarding the predictive abilities of emojis and keystrokes for mood prediction).",
"2. We hypothesize that it is difficult to capture such long sequences of data (> 1000 time steps) spread out over a day.",
"Current work has shown that BERT struggles with long sequence lengths (Beltagy et al., 2020).",
"We trained two extensions XLNet (Yang et al., 2019) and LongFormer (Beltagy et al., 2020) specifically designed to take in long-range context but found that they still underperform as compared to a simple bag-of-words approach.",
"Observation 3: Contextualizing text with keystroke timings improves performance.",
"This dataset presents a unique opportunity to study representations of typed text as an alternative to conventionally studied written or spoken text.",
"While the latter two use language alone, typed text includes keystroke features providing information about the timings of when each character was typed.",
"In Table 1, we present some of our initial results in learning text and keystroke representations for mood [Table 3: Mood prediction using an MLP from text and keystroke features tallied from (1) all characters, (2) a split between types of characters, as well as (3) aggregated across words.]",
"prediction and show consistent improvements over text alone.",
"We further study the uniqueness of typed text by comparing the following baselines: 1. Text : bag-of-words only.",
"3. Text + split char keystrokes : bag-of-words and bag-of-timings subdivided between 6 groups: alphanumeric characters, symbols, spacebar, enter, delete, and use of autocorrect.",
"This baseline presents a more fine-grained decomposition of the typing speeds across different semantically related character groups.",
"4. Text + word keystrokes : bag-of-words and bag-of-timings summed up over the characters in each word.",
"This presents a more interpretable model to analyze the relationships between words and the distribution of their typing speeds.",
"From Table 3, we observe that keystrokes accurately contextualize text, especially when using fine-grained keystroke distributions across individual characters.",
"Other methods incorporating keystroke features are also all stronger than unimodal models.",
"Different ways of representing keystrokes also provide different levels of interpretability regarding the relationships between words, characters, and keystrokes for mood prediction, which we qualitatively analyze in 4.4.",
"Observation 4: Multimodal representation learning achieves the best performance.",
"In Table 1, we also compare the performance of our models on combined (text + keystroke + apps) features versus the performance on each individual feature set.",
"For both metrics, combining all features gives better performance over either subset.",
"Despite these promising results in mood prediction, we ask an important question: Does the model capture user identities as an intermediate step towards predicting mood?",
"To answer this question, we",
"an-(a) MLP (without privacy-preserving)",
"3: Visualization of representations learned by",
"(a) MLP and",
"(b) NI-MLP, which have been reduced to two dimensions via t-SNE and colored by participant identity.",
"Representations learned by NI-MLP are no longer separable by users which better preserves privacy.",
"alyze the privacy of raw mobile data and trained models.",
"We then study our proposed method of learning privacy-preserving features to determine whether it can obfuscate user identity while remaining predictive of daily mood.",
"How private is the mobile data?",
"We evaluate how much the data reveal user identities by training predictive models with typed text, keystroke timings, and app usage as input and user identity as the prediction target.",
"From Table 4, we observe that all modalities are very predictive of user identity (> 87% accuracy), which further motivates the need to learn privacy-preserving features.",
"We further note that identifiable information can be very subtle: while only 28 / 1000 words were named entities, it was possible to identify the user identity with > 87% accuracy, which means that subtle word choice can be identify the user (similarly for apps and keystrokes).",
"How private are the learned privacy-preserving features?",
"We also study whether our learned features are correlated with user identity through both visualizations and quantitative evaluations.",
"Visualizations: We use t-SNE (Van der Maaten and Hinton, 2008) to reduce the learned features from trained models to 2 dimensions.",
"After color-coding the points by participant identity, we identify distinct clusters in Figure",
"3(a), which implies that mood prediction can be strongly linked to identi-Table 5: Comparison of our privacy-preserving approach (NI-MLP) with the baseline (MLP).",
"We evaluate privacy in predicting user identity from learned representations ( lower accuracy is better), and find that NI-MLP effectively obfuscates user identity while retaining performance.",
"T: text, K: keystrokes, A: apps.",
"fying the person, therefore coming at the price losing privacy.",
"As an attempt to reduce reliance on user identity, we train NI-MLP which is designed to obfuscate user-dependent features.",
"After training NI-MLP, we again visualize the representations learned in Figure",
"3(b) and we find that they are less visually separable by users, indicating that NI-MLP indeed learns more user-independent features.",
"Quantitative evaluation: To empirically evaluate how well our models preserve privacy, we extracted the final layer of each trained model and fit a logistic regression model to predict user identity using these final layer representations as input.",
"The more a model preserves privacy, the harder it should be to predict user identity.",
"From Table 5, we observe that we can predict user identity based on the learned MLP representations with high accuracy (> 85% ) using the most sensitive app usage features.",
"For other modality combinations, user identity can also be decoded with more than 70% accuracy with the exception of keystrokes which are the most private ( 55% ).",
"We achieve significantly more privacy using NI-MLP embeddings roughly 35% Figure 4: Tradeoff between performance (mood prediction F1 score, higher is better) and privacy (identity prediction accuracy, lower is better).",
"for the best multimodal model, which indicates the possibility of NI-MLP as a means of achieving privacy-preserving mood prediction.",
"Understanding the tradeoff between performance and privacy: NI-MLP provides a tunable parameter to control the variance of noise applied on the identity-related dimensions.",
"This parameter has the potential to give a tradeoff between privacy and prediction performance.",
"In Figure 4, we plot this tradeoff between performance (mood prediction F1 score, higher is better) and privacy (iden-tity prediction accuracy, lower is better).",
"We find that keystroke features, while themselves not very useful in predicting mood, are highly private features.",
"It is important to note that keystroke features show strong performance when integrated with text and app usage features while also increasing privacy, thereby pushing the Pareto front outwards.",
"It is also interesting to observe that for most models, performance stays level while privacy improves, which is a promising sign for the real-world deployment of such models which requires a balance between both desiderata.",
"To further shed light on the relationships between mood prediction performance and privacy, we performed a more in-depth study of the text, keystroke, and app usage features learned by the model (see Appendix D.3 for more examples).",
"Understanding the unimodal features: We first analyze how individual words, keystroke timings, and app usage are indicative of positive or negative mood for different users.",
"Text: We find that several words are particularly indicative of mood: can't/cant , don't/don't , and sorry are negative for more users than positive, while yes is overwhelmingly positive across users (9 pos, 1 neg), but yeah is slightly negative (5 pos, 7 neg).",
"We also analyze the use of emojis in typed text and find that while there are certain emojis that lean positive (e.g., ), there are ones (e.g., :( and ) that used in both contexts depending on the user (see Table 6).",
"Apps: In Table 7, we show the top 3 apps associated with positive or negative moods across several users.",
"It is interesting to observe that many outdoor apps (i.e., Weather, MyFitnessPal, Uber ), photo sharing apps (i.e., Photos, Snapchat ), and calling apps (i.e., FaceTime, Phone ) are associated with positive mood, while personal apps such as personal management (i.e., Calendar, Notes, Siri ), web browsing (i.e., Chrome, Safari ), and shopping (i.e., App Store ) are associated with negative mood.",
"However, some of these findings are rather user-specific (e.g., Phone can be both positive or negative depending on the user).",
"Understanding the multimodal features: We also analyze how the same characters and words can contribute to different mood predictions based on their keystroke patterns.",
"As an example, the distribution of keystrokes for the enter character on the keyboard differs according to the daily mood of one user (see Figure 5 and Appendix D.3 for Figure 5: An example where the enter' character keypress is indicative of either positive, neutral, or negative mood depending on the keypress duration.",
"more users).",
"In Table 8, we extend this analysis to entire words.",
"For each of the 500 most common words, we aggregated their accompanying keystroke timings for user-reported positive and negative mood.",
"These two distributions tell us how the same word in different keystroke contexts can indicate different moods.",
"We performed Wilcoxon rank-sum tests at 5% significance level to compare these distributions and recorded the words in which either faster or slower typing was statistically significantly correlated with either mood.",
"Observe how certain semantically positive words like love , thank , and haha become judged as more positive when typed at a faster speed.",
"Therefore, contex-tualizing text with their keystroke timings offers additional information when learning representations of typed text.",
"In this paper, we investigated the learning of language and multimodal representations of typed text collected from mobile data.",
"We studied the challenge of learning markers of daily mood as a step towards early detection and intervention of mental health disorders for social good.",
"Our method also shows promising results in obfuscating user identities for privacy-preserving learning, a direction crucial towards real-world learning from sensitive mobile data and healthcare labels.",
"In addition, our findings illustrate several challenges and opportunities in representation learning from typed text as an understudied area in NLP.",
"Limitations & future work: While our approach shows promises in learning representations for mood prediction, several future directions on the modeling and NLP side include: 1) better models and pre-training algorithms for NLP on typed text, 2) algorithms that provide formal guarantees of privacy (Dwork, 2008), and 3) federated training from decentralized data (McMahan et al., 2016) to improve privacy (Geyer et al., 2017) and fairness (Liang et al., 2020a) of sensitive data.",
"We describe more limitations and future social implications of our work in our broader impact statement in Appendix A. Acknowledgements This material was based upon work partially supported by the National Science Foundation (Awards #1750439 and #1734868) and the National Institutes of Health (Award #U01MH116923).",
"MM was supported by the Swiss National Science Foundation (#P2GEP2_184518).",
"RS was supported by NSF IIS1763562 and ONR Grant N000141812861.",
"Any opinions, findings, and conclusions, or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation, National Institutes of Health, or Office of Naval Research, and no official endorsement should be inferred.",
"We would also like to acknowledge NVIDIA's GPU support and the anonymous reviewers for their extremely helpful comments."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"result",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"other",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"other",
"result",
"result",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"result",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain"
] |
[
"Existing work on automated hate speech classification assumes that the dataset is fixed and the classes are pre-defined.",
"However, the amount of data in social media increases every day, and the hot topics changes rapidly, requiring the classifiers to be able to continuously adapt to new data without forgetting the previously learned knowledge.",
"This ability, referred to as lifelong learning, is crucial for the real-word application of hate speech classifiers in social media.",
"In this work, we propose lifelong learning of hate speech classification on social media.",
"To alleviate catastrophic forgetting, we propose to use Variational Representation Learning (VRL) along with a memory module based on LB-SOINN (Load-Balancing Self-Organizing Incremental Neural Network).",
"Experimentally, we show that combining variational representation learning and the LB-SOINN memory module achieves better performance than the commonly-used lifelong learning techniques.",
"With the rapid rise in user-generated web content, the scale and complexity of online hate have reached unprecedented levels in recent years.",
"ADL (Anti-Defamation League) conducted a nationally representative survey of Americans in December 2018 and the report shows that over half (53%) of Americans experienced some type of online harassment.",
"1 This number is higher than the 41% reported to a comparable question asked in 2017 by the Pew Research Center (Center, 2017).",
"To address the growing online hate, a great deal of research has focused on automatic hate speech classification.",
"Most of the previous work focuses on binary classification (Warner and Hirschberg, 2012; Zhong et al., 2016; Nobata et al., 2016; Gao et al., 2017; Qian et al., 2018b) or coarse-grained multi-1 https://www.adl.org/onlineharassment Figure 1: An illustration of our proposed task.",
"class classification (Waseem and Hovy, 2016; Bad-jatiya et al., 2017; Davidson et al., 2017).",
"Qian et al. (2018a) argue that fine-grained classification is necessary for fine-grained hate speech analysis.",
"The Southern Poverty Law Center (SPLC) monitors hate groups throughout the United States by a variety of methodologies to determine the activities of groups and individuals, including reviewing hate group publications.",
"2 Therefore, instead of differentiating normal posts from the other offensive ones, Qian et al. (2018a) propose a more fine-grained hate speech classification task that attributes hate groups to individual tweets.",
"However, a common limitation of all the research mentioned above is that they assume the dataset to be static and train the classifiers on each isolated dataset, i.e., isolate learning, ignoring the rapid increase of the amount of data in social media and the rapid change of the hot topic.",
"A report from L1ght 3 , a company that specializes in measuring online toxicity, suggests that 2 https://www.splcenter.org/fighting-hate/extremist-files/ideology 3 https://l1ght.com/Toxicity_during_coronavirus_Report-L1ght.pdf amid the growing threat of the coronavirus, there has been a 900% growth in hate speech towards China and Chinese people on Twitter since February 2020.",
"As a result of the rapid change of social media content, the hate speech classifiers are required to be able to continuously learn and accumulate knowledge from a stream of data, i.e., lifelong learning.",
"Learning on each portion of the data is considered as a task, so a stream of tasks are joined to be trained sequentially.",
"In this work, we propose a novel lifelong fine-grained hate speech classification task, as illustrated in Figure 1.",
"The models trained by isolate learning tend to face catastrophic forgetting (McCloskey and Cohen, 1989; Ratcliff, 1990; McClelland et al., 1995; French, 1999) due to a non-stationary data distribution in lifelong learning.",
"To address this problem, an extensive body of work has been proposed for various lifelong learning tasks.",
"However, our experiments show that the commonly-used lifelong learning methods still exhibit catastrophic forgetting in our proposed tasks.",
"One important difference between the Twitter hate group dataset and the other image datasets commonly used in lifelong learning study is that the similarity among the different tasks is unstable and relatively low, as indicated by the low average Jaccard Indexes of the topic words in Table 1.",
"To alleviate this problem, we introduce VRL to distill the knowledge from each task into a latent variable distribution.",
"We also augment the model with a memory module and adapt the clustering algorithm, LB-SOINN, to select the most important samples from the training dataset of each task.",
"Our contributions are three-fold: This is the first paper on lifelong learning of fine-grained hate speech classification.",
"We propose a novel method that utilizes VRL along with an LB-SOINN memory module to alleviate catastrophic forgetting resulted from a severe change of data distribution.",
"Experimental results show that our proposed method outperforms the state-of-the-art sig-nificantly on the average F1 scores.",
"Most research on lifelong learning alleviates catastrophic forgetting in the following three directions.",
"Regularization-based Methods: These methods impose constraints on the weight update.",
"The goal Ideology Avg.",
"of the constraints is to minimize deviation from trained weights when training on a new task.",
"The constraints are generally modeled by additional regularization terms (Kirkpatrick et al., 2017; Zenke et al., 2017; Fernando et al., 2017; Liu et al., 2018; Ritter et al., 2018).",
"Elastic Weight Consolidation (EWC) (Kirkpatrick et al., 2017) alleviates catastrophic forgetting by slowing down learning on the model parameters which are important to the previous task.",
"The importance of the parameters is estimated by the Fisher information matrix.",
"Instead of the Fisher information matrix, PathNet (Fernando et al., 2017) uses agents embedded in the neural network to determine which parameters of the neural network can be reused for new tasks and the task-relevant pathways are frozen during training on new tasks.",
"Architecture-based Methods: The main idea of this approach is to change architectural properties to dynamically accommodating new tasks, such as assigning a dedicated capacity inside a model for each task.",
"Rusu et al. (2016) propose Progressive Neural Networks, where the model architecture is expanded by allocating a new column of neural network for each new task.",
"Part and Lemon (2016, 2017) combine Convolutional Neural Network with LB-SOINN for incremental online learning of object classes.",
"Although they also use LB-SOINN in their work, the usage of LB-SOINN in this work is completely different.",
"They use LB-SOINN to predict object class while our proposed method adapts the original LB-SOINN to calculate the importance of the training samples without making any prediction on the class.",
"A problem with the methods in this category is that the available computational resources are limited in practice.",
"As a result, the model expansion will be prohibited when the number of tasks increases to a certain degree.",
"Data-based Methods: These methods alleviate catastrophic forgetting by utilizing a memory module, which either stores a small number of real samples from previous tasks or distills knowledge from previous tasks.",
"The main feature of Gradient Episodic Memory (GEM) (Lopez-Paz and Ranzato, 2017) is the episodic memory, storing a subset of the samples from the observed tasks.",
"GEM computes the losses on the episodic memories and treats them as inequality constraints, avoiding them to increase.",
"Averaged GEM (Chaudhry et al., 2019) is a more efficient version of GEM.",
"de Masson d'Autume et al. (2019) propose a lifelong language learning model using a key-value memory module for sparse experience replay and local adaptation.",
"Sun et al. (2020) formulate lifelong language learning as a language modeling task and replay the generated pseudo-samples of previous tasks during training.",
"There are also studies combining multiples methods above.",
"Xia et al. (2017) combine the architecture-based method and the data-based method.",
"Wang et al. (2019) combine the regularization method and the data-based method for lifelong learning on relation extraction.",
"Our proposed method is also a combination of the regularization method and the data-based method but in a different way.",
"We use the dataset as in Qian et al. (2018a), where the tweet handles are collected based on the hate groups identified by SPLC.",
"SPLC categorizes these hate groups according to their hate ideologies.",
"For each hate ideology, the top three Twitter handles are selected in terms of the number of followers.",
"The dataset includes all the content (tweets, retweets, and replies) posted with each Twitter account from the group's inception date, as early as 2009, until 2017.",
"Altogether, the dataset consists of 42 hate groups from 15 different ideologies.",
"Table 1 shows the 15 ideologies.",
"Each instance in the dataset is a text tuple of (tweet, hate group name, hate ideology).",
"reason is that various existing hate speech datasets collect data using keywords or hashtags (Waseem and Hovy, 2016; Davidson et al., 2017; Golbeck et al., 2017), which have a strong relationship with hate ideologies or topics.",
"We also observe that the hot spots of society can lead to a significant shift of major hate speech topics or the emergence of new hate ideologies on social media as mentioned in section 1, indicating that the expansion of the hate speech dataset may be accompanied by the emergence of new hate ideologies.",
"Therefore, we separate the collected data into a sequence of 15 subsets according to their ideologies and sort them by the date of the first tweet post in each subset, from the earliest to the latest.",
"The task on each subset is to identify the hate group given the tweet text.",
"Qian et al. (2018a) propose a hierarchical Conditional Variational Autoencoder model for the fine-grained hate speech classification task.",
"The architecture and the training process of their model require the number of classes to be pre-defined.",
"However, we do not pre-define the number of classes in our task since such kind of information is not available in the real-world application of lifelong learning.",
"The model should be able to incorporate emerging hate groups at any time of training.",
"In order to satisfy this condition, we formulate the task of identifying the group as a ranking task, instead of a classification task.",
"For each tweet, we provide the model with a set of candidate groups, consisting of all the previously seen hate groups, including the ground truth group.",
"The model takes each combination of the tweet and the candidate group as input and outputs a score.",
"The corresponding loss function is: L r = (cid:88) ( x,y s ) D (cid:88) y i Y \\{ y s } h ( f ( x, y s ) f ( x, y i )) (1) where x is the tweet text, y s is the ground truth group of x .",
"Y is candidate group set of x , which consists of all the seen hate groups until x is observed by the model, including the ground truth group y s of x , so y i Y \\{ y s } is the negative candidate group of x .",
"f is the scoring model parameterized by .",
"h ( a ) = max(0 , m a ) , m is the chosen margin.",
"Same as in other lifelong learning studies, we consider learning on each of the hate ideologies in the sequence as a task, so we have a sequence of 15 tasks.",
"As mentioned in section 1, the similarity among our tasks is unstable and relatively low.",
"Therefore, when the model is continuously trained on the tasks, it may encounter a sudden change of vocabulary, topic, and input data distribution.",
"This makes our tasks more challenging compared to the other lifelong learning tasks because the abrupt change can make the catastrophic forgetting problem more severe.",
"This is also the reason that some techniques achieving significant improvement in the image classification tasks do not perform well on our task (see section 5).",
"As mentioned in section 2, one way to alleviate catastrophic forgetting is to use a memory module, storing a small number of real samples from previous tasks and a simple way to utilize the memorized samples is to replay the memory when training on a new task, such as mixing them with the training samples from the current task.",
"The idea behind this approach is that the memorized samples should reflect the data distribution so that the replay of the memory can help the model make invariant predictions on the samples of the previous tasks.",
"However, this approach may not work well when the size of the memory is small.",
"The reason is that when there is only a small amount of data memorized, the memory is not able to reflect the data distribution of the previous task and thus the model can easily overfit on the memorized samples instead of generalizing to all the samples in the previous task.",
"We address this problem from two aspects.",
"First, since the memory size is limited, it is beneficial to select the most representative training samples in the previous tasks to memorize.",
"Second, simply storing the real training samples in the memory may not be sufficient to represent the knowledge of the previous tasks, so we need a better way to distill knowledge from the observed samples along with a method to utilize it when training on a new task.",
"We combine two techniques: Variational Representation Learning (VRL) and Load-Balancing Self-Organizing Incremental Neural Network (LB-SOINN) to achieve these goals.",
"We propose a supervised version of LB-SOINN to select the most important training samples in the current task.",
"VRL not only distills the knowledge from the current training task but also provides an appropriate hidden representation as input for the LB-SOINN, so we introduce VRL first.",
"The distilled knowledge of previous tasks can take various forms, but the key point is that it should be related to the data distribution of the corresponding task so that it can be utilized to alleviate catastrophic forgetting.",
"Inspired by the Variational Autoencoder (VAE) (Kingma and Welling, 2013), we consider the distribution of the hidden representation of the input data as the distilled knowledge.",
"Obj = (cid:88) x X log p ( x ) (2) p ( x ) = (cid:90) z p ( x | z ) p ( z ) dz (3)",
"z is the latent variable, i.e., the hidden representation of the input.",
"Since the integration over z is intractable, we instead try to maximize the corresponding evidence lower bound (ELBO) and the corresponding loss function is as follows: L vae = (cid:88) x XE z p ( z | x ) [ log p ( x | z )]+ DKL [ q ( z | x ) || p ( z )] (4) p ( x | z ) , q ( z | x ) , and p ( z ) are the likelihood distribution, posterior distribution, and prior distribution.",
", , and indicate parameterization.",
"The loss function can be separated into two parts.",
"The first part E [ log p ( x | z )] is the reconstruction loss, trying to reconstruct the input text from the latent variable.",
"It pushes z to reserve as much information of the input as possible.",
"This is consistent with our goal to learn the knowledge of the data distribution.",
"The second part is DKL [ q ( z | x ) || p ( z )] , where DKL is the KullbackLeibler (KL) divergence.",
"Minimizing it pushes the posterior and the prior distributions to be close to each other.",
"By assuming the posterior p ( z | x ) to be a multivariate Gaussian distribution N ( z , z ) , the latent variable z is sampled from N ( z , z ) .",
"In the original VAE, p ( z ) is chosen to be a simple Gaussian distribution N (0 , 1) .",
"However, this is over-simplified in our task because different from the unsupervised generation task of the original VAE, our ranking task is supervised.",
"Our task not only requires z to contain information of the tweet text itself but also requires it to indicate the group information of the tweet.",
"In other words, the distilled distribution should be conditioned on both the Figure 2: An illustration of our method.",
"tweet and its group label to reflect the data distribution in a supervised task.",
"Setting the prior to be the same for all the hate groups pushes z or the distribution of z to ignore the label information.",
"Instead, the prior should be different for each hate group, so we replace p ( z ) with p ( u | y s ) , where y s is the group label of x and u is the latent variable.",
"p ( u | y s ) is assumed to be a multivariate Gaussian distribution N ( u , u ) .",
"Note that the replacement itself can not guarantee p ( u | y s ) to be different for each hate group because the loss function in equation 4 does not push p ( u | y s ) to satisfy this condition.",
"However, the ranking loss function 1 fills in the gap.",
"Therefore, our loss function on the current training task is a combination of these two.",
"L cur = (cid:88) ( x,y s ) D (cid:88) y i Y \\{ y s } h ( f ( x, y s ) f ( x, y i )) + E z p ( z | x ) [ log p ( x | z )] + DKL [ q ( z | x ) || p ( u | y s )] (5) The right part of Figure 2 illustrates the computation process of VRL.",
"VRL provides a way to summarize knowledge into latent variable distributions.",
"However, we still need a method to utilize the learned distribution to alleviate catastrophic forgetting.",
"We do this by incorporating a memory module D mem to store a small subset of important training samples along with their latent variable distributions, so each sample stored in the memory is a tuple of ( x, y z , q (cid:48) ( z | x )) .",
"Here q (cid:48) ( z | x ) is the distribution computed when the model completes training on the task that ( x, y z ) belongs to.",
"The memorized samples are taken as anchor points when training on a new task.",
"We introduce a memory KL divergence loss to push q ( z | x ) computed when training on a new task to be close to the memorized distribution q (cid:48) ( z | x )) .",
"Therefore, the complete loss function is: L = L cur + D KLmem = L cur + (cid:88) ( x,y s ) D mem DKL [ q ( z | x ) || q (cid:48) ( z | x ))] (6) Since the size of the memory is limited, we introduce a supervised version of LB-SOINN to select the most important training samples in the current task.",
"The input for the LB-SOINN is the hidden representation of the tweet text, which is z in the case of Variational Representation Learning (see Figure 2).",
"We refer readers to Zhang et al. (2013) for the detailed explanation of LB-SOINN.",
"The original LB-SOINN is an unsupervised clustering algorithm that clusters unlabeled data by topology learning.",
"We utilize the topology learning of LB-SOINN instead of clustering since our task is supervised.",
"Therefore, we make the following adjustments to the original LB-SOINN.",
"1) The criteria to add a new node: Add a new node to the node set if one of the following condition is satisfied:",
"a) The distance between the input and the winner is larger than the winner's threshold.",
"b) The distance between the input and the second winner is larger than the second winner's threshold.",
"c) The label of the input sample is not the same as the label of the winner.",
"2) Build connections between nodes: Connect the two nodes with an edge only if the winner and the second winner belong to the same class.",
"3) We disable the removal of edges whose ages are greater than a predefined parameter.",
"We disable the deleting of nodes and the algorithm of updating the subclass labels of every node.",
"The node label is the label of the instances assigned to it.",
"Our adjusted algorithm guarantees that each node will only be assigned the samples from one class.",
"LB-SOINN keeps track of the density of each node, which is defined as the mean accumulated points of a node.",
"A node gets points when there is an input sample assigned to it.",
"If the mean distance of the node from its neighbors is large, we give low points to the node.",
"In contrast, if the mean distance of the node from its neighbors is small, we give high points to the node.",
"Therefore, the density of the node reflects the number of nodes close to it and also the number of samples assigned to it.",
"We take the density of the node as a measurement of the importance of the samples assigned to the node.",
"After the LB-SOINN finishes training on the samples from the current task, we sort the samples according to the density of the node they are assigned to and the top K samples are selected to write to the memory.",
"We divide the memory equally for each of the previous tasks, so K = M/t , where M is the total memory size and t is the number of observed tasks, including the current task.",
"The old memory consists of samples from the previous t 1 tasks and each task keeps M/ ( t 1) samples in the old memory.",
"For each of the t 1 tasks, the M/ ( t 1) M/t samples with the lowest node densities are deleted, resulting in K empty slots in the memory, which is then rewritten by the selected K samples in the current task.",
"For each task, we randomly sample 5000 tweets from the 80% of the collected data for training, 10% of the collected data for testing, and the rest 10% for development.",
"We allow the model to make more than one pass over the training samples in the current task or the current memory during training.",
"We use average macro F1 score and average micro F1 score for evaluation.",
"where F 1 t,i is the F1 score, either macro F1 or micro F1, achieved by the model on the i th task after being trained on the t th task.",
"The larger this metric, the better the model.",
"We compare our methods with the following methods: Fine-tuning: The model contains two bidirectional LSTM encoders (Hochreiter and Schmid-huber, 1997; Zhou et al., 2016; Liu et al., 2016) to encode the tweet and the group separately.",
"The score of the group is calculated as the cosine distance between the hidden state of the tweet encoder and that of the group encoder.",
"This model is also the backbone model of all the methods described below, except Fine-tuning + BERT.",
"The model is directly fine-tuned on the stream of tasks, one after another, by the ranking loss function in 1.",
"Fine-tuning+BERT: The training framework is the same as above, but each encoder is replaced by a pre-trained BERT model (Devlin et al., 2019) followed by a linear layer.",
"The linear layers are fine-tuned during training.",
"Fine-tuning+RMR (Random Memory Replay): We augment the fine-tuning method with an additional memory module.",
"Same as in section 4.2, the memory is divided equally for each task, but instead of using LB-SOINN, the K samples are randomly sampled from the current training data and then rewrite K random slots in the old memory.",
"EWC: EWC is a regularization-based method, adding a penalty term (cid:80) i 2 F i ( i i ) 2 to the ranking loss function 1.",
"F i is the diagonal of the Fisher information matrix F , is the model parameter, and i labels each parameter.",
"is the model parameter when the model finishes training on the previous task.",
"is set to 2e6 in our experiments.",
"GEM: We use the episodic memory in the original paper: the memory is populated with m random Number of observed tasks t=5 t=10 t=15 Avg F1 score (%) Macro Micro Macro Micro Macro Micro Multitask 15.26 67.07 5.05 37.20 3.57 38.61 Fine-tuning 6.02 16.44 4.35 5.77 3.96 6.18 Fine-tuning + BERT 6.02 16.44 4.06 5.45 3.03 5.80 Fine-tuning + RMR 11.15 44.40 2.56 15.77 3.51 15.19 EWC 8.57 20.42 2.42 6.81 1.95 7.27 GEM 13.04 30.95 3.07 12.51 2.70 15.07 Ours 12.61 49.75 6.96 47.30 5.13 44.62 Table 2: Experimental results.",
"samples from each task.",
"m is a predefined size of the episodic memory.",
"We set m = 100 in our experiments, so each task can add 100 tweets to the memory.",
"By the end of the 15 tasks, the total memory of GEM contains 1500 tweets.",
"Multitask Learning: The tasks are trained simultaneously.",
"We mix the training data from multiple tasks to train the model.",
"This setting does not follow the lifelong learning setting where the tasks are trained sequentially.",
"We add this setting in our experiments to show the potential room for improvement concerning each lifelong learning method.",
"We do not compare our method with Support Vector Machine (Suykens and Vandewalle, 1999) or Logistic Regression, because they require the number of classes to be fixed and to be known in advance, which is unrealistic in our tasks.",
"We also do not compare our method with Qian et al. (2018a) since the latter also has this requirement, as mentioned in section 3. Adapting their method for the lifelong learning setting requires modifying both the model architecture and the training algorithm, which is beyond the scope of this paper.",
"In all our experiments, we use 1-layer bi-LSTM as encoders except the fine-tuning + BERT setting and we use cosine distance to measure similarity.",
"The input of the group encoder is the concatenation of the group name and its hate ideology.",
"We use 1-layer bidirectional GRU (Cho et al., 2014) as the decoder in VRL.",
"The hidden size of the encoders and the decoders is 64.",
"The latent variable size in VRL is 128.",
"We use 300-dimensional randomly initialized word embeddings.",
"All the neural networks are optimized by Adam optimizer with the learning rate 1e-4.",
"The batch size is 64.",
"The loss margin m = 0 .",
"5 .",
"The maximum number of training epochs for each task is set to 20.",
"For LB-SOINN, =1000 , =1 .",
"04 .",
"The memory size is limited to 1000 tweets for all the methods using a memory module except GEM.",
"We do not set episodic memory size for each task as GEM because for lifelong hate speech classification, the number of tasks keeps increasing in the real world, and assuming unlimited total memory is unrealistic.",
"The experimental results are shown in Table 2. We report the performance of each method after the model finishes training on the first 5 tasks, first 10 tasks, and all the 15 tasks.",
"The average macro-F1 score is much lower than the average micro-F1 score due to the imbalanced data of each task.",
"The large performance gap between the multitask training and fine-tuning shows that there exists severe catastrophic forgetting and that the low average F1 scores in the fine-tuning setting are not due to the model capacity.",
"Replacing the bi-LSTM encoder with the pre-trained BERT encoder does not improve the performance.This reconfirms that the low scores result from catastrophic forgetting, not model capacity.",
"Actually fine-tuning and fine-tuning with BERT achieves the same average F1 scores at t = 5 because both models completely forget the previous tasks after converging on the fifth task, so both models achieve the same F1 scores on the testing data of the fifth task while achieving 0 scores on the previous four tasks.",
"Due to the large model capacity of BERT, fine-tuning with BERT tends to overfit on the training data more seriously, leading to slight performance decline at t = 10 and t = 15 compared to using bi-LSTM encoders.",
"Since model capacity is not the key factor to solve catastrophic forgetting, we simply use bi-LSTM as encoders in our model instead of BERT, considering the computational cost.",
"Adding RMR to the fine-tuning setting achieves significant performance improvement, even better than EWC or GEM.",
"This is related to the characteristic of our tasks mentioned at the end of section 3. EWC remembers previous tasks by slowing down the update of the model parameters important to them, which is more suitable for the sequence of tasks that are similar to each other.",
"However, significant changes in vocabulary, topic, or input data distribution are very common in our sequence of tasks, making memory replay more efficient than EWC.",
"The performance of GEM during the second half of the training is close to that of fine-tuning with RMR, but there exists a gap in the first half.",
"The reason is that GEM sets an episodic memory for each task, of which the size is 100 in our experiments, so before the 10th task in the sequence, the size of the total memory available for GEM is less than that of the memory module used in the fine-tuning with RMR setting.",
"Although RMR improves the performance, the average F1 scores still drop quickly when the number of tasks increases.",
"In the late stage of sequential training, each task can only keep dozens of samples in the memory and the model is not able to generalize well based on the memory.",
"Our method solves this problem by combining VRL and LB-SOINN memory replay.",
"The performance of our model is better and more stable than the other methods when the number of tasks increases.",
"Our method achieves higher scores than multitask training in the last four columns of Table 3 because learning on one task is easier than learning on a mix of tasks simultaneously.",
"Every model in our sequential training experiments can easily achieve high F1 scores on the current task, making a large contribution to the average F1 scores.",
"However, when doing multitask training, the model loses this benefit.",
"moving D KLmem from the final loss function in equation 6 does not lower the performance when the number of observed tasks is small ( t =5 ) because each task can store a few hundreds of samples in the memory at the early stage of sequential training, which is sufficient for the model to learn the previous tasks.",
"However, when the number of tasks increases, D KLmem shows its effect on alleviating catastrophic forgetting.",
"Fine-tuning+LB-SOINN (Table 3) does not perform as well as fine-tuning+RMR (Table 2), while VRL+LB-SOINN (i.e., full model) performs better than VRL+RMR (Table 3).",
"The reason lies in the input for LB-SOINN.Compared to the hidden representations spread evenly in the hidden space, the hidden representations which are well-organized in different group clusters make it easier for LB-SOINN to learn a reasonable topology structure of the training samples.",
"VRL achieves this by explicitly pushing the hidden representation of tweets to follow a learned multivariate Gaussian distribution unique to each group.",
"On the other hand, directly using the hidden state of the tweet encoder does not exhibit such kind of characteristics.",
"VRL not only distills task knowledge but also provides an appropriate input for LB-SOINN, as stated in section 4. 5.3 Error Analysis Although our model achieves significant improvement over the baseline methods, we observe that our method does not perform well on the first task.",
"As shown in Figure 3, there exists a large gap between the performance on the first task and the other tasks, and the micro-F1 score on the first task quickly drops to almost 0 when the number of observed tasks increases.",
"We find the same results after we change the order of tasks in the sequence, so this is not the result of the task difficulty but is the result of our method.",
"We find this problem is due to the reconstruction loss, which is the first part in equation 4. The model observes a very limited number of tweets when training on the first task, making it difficult to learn the language model and reconstruct the tweet.",
"As a result, the tweet representation learned on the first task may not contain the information we require, resulting in a large performance gap.",
"When the number of observed tasks increases, this problem goes away quickly.",
"We anticipate pre-training the VAE in our model (the left branch in Figure 2) on a large Twitter corpus can alleviate this problem at the beginning of training.",
"In this paper, we introduce the lifelong hate speech classification task and propose to use the VRL and LB-SOINN memory module to alleviate catastrophic forgetting.",
"Our proposed method has the potential to benefit other lifelong learning tasks where the similarity between the contiguous tasks can be low.",
"We intend to make our implementation freely available to facilitate more application and investigation of our method in the future."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain"
] |
[
"We present a data-driven, end-to-end approach to transaction-based dialog systems that performs at near-human levels in terms of verbal response quality and factual grounding accuracy.",
"We show that two essential components of the system produce these results: a sufficiently large and diverse, in-domain labeled dataset, and a neural network-based, pre-trained model that generates both verbal responses and API call predictions.",
"In terms of data, we introduce TicketTalk, a movie ticketing dialog dataset with 23,789 annotated conversations.",
"The movie ticketing conversations range from completely open-ended and unrestricted to more structured, both in terms of their knowledge base, discourse features, and number of turns.",
"In qualitative human evaluations, model-generated responses trained on just 10,000 TicketTalk dialogs were rated to make sense 86.5% of the time, almost the same as human responses in the same contexts.",
"Our simple, API-focused annotation schema results in a much easier labeling task making it faster and more cost effective.",
"It is also the key component for being able to predict API calls accurately.",
"We handle factual grounding by incorporating API calls in the training data, allowing our model to learn which actions to take and when.",
"Trained on the same 10,000-dialog set, the model's API call predictions were rated to be correct 93.9% of the time in our evaluations, surpassing the ratings for the corresponding human labels.",
"We show how API prediction and response generation scores improve as the dataset size incrementally increases from 5000 to 21,000 dialogs.",
"Our analysis also clearly illustrates the bene-fits of pre-training.",
"To facilitate future work on transaction-based dialog systems, we have published the TicketTalk dataset at https:// git.io/JL8an .",
"Building a dialog system that handles human conversational behavior is challenging because it must respond sensibly and relevantly to a wide variety of context-sensitive user input over multiple conversation turns.",
"Task-based systems, e.g. those used for ticket booking, food ordering, etc., face further hurdles to incorporate ever changing, real-world knowledge into the dialog and execute transactions.",
"Recently, there has been growing interest in the so-called end-to-end approach to task-based dialog systems (Peng et al., 2020; Hosseini-Asl et al., 2020; Lin et al., 2020; Wen et al., 2017; Bordes et al., 2016) due to its relatively simple and scalable architecture, and promising results in chatbot applications (Vinyals and Le, 2015; Serban et al., 2015b).",
"Inspired by sequence-to-sequence learning (Sutskever et al., 2014), this approach trains a single model on a dialog dataset to form the basis for a given application.",
"For each dialog turn, the model effectively takes the conversation history as its input and generates an appropriate response.",
"To gain wider adoption, the end-to-end approach must overcome challenges with respect to training data and factual grounding.",
"In terms of training data, there is already general concern in the NLP community about the lack of quality, task-oriented dialog datasets, especially domain-specific collections (Wen et al., 2017; Bordes et al., 2016).",
"This problem is compounded for end-to-end approaches since they typically require a large amount of in-domain data to generate competitive results.",
"With respect to grounding, since the end-to-end approach is based on a single neural network, it must either incorporate the knowledge base (KB) into the model itself, or the model must be able to accurately predict which API calls to make and when.",
"In addition, details returned from the API calls must be accurately incorporated in conversational responses.",
"This is contrasted with modular architectures where the user's intent is derived from a structured representation and then used to determine which API calls to make such as in Rastogi et al. (2020) and Madotto (2020).",
"In this work we promote an end-to-end approach to single-domain, transaction-based dialog systems and describe how we overcome both data and grounding challenges described above.",
"In qualitative evaluations, our models perform on par with humans in generating verbal responses as well as predicting API calls.",
"Just two components form the basis for this system: a sufficiently large, in-domain, labeled dataset and a pre-trained transformer model.",
"Combining natural language output and structured API calls into a unified text-to-text-format allows us to leverage general purpose text-to-text transformers to train models.",
"Specifically, we use the T5 infrastructure (Raffel et al., 2019) and show that its pre-training feature has a significant impact on evaluations, boosting scores by 30 percent.",
"Models were trained on our TicketTalk dataset (aka Taskmaster-3), a movie ticketing dialog corpus with 23,789 conversations labeled with a simple yet unique API-based annotation schema.",
"This makes it one of the largest single-domain datasets to date.",
"A public release of the dataset accompanies this paper.",
"We chose movie ticketing since it is both transaction-based and relatively complex, but our overall approach to dialog systems applies to any task-based domain.",
"While there is a lot of recent work on multi-domain task-based dialog systems, human-like interaction for even single-domain tasks has yet to be demonstrated.",
"By first solving the problem for a single domain, we argue that replicating the process for multiple domains will be achievable by simply training on additional high-quality datasets labeled with the same API-focused strategy.",
"Over the past few years the NLP community has responded to the lack of dialog data with larger, publicly released task-oriented datasets spanning multiple domains (Wu et al., 2020; Budzianowski and Vulic, 2019).",
"This underscores the crucial role data plays in any approach to task-based dialog systems.",
"MultiWOZ (Budzianowski et al., 2018) consists of 10,420 dialogs in multiple domains and has become a popular benchmarking corpus for state tracking.",
"It has also undergone a series of subsequent refinements.",
"MSR-E2E, featured in the Microsoft dialog challenge (Li et al., 2018), has 10,087 dialogues in three domains, movie-ticket booking, restaurant reservation, and taxi booking.",
"Taskmaster-1 (Byrne et al., 2019) offers 13,215 dialogs in six domains and has been updated with a second installment, Taskmaster-2 (Byrne et al., 2020), which adds 17,289 more dialogs totalling over 30,000.",
"The Schema Guided Dialogue dataset (Rastogi et al., 2020) has 22,825 dialogs in multiple domains.",
"MetaLWOZ (Lee et al., 2019) has 37,884 dialogs in 227 domains and is aimed at helping models more accurately predict user responses in new domains.",
"Both Schema and MetaLWOZ are used in DSTC8 (Kim et al., 2019).",
"In addition to these, Serban et al. (2018) provides a thorough survey of dialog corpora released in previous years.",
"In contrast to the end-to-end 1 approach, traditional, modular strategies employ a division of labor among the components, e.g. understanding, state tracking, dialog policy, generation, etc., which are either largely hand-crafted or derived from training individual models on labeled datasets (Wen et al., 2017; Young et al., 2013).",
"This architecture is inherently more complex than the single-model end-to-end strategy we propose and can require significantly more design and engineering.",
"Moreover, since each module requires its own supervised training dataset, it is harder to apply to different domains (Serban et al., 2015a).",
"It has also been considered by some to be better equipped to interact with external APIs (Sukhbaatar et al., 2015; Wen et al., 2017) 1 The term end-to-end is sometimes also used when describing parts of modular systems (Li et al., 2017; Wen et al., 2017) but it is fundamentally different from the single text-to-text transformer model approach we present here.",
"and therefore might be better suited for task-based dialogs.",
"As mentioned above, we show that our single model-based approach can accurately generate both the appropriate response as well as predict the correct API call at the right time.",
"Earlier work by Andreas et al. (2020) and Hosseini-Asl et al. (2020) employs a similar modeling approach to predict dialog state in task-based dialogs, which can be seen as a precursor to our API call prediction strategy.",
"Following the annotation strategy used for Taskmaster-1 (Byrne et al., 2019), labels are limited to basic entities and events (i.e. API calls).",
"The dataset was created by over 4000 unique, native or near-native US English speakers.",
"Further demographic information (e.g. gender, dialect, etc.) is not known, and no personal identifi-able information was gathered.",
"The rationale for limiting dialogs to a single domain (movie ticketing) is based on our hypothesis that human-level performance in terms of both response generation and API call prediction for a particular task requires larger (i.e. 10,000+), more diverse datasets than are currently available.",
"In other words, carefully curated, annotated datasets that cover all the idiosyncrasies of a single task or transaction are a key factor in model performance.",
"Concern about the cost and efficiency of creating these larger corpora has led some researchers to look for approaches that alleviate dependencies on annotated data (Budzianowski and Vulic, 2019; Wen et al., 2017).",
"However, significant time and expense can be saved when assembling these corpora by simplifying the collection and annotation procedures.",
"In addition, little to no training is required for workers to be able to perform consistently well.",
"Using self-dialogs (where a worker creates the whole conversation, both user and agent turns) facilitates building large and linguistically rich datasets since it is both simple and cost effective, and allows users to draw on their lifetime of conversational experiences.",
"This in turn ensures the model can handle the wide range of human conversational behaviors that emerge in natural dialog.",
"For this project we extended the self-dialog to include over three dozen sets of user instructions to generate a wider variety of conversations, from open-ended prompts to more specific instructions that require specific types of exchanges.",
"For example, one set simply instructs workers to write the transcription of a conversation in which a person makes a successful ticket transaction with a booking agent.",
"This allows dialog creators to express their unique view of what a typical movie ticketing transaction would be, structuring each conversation how they see fit.",
"They are also instructed to find real values for required details (i.e. slots) such as time, date, theater, movie, etc. using a movie or theater site of their choice for a specific location.",
"This ensures the dataset has a large and diverse KB.",
"In contrast, the more restrictive sets of instructions focus on specific sub-dialogs for error handling, changing a detail, entity resolution, and the like.",
"In such cases we often provide a limited KB with one or more values for all the details so the worker can focus on the primary task of creating a realistic set of exchanges for this type of interaction.",
"In a third type of scenario, the conversation is partially completed and the user's task is focused on a very specific part of the exchange.",
"This allows us to fill holes in the data quickly and cost effectively.",
"That is, we can create large numbers of short, conversational examples that the model does not handle adequately and then retrain for better results.",
"Dialog data annotation can be complex and time consuming even for trained linguists as it typically involves carefully and consistently labeling dialog states, user intents, and dialog acts, among others (Henderson et al., 2013; Wen et al., 2017; Budzianowski et al., 2018).",
"The API-targeted approach is far more straightforward since only basic entities (e.g. name, time, number of tickets, theater, movie attributes, etc.) and API calls (e.g. to find theaters, movies, and show times, book tickets, etc.) are labeled.",
"The task is therefore easier to learn, faster to complete, and cheaper to run.",
"Moreover, as we discuss below, it fits well with the text-to-text format we use in our approach to transaction-based dialog systems.",
"Fifteen workers performed the annotations using a web-based tool that allows for only well-formed labels.",
"To label an API call, the API name is first selected which in turn creates the correct set of possible (arg name, arg value) pairs to choose from, both for inputs and responses.",
"This ensures that the model is trained on syntactically well formed API calls.",
"No annotations were removed from the dialogs.",
"The full annotation schema is included with the dataset release at https://git.io/JL8an .",
"We implement a new approach to end-to-end dialog systems by combining natural language output and structured API calls into a unified text-to-text format where the input and output are always text strings.",
"This allows us to leverage widely available, state of the art, general purpose text-to-text transformers as the foundation of our system.",
"Specifically, we used the publicly available Text-To-Text Transfer Transformer (T5) (Raffel et al., 2019) to train our models.",
"The T5 framework was designed specifically to explore transfer learning techniques for NLP and includes pre-training on the Colossal Clean Crawled Corpus (C4), composed of hundreds of gigabytes of web-based English text (Raf-fel et al., 2019).",
"The original pre-training objective for the C4 corpus in the T5 framework was a de-noising task, i.e. recovering missing words from the input.",
"Since this type of task scales well to multiple downstream tasks, we used our custom inputs/targets from the TicketTalk dataset to represent an end-to-end task based dialog system and ultimately achieve positive results.",
"We use T5-Base (Raffel et al., 2019) as our pre-trained model, which follows the transformer architecture (Vaswani et al., 2017) and consists of 220M parameters.",
"It was pre-trained on the large scale C4 dataset mentioned above for 1M steps with a span corruption objective.",
"We fine-tune this model on the Taskmaster-3 dataset for 40000 steps with a constant learning rate of 0.001 using 16 TPU v3 chips.",
"The batch size was set to 131,072 tokens per batch.",
"The maximum input sequence length and output length were set to 1024 and 256 tokens respectively.",
"The goal of our model is to generate a text string that either serves as a verbal response to the user or that contains one or more API calls with the data required at the current stage of the conversation.",
"Verbal responses come in two flavors: those that depend on a particular API call details and those that do not.",
"For example, when an API is invoked to find theater names for a given movie and location, the details returned from the API call must be correctly incorporated into the system's next response, e.g. I found two theaters, AMC 20 and Century City 16.",
"In contrast, other verbal outputs, e.g. What city do you plan to see the movie in? are derived from the overall conversation history.",
"Given the required text-to-text format used in our approach, we identify the type and function of each string by converting the annotations to a set of tokens.",
"As shown in Table 2 and 3, tokens identify the speaker, i.e. user vs. agent, the string type i.e. utterance vs. API call, and the details of each API call, both names as well as input parameters and values, and response parameters and values.",
"We also tag the conversation context which separates the most recent turn from previous turns.",
"Our token key is shown in Table 2. The first step is to use tokens to represent the user and agent interactions, providing speaker information to the model by the use of < U > and < A > .",
"We then convert any API invocations into their text equivalent using tokens for marking API names, argument types and values, i.e. < PN > , < PAN > , etc.",
"The results of these two steps are shown in Table 3. U user A agent PN program name PAN program argument name PAV program argument value PR program response PRAN program response argument name PRAV program response argument value C conversation context Table 2: Tokens identifying string type and function < U > I'd like to watch a movie.",
"< A > Sure.",
"I can help you with that.",
"What kind of movies are you interested in?",
"< U > Are there any good action movies?",
"API call: < PN > find movies < PAN > name.genre < PAV > action Response: < PR > find movies < PRAN > name.movie < PRAV > John Wick < PRAV > Jack Ryan < A > I found John Wick and Jack Ryan.",
"We use the following algorithm to accomplish this: 1. Initialize conversation context to an empty string.",
"2. Iterate through the interactions and do the following:",
"(a) If the sentence is a user utterance ( < U > ) or a program response( < PR > ), add it to the model input along with the conversation context (if present).",
"(b) If the sentence is an agent utterance ( < A > ) or program invocation ( < PN > ), add it to the model target.",
"(c) If both model input and target have been created, output the (input, target) pair and update the conversation context to reflect this.",
"(d) Continue (2) to generate the next input, target pair.",
"Using these rules, the model inputs and targets are generated as in Table 4 (columns INPUTS and TARGETS; e.g. input: < U > I'd like to watch a movie.). Once the model has been trained on inputs and targets, we can use the system to accomplish tasks in the following manner:",
"2. Provide the formatted utterance to the model.",
"3. Obtain model prediction",
"(a) If the model prediction contains the agent ( < A > ) token, format it and show it to the user.",
"i.",
"Update conversation context and start again from (1).",
"(b) If the model prediction contains the program ( < PN > ) token: i.",
"Extract program argument name ( < PAN > ) and value ( < PAV > ).",
"ii.",
"Issue the API call by providing it to the API adapter.",
"iii.",
"Format API results and provide it to the model along with the conversation context.",
"iv.",
"Start from (3).",
"This interaction lifecycle is illustrated in Figure 3 (System interaction life cycle). 4.4 Invoking APIs: When we detect an API call in the output, we invoke the API, retrieve the results, and embed the responses in the next model input.",
"As shown in Figure 4 (Example API invocation, outside the model), each API call predicted by the model typically contains a generic API name, such as find-movies or find-theaters, and a list of key-value pairs that detail the specific parameters to be used while invoking the API. The API call, while structured, may still include pronouns or other co-referential phrases as input parameters.",
"For example, the date parameter for an API call might contain the value tonight, and the location value might be nearby.",
"The resolution of these entities happens outside the core interaction layer in what can be understood as the API adapter (and not the actual API itself).",
"This not only helps simplify annotation, but also helps leverage existing solutions to these well defined problems.",
"This separation of the API layer is also useful for encapsulating all API specific artifacts, like authentication tokens, endpoint addresses and data formatters.",
"In this way, the end-to-end system is able to interact with the user to solicit details relevant to the task, generate API calls to fetch data from external knowledge sources, and use the responses provided by the API call to construct natural language responses.",
"In this section, we show how our end-to-end approach to transaction-based dialog systems produces verbal responses and predicts API calls with near human-level quality and accuracy.",
"Through human qualitative evaluations, we show that two aspects in particular, dataset size and pre-training, significantly affect performance.",
"Below we describe our evaluation methodology followed by a detailed discussion of the experiment results.",
"Dataset size and pre-training are key factors in creating models for end-to-end dialog systems.",
"To understand the amount of data required for our approach, we trained four models, each on a different-sized, randomly selected subset of the TicketTalk dataset, namely 5000, 7500, 10,000 and 21,000 dialogs.",
"To measure the effect of transfer learning, we trained a second 10,000-dialog model without the T5 framework's pre-training component, setting up an A-B comparison with the pre-trained model.",
"As mentioned earlier, our models generate three types of output: API calls, verbal responses based on the results of an API call, and plain verbal responses based on the conversation context (i.e. not dependent on a particular API call response).",
"We set up a pair of evaluations for each type.",
"The first evaluation asked human raters to evaluate the model's output given a specific conversation history (i.e. context) while the second asked raters to evaluate the human's response for the same set of contexts.",
"Each experiment included 1000 context-response pairs of varying lengths, i.e. some conversation histories might have just one exchange (a user and agent turn) while others could have up to nine exchanges.",
"We requested three ratings for each question distributed among a pool of about 900 paid raters for a total of 3000 data points per experiment.",
"Table 5 and Table 6 below show a sample context-response pair presented to human raters for each type of model output.",
"We use our makes-sense metric to evaluate the model-generated responses and API call predictions against the human standard.",
"For verbal responses, we ask one question: Does the agent's next response make sense?",
"For API call predictions there are two questions: 1. Do all the action types, their details, and their order make sense at this point in the conversation?",
"2. Are there any actions that should be listed here but that are missing (either as additions or replacements)?",
"Again, raters answering negatively are given a set of options to choose from.",
"The offline evaluation strategy described above offers scalability and minimal rater training.",
"However, an online, interactive setup would further allow us to evaluate the ability of the model to handle errors in its own output (from previous predictions) and its robustness while dealing with novel inputs.",
"We have begun to build an interactive UI to facilitate such evaluations and show promising results of such an interaction in Table 7 below.",
"The authors of this paper played the USER role.",
"The T5 model was trained on the full TicketTalk dataset which includes nearly 24K dialogs.",
"If the model generates an API call, we create a value that mimics the response from the API adapter and provide it to the model before the next prediction.",
"We also provide the model with fake API responses (for calls like find movies and find theaters) containing entities that have never been used in the conversations in the TicketTalk dataset.",
"The conversation in Table 7 includes the exact API responses with intentionally made-up movie theater names that have been provided to the model to ensure they were not part of the training set.",
"The model behaves correctly when provided with the made up API responses that are not in the dataset.",
"When the dialog flow closely matches the dataset flows, which are significantly diverse and varied, we can recreate interactions like this relatively easily.",
"This particular example took two attempts to generate.",
"Future evaluation of our approach will include this type of interactive task where testers rate both individual as well as the overall conversation.",
"Comparing the makes-sense scores for model-generated vs. human-generated responses, a clear pattern of improvement emerges based on dataset size.",
"Table 8 presents the three types of model-generated responses evaluated: Plain responses (not strictly based on API results), Responses to APIs (based on API results), and API calls themselves.",
"When 5K and 7.5K dialogs are used for the training set, scores for model-generated responses lag behind the human-generated scores by up to 5.5%.",
"At 10K dialogs, the response scores differ by less than 2% and model-generated API predictions outperform human labels by 2.5%.",
"At 21K dialogs, model-generated responses improve to near human-level performance.",
"The 10K model's API call prediction fares better than the 21K model's for API labeling, which is likely because, as more API call combinations are introduced, they become harder for the model to interpret.",
"In contrast, adding general dialog data along with pre-training will improve the model's predictions of English utterances, which gives the 21K model an advantage in plain response scores.",
"As an automatic metric, we also provide the BLEU score generated for each model.",
"Maximum n-gram order for computing the BLEU score was set to 4. The unrestricted nature of the entities in the datasets makes it much harder to create a robust automatic metric for API call",
"predictions.",
"This is compounded by the fact that any given dialog context may allow for different sets of API calls.",
"The effect of pre-training is also very clear.",
"After training a fifth model, this time without the T5 framework's pre-training feature, we see a huge drop in evaluation scores.",
"As shown at the bottom of Table 8, we see a decrease of 30% in model performance for verbal responses and about a 25% drop in API call prediction accuracy.",
"Finally, the quality of the model's predictions stays on par with human scores throughout the",
"conversation as the context grows.",
"Figure 5 shows how the model's makes-sense scores stay on the same path after each exchange.",
"In offline human evaluations, our single-domain models trained on just 10,000 dialogs generate responses and predict API calls with near-human level accuracy.",
"A key aspect of this strategy is combining natural language output and structured API calls into a unified text-to-text format in order to leverage general purpose text-to-text transformers, such as the T5 framework.",
"In this way, predicting which API call to make and when is essentially the same as generating the appropriate utterance at a given point in the conversation.",
"The pre-training component significantly boosts performance on our downstream task of fine-tuning models on our datasets.",
"These carefully curated and sufficiently large datasets are also core to this strategy, and creating them is straightforward using the self-dialog technique and simple, API-focused annotation.",
"The TicketTalk dataset released with this paper is one such example.",
"When compared with more traditional, modular system architectures, our end-to-end approach should significantly reduce design and engineering time and resources needed to build task-based dialog systems.",
"Future work will include interactive evaluation of current models as well as an application of this approach to multiple-domain systems.",
"We would like to thank our colleagues Daniel De Freitas Adiwardana, Noam Shazeer, Filip Radlinski, and Pedro Moreno for their discussion and insights through several iterations of this paper.",
"We thank Hadar Shemtov for his guidance and support of the overall project."
] | [
"method",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"method",
"result",
"result",
"abstain",
"other",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"result",
"method",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other"
] |
[
"Models for natural language understanding (NLU) tasks often rely on the idiosyncratic biases of the dataset, which make them brittle against test cases outside the training distribution.",
"Recently, several proposed debiasing methods are shown to be very effective in improving out-of-distribution performance.",
"However, their improvements come at the expense of a performance drop when models are evaluated on the in-distribution data, which contain examples with higher diversity.",
"This seemingly inevitable trade-off may not tell us much about the changes in the reasoning and understanding capabilities of the resulting models on broader types of examples beyond the small subset represented in the out-of-distribution data.",
"In this paper, we address this trade-off by introducing a novel debiasing method, called confidence regularization, which discourages models from exploiting biases while enabling them to receive enough incentive to learn from all the training examples.",
"We evaluate our method on three NLU tasks and show that, in contrast to its predecessors, it improves the performance on out-of-distribution datasets (e.g., 7pp gain on HANS dataset) while maintaining the original in-distribution accuracy.",
"1 Introduction. Despite the impressive performance on many natural language understanding (NLU) benchmarks (Wang et al., 2018), recent pre-trained language models (LMs) such as BERT (Devlin et al., 2019) are shown to rely heavily on idiosyncratic biases of datasets (McCoy et al., 2019b; Schuster et al., 2019; Zhang et al., 2019).",
"These biases are commonly characterized as surface features of input examples that are strongly associated with the target labels, e.g., occurrences of negation words in natural language inference (NLI) datasets which are biased towards the contradiction label (Gururangan et al., 2018; Poliak et al., 2018). (Footnote 1: The code is available at https://github.com/UKPLab/acl2020-confidence-regularization)",
"As a ramification of relying on biases, models break on the out-of-distribution data, in which such associative patterns between the surface features and the target labels are not present.",
"This brittleness has, in turn, limited their practical applicability in some extrinsic use cases (Falke et al., 2019).",
"This problem has sparked interest among researchers in building models that are robust against dataset biases .",
"Proposed methods in this direction build on previous works, which have largely explored the format of several prominent label-revealing biases on certain datasets (Belinkov et al., 2019).",
"Two current prevailing methods, product-of-expert (He et al., 2019; Mahabadi and Henderson, 2019) and learned-mixin (Clark et al., 2019a) introduce several strategies to overcome the known biases by correcting the conditional distribution of the target labels given the presence of biased features.",
"They achieve this by reducing the importance of examples that can be predicted correctly by using only biased features.",
"As a result, models are forced to learn from harder examples in which utilizing solely superficial features is not sufficient to make correct predictions.",
"While these two state-of-the-art debiasing methods provide a remarkable improvement on the targeted out-of-distribution test sets, they do so at the cost of degrading the model's performance on the in-distribution setting, i.e., evaluation on the original test data which contains more diverse inference phenomena.",
"This raises the question of whether these debiasing methods truly help in capturing a better notion of language understanding or simply bias models in other directions.",
"Ideally, if such an improvement is achieved for the right reasons (i.e., better reasoning capabilities by learning a more general feature representation), a debiased model [Table 1 compares our method against the state-of-the-art debiasing methods product-of-expert, learned-mixin, and conf-reg (ours) on in-distribution, out-of-distribution, and calibration performance; all three require a biased model, while only learned-mixin requires a hyperparameter]",
"should still be able to maintain its accuracy on previously unambiguous instances (i.e., instances that are predicted correctly by the baseline model), even when they contain biases.",
"In this work, we address this shortcoming by introducing a novel debiasing method that improves models' performance on out-of-distribution examples while preserving the in-distribution accuracy.",
"The method, called confidence regularization , draws a connection between the robustness against dataset biases and the overconfidence prediction problem in neural network models (Feng et al., 2018; Papernot et al., 2016).",
"We show that by preventing models from being overconfident on biased examples, they are less likely to exploit the simple cues from these examples.",
"The motivation of our proposed training objective is to explicitly encourage models to make predictions with lower confidence (i.e., assigning a lower probability to the predicted label) on examples that contain biased features.",
"Table 1 shows the comparison of our method with the existing state-of-the-art debiasing methods: product-of-expert and learned-mixin .",
"We show that our method is highly effective in improving out-of-distribution performance while preserving the in-distribution accuracy.",
"For example, our method achieves a 7-point gain on an out-of-distribution NLI evaluation set, while slightly improving the in-distribution accuracy.",
"In addition, we show that our method is able to improve models' calibration (Guo et al., 2017) so that the confidences of their predictions are more aligned with their accuracies.",
"Overall, our contributions are the following: We present a novel confidence regularization method to prevent models from utilizing biased features in the dataset.",
"We evaluate the advantage of our method over the state-of-the-art debiasing methods on three tasks, including natural language inference, fact verification, and paraphrase identification.",
"Experimental results show that our method provides competitive out-of-distribution improvement while retaining the original in-distribution performance.",
"We provide insights on how the debiasing methods behave across different datasets with varying degrees of biases and show that our method is more optimal when enough bias-free examples are available in the dataset.",
"Biases in Datasets Researchers have recently studied more closely the success of large fine-tuned LMs in many NLU tasks and found that models are simply better in leveraging biased patterns instead of capturing a better notion of language understanding for the intended task (Bender and Koller, 2020).",
"Models' performance often drops to a random baseline when evaluated on out-of-distribution datasets which are carefully designed to be void of the biases found in the training data.",
"Using such targeted evaluation, McCoy et al. (2019b) observe that models trained on MNLI dataset (Williams et al., 2018) leverage syntactic patterns involving word overlap to blindly predict entailment.",
"Similarly, Schuster et al. (2019) show that the predictions of fact verification models trained for the FEVER task (Thorne et al., 2018) are largely driven by the presence of indicative words in the input claim sentences.",
"Following similar observations across other tasks and domains, e.g., visual question-answering (Agrawal et al., 2016), paraphrase identification (Zhang et al., 2019), and argument reasoning comprehension (Niven and Kao, 2019), researchers proposed improved data collection techniques to reduce the artifacts that result in dataset biases.",
"While these approaches are promising, only applying them without additional efforts in the modeling part may still deliver an unsatisfactory outcome.",
"For instance, collecting new examples by asking human annotators to conform to specific rules may be costly and thus limit the scale and diversity of the resulting data (Kaushik et al., 2020).",
"Recently proposed adversarial filtering methods (Zellers et al., 2019; Sakaguchi et al., 2019) are more cost effective but are not guaranteed to be artifacts-free.",
"It is, therefore, crucial to develop learning methods that can overcome biases as a complement to the data collection efforts.",
"Debiasing Models There exist several methods that aim to improve models' robustness and generalization by leveraging the insights from previous work about the datasets' artifacts.",
"In the NLI task, Belinkov et al. (2019) make use of the finding that partial input information from the hypothesis sentence is sufficient to achieve reasonable accuracy.",
"They then remove this hypothesis-only bias from the input representation using an adversarial training technique.",
"More recently, three concurrent works (Clark et al., 2019a; He et al., 2019; Mahabadi and Henderson, 2019) introduce a model-agnostic debiasing method for NLU tasks called product-of-expert .",
"Clark et al. (2019a) also propose an adaptive variant of this method called learned-mixin .",
"These two methods first identify examples that can be predicted correctly based only on biased features.",
"This step is done by using a biased model 2 , which is a weak classifier that is trained using only features that are known to be insufficient to perform the task but work well due to biases.",
"The output of this pre-trained biased model is then used to adjust the loss function such that it down-weights the importance of examples that the biased model can solve.",
"While this approach prevents models from learning the task mainly using biased features, it also reduces model's ability to learn from examples that can be solved using these features.",
"As a result, models are unable to optimize accuracy on the original training distribution, and they possibly become biased in some other ways.",
"Similar to these methods, our method also uses a biased model to identify examples that exhibit biased features.",
"However, instead of using it to diminish the training signal from these examples, we use it to scale the confidence of models' predictions.",
"This enables the model to receive enough incentive to learn from all of the training examples.",
"Confidence Regularization Methods for regularizing the output distribution of neural network models have been used to improve generalization.",
"Pereyra et al. (2017) propose to penalize the entropy of the output distribution for encouraging models to be less confident in their predictions.",
"Previously, Szegedy et al. (2016) introduce a label smoothing mechanism to reduce overfitting by preventing",
"the model from assigning a full probability to each training example. (Footnote 2: We follow the terminology used by He et al. (2019).)",
"Our method regularizes models' confidence differently: we first perform an adaptive label smoothing for the training using knowledge distillation (Hinton et al., 2015), which, by itself, is known to improve the overall performance.",
"However, our method involves an additional bias-weighted scaling mechanism within the distillation pipelines.",
"As we will show, our proposed scaling mechanism is crucial in leveraging the knowledge distillation technique for the purpose of overcoming the targeted bias while maintaining high accuracy in the training distribution.",
"Similar to our work, Feng et al. (2018) propose a regularization method that encourages the model to be uncertain on specific examples.",
"However, the objective and the methodology are different: they apply an entropy penalty term on examples that appear nonsensical to humans with the goal of improving models' interpretability.",
"On the contrary, we apply our confidence regularization on every training example with a varying strength (i.e., higher uncertainty on more biased examples) to improve models' performance on the out-of-distribution data.",
"Overview We consider the common formulation of NLU tasks as a multi-class classification problem.",
"Given a dataset D that consists of n examples (x_i, y_i), i ∈ [1, n], with x_i ∈ X a pair of sentences, and y_i ∈ {1, 2, ..., K}, where K is the number of classes.",
"The goal is to learn a robust classifier F m , which computes the probability distribution over target labels, i.e., F m ( x i ) = p i .",
"The key idea of our method is to explicitly train F m to compute lower probability , i.e., less confidence, on the predicted label when the input example exhibits a bias.",
"This form of confidence regularization can be done by computing the loss function with the soft target labels that are obtained through our proposed smoothing mechanism.",
"The use of soft targets as the training objective is motivated by the observation that the probability distribution of labels for each sample provides valuable information about the underlying task (Hinton et al., 2015; Pereyra et al., 2017).",
"When the soft targets of certain examples have higher entropy, models can be explicitly taught that some labels are more likely to be correct than the others.",
"Based on this intuition, we argue that adjusting the confidence",
"given the presence of the biased features.",
"We first produce a meaningful softened target distribution for each training example by performing knowledge distillation (Hinton et al., 2015).",
"In this learning framework, a teacher model F t , which we parameterize identically to the main model F m , is trained on the dataset D using a standard classification loss.",
"We then use F t to compute output probability distribution p i , where F t ( x i ) = p i .",
"In the original knowledge distillation approach, the output of the teacher model p i is then used to train F m .",
"We extend this approach by adding a novel scaling procedure before we distill the teacher model into F m .",
"We define a scaling function S that takes the probability distribution p i and scales it such that the probability assigned to its predicted label is lowered when the example can be predicted well by relying only on the biased features.",
"Training the biased model For several NLU tasks, biased features are known a-priori, e.g., the word overlapping features in NLI datasets are highly correlated with the entailment label (McCoy et al., 2019b).",
"We leverage this a-priori knowledge to design a measure of how well an example can be predicted given only the biased features.",
"We refer to this measure as bias weight, denoted as β_i for every example x_i.",
"Similar to previous debiasing methods (Clark et al., 2019a), we compute bias weights using a biased model .",
"This biased model, denoted as F_b, predicts the probability distribution b_i, where F_b(x_i) = b_i = ⟨b_{i,1}, b_{i,2}, ..., b_{i,K}⟩.",
"We define the bias weight β_i as the scalar value of the probability assigned by F_b to the ground-truth label: β_i = b_{i,c} (the c-th label is the ground truth).",
"Bias-weighted scaling: As illustrated in Figure 1, our method involves scaling the teacher output p_i using β_i.",
"We do this by defining a scaling function S: R^K → R^K: S(p_i, β_i)_j = p_{i,j}^{(1-β_i)} / Σ_{k=1}^{K} p_{i,k}^{(1-β_i)} for j = 1, ..., K.",
"The value of β_i controls the strength of the scaling: as β_i → 1, the scaled probability assigned to each label approaches 1/K, which represents minimum confidence.",
"Conversely, when β_i → 0, the teacher's probability distribution remains unchanged, i.e., S(p_i, 0) = p_i.",
"Training the main model: The final step is to train F_m by distilling from the scaled teacher model's outputs.",
"Since the main model is parameterized identically to the teacher model, we refer to this step as self-distillation (Furlanello et al., 2018).",
"Self-distillation is performed by training F_m on pairs of inputs and the obtained soft target labels (x_i, S(p_i, β_i)).",
"Specifically, F_m is learned by minimizing a standard cross-entropy loss between the scaled teacher output S(p_i, β_i) and the current prediction of the main model: L(x_i, S(p_i, β_i)) = −S(p_i, β_i) · log F_m(x_i). In practice, each S(p_i, β_i) is computed only once as a preprocessing step.",
"Our method does not require hyperparameters, which can be an advantage since most out-of-distribution datasets do not provide a development set for tuning hyperparameters.",
"In this section, we describe the datasets, models, and training details used in our experiments.",
"We use the MNLI dataset (Williams et al., 2018) for training.",
"The dataset consists of pairs of premise and hypothesis sentences along with their inference labels (i.e., entailment, neutral, and contradiction).",
"MNLI has two in-distribution development and test sets, one that matches domains of the training data (MNLI-m), and one with mismatching domains (MNLI-mm).",
"We consider two out-of-distribution datasets for NLI: HANS (Heuristic Analysis for NLI Systems) (McCoy et al., 2019b) and MNLI-hard test sets (Gururangan et al., 2018).",
"HANS The dataset is constructed based on the finding that the word overlapping between premise and hypothesis in NLI datasets is strongly correlated with the entailment label.",
"HANS consists of examples in which such correlation does not exist, i.e., hypotheses are not entailed by their word-overlapping premises.",
"HANS is split into three test cases:",
"(a) Lexical overlap (e.g., The doctor was paid by the actor",
"⇏",
"The doctor paid the actor ),",
"(b)",
"Subsequence",
"(e.g., The doctor near the actor danced",
"⇏",
"The actor danced ), and",
"(c)",
"Constituent",
"(e.g., If the artist slept, the actor ran",
"⇏",
"The artist slept ).",
"Each category contains both entailment and non-entailment examples.",
"MNLI-hard Hypothesis sentences in NLI datasets often contain words that are highly indicative of target labels (Gururangan et al., 2018; Poliak et al., 2018).",
"It allows a simple model that predicts based on the hypothesis-only input to perform much better than the random baseline.",
"Gururangan et al. (2018) present a hard split of the MNLI test sets, in which examples cannot be predicted correctly by the simple hypothesis-only model.",
"For this task, we use the training dataset provided by the FEVER challenge (Thorne et al., 2018).",
"The task concerns assessing the validity of a claim sentence in the context of a given evidence sentence, which can be labeled as support, refutes, or not enough information.",
"We use the Fever-Symmetric dataset (Schuster et al., 2019) for the out-of-distribution evaluation.",
"Fever-Symmetric Schuster et al. (2019) introduce this dataset to demonstrate that FEVER models mostly rely on the claim-only bias, i.e., the occurrence of words and phrases in the claim that are biased toward certain labels.",
"The dataset is manually constructed such that relying on cues of the claim can lead to incorrect predictions.",
"We evaluate the models on the two versions (version 1 and 2) of their test sets.",
"4.3 Paraphrase Identification. We use the Quora Question Pairs (QQP) dataset for training.",
"QQP consists of pairs of questions which are labeled as duplicate if they are paraphrased, and non-duplicate otherwise.",
"We evaluate the out-of-distribution performance of QQP models on the QQP subset of PAWS (Paraphrase Adversaries from Word Scrambling) (Zhang et al., 2019).",
"PAWS The QQP subset of PAWS consists of question pairs that are highly overlapping in words.",
"The majority of these question pairs are labeled as non-duplicate.",
"Models trained on QQP are shown to perform worse than the random baseline on this dataset.",
"This partly indicates that models largely rely on lexical-overlap features to perform well on QQP.",
"We report models' performance on the duplicate and non-duplicate examples separately.",
"Baseline Model We apply all of the debiasing methods across our experiments on the BERT base model (Devlin et al., 2019), which has shown impressive in-distribution performance on the three tasks.",
"In our method, BERT base is used for both F t and F m .",
"We follow the standard setup for sentence pair classification tasks, in which the two sentences are concatenated into a single input and the special token [CLF] is used for classification.",
"Biased Model ( F b ) We consider the biased features of each of the examined out-of-distribution datasets to train the biased models.",
"For HANS and PAWS, we use hand-crafted features that indicate how words are shared between the two input sentences.",
"Following Clark et al. (2019a), these features include the percentage of hypothesis words that also occur in the premise and the average of cosine distances between word embedding in the premise and hypothesis.",
"4 We then train a simple 3 https://github.com/TalSchuster/ FeverSymmetric 4 We include the detailed description in the appendix.",
"nonlinear classifier using these features.",
"We refer to this biased model as the hans model.",
"For MNLI-hard and Fever-Symmetric, we train a biased model on only hypothesis sentences and claim sentences for MNLI and FEVER, respectively.",
"The biased model is a nonlinear classifier trained on top of the vector representation of the input sentence.",
"We obtain this vector representation by max-pooling word embeddings into a single vector for FEVER, and by learning an LSTM-based sentence encoder for MNLI.",
"State-of-the-art Debiasing Models We compare our method against existing state-of-the-art debiasing methods: product-of-expert (He et al., 2019; Mahabadi and Henderson, 2019) and its variant learned-mixin (Clark et al., 2019a).",
"product-of-expert ensembles the prediction of the main model ( p i ) with the prediction of the biased model ( b i ) using p (cid:48) i = softmax (log p i + log b i ) , where p (cid:48) i is the ensembled output distribution.",
"This ensembling enables the main model to focus on learning from examples that are not predicted well by the biased model.",
"Learned-mixin improves this method by parameterizing the ensembling operation to let the model learn when to incorporate or ignore the output of the biased model for the ensembled prediction.",
"On FEVER, we also compare our method against the example-reweighting method by Schuster et al. (2019).",
"They compute the importance weight of each example based on the correlation of the n-grams within the claim sentences with the target labels.",
"These weights are then used to compute the loss of each training batch.",
"out-of-distribution performance.",
"Therefore, we run each experiment five times and report both average and standard deviation of the scores.",
"5 We also use training configurations that are known to work well for each task.",
"6 For each experiment, we train our confidence regularization method as well as product-of-expert and learned-mixin using the same biased-model.",
"Since the challenge datasets often do not provide a development set, we could not tune the hyperparameter of learned-mixin.",
"We, therefore, use their default weight for the entropy penalty term.",
"7 5 Results The results for the tasks of NLI, fact verification, and paraphrase identification are reported in Table 2, Table 3, and Table 4, respectively.",
"The results on the original development and test sets of each task represent the in-distribution performance.",
"Since we examine two types of biases in NLI, we have two debiased NLI models, i.e., Regularized-conf hans and Regularized-conf hypo which are trained for debiasing HANS and hypothesis-only biases, respectively.",
"We make the following observations from the results: (1) Our method outperforms product-of-expert and learned-mixin when evaluated on the corresponding in-distribution data of all the three tasks; (2) Product-of-expert and learned-mixin drop the original BERT baseline accuracy on most 5 Due to the limited number of possible submissions, we report the MNLI test scores only from a model that holds the median out-of-distribution performance.",
"of the in-distribution experiments; (3) Regardless of the type of bias, our method preserves the in-distribution performance.",
"However, it is not the case for the other two methods, e.g., learned-mixin only results in a mild decrease in the accuracy when it is debiased for HANS, but suffers from substantial drop when it is used to address the hypothesis-only bias; (4) Our method results in a slight in-distribution improvement in some cases, e.g., on FEVER, it gains 0.6pp over BERT baseline.",
"The models produced by Regularized-conf hans also gain 0.1 points to both MNLI-m and MNLI-mm test sets; (5) All methods, including ours decrease the in-distribution performance on QQP, particularly on its duplicate examples subset.",
"We will discuss this performance drop in Section 6.",
"The rightmost columns of each table report the evaluation results on the out-of-distribution datasets for each task.",
"Based on our out-of-distribution evaluations, we observe that: (1) Our method minimizes the trade-off between the in-distribution and out-of-distribution performance compared to the other methods.",
"For example, on HANS, learned-mixin maintains the in-distribution performance but only improves the average HANS accuracy from 61.1% to 64.9%.",
"product-of-expert gains 7 points improvement over the BERT baseline while reducing the MNLI-m test accuracy by 1.6 points.",
"On the other hand, our method achieves the competitive 7 points gain without dropping the in-distribution performance; (2) The performance trade-off is stronger on some datasets.",
"On PAWS, the two compared methods improve the accuracy on the non-duplicate subset while reducing models' ability to detect the duplicate examples.",
"Our method, on the other hand, finds a balance point, in which the non-duplicate accuracy can no longer be improved without reducing the duplicate accuracy; (3) depending on the use of hyperparameters, learned-mixin can make a lower Method QQP dev PAWS test dupl dupl dupl dupl BERT-base 88.4 0.3 92.5 0.3 96.9 0.3 9.8 0.4 LMixin hans 77.5 0.7 91.9 0.2 69.7 4.3 51.7 4.3 Prod-exp hans 80.8 0.2 93.5 0.1 71.0 2.3 49.9 2.3 Reg-conf hans 85.0 0.7 91.5 0.4 91.0 1.8 19.8 1.3 Table 4: Results of the evaluation on the QQP task.",
"out-of-distribution improvement compared to ours, even after substantially degrading in-distribution performance, e.g., on FEVER-symmetric v2 , it only gains 0.5 points while dropping 3 points on the FEVER development set.",
"Ablation studies In this section, we show that the resulting improvements from our method come from the combination of both self-distillation and our scaling mechanism.",
"We perform ablation studies to examine the impact of each of the components including (1) self-distillation : we train a model using the standard self-distillation without bias-weighted scaling, and (2) example-reweighting : we train a model with the standard cross-entropy loss with an example reweighting method to adjust the importance of individual examples to the loss.",
"The weight of each example is obtained from the (scaled) probability that is assigned by the teacher model to the ground truth label.",
"8 The aim of the second setting is to exclude the effect of self-distillation while keeping the effect of our scaling mechanism.",
"Table 5 presents the results of these experiments on MNLI and HANS.",
"We observe that each component individually still gains substantial improvements on HANS over the baseline, albeit not as strong as the full method.",
"The results from the self-distillation suggest that the improvement from our method partly comes from the regularization effect of the distillation objective (Clark et al., 2019b; Furlanello et al., 2018).",
"In the example-reweighting experiment, we exclude the effect of all the scaled teacher's output except for the probability assigned to the ground truth label.",
"Compared to self-distillation , the proposed example-reweighting has a higher impact on improving the performance in both in-distribution and out-of-distribution eval-8 Details of the ablation experiments are included in the supplementary materials.",
"In-distribution performance drop of product-of-expert The difference between our method with product-of-expert and its variants is the use of biased examples during training.",
"Product-of-expert in practice scales down the gradients on the biased training examples to allow the model to focus on learning from the harder examples (He et al., 2019).",
"As a result, models often receive little to no incentive to solve these examples throughout the training, which can effectively reduce the training data size.",
"Our further examination on a product-of-expert model (trained on MNLI for HANS) shows that its degradation of in-distribution performance largely comes from the aforementioned examples.",
"Ensembling back the biased-model to the main model can indeed bring the in-distribution accuracy back to the BERT baseline.",
"However, this also leads to the original poor performance on HANS, which is counterproductive to the goal of improving the out-of-distribution generalization.",
"Impact on Models' Calibration We expect the training objective used in our method to discourage models from making overconfident predictions, i.e., assigning high probability to the predicted labels even when they are incorrect.",
"We investigate the changes in models' behavior in terms of their confidence using the measure of calibration , which quantifies how aligned the confidence of the predicted labels with their actual accuracy are (Guo et al., 2017).",
"We compute the expected calibration error (ECE) (Naeini et al., 2015) as a scalar summary statistic of calibration.",
"Results in Table 6 show that our method improves model's calibration on MNLI-m and MNLI-mm dev sets, with the reduction of ECE ranging from 3.0 to 3.6.",
"The histograms in figure 2 show the distribution of models' confidences in their predictions.",
"Figure 2a demonstrates that the prediction confidences of our resulting model on MNLI-m are more smoothly distributed.",
"In figure 2b, we observe that our debiased model predicts examples that contain lexical overlap features with lower confidence, and when the confidence is higher, the prediction is more likely to be correct.",
"Impact of biased examples ratio To investigate the slight in-distribution drop by our method in QQP (Table 4), we examine the ratio of biased examples in the QQP training data by evaluating the 0 100 250 500 1000 1500 2000 2500 40 60 80 dup l.",
"performance of the biased model on the dataset.",
"We find that almost 80% of the training examples can be solved using the lexical overlap features alone, which indicates a severe lexical overlap bias in QQP.",
"9 Moreover, in 53% of all examples, the biased model makes correct predictions with a very high confidence ( i > 0 . 8 ).",
"For comparison, the same biased model predicts only 12% of the MNLI examples with confidence above 0.8 (more comparisons are shown in the supplementary material.",
"As a result, there are not enough unbiased examples in QQP and the resulting soft target labels in this dataset are mostly close to a uniform distribution, which in turn may provide insufficient training signal to maximize the accuracy on the training distribution.",
"Impact of adding bias-free examples Finally, we investigate how changing the ratio of biased examples affects the behavior of debiasing methods.",
"To this end, we split PAWS data into training and test sets.",
"The training set consists of 2500 examples, and we use the remaining 10K examples as a test set.",
"We train the model on QQP that is gradually augmented with fractions of this PAWS training split and evaluate on a constant PAWS test set.",
"Figure 3 shows the results of this experiment.",
"When more PAWS examples are added to the training data, the accuracy of the BERT baseline gradually improves on the non-duplicate subset while its accuracy slowly drops on the duplicate subset.",
"We observe that product-of-expert exaggerates this effect: it reduces the duplicate accuracy up 9 The random baseline is 50% for QQP.",
"to 40% to obtain the 93% non-duplicate accuracy.",
"We note that our method is the most effective when the entire 2500 PAWS examples are included in the training, obtaining the overall accuracy of 77.05% compared to the 71.63% from the baseline BERT.",
"Existing debiasing methods improve the performance of NLU models on out-of-distribution datasets.",
"However, this improvement comes at the cost of strongly diminishing the training signal from a subset of the original dataset, which in turn reduces the in-distribution accuracy.",
"In this paper, we address this issue by introducing a novel method that regularizes models' confidence on biased examples.",
"This method allows models to still learn from all training examples without exploiting the biases.",
"Our experiments on four out-of-distribution datasets across three NLU tasks show that our method provides a competitive out-of-distribution performance while preserves the original accuracy.",
"Our debiasing framework is general and can be extended to other task setups where the biases leveraged by models are correctly identified.",
"Several challenges in this direction of research may include extending the debiasing methods to overcome multiple biases at once or to automatically identify the format of those biases which simulate a setting where the prior knowledge is unavailable.",
"We thank Leonardo Ribeiro and Max Glockner for the thoughtful discussion on the earlier version of this work and the anonymous reviewers for their constructive comments.",
"We also thank Tal Schuster for the support in using the Fever-Symmetric dataset.",
"This work is supported by the German Research Foundation through the research training group Adaptive Preparation of Information from Heterogeneous Sources (AIPHES, GRK 1994/1) and by the German Federal Ministry of Education and Research and the Hessian State Ministry for Higher Education, Research and the Arts within their joint support of the National Research Center for Applied Cybersecurity ATHENE."
] | [
"abstain",
"abstain",
"abstain",
"result",
"objective",
"objective",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"objective",
"abstain",
"result",
"objective",
"result",
"result",
"result",
"result",
"objective",
"method",
"result",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"method",
"other",
"other",
"other",
"method",
"other",
"abstain",
"abstain",
"objective",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"result",
"method",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"objective",
"abstain",
"result",
"objective",
"abstain",
"other",
"other",
"other"
] |
[
"We present LEAR ( L exical E ntailment A ttract-R epel), a novel post-processing method that transforms any input word vector space to emphasise the asymmetric relation of lexical entailment (LE), also known as the IS-A or hyponymy-hypernymy relation.",
"By injecting external linguistic constraints (e.g., WordNet links) into the initial vector space, the LE specialisation procedure brings true hyponymy-hypernymy pairs closer together in the transformed Euclidean space.",
"The proposed asymmetric distance measure adjusts the norms of word vectors to reflect the actual WordNet-style hierarchy of concepts.",
"Simultaneously, a joint objective enforces semantic similarity using the symmetric cosine distance, yielding a vector space specialised for both lexical relations at once.",
"LEAR specialisation achieves state-of-the-art performance in the tasks of hypernymy directionality, hypernymy detection, and graded lexical entailment, demonstrating the effectiveness and robustness of the proposed asymmetric specialisation model.",
"Word representation learning has become a research area of central importance in NLP, with its usefulness demonstrated across application areas such as parsing (Chen and Manning, 2014), machine translation (Zou et al., 2013), and many others (Turian et al., 2010; Collobert et al., 2011).",
"Standard techniques for inducing word embeddings rely on the distributional hypothesis (Harris, 1954), using co-occurrence information from large textual corpora to learn meaningful word representations (Mikolov et al., 2013; Levy and Goldberg, 2014; Pennington et al., 2014; Bojanowski et al., 2017).",
"A major drawback of the distributional hypothesis is that it coalesces different relationships between words, such as synonymy and topical relatedness, into a single vector space.",
"A popular solution is to go beyond stand-alone unsupervised learning and fine-tune distributional vector spaces by using external knowledge from humanor automatically-constructed knowledge bases.",
"This is often done as a post-processing step, where distributional vectors are gradually refined to satisfy linguistic constraints extracted from lexical resources such as WordNet (Faruqui et al., 2015; Mrkic et al., 2016), the Paraphrase Database (PPDB) (Wieting et al., 2015), or BabelNet (Mrkic et al., 2017; Vulic et al., 2017a).",
"One advantage of post-processing methods is that they treat the input vector space as a black box , making them applicable to any input space.",
"A key property of these methods is their ability to transform the vector space by specialising it for a particular relationship between words.",
"1 Prior work has predominantly focused on distinguishing between semantic similarity and conceptual relatedness (Faruqui et al., 2015; Mrkic et al., 2017; Vulic et al., 2017b).",
"In this paper, we introduce a novel post-processing model which specialises vector spaces for the lexical entailment ( LE ) relation.",
"Word-level lexical entailment is an asymmetric semantic relation (Collins and Quillian, 1972; Beckwith et al., 1991).",
"It is a key principle determining the organisation of semantic networks into hierarchical structures such as semantic ontologies (Fellbaum, 1998).",
"Automatic reasoning about LE supports tasks such as taxonomy creation (Snow et al., 2006; Navigli et al., 2011), natural language inference (Dagan et al., 2013; Bowman et al., 2015), text generation (Biran and McKeown, 2013), and metaphor detection (Mohler et al., 2013).",
"Our novel LE specialisation model, termed LEAR ( L exical E ntailment A ttractR epel), is inspired by ATTRACT-REPEL , a state-of-the-art general spe-1 Distinguishing between synonymy and antonymy has a positive impact on real-world language understanding tasks such as Dialogue State Tracking (Mrkic et al., 2017).",
"Figure 1 : An illustration of LEAR specialisation.",
"LEAR controls the arrangement of vectors in the transformed vector space by: 1) emphasising symmetric similarity of LE pairs through cosine distance (by enforcing small angles between terrier and dog or dog and animal ); and 2) by imposing an LE ordering using vector norms, adjusting them so that higher-level concepts have larger norms (e.g., | animal | > | dog | > | terrier | ).",
"cialisation framework (Mrkic et al., 2017).",
"2 The key idea of LEAR , illustrated by Figure 1, is to pull desirable ( ATTRACT ) examples described by the constraints closer together, while at the same time pushing undesirable ( REPEL ) word pairs away from each other.",
"Concurrently, LEAR (re-)arranges vector norms so that norm values in the Euclidean space reflect the hierarchical organisation of concepts according to the given LE constraints: put simply, higher-level concepts are assigned larger norms.",
"Therefore, LEAR simultaneously captures the hierarchy of concepts (through vector norms) and their similarity (through their cosine distance).",
"The two pivotal pieces of information are combined into an asymmetric distance measure which quantifies the LE strength in the specialised space.",
"After specialising four well-known input vector spaces with LEAR , we test them in three standard word-level LE tasks (Kiela et al., 2015b): 1) hypernymy directionality ; 2) hypernymy detection ; and 3) combined hypernymy detection/directionality.",
"Our specialised vectors yield notable improvements over the strongest baselines for each task, with each input space, demonstrating the effectiveness and robustness of LEAR specialisation.",
"The employed asymmetric distance allows one to make graded assertions about hierarchical relationships between concepts in the specialised space.",
"This property is evaluated using HyperLex, a recent graded LE dataset (Vulic et al., 2017).",
"The LEAR -specialised vectors push state-of-the-art Spearman's correlation from 0.540 to 0.686 on the full dataset (2,616 word pairs), and from 0.512 to 0.705 on its noun subset (2,163 word pairs).",
"Let V be the vocabulary, A the set of ATTRACT word pairs (e.g., intelligent and brilliant ), and R the set of REPEL word pairs (e.g., vacant and occupied ).",
"The ATTRACT-REPEL procedure operates over mini-batches of such pairs BA and BR .",
"For ease of notation, let each word pair ( x l , x r ) in these two sets correspond to a vector pair ( x l , x r ) , so that a mini-batch of k 1 word pairs is given by BA = [( x 1 l , x 1 r ) , . . . , ( x k 1 l , x k 1 r )] (similarly for BR , which consists of k 2 example pairs).",
"Next, the sets of pseudo-negative examples TA = [( t 1 l , t 1 r ) , . . . , ( t k 1 l , t k 1 r )] and TR = [( t 1 l , t 1 r ) , . . . , ( t k 2 l , t k 2 r )] are defined as pairs of negative examples for each ATTRACT and REPEL example pair in mini-batches BA and BR .",
"These negative examples are chosen from the word vectors present in BA or BR so that, for each ATTRACT pair ( x l , x r ) , the negative example pair ( t l , t r ) is chosen so that t l is the vector closest (in terms of cosine distance) to x l and t r is closest to x r .",
"Similarly, for each REPEL pair ( x l , x r ) , the negative example pair ( t l , t r ) is chosen from the remaining in-batch vectors so that t l is the vector furthest away from x l and t r is furthest from x r .",
"The negative examples are used to:",
"a) force ATTRACT pairs to be closer to each other than to their respective negative examples; and",
"b) to force REPEL pairs to be further away from each other than from their negative examples.",
"The first term of the cost function pulls ATTRACT pairs together: Att ( BA , TA ) = k 1 X i =1 (cid:2) (cid:0) att + cos ( x il , t il ) cos ( x il , x ir ) (cid:1) + (cid:0) att + cos ( x ir , t ir ) cos ( x il , x ir ) (cid:1) (cid:3) (1) 1135 where cos denotes cosine similarity, ( x ) = max(0 , x ) is the hinge loss function and att is the attract margin which determines how much closer these vectors should be to each other than to their respective negative examples.",
"The second part of the cost function pushes REPEL word pairs away from each other: Rep ( BR , TR ) = k 2 X i =1 (cid:2) (cid:0) rep + cos ( x il , x ir ) cos ( x il , t il ) (cid:1) + (cid:0) rep + cos ( x il , x ir ) cos ( x ir , t ir ) (cid:1) (cid:3) (2) In addition to these two terms, an additional regularisation term is used to preserve the abundance of high-quality semantic content present in the distributional vector space, as long as this information does not contradict the injected linguistic constraints.",
"If V ( B ) is the set of all word vectors present in the given mini-batch, then: Reg ( BA , BR ) = X x i V ( BA B R ) reg k b x i x i k 2 where reg is the L2 regularization constant and b x i denotes the original (distributional) word vector for word x i .",
"The full ATTRACT-REPEL cost function is given by the sum of all three terms.",
"In this section, the ATTRACT-REPEL framework is extended to model lexical entailment jointly with (symmetric) semantic similarity.",
"To do this, the method uses an additional source of external lexical knowledge: let L be the set of directed lexical entailment constraints such as (corgi, dog) , (dog, animal) , or (corgi, animal) , with lower-level concepts on the left and higher-level ones on the right (the source of these constraints will be discussed in Section 3).",
"The optimisation proceeds in the same way as before, considering a mini-batch of LE pairs BL consisting of k 3 word pairs standing in the (directed) lexical entailment relation.",
"Unlike symmetric similarity, lexical entailment is an asymmetric relation which encodes a hierarchical ordering between concepts.",
"Inferring the direction of the entailment relation between word vectors requires the use of an asymmetric distance function.",
"We define three different ones, all of which use the word vector's norms to impose an ordering between highand low-level concepts: D 1 ( x , y ) = | x | | y | (3) D 2 ( x , y ) = | x | | y | | x | + | y | (4) D 3 ( x , y ) = | x | | y | max( | x | , | y | ) (5) The lexical entailment term (for the j -th asymmetric distance, j 1 , 2 , 3 ) is defined as: LE j ( BL ) = k 3 X i =1 D j ( x i , y i ) (6) The first distance serves as the baseline: it uses the word vectors' norms to order the concepts, that is to decide which of the words is likely to be the higher-level concept.",
"In this case, the magnitude of the difference between the two norms determines the intensity' of the LE relation.",
"This is potentially problematic, as this distance does not impose a limit on the vectors' norms.",
"The second and third metric take a more sophisticated approach, using the ratios of the differences between the two norms and either:",
"a) the sum of the two norms; or",
"b) the larger of the two norms.",
"In doing that, these metrics ensure that the cost function only considers the norms' ratios.",
"This means that the cost function no longer has the incentive to increase word vectors' norms past a certain point, as the magnitudes of norm ratios grow in size much faster than the linear relation defined by the first distance function.",
"C ( BA , TA , BR , TR , BL , TL ) = Att ( BS , TS ) + . . + Rep ( BA , TA ) + Reg ( BA , BR , BL ) + . . . + Att ( BL , TL ) + LE j ( BL )",
"LE Pairs as ATTRACT Constraints The combined cost function makes use of the batch of lexical constraints BL twice: once in the defined asymmetric cost function LE j , and once in the symmetric ATTRACT term Att ( BL , TL ) .",
"This means that words standing in the lexical entailment relation are forced to be similar both in terms of cosine distance (via the symmetric ATTRACT term) and in terms of the asymmetric LE distance from Eq.",
"(6).",
"LE relations in the same vector space.",
"Whereas the similarity can be inferred from the standard cosine distance, the LEAR optimisation embeds lexical entailment as a combination of the symmetric ATTRACT term and the newly defined asymmetric LE j cost function.",
"Consequently, the metric used to determine whether two words stand in the LE relation must combine the two cost terms as well.",
"We define the LE decoding metric as: ILE ( x , y ) = dcos ( x , y ) + D j ( x , y ) (7) where dcos ( x , y ) denotes the cosine distance.",
"This decoding function combines the symmetric and the asymmetric cost term, in line with the combination of the two used to perform LEAR specialisation.",
"In the evaluation, we show that combining the two cost terms has a synergistic effect, with both terms contributing to stronger performance across all LE tasks used for evaluation.",
"Starting Distributional Vectors To test the robustness of LEAR specialisation, we experiment with a variety of well-known, publicly available English word vectors: 1) Skip-Gram with Negative Sampling (SGNS) (Mikolov et al., 2013) trained on the Polyglot Wikipedia (Al-Rfou et al., 2013) by Levy and Goldberg (2014); 2) GLOVE Common Crawl (Pennington et al., 2014); 3) CONTEXT 2 VEC (Melamud et al., 2016), which replaces CBOW contexts with contexts based on bidirectional LSTMs (Hochreiter and Schmidhuber, 1997); and 4) FASTTEXT (Bojanowski et al., 2017), a SGNS variant which builds word vectors as the sum of their constituent character n-gram vectors.",
"3 Linguistic Constraints We use three groups of linguistic constraints in the LEAR specialisation model, covering three different relation types which are all beneficial to the specialisation process: directed 1) lexical entailment ( LE ) pairs ; 2) synonymy pairs ; and 3) antonymy pairs .",
"Synonyms are included as symmetric ATTRACT pairs (i.e., the BA pairs) since they can be seen as defining a trivial symmetric IS-A relation (Rei and Briscoe, 2014; Vulic et al., 2017).",
"For a similar reason, 3 All vectors are 300 -dimensional except for the 600 dimensional CONTEXT 2 VEC vectors; for further details regarding the architectures and training setup of the used vector collections, we refer the reader to the original papers.",
"We also experimented with dependency-based SGNS vectors (Levy and Goldberg, 2014), observing similar patterns in the results.",
"antonyms are clear REPEL constraints as they anti-correlate with the LE relation.",
"4 Synonymy and antonymy constraints are taken from prior work (Zhang et al., 2014; Ono et al., 2015): they are extracted from WordNet (Fellbaum, 1998) and Roget (Kipfer, 2009).",
"In total, we work with 1,023,082 synonymy pairs (11.7 synonyms per word on average) and 380,873 antonymy pairs (6.5 per word).",
"5 As in prior work (Nguyen et al., 2017; Nickel and Kiela, 2017), LE constraints are extracted from the WordNet hierarchy, relying on the transitivity of the LE relation.",
"This means that we include both direct and indirect LE pairs in our set of constraints (e.g., ( pangasius, fish ), ( fish, animal ), and (panga-sius, animal) ).",
"We retained only noun-noun and verb-verb pairs, while the rest were discarded: the final number of LE constraints is 1,545,630.",
"6 Training Setup We adopt the original ATTRACT-REPEL model setup without any fine-tuning.",
"Hyper-parameter values are set to: δ_att = 0.6,",
"δ_rep = 0.0,",
"λ_reg = 10^-9 (Mrkšić et al., 2017).",
"The models are trained for 5 epochs with the AdaGrad algorithm (Duchi et al., 2011), with batch sizes set to k1 = k2 = k3 = 128 for faster convergence.",
"We test and analyse LEAR-specialised vector spaces in two standard word-level LE tasks used in prior work: hypernymy directionality and detection (Section 4.1) and graded LE (Section 4.2).",
"The first evaluation uses three classification-style tasks with increased levels of difficulty.",
"The tasks are evaluated on three datasets used extensively in the LE literature (Roller et al., 2014; Santus et al., 2014; Weeds et al., 2014; Shwartz et al., 2017; Nguyen et al., 2017), compiled into an integrated evaluation set by Kiela et al. (2015b).",
"7 4 In short, the question Is X a type of X? (synonymy) is trivially true, while the question Is X a type of X? (antonymy) is trivially false.",
"5 https://github.com/tticoin/AntonymDetection 6 We also experimented with an additional 30,491 LE constraints from the Paraphrase Database (PPDB) 2.0 (Pavlick et al., 2015).",
"Adding them to the WordNet-based LE pairs makes no significant impact on the final performance.",
"We also used synonymy and antonymy pairs from other sources, such as word pairs from PPDB used previously by Wieting et al. (2015), and BabelNet (Navigli and Ponzetto, 2012) used by Mrkšić et al. (2017), reaching the same conclusions.",
"7 http://www.cl.cam.ac.uk/~dk427/generality.html",
"Figure 2 : Summary of the results on three different word-level LE subtasks:",
"(a) directionality ;",
"(b) detection ;",
"(c) detection and directionality .",
"Vertical bars denote the results obtained by different input word vector spaces which are post-processed/specialised by our LEAR specialisation model using three variants of the asymmetric distance (D1, D2, D3); see Section 2.",
"Thick horizontal red lines refer to the best reported scores on each subtask for these datasets; the baseline scores are taken from Nguyen et al. (2017).",
"The first task, LE directionality, is conducted on 1,337 LE pairs originating from the BLESS evaluation set (Baroni and Lenci, 2011).",
"Given a true LE pair, the task is to predict the correct hypernym.",
"With LEAR -specialised vectors this is achieved by simply comparing the vector norms of each concept in a pair: the one with the larger norm is the hypernym (see Figure 1).",
"The second task, LE detection, involves a binary classification on the WBLESS dataset (Weeds et al., 2014) which comprises 1,668 word pairs standing in a variety of relations ( LE , meronymy-holonymy, co-hyponymy, reversed LE , no relation).",
"The model has to detect a true LE pair, that is, to distinguish between the pairs where the statement X is a (type of) Y is true from all other pairs.",
"With LEAR vectors, this classification is based on the asymmetric distance score: if the score is above a certain threshold, we classify the pair as true LE , otherwise as other.",
"While Kiela et al. (2015b) manually define the threshold value, we follow the approach of Nguyen et al. (2017) and cross-validate: in each of the 1,000 iterations, 2% of the pairs are sampled for threshold tuning, and the remaining 98% are used for testing.",
"The reported numbers are therefore average accuracy scores.",
"8 We have conducted more LE directionality and detection experiments on other datasets such as EVALution (Santus et al., 2015), the N1 ⊑ N2 dataset of Baroni et al. (2012), and the dataset of Lenci and Benotto (2012), with similar performances and findings.",
"We do not report all these results for brevity and clarity of presentation.",
"The final task, LE detection and directionality, concerns a three-way classification on BIBLESS , a relabeled version of WBLESS .",
"The task is now to distinguish both LE pairs (1) and reversed LE pairs (−1) from other relations (0), and then additionally select the correct hypernym in each detected LE pair.",
"We apply the same test protocol as in the LE detection task.",
"Results and Analysis The original paper of Kiela et al. (2015b) reports the following best scores on each task: 0.88 ( BLESS ), 0.75 ( WBLESS ), 0.57 ( BIBLESS ).",
"These scores were recently surpassed by Nguyen et al. (2017), who, instead of post-processing, combine WordNet-based constraints with an SGNS-style objective into a joint model.",
"They report the best scores to date: 0.92 ( BLESS ), 0.87 ( WBLESS ), and 0.81 ( BIBLESS ).",
"The performance of the four LEAR-specialised word vector collections is shown in Figure 2 (together with the strongest baseline scores for each of the three tasks).",
"The comparative analysis confirms the increased complexity of subsequent tasks.",
"LEAR specialisation of each of the starting vector spaces consistently outperformed all baseline scores across all three tasks.",
"The extent of the improvements is correlated with task difficulty: it is lowest for the easiest directionality task (0.92 → 0.96), and highest for the most difficult detection plus directionality task (0.81 → 0.88).",
"The results show that the two LEAR variants which do not rely on absolute norm values and perform a normalisation step in the asymmetric distance (D2 and D3) have an edge over the D1 variant, which operates with unbounded norms.",
"Table 1: L2 norms for selected concepts from the WordNet hierarchy: terrier 0.87, dog 2.64, mammal 8.57, vertebrate 10.96, animal 11.91, organism 20.08; laptop 0.60, computer 2.96, machine 6.15, device 12.09, artifact 17.71, object 23.55; cabriolet 0.74, car 3.59, vehicle 7.78, transport 8.01, instrumentality 14.56.",
"Input: FASTTEXT; LEAR: D2.",
"The difference in performance between D2/D3 and D1 is even more pronounced in the graded LE task (see Section 4.2).",
"This shows that the use of unbounded vector norms diminishes the importance of the symmetric cosine distance in the combined asymmetric distance.",
"Conversely, the synergistic combination used in D2/D3 does not suffer from this issue.",
"The high scores achieved with each of the four word vector collections show that LEAR is not dependent on any particular word representation architecture.",
"Moreover, the extent of the performance improvements in each task suggests that LEAR is able to reconstruct the concept hierarchy coded in the input linguistic constraints.",
"Moreover, we have conducted a small experiment to verify that the LEAR method can generalise beyond what is directly coded in pairwise external constraints.",
"A simple WordNet lookup baseline yields accuracy scores of 0.82 and 0.80 on the directionality and detection tasks, respectively.",
"This baseline is outperformed by LEAR : its scores are 0.96 and 0.92 on the two tasks when relying on the same set of WordNet constraints.",
"Importance of Vector Norms To verify that the knowledge concerning the position in the semantic hierarchy actually arises from vector norms, we also manually inspect the norms after LEAR specialisation.",
"A few examples are provided in Table 1.",
"They indicate a desirable pattern in the norm values which imposes a hierarchical ordering on the concepts.",
"Note that the original distributional SGNS model (Mikolov et al., 2013) does not normalise vectors to unit length after training.",
"However, these norms are not at all correlated with the desired hierarchical ordering, and are therefore useless for LE -related applications: the non-specialised distributional SGNS model scores 0.44, 0.48, and 0.34 on the three tasks, respectively.",
"Asymmetric distances in the LEAR -specialised space quantify the degree of lexical entailment between any two concepts.",
"This means that they can be used to make fine-grained assertions regarding the hierarchical relationships between concepts.",
"We test this property on HyperLex (Vulić et al., 2017), a gold standard dataset for evaluating how well word representation models capture graded LE, grounded in the notions of concept (proto)typicality (Rosch, 1973; Medin et al., 1984) and category vagueness (Kamp and Partee, 1995; Hampton, 2007) from cognitive science.",
"HyperLex contains 2,616 word pairs (2,163 noun pairs and 453 verb pairs) scored by human raters in the [0, 6] interval following the question To what degree is X a (type of) Y? 9 As shown by the high inter-annotator agreement on HyperLex (0.85), humans are able to consistently reason about graded LE.",
"10 However, current state-of-the-art representation architectures are far from this ceiling.",
"For instance, Vulić et al. (2017) evaluate a plethora of architectures and report a top score of only 0.320 (see the summary table in Figure 3).",
"Two recent representation models (Nickel and Kiela, 2017; Nguyen et al., 2017), focused on the LE relation in particular (and employing the same set of WordNet-based constraints as LEAR), report the highest scores to date: 0.540 (on the entire dataset) and 0.512 (on the noun subset).",
"Results and Analysis We scored all HyperLex pairs using the combined asymmetric distance described by Equation (7), and then computed Spearman's rank correlation with the ground-truth ranking.",
"Our results, together with the strongest baseline scores, are summarised in Figure 3.",
"The summary table in Figure",
"3(c) shows the HyperLex performance of several prominent LE models.",
"We provide only a quick outline of these models here; further details can be found in the original papers.",
"FREQ-RATIO exploits the fact that more general concepts tend to occur more frequently in textual corpora.",
"SGNS ( COS ) uses non-specialised 9 From another perspective, one might say that graded LE provides finer-grained human judgements on a continuous scale rather than simplifying the judgements into binary discrete decisions.",
"For instance, the HyperLex score for the pair (girl, person) is 5.91/6, the score for (guest, person) is 4.33, while the score for the reversed pair (person, guest) is 1.73.",
"10 For further details concerning HyperLex, we refer the reader to the resource paper (Vulić et al., 2017).",
"The dataset is available at: http://people.ds.cam.ac.uk/iv250/hyperlex.html",
"[Figure 3 axes: Spearman's correlation with HyperLex (All) for the SGNS, GloVe, context2vec, and fastText word vector collections under D1, D2, D3.]",
"Figure 3 : Results on the graded LE task defined by HyperLex.",
"Following Nickel and Kiela (2017), we use Spearman's rank correlation scores on:",
"a) the entire dataset (2,616 noun and verb pairs); and",
"b) its noun subset (2,163 pairs).",
"The summary table shows the performance of other well-known architectures on the full HyperLex dataset, compared to the best results achieved using LEAR specialisation.",
"SGNS vectors and quantifies the LE strength using the symmetric cosine distance between vectors.",
"A comparison of these models to the best-performing LEAR vectors shows the extent of the improvements achieved using the specialisation approach.",
"LEAR-specialised vectors also outperform SLQS-SIM (Santus et al., 2014) and VISUAL (Kiela et al., 2015b), two LE detection models similar in spirit to LEAR.",
"These models combine symmetric semantic similarity (through cosine distance) with an asymmetric measure of lexical generality obtained either from text ( SLQS-SIM ) or visual data ( VISUAL ).",
"The results on HyperLex indicate that the two generality-based measures are too coarse-grained for graded LE judgements.",
"These models were originally constructed to tackle LE directionality and detection tasks (see Section 4.1), but their performance is surpassed by LEAR on those tasks as well.",
"The VISUAL model outperforms SLQS-SIM .",
"However, its numbers on BLESS (0.88), WBLESS (0.75), and BIBLESS (0.57) are far from the top-performing LEAR vectors (0.96, 0.92, 0.88).",
"11 WN-BEST denotes the best result with asymmetric similarity measures which use the WordNet structure as their starting point (Wu and Palmer, 1994; Pedersen et al., 2004).",
"It can be seen as a model that directly looks up the full WordNet structure to reason about graded lexical entailment.",
"The reported results from Figure 3(c) suggest it is more effective to quantify the LE relation strength by using WordNet as the source of constraints for specialisation models such as HYPERVEC or LEAR.",
"11 We note that SLQS and VISUAL do not leverage any external knowledge from WordNet,",
"but the VISUAL model leverages external visual information about concepts.",
"WORD2GAUSS (Vilnis and McCallum, 2015) represents words as multivariate K-dimensional Gaussians rather than points in the embedding space: it is therefore naturally asymmetric and has been used in LE tasks before, but its performance on HyperLex indicates that it cannot effectively capture the subtleties required to model graded LE.",
"However, note that the comparison is not strictly fair, as WORD2GAUSS does not leverage any external knowledge.",
"An interesting line for future research is to embed external knowledge within this representation framework.",
"Most importantly, LEAR outperforms three recent (and conceptually different) architectures: ORDER-EMB (Vendrov et al., 2016), POINCARÉ (Nickel and Kiela, 2017), and HYPERVEC (Nguyen et al., 2017).",
"Like LEAR , all of these models complement distributional knowledge with external linguistic constraints extracted from WordNet.",
"Each model uses a different strategy to exploit the hierarchical relationships encoded in these constraints (their approaches are discussed in Section 5).",
"12 However, LEAR, as the first LE-oriented post-processor, is able to utilise the constraints more effectively than its competitors.",
"Another advantage of LEAR is its applicability to any input 12 As discussed previously by Vulić et al. (2017), the off-the-shelf ORDER-EMB vectors were trained for the binary ungraded LE detection task: this limits their expressiveness in the graded LE task.",
"Table 2 : Analysing the importance of the synergy in the FULL LEAR model on the final performance on WBLESS , BLESS , HyperLex-All ( HL-A ) and HyperLex-Nouns ( HL-N ).",
"Input: FASTTEXT;",
"LEAR: D2.",
"Figures",
"3(a) and",
"3(b) indicate that the two LEAR variants which rely on norm ratios (D2 and D3), rather than on absolute (unbounded) norm differences (D1), achieve stronger performance on HyperLex.",
"The highest correlation scores are again achieved by D2 with all input vector spaces.",
"Why Symmetric + Asymmetric?",
"In another experiment, we analyse the contributions of both LE related terms in the LEAR combined objective function (see Section 2.2).",
"We compare three variants of LEAR: 1) a symmetric variant which does not arrange vector norms using the LE j (BL) term (SYM-ONLY); 2) a variant which arranges norms, but does not use LE constraints as additional symmetric ATTRACT constraints (ASYM-ONLY); and 3) the full LEAR model, which uses both cost terms (FULL).",
"The results with one input space (similar results are achieved with others) are shown in Table 2.",
"This table shows that, while the stand-alone ASYM-ONLY term seems more beneficial than the SYM-ONLY one, using the two terms jointly yields the strongest performance across all LE tasks.",
"LE and Semantic Similarity We also test whether the asymmetric LE term harms the (norm-independent) cosine distances used to represent semantic similarity.",
"The LEAR model is compared to the original ATTRACT-REPEL model making use of the same set of linguistic constraints.",
"Two true semantic similarity datasets are used for evaluation: SimLex-999 (Hill et al., 2015) and SimVerb-3500 (Gerz et al., 2016).",
"There is no significant difference in performance between the two models, both of which yield similar results on SimLex (Spearman's rank correlation of 0.71) and SimVerb (0.70).",
"This shows that cosine distances are preserved during the optimisation of the asymmetric objective performed by the joint LEAR model.",
"Vector Space Specialisation A standard approach to incorporating external information into vector spaces is to pull the representations of similar words closer together.",
"Some models integrate such constraints into the training procedure: they modify the prior or the regularisation (Yu and Dredze, 2014; Xu et al., 2014; Bian et al., 2014; Kiela et al., 2015a), or use a variant of the SGNS-style objective (Liu et al., 2015; Osborne et al., 2016; Nguyen et al., 2017).",
"Another class of models, popularly termed retrofitting, fine-tune distributional vector spaces by injecting lexical knowledge from semantic databases such as WordNet or the Paraphrase Database (Faruqui et al., 2015; Jauhar et al., 2015; Wieting et al., 2015; Nguyen et al., 2016; Mrkšić et al., 2016; Mrkšić et al., 2017).",
"LEAR falls into the latter category.",
"However, while previous post-processing methods have focused almost exclusively on specialising vector spaces to emphasise semantic similarity (i.e., to distinguish between similarity and relatedness by explicitly pulling synonyms closer and pushing antonyms further apart), this paper proposed a principled methodology for specialising vector spaces for asymmetric hierarchical relations (of which lexical entailment is an instance).",
"Its starting point is the state-of-the-art similarity specialisation framework of Mrkšić et al. (2017), which we extend to support the inclusion of hierarchical asymmetric relationships between words.",
"Word Vectors and Lexical Entailment Since the hierarchical LE relation is one of the fundamental building blocks of semantic taxonomies and hierarchical concept categorisations (Beckwith et al., 1991; Fellbaum, 1998), a significant amount of research in semantics has been invested into its automatic detection and classification.",
"Early work relied on asymmetric directional measures (Weeds et al., 2004; Clarke, 2009; Kotlerman et al., 2010; Lenci and Benotto, 2012, i.a.) which were based on the distributional inclusion hypothesis (Geffet and Dagan, 2005) or the distributional informativeness or generality hypothesis (Herbelot and Ganesalingam, 2013; Santus et al., 2014).",
"However, these approaches have recently been superseded by methods based on word embeddings.",
"These methods build dense real-valued vectors for capturing the LE relation, either directly in the LE-focused space (Vilnis and McCallum, 2015; Vendrov et al., 2016; Henderson and Popa, 2016; Nickel and Kiela, 2017; Nguyen et al., 2017) or by using the vectors as features for supervised LE detection models (Tuan et al., 2016; Shwartz et al., 2016; Nguyen et al., 2017; Glavaš and Ponzetto, 2017).",
"Several LE models embed useful hierarchical relations from external resources such as WordNet into LE -focused vector spaces, with solutions coming in different flavours.",
"The model of Yu et al. (2015) is a dynamic distance-margin model optimised for the LE detection task using hierarchical WordNet constraints.",
"This model was extended by Tuan et al. (2016) to make use of contextual sentential information.",
"A major drawback of both models is their inability to make directionality judgements.",
"Further, their performance has recently been surpassed by the HYPERVEC model of Nguyen et al. (2017).",
"This model combines WordNet constraints with the SGNS distributional objective into a joint model.",
"As such, the model is tied to the SGNS objective and any change of the distributional modelling paradigm implies a change of the entire HYPERVEC model.",
"This makes their model less versatile than the proposed LEAR framework.",
"Moreover, LEAR specialisation achieves substantially better performance across all LE tasks used for evaluation.",
"Another model similar in spirit to LEAR is the ORDER-EMB model of Vendrov et al. (2016), which encodes hierarchical structure by imposing a partial order in the embedding space: higher-level concepts get assigned higher per-coordinate values in a d -dimensional vector space.",
"The model minimises the violation of the per-coordinate orderings during training by relying on hierarchical WordNet constraints between word pairs.",
"Finally, the POINCARÉ model of Nickel and Kiela (2017) makes use of hyperbolic spaces to learn general-purpose LE embeddings based on n-dimensional Poincaré balls which encode both hierarchy and semantic similarity, again using the WordNet constraints.",
"A similar model in hyperbolic spaces was proposed by Chamberlain et al. (2017).",
"In this paper, we demonstrate that LE -specialised word embeddings with stronger performance can be induced using a simpler model operating in more intuitively interpretable Euclidean vector spaces.",
"We have presented LEAR, a specialisation model which injects symmetric and asymmetric constraints into existing vector spaces, performing joint specialisation for two properties: lexical entailment and semantic similarity.",
"Since the former is not symmetric, LEAR uses an asymmetric cost function which encodes the hierarchy between concepts by manipulating the norms of word vectors, assigning higher norms to higher-level concepts.",
"Specialising the vector space for both relations has a synergistic effect: LEAR -specialised vectors attain state-of-the-art performance in judging semantic similarity and set new high scores across four different lexical entailment tasks.",
"The code for the LEAR model is available from: github.com/nmrksic/lear .",
"In future work, we plan to apply a similar methodology to other asymmetric relations (e.g., meronymy ), as well as to investigate fine-grained models which can account for differing path lengths from the WordNet hierarchy.",
"We will also extend the model to reason over words unseen in input lexical resources, similar to the recent post-specialisation model oriented towards specialisation of unseen words for similarity (Vulić et al., 2018).",
"We also plan to test the usefulness of LE-specialised vectors in downstream natural language understanding tasks.",
"Porting the model to other languages and enabling cross-lingual applications such as cross-lingual lexical entailment (Upadhyay et al., 2018) is another future research direction.",
"We thank the three anonymous reviewers for their insightful comments and suggestions.",
"We are also grateful to the TakeLab research group at the University of Zagreb for offering support to computationally intensive experiments in our hour of need.",
"This work is supported by the ERC Consolidator Grant LEXICAL: Lexical Acquisition Across Languages (no 648909)."
] | [
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"objective",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other"
] |
[
"Decisions on state-level policies have a deep effect on many aspects of our everyday life, such as health-care and education access.",
"However, there is little understanding of how these policies and decisions are being formed in the legislative process.",
"We take a data-driven approach by decoding the impact of legislation on relevant stakeholders (e.g., teachers in education bills) to understand legislators' decision-making process and votes.",
"We build a new dataset for multiple US states that interconnects multiple sources of data including bills, stakeholders, legislators, and money donors.",
"Next, we develop a textual graph-based model to embed and analyze state bills.",
"Our model predicts winners/losers of bills and then utilizes them to better determine the legislative body's vote breakdown according to demographic/ideological criteria, e.g., gender.",
"State-level legislation is the cornerstone of national policies and has long-lasting effects on residents of US states.",
"Thus, decoding the processes that shape state bills is crucial yet involved.",
"State legislatures vote on 23 times more bills than Federal legislatures, exceeding 120K bills per year (King, 2019).",
"In addition, these state bills cover a broader range of local problems as each state possesses lawmaking power effective within its boundaries.",
"For example, the State of Washington Health Care Committee addresses health service issues, including licensing and regulation of health care facilities and providers.",
"Moreover, it regulates pharmacies, pharmaceutical drugs, state public health programs, and private/public insurance markets (House, 2021).",
"We argue that recent NLP architectures can provide new insights into the state-level legislative efforts.",
"In particular, contextualized graph and text embedding can better represent policies within and across states via a shared political context.",
"However, most of the prior efforts are focused on analyzing congressional bills with traditional techniques, e.g., (Gerrish and Blei, 2011, 2012).",
"A few state-level studies (Eidelman et al., 2018; Davoodi et al., 2020) took great steps in predicting the progression of state bills towards a vote on the floor and the breakdown of votes based on demographic metrics (e.g., gender).",
"But their main downside is they evaluate policies in a limited context and do not capture cross-state patterns.",
"Medical malpractice actions.",
"Permits a patient to bring an action against a health care provider without submitting the complaint to the medical review board if: (1) the amount of the claim is not more than $15,000; (2) the cause of action is based on the removal of the wrong body part. Figure 1: A health bill leading to voting cleavage based on the party metric, primarily due to its specific losers (red) and winners (green).",
"Winners-Losers analysis .",
"In this work, we take a new data-driven approach to analyzing state legislation.",
"Our key insight is that each state bill inevitably produces some winners and losers to provide practical solutions to specific in-state and local problems.",
"Thus, we argue that it is important to examine state bills in the larger context of their impact on different population segments as well as commercial and professional stakeholders.",
"To help clarify this idea, consider the example in Figure 1. This state bill makes it easier for patients (winners) to take legal action against healthcare providers (losers).",
"This analysis of winners and losers (WLs) can foster transparency in legislative efforts in each state, while interconnecting different states through common stakeholders and revealing cross-state patterns.",
"In addition, the context of WLs can enable a new category of NLP models for predicting the roll-call behavior of legislators.",
"based on the ideological and demographic identities of legislators (Section 2).",
"Each of such metrics (e.g., party, gender, district, ideology) splits legislators into groups.",
"Measuring lack of consensus within and across these groups, which has political and social benefits, can be done using two classification tasks (Section 5): for a given metric, we say a bill is competitive (Figure 2) if the majority vote of legislators from a group (e.g., Democrat, male, urban, liberal) differs from that of the opposite group (e.g., Republican, female, rural, conservative).",
"Similarly, a bill is inverse-competitive if there is a tie in votes of members of the same group (e.g., liberals).",
"For instance, the health bill in Figure 1 resulted in a party competitive vote.",
"Another example is a state bill on abortion that requires... a physician performing an abortion to have admitting privileges at a hospital in the county; it resulted in a gender competitive vote.",
"We show the context of winners/losers of these bills could hint at such cleavages prior to voting (Sections 4, 6).",
"Framework overview.",
"To achieve this goal, we address multiple NLP challenges in our proposed framework: (1) Data: The legislative process in US states does not track the stakeholders of bills and the impact of bills on them.",
"Thus, we design a reliable crowd-sourcing pipeline to extract and analyze winners and losers of state bills from their text and form a new annotated dataset.",
"(2) Modeling: To automate the WL analysis, next, we provide a nationwide graph abstraction to model the state legislative process, as well as a joint text and graph embedding architecture for predicting winners and losers.",
"Our model captures the interplay of different entities, e.g., bills, stakeholders, legislators, and money donors, while maintaining dependencies between their textual attributes.",
"We leverage RGCN (Schlichtkrull et al., 2018), a relational graph convolutional network, to represent diverse relations.",
"We also adopt the RoBERTa transformer (Liu et al., 2019) after performing domain-adaptive pretraining on political texts using the MLM (Masked Language Model) task.",
"(3) Application: Finally, we showcase the ability of our WL analysis and prediction model in decoding the voting behavior of state legislators.",
"In summary, we make three technical contributions : We provide the first definition and realization of winners/losers analysis for state bills using the latest NLP advances.",
"(Sections 2, 3, 4).",
"We developed a new joint graph and text embedding model that predicts both winners/losers of bills and legislators' votes.",
"In particular, it incorporates the winners/losers inference into the vote prediction task, to evaluate bills in a broader context (Section 5).",
"We operationalized the winners/losers analysis for several legislative topics (e.g., health) and created a new dataset.",
"The extensive evaluation shows our approach delivers a higher F1 than existing models (Sections 3, 6).",
"2 Related Works",
"Our work is inspired by several promising lines of study:",
"Roll-call classification.",
"Eidelman et al. 2018 associate the bill's text with partisan information of its sponsors to predict the likelihood of a member of the U.S. Congress voting in support of a bill.",
"Similarly, Gerrish and Blei 2011 embed the combined topic and text of Congress bills in the ideological space and develop ideal point models for inferring votes.",
"Several works (Peng et al., 2016; Kornilova et al., 2018; Kraft et al., 2016; Patil et al., 2019; Karimi et al., 2019; Pujari and Goldwasser, 2021) augment this model using data on social networks, thus generating better embeddings.",
"Bill text classification.",
"Instead of leveraging bill text in models to describe the behavior of each legislator, Yano et al. 2012 include the bill's text in a model that directly predicts whether a bill comes out from a standing committee.",
"Particularly, they develop features based on the urgency of the problem being solved by the bill and the set of legislators co-sponsoring the bill.",
"Eidelman et al. 2018 conduct a similar study on US states.",
"Winners-losers analysis.",
"Analyzing the impact of bills on their stakeholders is a well-studied topic in the political science literature.",
"Gamm and Kousser, 2010 reveal state legislators are more likely to write bills aimed at a particular local stakeholder when the legislative body is dominated by one party.",
"Similarly, Bagashka and Clark, 2016 show state legislators are motivated to introduce particularistic bills designed to help a specific geographical area within their district.",
"Pennock, 1979 analyzes legislation based on its generalized and particularized impact on different interest groups.",
"By leveraging recent NLP advances (e.g., contextualized language models, graph embedding, crowdsourcing), our work extends these studies and provides the first automated framework for the stakeholders analysis on state bills.",
"Voting cleavages.",
"Research has covered multiple ways that the demographic background of legislators can affect roll-call voting.",
"Frederick 2010 demonstrates gender affects the roll-call vote in the Senate by changing the influence of partisanship for GOP women.",
"Broach 1972 finds that urban-rural voting cleavages occur in less partisan states and on bills that separate urban and rural interests.",
"Similar to us, Davoodi et al. 2020 build a textual graph to predict such cleavages.",
"While our focus is on a different problem, stakeholders analysis, we outperform this prior study by representing bills in a broader context containing their stakeholders.",
"Graph embedding in NLP.",
"Our work uses Graph convolutional networks (GCNs), which have been applied to various NLP tasks, e.g., Semantic role labeling (SRL) (Marcheggiani and Titov, 2017) and relation classification in clinical narratives (Li et al., 2018).",
"In these tasks, GCNs encode the syntactic structure of sentences.",
"Similarly, Defferrard et al., 2016; Peng et al., 2018; Henaff et al., 2015 use graph neural networks (GNNs) to represent a network of documents based on their references.",
"Similar to our work but for a different problem and objective, Sawhney et al., 2020 analyze speech-level stance of members of the parliament, by performing node classification on graph attention networks (GATs), and Pujari and Goldwasser, 2021 analyze social media content generated by politicians using a graph transformer model.",
"We first provide an overview of key players in the state-level legislative process.",
"Then, we model them using an efficient text-based graph abstraction (Figure 3), which will enable us to embed and evaluate state policies in a broad context and perform the stakeholder and roll-call analysis on them.",
"Our model, unlike prior works, fully captures the interplay of main players in the lawmaking process:",
"1. Legislators: A state legislature consists of two chambers (see footnote 1): the House and the Senate.",
"The legislative process starts with legislators sponsoring a bill in a chamber.",
"The idea of a bill can come from different sources.",
"Next, the bill goes through multiple roll-call votes in the origin chamber, where it can fail at any stage.",
"It is first referred to the proper committee by the chamber leader.",
"Committee members, before casting their votes, may set up a public hearing with the sponsors and interested parties.",
"If the bill passes out of the committee, it reaches the second reading, where the full chamber debates, amends, and votes on the bill.",
"If the bill passes by a majority vote, it is scheduled for the third reading and final vote.",
"A bill must go through a similar procedure in the other chamber before it is acted on by the governor.",
"2. Contributors .",
"While legislators navigate through bills, external contributors influence their decisions.",
"Individual and corporate money donors seek to influence the outcome and theme of bills, starting at election time.",
"Lobbyists launch campaigns to persuade legislators towards certain policies.",
"Such efforts inevitably lead to new bills or amendments to existing laws.",
"3. Stakeholders .",
"A state bill cannot benefit everyone; it produces beneficial or detrimental effects on its stakeholders.",
"Identifying the winners and losers of a bill from its text is crucial, as it can hint at the bill's fate.",
"Particularly, legislators do not always write bills themselves.",
"Corporations and interest groups (e.g., ALEC) sell fill-in-the-blank bills to legislators.",
"Thus, we can see voting patterns on bills with the same winners and losers.",
"Footnote 1: Nebraska's legislature is unique in the nation because it has a single-house system.",
"To model these players and their interactions, we design a legislative graph with three important properties (Figure 3).",
"First, since each of the players (e.g., stakeholders, legislators) has different textual attributes, our proposed graph supports heterogeneous textual nodes.",
"Second, we form a nationwide graph to capture cross-state patterns (ablation study in Appendix A.2) by building common entities (e.g., stakeholders in Section 4).",
"Finally, our abstraction supports multiple relations between each pair of entities (e.g., legislators voting and sponsoring a bill).",
"With this overview, we present the nodes and relations that we will realize based on the real data: Node types .",
"The nodes in the legislative graph contain a rich set of textual features: (1) Bill nodes embed title, abstract, and body of state bills.",
"(2) Stakeholder nodes come with short texts on political interests and constituent entities of stakeholders of policies in bills (will be detailed shortly).",
"(3) Legislator nodes contain diverse textual information on legislators, e.g., their biography, political interests, committee assignments, and demographic profile (e.g., party, gender, ideology, and district).",
"(4) Contributor nodes have text-based attributes on money donors covering their specific/general business interests, party, and their type (individual or non-individual).",
"Relation types .",
"Based on the legislative process, legislator and bill nodes participate in Bill Sponsorship, 'No' Vote, and 'Yes' Vote relations in the graph (see Appendix A.4 for handling abstain votes). A stakeholder node forms Winner, Loser, or Neutral relations with a bill node, which we extract based on the bill text.",
"Similarly, we form two types of relations between contributors and legislators: Positive Donation realized based on the real donation data, and Negative Donation , which we infer when a contributor shows a lack of interest in a demographic of legislators (e.g., never donates to women).",
"We sample and connect such legislators and the contributor via a negative relation.",
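"The negative-donation inference above can be sketched as follows (a minimal Python sketch; the `donations`/`legislators` dictionaries, the single `gender` attribute, and the cap relative to positive edges are illustrative assumptions, not our actual data schema):",

```python
import random

def sample_negative_donations(donations, legislators, attr="gender",
                              ratio=0.3, seed=0):
    """Infer 'Negative Donation' edges: a contributor who never donated to
    legislators with some value of `attr` (e.g., never to women) gets
    negative edges to a sample of such legislators.  The number of
    negatives is capped at `ratio` times the contributor's positive edges.

    donations:   contributor -> set of legislator ids (positive edges)
    legislators: legislator id -> attribute dict, e.g. {"gender": "F"}
    (hypothetical schema for illustration)
    """
    rng = random.Random(seed)
    negatives = []
    for contrib, funded in donations.items():
        # attribute values the contributor has actually supported
        supported = {legislators[l][attr] for l in funded}
        # legislators whose attribute value the contributor has ignored
        ignored = [l for l, a in legislators.items()
                   if a[attr] not in supported]
        k = min(len(ignored), max(1, int(ratio * len(funded))))
        negatives += [(contrib, l) for l in rng.sample(ignored, k)]
    return negatives
```

"In our setting the sampled pairs become Negative Donation edges in the legislative graph, alongside the Positive Donation edges taken directly from the donation records.",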
"Next, we describe how we build up the legislative graph, by collecting data on legislators, bills, and contributors.",
"US states do not record the impact of bills on relevant stakeholders.",
"Thus, we explain how to derive stakeholders from bill nodes, perform winners-losers analysis on them, and interconnect different US states by forming common stakeholder nodes.",
"We highlight how our analysis can be used (1) to inform the public about the dynamic and direction of state policies, and (2) to determine legislators' roll-call behavior with different demographic and ideological profiles.",
"Table 2: Bills sampled for the stakeholders analysis. Education: 957 bills; Health: 942; Law: 1140; Agriculture: 758.",
"Bills.",
"From the LegiScan website (LegiScan, 2019), we collected data on bills introduced in Indiana, Oregon, and Wisconsin from 2011 through 2018 (details in Appendix 7).",
"We developed a crawler that uses the LegiScan API to fetch legislative information on every bill, including: (1) bill metadata, e.g., the bill type, title, description, sponsors, and links to its texts; (2) vote metadata, e.g., legislator's roll-call vote; and (3) legislator metadata, e.g., party and district info.",
"Then, our crawler converts bill texts in PDF format to text files.",
"In total, we collected 35k bills and sampled 58% of them that had both roll-call data and full texts.",
"Our focus is on the 2nd/3rd reading, in which the full chambers vote, so we selected 32% of the bills for building the legislative graph (Table 1).",
"In LegiScan, each bill is associated with a main topic (e.g., health), used for referral to a proper committee.",
"For the four most frequent topics (Table 2), we will define a group of generic stakeholders for the winners-losers analysis.",
"Legislators.",
"Our crawler also used Ballotpedia (Ballotpedia, 2019) to collect text information on each legislator's biography, political interests, and committee assignments.",
"Also, it consumed other publicly available datasets to identify a legislator's demographic profile, e.g., ideology, gender, and district.",
"The ideology scores for legislators (Shor and McCarty, 2011) were grouped into conservatives, moderates, and liberals.",
"The district identifier was combined with GIS census data (Census, 2019) to identify each legislator as representing an urban or rural district.",
"Table 3: Aggregated legislators' attributes (F: Female, M: Male; D: Democrat, R: Republican; UR: Urban, RU: Rural; C: Conservative, M: Moderate, L: Liberal). IN: F 50, M 176; D 67, R 159; UR 161, RU 64; C 125, M 94, L 7. OR: F 47, M 103; D 83, R 67; UR 133, RU 17; C 28, M 61, L 61. WI: F 51, M 157; D 84, R 124; UR 160, RU 48; C 78, M 49, L 81. All: F 148, M 436; D 234, R 350; UR 454, RU 129; C 231, M 204, L 149.",
"Table 4: Stakeholders of different bill topics and their frequency distribution as winners (W) and losers (L), in percent. Education: Edu. companies & service providers (W 1.4, L 1); Educational institutions and schools (W 23.9, L 8.7); State education agencies (W 6.3, L 8.6); Teachers and education workers (W 13.2, L 1.3); Students (W 34.2, L 1.6). Agriculture: Agriculture and food-related companies (W 4.5, L 4.1); Agricultural and food producers (W 24.4, L 6.9); End consumers or retail customers (W 11.6, L 11.2); State agriculture and food agencies (W 14.5, L 1.4); Grocery stores or food providers (W 11.6, L 9.8). Health: Healthcare facilities (W 16.7, L 7.7); Healthcare providers and professionals (W 6.8, L 3.3); Insurance providers and companies (W 11.4, L 10.5); Patients and insurance owners (W 16.7, L 6.3); Pharma and medical device companies (W 4.6, L 0.5); State healthcare agencies (W 11.7, L 4). Law: Law enforcement agencies and officers (W 15.7, L 24.7); Judges (W 11.5, L 9.4); Victims, offenders, suspects (W 9.9, L 11.2); Lawyers (W 9.8, L 7.7).",
"Contributors : FollowTheMoney (FollowThe-Money, 2019) records donations to state legislators and candidates.",
"Our crawler collected the information of donors for each legislator in our dataset (See Table 1).",
"This includes multiple textual attributes for each contributor: type (individual or non-individual), general party, and economic and business information.",
"While the contributor data can be utilized in more sophisticated ways, we focused on major contributors by setting a minimum donation threshold and pruning donors who contributed to a single legislator. We set the fraction of negative donations (Section 3) to 30% of the positive ones extracted from the data.",
"For each select bill topic, we (authors) randomly sampled 10% of bills and carefully analyzed their texts.",
"We recorded entities discussed in the bill texts as well as the detrimental or beneficial impact of the suggested policies on them (regardless of the legislative outcome, i.e., passed in a vote or not).",
"To interconnect different states and optimize the legislative graph, we deduplicated entities and clustered those whose interests align (e.g., surgeons, doctors, and dentists) into generic ones (e.g., healthcare providers).",
"Table 4 shows the final list of stakeholders for the select topics.",
"With detailed annotation guidelines, we leveraged Amazon MTurk for labeling 4k bill texts from these topics (Table 2), where 3-5 workers identified the effect of the suggested policies in each bill on the relevant stakeholders.",
"As will be detailed in Appendix A.1, we ensured the accuracy of the labeled data is above 90%.",
"Based on the outcomes of the previous two steps, we formed a legislative graph for our target states.",
"We briefly provide two results from the winners-losers analysis on the graph to highlight its importance.",
"First, we show the frequency distribution of the stakeholders as a winner vs. a loser for each topic in Table 4, which would inform the public about the dynamics and directions of state-level policies.",
"E.g., under the education topic, students were the largest winners, while educational institutions were the major losers.",
"For law bills, law enforcement agencies were the top losers given the recent nationwide focus on police use of force.",
"Also, our winners-losers analysis captures the policy preferences of different ideological and demographic groups of legislators.",
"For example, Democrats are more likely to support legislation benefiting teachers, compared to Republicans (GOP).",
"This fact is also reflected in our models predicting voting cleavages in Section 6 (e.g., our naive model, WL-Correlation, outperforming other models in its category).",
"Here, to motivate the need for building such models, we measure the rate of 'Yes' votes from each demographic and ideological group of legislators on bills of a given topic where a stakeholder is a winner versus a loser.",
"E.g., on education bills where a stakeholder (e.g., Students) is a winner, we compute A = [# of yes votes] / [total # of votes] among GOP legislators.",
"Similarly, on education bills where this stakeholder is a loser, we calculate B = [# of yes votes] / [total # of votes] for the GOP.",
"We then report the difference, A-B, in Table 5, where a large positive value indicates the stakeholder is being advantaged by the respective group of legislators.",
"E.g., we see GOP has significantly more Yes votes when students are winners, compared to Yes votes when students are losers.",
"By running queries on the legislative graph containing all players (e.g., donors), we were able to see the voting behavior of GOP could be motivated by major donations to this party from corporations representing students (e.g., School Choice).",
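"The A and B rates above can be computed in a few lines of Python (the flat `votes` record schema is a hypothetical simplification of our legislative graph):",

```python
def yes_rate_gap(votes, group, topic, stakeholder):
    """Compute A - B from the worked example: the yes-vote rate of a
    legislator group on bills where `stakeholder` is a winner (A), minus
    the rate on bills where it is a loser (B).  `votes` is a list of dicts
    with hypothetical keys: group, topic, stakeholder, role, vote."""
    def rate(role):
        rel = [v for v in votes
               if v["group"] == group and v["topic"] == topic
               and v["stakeholder"] == stakeholder and v["role"] == role]
        # fraction of 'yes' votes among the matching records
        return sum(v["vote"] == "yes" for v in rel) / len(rel) if rel else 0.0
    return rate("winner") - rate("loser")
```

"A large positive gap indicates the group systematically advantages that stakeholder, as reported in Table 5.",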
"The stakeholder analysis based on human data annotation is expensive and time-consuming.",
"To automate the analysis and better leverage its results in other applications, we build a contextualized embedding architecture and define two classification tasks on the legislative graph.",
"5.1 Classification Tasks on Legislative Graph",
"Task 1: winners-losers prediction.",
"Our first task is to predict the relation between a bill node and each relevant stakeholder node (based on its topic in Table 4).",
"Such predicted relations will bring valuable insights into the bills, while also clarifying legislators' roll-call behavior (Section 6).",
"Thus, we define the next task to showcase these benefits.",
"Task 2: bill cleavage and survival prediction.",
"For a bill, we predict if (1) it shows identifiable voting cleavages and (2) it can advance by getting a pass.",
"We achieve these by predicting and aggregating roll-call relations (between legislators and bills) in the graph.",
"In particular, we assign 9 labels to each bill: (1) Competitive labels : For voting cleavages, we split legislators into groups based on their demographic and ideological profiles (party, gender, ideology, and the urban/rural nature of their district as defined in Section 4).",
"For an attribute (e.g., gender), we say a voting round is competitive if the majority of legislators from one group (e.g., Women) and the majority of the opposite group (e.g., Men) cast different votes (Figure 2a).",
"(2) Inverse-competitive labels : Similarly, for an attribute (e.g., gender), a voting round is inverse-competitive if we identify a partial or complete tie (Appendix A.4) in the vote of legislators of the same group (e.g., Men in Figure 2b).",
"(3) Survival label : Finally, a bill passes its current voting round by getting a majority vote.",
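"The three label types can be sketched as follows (a simplified Python sketch for a binary attribute; it treats only a complete within-group tie as inverse-competitive, whereas our actual labels also cover partial ties, Appendix A.4):",

```python
from collections import Counter

def majority(votes):
    """Return the majority vote ('yes'/'no') of a group, or None on a tie."""
    c = Counter(votes)
    if c["yes"] == c["no"]:
        return None
    return "yes" if c["yes"] > c["no"] else "no"

def label_roll_call(votes_by_group):
    """Label one voting round for one attribute, given votes split into the
    attribute's two groups, e.g. {'F': [...], 'M': [...]} for gender:
    'competitive' if the two group majorities differ, 'inverse-competitive'
    if a group ties internally, else 'uncontested' (name hypothetical)."""
    g1, g2 = votes_by_group.values()
    m1, m2 = majority(g1), majority(g2)
    if m1 is None or m2 is None:
        return "inverse-competitive"
    return "competitive" if m1 != m2 else "uncontested"

def survives(votes):
    """Survival label: the bill passes its round by a majority 'yes' vote."""
    return majority(votes) == "yes"
```

"In our pipeline these labels are not predicted directly; they are aggregated from the predicted roll-call relations of individual legislators.",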
"At a high level, we propose a unified model to jointly embed and classify both roll-call and winner-loser relations in the legislative graph (Figure 4a):",
"(a) We first train our model to predict relations between bill nodes and their stakeholders.",
"One can use the result of this stage for further analysis of state policies (e.g., Section ).",
"(b) Our key insight is that knowing winner-loser relations enhances the embedding of nodes in the legislative graph.",
"Thus, we conduct inference on bills that lack such relations (if any) using the pretrained model from step (a) and add these predicted relations to the graph.",
"(c) Next, we continue training on the updated graph to fine-tune the model for the roll-call (vote) prediction task.",
"Finally, we aggregate the predicted votes for the bill cleavages/survival analysis.",
"In all these steps, our model generates and jointly optimizes both text and graph embeddings for each node, and consumes them to classify the two types of relations.",
"Thanks to jointly optimizing the tasks over the textual and graph information, our architecture outperforms existing models (Section 6).",
"Hereafter, we detail the layers in our model using a bottom-up approach.",
"5.3 Contextualized Text Embedding Layers",
"The lower half of our model generates a contextual embedding for the textual attributes of nodes in the legislative graph.",
"We leverage the RoBERTa architecture (Liu et al., 2019).",
"For improved performance, one of our contributions is that we will pretrain RoBERTa on unlabeled bill texts using the MLM task (Section 6).",
"In more detail, for each bill node, we feed three pieces of textual information to RoBERTa: title, abstract, and body.",
"RoBERTa does not support input sequences longer than 512 tokens.",
"Thus, we take the representation of each of these components separately (the embedding of their [CLS] token) and do average pooling to output the final representation of the bill.",
"Similarly, the text embeddings of stakeholder , legislator , and contributor nodes are the averages of the vectors representing their key textual attributes (Section 4).",
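"The pooling step above can be sketched in PyTorch (here `encode_cls` is a hypothetical stand-in for a RoBERTa forward pass that returns the [CLS] vector of one text; each component is encoded separately to respect the 512-token limit):",

```python
import torch

def bill_embedding(encode_cls, title, abstract, body):
    """Average-pool the [CLS] representations of a bill's three textual
    components into a single bill vector."""
    parts = [encode_cls(text) for text in (title, abstract, body)]
    return torch.stack(parts).mean(dim=0)
```

"The same averaging applies to the textual attributes of stakeholder, legislator, and contributor nodes.",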
"On top of the text embedding layer, we place a Relational Graph Convolutional Network (RGCN) to create a graph embedding for each node.",
"The RGCN uses the text embedding of each node to initialize its graph representation.",
"In parallel, we build a feed-forward neural network (FFNN), taking the text embeddings of nodes to a concatenation layer for our joint text-graph optimization.",
"The (non-relational) GCN has multiple layers and each layer performs two operations: propagation and aggregation .",
"In the propagation, nodes update their neighbors by sharing their features or hidden states.",
"In the aggregation, each node adds up the messages coming to it to update its representation.",
"In a GCN, the hidden representation of node i at layer l+1, with neighbours N_i, is: $h_i^{(l+1)} = \sigma\big( \sum_{j \in N_i} \frac{1}{c_i} W^{(l)} h_j^{(l)} \big)$ (1). The GCN uses the same weight matrix, $W^{(l)}$, at each layer and the same normalization factor, $c_i = |N_i|$, for all relation types in a graph.",
"We choose RGCN as it uses unique parameters for each relation type, thus better handling our multi-relational graph.",
"In RGCN, the embedding of node i at layer l+1 is: $h_i^{(l+1)} = \sigma\big( W_0^{(l)} h_i^{(l)} + \sum_{r \in R} \sum_{j \in N_i^r} \frac{1}{c_{i,r}} W_r^{(l)} h_j^{(l)} \big)$ (2). A 3-layer RGCN turns out to be sufficient in our case to capture the 3rd-order relations between contributor and stakeholder nodes.",
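"Eq. (2) can be sketched as a dense PyTorch layer (an illustrative sketch over small dense adjacency tensors; our implementation builds on DGL, Section 6):",

```python
import torch
import torch.nn as nn

class RGCNLayer(nn.Module):
    """Minimal dense sketch of Eq. (2): a self-loop transform W_0 plus a
    per-relation-type transform W_r over neighbours, normalized by c_{i,r}."""
    def __init__(self, in_dim, out_dim, num_rels):
        super().__init__()
        self.w0 = nn.Linear(in_dim, out_dim, bias=False)
        self.wr = nn.ModuleList(
            nn.Linear(in_dim, out_dim, bias=False) for _ in range(num_rels))

    def forward(self, h, adj):
        # h: (N, in_dim) node features; adj: (num_rels, N, N) 0/1 adjacency,
        # one slice per relation type
        out = self.w0(h)                                   # self-loop term
        for r, w in enumerate(self.wr):
            deg = adj[r].sum(dim=1, keepdim=True).clamp(min=1)  # c_{i,r}
            out = out + (adj[r] @ w(h)) / deg              # relation term
        return torch.relu(out)                             # sigma
```

"Using a distinct W_r per relation type is what lets the layer treat, e.g., a Winner edge differently from a 'Yes' Vote edge.",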
"By combining the outputs of the RGCN and FFNN, we train a relation classification layer by using the DistMult scoring function (Schlichtkrull et al., 2018; Yang et al., 2014).",
"For each relation $(s, r, d)$ being predicted, this layer computes $f(s, r, d) = e_s^T W_r e_d$, where $e_s$ and $e_d$ are the joint text and graph embeddings of the source and destination nodes and $W_r$ is a diagonal relational weight matrix.",
"Our loss function is $L = L_{CLS} + L_{Text} + L_{Graph}$, enabling us to jointly optimize the text and graph embeddings as well as the relation prediction.",
"$L_{CLS}$ is the cross-entropy loss of the relation classification; $L_{Graph}$ and $L_{Text}$ are the L2 regularizations of the RGCN's and FFNN's weights, optimizing the graph and text representations, respectively.",
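"The scoring function and loss can be sketched as follows (the binary cross-entropy over sampled negatives and the `lam` regularization weight are assumptions consistent with the negative sampling described in Section 6):",

```python
import torch
import torch.nn as nn

def distmult_score(e_s, e_d, w_r):
    """DistMult: f(s, r, d) = e_s^T diag(w_r) e_d, with the diagonal
    relation matrix W_r stored as a vector w_r."""
    return (e_s * w_r * e_d).sum(dim=-1)

def joint_loss(scores, labels, rgcn_params, ffnn_params, lam=1e-4):
    """L = L_CLS + L_Graph + L_Text: cross-entropy on relation scores plus
    L2 regularization of the RGCN and FFNN weights."""
    l_cls = nn.functional.binary_cross_entropy_with_logits(scores, labels)
    l_graph = sum((p ** 2).sum() for p in rgcn_params)   # RGCN L2 term
    l_text = sum((p ** 2).sum() for p in ffnn_params)    # FFNN L2 term
    return l_cls + lam * (l_graph + l_text)
```

"Labels are 1 for observed relations and 0 for the sampled negative examples, so the same scorer handles both winner-loser and roll-call relations.",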
"We first evaluate the efficiency of our legislative graph abstraction and text+graph embedding model in the winners-losers prediction.",
"Then, we show the benefits of our combined inference of stakeholders and roll-calls in decoding state bills.",
"Data Split and metric.",
"We split the legislative graph (formed in Section 4) based on bill nodes.",
"We randomly select 20% of the bills for testing and keep the rest for training and validation.",
"We study three settings in terms of the winners-losers (stakeholders) information in the graph:",
"(a) Unknown winners-losers relations.",
"(b) Known relations based on our human-labeled annotation.",
"(c) Predicted: 30% of bills in the train graph come with such relations, and we predict them for the rest of the bills.",
"In Appendix A.3, we will report the results of state- and time-based splits.",
"Finally, given our data is highly skewed, we choose Macro F1 as the main metric over accuracy.",
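"A toy example of why Macro F1 is preferable to accuracy on skewed data (the labels below are illustrative only):",

```python
from sklearn.metrics import accuracy_score, f1_score

# A degenerate classifier that always predicts the majority class looks
# strong under accuracy but fails the rare class; macro F1 averages the
# per-class F1 scores and exposes that failure.
y_true = [0] * 9 + [1]
y_pred = [0] * 10
acc = accuracy_score(y_true, y_pred)                    # 0.9, looks strong
macro = f1_score(y_true, y_pred, average="macro")       # < 0.5, reveals gap
```

"For the same reason we report Macro F1 throughout Section 6.",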
"Settings/parameters.",
"We build our joint model (Figure 4) on top of PyTorch, DGL, and spaCy.",
"We set the initial embedding dimension in RoBERTa and RGCN to 1024.",
"The FFNN and RGCN take the embeddings to a 256-dimensional space.",
"We also used the Adam optimizer, and for each observed relation (Table 1), we sampled a negative example.",
"We devise robust baselines for both of our tasks.",
"1. Text-based models.",
"We build a logistic regression (LR) classifier that takes the text embedding of a bill and predicts if it shows a certain cleavage or passes/fails.",
"A similar classifier takes the text embeddings of a bill and a stakeholder to classify their relation.",
"We evaluate three embedding architectures:",
"(a) BoW , where unigram and bigram features (top 5K highest-scoring) are used to represent textual information.",
"(b) RoBERTa (Liu et al., 2019).",
"(c) Pretrained RoBERTa, whose domain we adapted by applying MLM on 10K unlabeled state bills (39K sentences) (Gururangan et al., 2020).",
"We study two additional variations of these models (only for winners-losers prediction due to limited space): Sponsors , where the bill sponsors are represented using a one-hot vector and concatenated to the bill text representation.",
"Roll-Call , where we concatenate a vector containing the cleavage/survival info of each bill to its text embedding.",
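"The BoW baseline (a) can be sketched with scikit-learn (the tiny corpus and labels are illustrative placeholders; the top-5K feature selection is approximated here by TfidfVectorizer's frequency-based `max_features` cap):",

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Unigram + bigram features, capped at 5K, feeding a logistic-regression
# relation classifier (toy bill snippets stand in for real bill texts).
bow_lr = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), max_features=5000),
    LogisticRegression(max_iter=1000),
)
texts = ["funds schools and teachers", "cuts hospital funding",
         "school grants for students", "reduces clinic budgets"]
labels = ["winner", "loser", "winner", "loser"]
bow_lr.fit(texts, labels)
```

"The RoBERTa variants replace the vectorizer with contextual embeddings but keep the same LR classification head.",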
"2. Graph-based models.",
"We build a relation classifier over edge embeddings, generated by three widely-used graph models, to predict roll-call and winner-loser relations (for the bill cleavages/survival task, we aggregate votes):",
"(a) DeepWalk (Perozzi et al., 2014) that generates embeddings for nodes and edges by running Skip-Gram on random walks on the graph.",
"(b) GCN (Kipf and Welling, 2016) is a basic 3-layer GCN model with random node features in its first layer.",
"(c) RGCN (Schlichtkrull et al., 2018) is the relational GCN handling different relation types in the legislative graph.",
"3. Naive models .",
"We evaluate three naive classifiers:",
"(a) Majority : A baseline predicting the most frequent class in the train data.",
"(b) Sponsor : An LR classifier that uses the one-hot embedding of bill sponsors to determine bill survival/cleavages (similarly, winner/loser relations).",
"(c) WL-Correlation (solely for the survival/cleavage task) predicts a legislator's vote on a test bill with known winners/losers based on their historical votes on train bills with the same winners/losers.",
"We compare these models in predicting relations between bills and their relevant stakeholders (Table 6).",
"Table 6: Winner-loser relations between bills and stakeholders.",
"(1) In the vanilla text-based category , RoBERTa shows a 2.9-point higher F1 than BoW.",
"Our pretrained RoBERTa generates more effective contextual embeddings for the textual information of bills and stakeholders (e.g., summaries), and thus better determines the impact of a bill on its stakeholders.",
"Including the sponsors' info in the pretrained RoBERTa leads to the best text model.",
"(2) In the graph-based models , DeepWalk and GCN exhibit a sharp drop in F1 by ignoring the heterogeneity of relations in the graph, thus producing ineffective representations for them.",
"RGCN overcomes this issue and approaches the best text model with F1 of 63.9.",
"(3) Our joint text-graph model combines the strengths of the graph and text models and delivers a 3.3-point higher F1.",
"Next, we focus on the performance of different models in determining voting cleavages/survival, with Unknown , Known , Predicted winners-losers in the legislative graph.",
"In Table 7, we report the results for the bill survival and party-based voting cleavages (results for the other cleavages in Appendix, Table 11).",
"We can make a few observations: First, our stakeholder analysis helps all models to better decode state policies, when comparing the same model in the Unknown and Known winners-losers settings: (1) In the text-based models , prediction on the textual information of both bills and known winners-losers delivers a higher F1 than only on the text of bills (e.g., Pretrained RoBERTa model gets a 5.4% boost in F1 in predicting party competitive bills).",
"Similarly, (2) in the graph-based models , RGCN overcomes the limitations of DeepWalk in handling heterogeneous relation types (winner-loser vs. roll-call) and delivers consistent gains in the Known setting.",
"(3) Our model has the best performance due to generating and optimizing a joint graph and text representation for legislators, bills, money donors, and stakeholders in the setting with known winners and losers.",
"Second, by focusing on the models with the Predicted winners-losers information, we observe: (4) Our model still beats the other baselines, due to our unified model for roll-call and winner-loser training as well as our text-based legislative graph abstraction (Section 5).",
"Of course, there is an expected drop in F1 across different models including ours, when we consume predicted winner-loser relations instead of human-labeled ones.",
"This drop is tolerable in most cases and thus does not hinder the automation of our stakeholder analysis or the use of its results in downstream vote-analysis tasks (ethical considerations in Section 7).",
"We took a new data-driven approach to analyze state legislation in the US.",
"We showed that identifying the winners/losers of state bills can (1) inform the public on the directions of state policies, and (2) build a nationwide context for a better understanding of legislators' roll-call behaviors.",
"Thus, we proposed a text-based graph abstraction to model the interplay of key players in the state legislative process, e.g., bills, stakeholders, legislators, and donors to legislators' campaigns.",
"Next, to automate our analysis, we developed a shared text and graph embedding architecture to jointly predict winners/losers of bills and legislators' votes on them.",
"We created a new dataset using different data sources and human annotation and evaluated the strength of our architecture against existing models.",
"We hope this work will provide a starting point for further studies examining the impact of policy decisions on individuals and groups, an important step towards making the democratic process more transparent.",
"Analyzing state legislation is a sensitive task, where unexpected results of research and deployed ML systems can create misguided beliefs on the government policies on important topics (e.g., health, education).",
"Thus, we would like to discuss some ethical aspects of our work in terms of data and model (considering potential scenarios suggested by Chandrabose et al., 2021).",
"1. Selection of data sources.",
"While there can be inherent imbalances in state legislatures (e.g., gender and party distributions), we did not find evidence that our data sources add systematic political or social biases to our study, e.g., against demographic populations of legislators.",
"All our data sources (e.g., LegiScan and FollowTheMoney) are publicly available and have been used by the political science community over the years.",
"LegiScan (LegiScan, 2019) is a nonpartisan and impartial legislative tracking and reporting service for state bills.",
"FollowTheMoney (FollowTheMoney, 2019) is a nonpartisan, nonprofit organization revealing the influence of campaign money on state-level elections and public policy in all US states.",
"Finally, Ballotpedia (Ballotpedia, 2019) is a nonpartisan, nonprofit organization providing a brief introduction, biography, committee assignment, and general information on legislators across different years.",
"Our study combined these data sources for analyzing state bills in a broad context, thus contributing to reduced data bias for all models evaluated in this paper.",
"2. Selection of states .",
"In addition, to help mitigate the risk of data collection bias or topic preference that can be introduced through the choice of specific state legislatures, we randomly picked a red, a blue, and a purple state (indicating a significant majority for Republicans, a significant majority for Democrats, or a more balanced state legislature, respectively).",
"There were some restrictions in terms of collecting the data from the above sources (e.g., FollowTheMoney and Ballotpedia).",
"These data sources and services often limit the number of API calls and queries that educational institutions can use to retrieve data.",
"Besides this, annotating the data through Amazon MTurk was expensive for us so we conducted our study on four highly discussed topics in state bills (i.e., health, education, agriculture, and law).",
"We will explore ways of expanding our dataset to more states and topics over time.",
"3. Disguised winners and losers .",
"In theory, the authors of state bills (e.g., interest groups selling fill-in-the-blank bills to legislators) may try to reframe bills (disguise winners or losers) to further their political aims.",
"At first glance, this could pose a challenge to our bill annotation, dataset, and stakeholder analysis.",
"As described in Section 3, the state legislative process has a multi-stage reviewing process in two chambers (e.g., first reading, second reading, and third reading).",
"Thus, in practice, we have observed that it is hard to hide the impact of bills on their relevant stakeholders from our qualified annotators, i.e., the authors and multiple vetted MTurk workers for each example.",
"In addition, our work on MTurk maps the impact of policies suggested by bills to winners and losers.",
"Thus, it already considers those stakeholders that are not mentioned in the text explicitly (More details in Appendix A.1).",
"4. Winners and losers analysis .",
"The analysis, aligning demographic cleavages with winners and losers preferences, is done at an aggregate level based on the data we annotated.",
"These preferences could be influenced by other factors beyond demographics.",
"Deriving conclusions from this analysis could require longitudinal studies, capturing the change of these patterns over time, for example when analyzing policies intended to help correct inequities towards marginalized groups.",
"Our goal is to provide a tool for domain experts that would point at nuanced, stakeholder specific, legislative preferences that can be studied further in order to determine their significance.",
"5. Handling abstain votes .",
"There are abstain (absent and N/A) votes in our dataset.",
"However, we did not include them in our study due to their extremely low frequency (for our proposed model and other baseline models).",
"We leave this evaluation as a future work.",
"6. Handling other countries and languages.",
"While our dataset is specific to the US, the the problem we studied, stakeholder analysis, can be generalized to legislation from other countries and in different languages.",
"Although we have not evaluated such bills (due to lack of data sources), we expect such legislation to produce winners or losers to provide practical solutions to their local problems.",
"In particular, our framework offers a multi-relational graph abstraction and prediction models to analyze stakeholders of bills (winners/losers) and the voting behavior of legislators.",
"These techniques can support non-US national and state-level legislative processes.",
"To accommodate other languages, one could adopt cross-lingual embedding models, e.g., XLM-R (Conneau et al., 2019) instead of RoBERTa, in our architecture.",
"We would like to acknowledge the members of the PurdueNLP lab.",
"We also thank the reviewers for their constructive feedback.",
"The funding for the use of mTurk was part of the Purdue University Integrative Data Science Initiative: Data Science for Ethics, Society, and Policy Focus Area.",
"This work was partially supported by an NSF CAREER award IIS-2048001."
] | [
"abstain",
"abstain",
"method",
"objective",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"objective",
"objective",
"method",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"objective",
"objective",
"objective",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"method",
"other",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"result",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"objective",
"result",
"objective",
"objective",
"objective",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"other",
"other",
"other",
"other"
] |
[
"Training data for NLP tasks often exhibits gender bias in that fewer sentences refer to women than to men.",
"In Neural Machine Translation (NMT) gender bias has been shown to reduce translation quality, particularly when the target language has grammatical gender.",
"The recent WinoMT challenge set allows us to measure this effect directly (Stanovsky et al., 2019).",
"Ideally we would reduce system bias by simply debiasing all data prior to training, but achieving this effectively is itself a challenge.",
"Rather than attempt to create a balanced' dataset, we use transfer learning on a small set of trusted, gender-balanced examples.",
"This approach gives strong and consistent improvements in gender debiasing with much less computational cost than training from scratch.",
"A known pitfall of transfer learning on new domains is catastrophic forgetting', which we address both in adaptation and in inference.",
"During adaptation we show that Elastic Weight Consolidation allows a performance trade-off between general translation quality and bias reduction.",
"During inference we propose a lattice-rescoring scheme which outperforms all systems evaluated in Stanovsky et al. (2019) on WinoMT with no degradation of general test set BLEU, and we show this scheme can be applied to remove gender bias in the output of black box online commercial MT systems. We demonstrate our approach translating from English into three languages with varied linguistic properties and data availability. 1 Introduction As language processing tools become more prevalent concern has grown over their susceptibility to social biases and their potential to propagate bias (Hovy and Spruit, 2016; Sun et al., 2019). Natural language training data inevitably reflects biases present in our society. For example, gender bias manifests itself in training data which features more examples of men than of women. Tools trained on such data will then exhibit or even amplify the biases (Zhao et al., 2017). Gender bias is a particularly important problem for Neural Machine Translation (NMT) into gender-inflected languages. An over-prevalence of some gendered forms in the training data leads to translations with identifiable errors (Stanovsky et al., 2019). Translations are better for sentences involving men and for sentences containing stereotypical gender roles. For example, mentions of male doctors are more reliably translated than those of male nurses (Sun et al., 2019; Prates et al., 2019). Recent approaches to the bias problem in NLP have involved training from scratch on artificially gender-balanced versions of the original dataset (Zhao et al., 2018; Zmigrod et al., 2019) or with debiased embeddings (Escude Font and Costa-juss`a, 2019; Bolukbasi et al., 2016). While these approaches may be effective, training from scratch is inefficient and gender-balancing embeddings or large parallel datasets are challenging problems (Gonen and Goldberg, 2019). 
Instead we propose treating gender debiasing as a domain adaptation problem, since NMT models can very quickly adapt to a new domain (Freitag and Al-Onaizan, 2016). To the best of our knowledge this work is the first to attempt NMT bias reduction by fine-tuning, rather than retraining. We consider three aspects of this adaptation problem: creating less biased adaptation data, parameter adaptation using this data, and inference with the debiased models produced by adaptation. Regarding data, we suggest that a small, trusted gender-balanced set could allow more efficient and effective gender debiasing than a larger, noisier set. To explore this we create a tiny, handcrafted profession-based dataset for transfer learning. For contrast, we also consider fine-tuning on a counterfactual subset of the full dataset and propose a straightforward scheme for artificially gender-balancing parallel text for NMT. We find that during domain adaptation improvement on the gender-debiased domain comes at the expense of translation quality due to catastrophic forgetting (French, 1999). We can balance improvement and forgetting with a regularised training procedure, Elastic Weight Consolidation (EWC), or in inference by a two-step lattice rescoring procedure. We experiment with three language pairs, assessing the impact of debiasing on general domain BLEU and on the WinoMT challenge set (Stanovsky et al., 2019). We find that continued training on the handcrafted set gives far stronger and more consistent improvements in gender-debiasing with orders of magnitude less training time, although as expected general translation performance as measured by BLEU decreases. We further show that regularised adaptation with EWC can reduce bias while limiting degradation in general translation quality.
We also present a lattice rescoring procedure in which initial hypotheses produced by the biased baseline system are transduced to create gender-inflected search spaces which can be rescored by the adapted model. We believe this approach, rescoring with models targeted to remove bias, is novel in NMT. The rescoring procedure improves WinoMT accuracy by up to 30% with no decrease in BLEU on the general test set. Recent recommendations for ethics in Artificial Intelligence have suggested that social biases or imbalances in a dataset be addressed prior to model training (HLEG, 2019). This recommendation presupposes that the source of bias in a dataset is both obvious and easily adjusted. We show that debiasing a full NMT dataset is difficult, and suggest alternative efficient and effective approaches for debiasing a model after it is trained. This avoids the need to identify and remove all possible biases prior to training, and has the added benefit of preserving privacy, since no access to the original data or knowledge of its contents is required. As evidence, in section 3.4.5, we show this scheme can be applied to remove gender bias in the output of black box online commercial MT systems. 1.1 Related work Vanmassenhove et al. (2018) treat gender as a domain for machine translation, training from scratch by augmenting Europarl data with a tag indicating the speaker's gender. This does not inherently remove gender bias from the system but allows control over the translation hypothesis gender. Moryossef et al. (2019) similarly prepend a short phrase at inference time which acts as a gender domain label for the entire sentence. These approaches are not directly applicable to text which may have more than one gendered entity per sentence, as in coreference resolution tasks. Escudé Font and Costa-jussà (2019) train NMT models from scratch with debiased word embeddings.
They demonstrate improved performance on an English-Spanish occupations task with a single profession and pronoun per sentence. We assess our fine-tuning approaches on the WinoMT coreference set, with two entities to resolve per sentence. For monolingual NLP tasks a typical approach is gender debiasing using counterfactual data augmentation where for each gendered sentence in the data a gender-swapped equivalent is added. Zhao et al. (2018) show improvement in coreference resolution for English using counterfactual data. Zmigrod et al. (2019) demonstrate a more complicated scheme for gender-inflected languages. However, their system focuses on words in isolation, and is difficult to apply to co-reference and conjunction situations with more than one term to swap, reducing its practicality for large MT datasets. Recent work recognizes that NMT can be adapted to domains with desired attributes using small datasets (Farajian et al., 2017; Michel and Neubig, 2018). Our choice of a small, trusted dataset for adaptation specifically to a debiased domain connects also to recent work in data selection by Wang et al. (2018), in which fine-tuning on less noisy data reduces translation noise. Similarly we propose fine-tuning on less biased data to reduce gender bias in translations. This is loosely the inverse of the approach described by Park et al. (2018) for monolingual abusive language detection, which pre-trains on a larger, less biased set. 2 Gender bias in machine translation We focus on translating coreference sentences containing professions as a representative subset of the gender bias problem. This follows much recent work on NLP gender bias (Rudinger et al., 2018; Zhao et al., 2018; Zmigrod et al., 2019) including the release of WinoMT, a relevant challenge set for NMT (Stanovsky et al., 2019). A sentence that highlights gender bias is: The doctor told the nurse that she had been busy. 
A human translator carrying out coreference resolution would infer that 'she' refers to the doctor, and correctly translate the entity to German as Die Ärztin.",
"An NMT model trained on a biased dataset in which most doctors are male might incorrectly default to the masculine form, Der Arzt .",
"Data bias does not just affect translations of the stereotyped roles.",
"Since NMT inference is usually left-to-right, a mistranslation can lead to further, more obvious mistakes later in the translation.",
"For example, our baseline en-de system translates the English sentence The cleaner hates the developer because she always leaves the room dirty.",
"Here not only is developer' mistranslated as the masculine den Entwickler instead of the feminine die Entwicklerin , but an unambiguous pronoun translation later in the sentence is incorrect: er (he') is produced instead of sie (she').",
"This would likely be translated with a masculine entity according to the conventions of a language, unless extra-sentential context was available.",
"As well, some languages have adopted gender-neutral singular pronouns and profession terms, both to include non-binary people and to avoid the social biases of gendered language (Misersky et al., 2019).",
"However, the target languages supported by WinoMT lack widely-accepted non-binary inflection conventions (Ackerman, 2019).",
"This paper addresses gender bias that can be resolved at the sentence level and evaluated with existing test sets, and does not address these broader challenges.",
"WinoMT (Stanovsky et al., 2019) is a recently proposed challenge set for gender bias in NMT.",
"Moreover it is the only significant challenge set we are aware of to evaluate translation gender bias comparably across several language pairs.",
"It permits automatic bias evaluation for translation from English to eight target languages with grammatical gender.",
"The source side of WinoMT is 3888 concatenated sentences from Winogender (Rudinger et al., 2018) and WinoBias (Zhao et al., 2018).",
"These are coreference resolution datasets in which each sentence contains a primary entity which is co-referent with a pronoun the doctor in the first example above and the developer in the second and a secondary entity the nurse and the cleaner respectively.",
"WinoMT evaluation extracts the grammatical gender of the primary entity from each translation hypothesis by automatic word alignment followed by morphological analysis.",
"WinoMT then compares the translated primary entity with the gold gender, with the objective being a correctly gendered translation.",
"The authors emphasise the following metrics over the challenge set: Accuracy percentage of hypotheses with the correctly gendered primary entity.",
"G difference in F 1 score between the set of sentences with masculine entities and the set with feminine entities.",
"S difference in accuracy between the set of sentences with pro-stereotypical (pro') entities and those with anti-stereotypical (anti') entities, as determined by Zhao et al. (2018) using US labour statistics.",
"For example, the pro' set contains male doctors and female nurses, while anti' contains female doctors and male nurses.",
"Our main objective is increasing accuracy.",
"We also report on G and S for ease of comparison to previous work.",
"Ideally the absolute values of G and S should be close to 0.",
"A high positive G indicates that a model translates male entities better, while a high positive S indicates that a model stereotypes male and female entities.",
"Large negative values for G and S , indicating a bias towards female or anti-stereotypical translation, are as undesirable as large positive values.",
"We note that S can be significantly skewed by low-accuracy systems.",
"A model generating male forms for most test sentences, stereotypical roles or not, will have very low S , since its proand anti-stereotypical class accuracy will both be about 50%.",
"Consequently in Appendix A we report: M:F ratio of hypotheses with male predictions to those with female predictions.",
"This should be close to 1.0, since WinoMT balances maleand female-labelled sentences.",
"M:F correlates strongly with G , but we consider M:F 7726 easier to interpret, particularly since very high or low M:F reduce the relevance of S .",
"Finally, we wish to reduce gender bias without reducing translation performance.",
"We report BLEU (Papineni et al., 2002) on separate, general test sets for each language pair.",
"WinoMT is designed to work without target language references, and so it is not possible to measure translation performance on this set by measures such as BLEU.",
"Our hypothesis is that the absence of gender bias can be treated as a small domain for the purposes of NMT model adaptation.",
"In this case a well-formed small dataset may give better results than attempts at debiasing the entire original dataset.",
"We therefore construct a tiny, trivial set of gender-balanced English sentences which we can easily translate into each target language.",
"The sentences follow the template: The [ PROFESSION ] finished [ his | her ] work.",
"We refer to this as the handcrafted set 1 .",
"Each profession is from the list collected by Prates et al. (2019) from US labour statistics.",
"We simplify this list by removing field-specific adjectives.",
"For example, we have a single profession engineer', as opposed to specifying industrial engineer, locomotive engineer, etc.",
"In total we select 194 professions, giving just 388 sentences in a gender-balanced set.",
"With manually translated masculine and feminine templates, we simply translate the masculine and feminine forms of each listed profession for each target language.",
"In practice this translation is via an MT first-pass for speed, followed by manual checking, but given available lexicons this could be further automated.",
"We note that the handcrafted sets contain no examples of coreference resolution and very little variety in terms of grammatical gender.",
"A set of more complex sentences targeted at the coreference task might further improve WinoMT scores, but would be more difficult to produce for new languages.",
"We wish to distinguish between a model which improves gender translation, and one which improves its WinoMT scores simply by learning the vocabulary for previously unseen or uncommon professions.",
"We therefore create a handcrafted no-overlap set, removing source sentences with profes-1 Handcrafted sets available at https://github.",
"sions occurring in WinoMT to leave 216 sentences.",
"We increase this set back to 388 examples with balanced adjective-based sentences in the same pattern, e.g. The tall [ man | woman ] finished [ his | her ] work .",
"Counterfactual data augmentation is an intuitive solution to bias from data over-representation (Lu et al., 2018).",
"It involves identifying the subset of sentences containing bias in this case gendered terms and, for each one, adding an equivalent sentence with the bias reversed in this case a gender-swapped version.",
"While counterfactual data augmentation is relatively simple for sentences in English, the process for inflected languages is challenging, involving identifying and updating words that are co-referent with all gendered entities in a sentence.",
"Gender-swapping MT training data additionally requires that the same entities are swapped in the corresponding parallel sentence.",
"A robust scheme for gender-swapping multiple entities in inflected language sentences directly, together with corresponding parallel text, is beyond the scope of this paper.",
"Instead we suggest a rough but straightforward approach for counterfactual data augmentation for NMT which to the best of our knowledge is the first application to parallel sentences.",
"We first perform simple gender-swapping on the subset of the English source sentences with gendered terms.",
"We use the approach described in Zhao et al. (2018) which swaps a fixed list of gendered stopwords (e.g. man / woman , he / she ).",
"2 .",
"We then greedily forward-translate the gender-swapped English sentences with a baseline NMT model trained on the the full source and target text, producing gender-swapped target language sentences.",
"This lets us compare four related sets for gender debiasing adaptation, as illustrated in Figure 1: Original : a subset of parallel sentences from the original training data where the source sentence contains gendered stopwords.",
"Forward-translated (FTrans) original : the source side of the original set with forward-translated target sentences.",
"Forward-translated (FTrans) swapped : the original source sentences are gender-swapped, then forward-translated to produce gender-swapped target sentences.",
"Balanced : the concatenation of the original and FTrans swapped parallel datasets.",
"This is twice the size of the other counterfactual sets.",
"Comparing performance in adaptation of FTrans swapped and FTrans original lets us distinguish between the effects of gender-swapping and of obtaining target sentences from forward-translation.",
"Fine-tuning a converged neural network on data from a distinct domain typically leads to catastrophic forgetting of the original domain (French, 1999).",
"We wish to adapt to the gender-balanced domain without losing general translation performance.",
"This is a particular problem when fine-tuning on the very small and distinct handcrafted adaptation sets.",
"Regularized training is a well-established approach for minimizing catastrophic forgetting during domain adaptation of machine translation (Barone et al., 2017).",
"One effective form is Elastic Weight Consolidation (EWC) (Kirkpatrick et al., 2017) which in NMT has been shown to maintain or even improve original domain performance (Thompson et al., 2019; Saunders et al., 2019).",
"In EWC a 2 The stopword list and swapping script are provided by the authors of Zhao et al. (2018) at https://github.com/ uclanlp/corefBias regularization term is added to the original log likelihood loss function L when training the debiased model (DB): L (cid:48) ( DB ) = L ( DB )+ (cid:88) j F j ( DBj Bj ) 2 (1) Bj are the converged parameters of the original biased model, and DBj are the current debiased model parameters.",
"F j = E (cid:2) 2 L ( Bj ) (cid:3) , a Fisher information estimate over samples from the biased data under the biased model.",
"We apply EWC when performance on the original validation set drops, selecting hyperparameter via validation set BLEU.",
"2.3.2 Gender-inflected search spaces for rescoring with debiased models",
"(a) A subset of flower transducer T .",
"T maps vocabulary to itself as well as to differently-gendered inflections.",
"(b) Acceptor YB representing the biased first-pass translation y B for source fragment 'the doctor'.",
"The German hypothesis has the male form.",
"(c) Gender-inflected search space constructed from the biased hypothesis der Arzt'.",
"Projection of the composition YB T contains paths with differently-gendered inflections of the original biased hypothesis.",
"This lattice can now be rescored by a debiased model.",
"An alternative approach for avoiding catastrophic forgetting takes inspiration from lattice rescoring for NMT (Stahlberg et al., 2016) and Grammatical Error Correction (Stahlberg et al., 2019).",
"We assume we have two NMT models.",
"With one we decode fluent translations which contain gender bias ( B ).",
"For the one-best hypothesis we would translate: y B = argmax y p B ( y | x ) (2) The other model has undergone debiasing ( DB ) at a cost to translation performance, producing: y DB = argmax y p DB ( y | x ) (3) We construct a flower transducer T that maps each word in the target language's vocabulary to itself, as well as to other forms of the same word with different gender inflections (Figure 2a).",
"We also construct YB , a lattice with one path representing the biased but fluent hypothesis y B (Figure 2b).",
"The acceptor P ( y B ) = proj output ( YB T ) de-fines a language consisting of all the gender-inflected versions of the biased first-pass translation y B that are allowed by T (Figure 2c).",
"We can now decode with lattice rescoring ( LR ) by constraining inference to P ( y B ) : y LR = argmax y P ( y B ) p DB ( y | x ) (4) In practice we use beam search to decode the various hypotheses, and construct T using heuristics on large vocabulary lists for each target language.",
"WinoMT provides an evaluation framework for translation from English to eight diverse languages.",
"We select three pairs for experiments: English to German (en-de), English to Spanish (en-es) and English to Hebrew (en-he).",
"Our selection covers three language groups with varying linguistic properties: Germanic, Romance and Semitic.",
"Training data available for each language pair also varies in quantity and quality.",
"We filter training data based on parallel sentence lengths and length ratios.",
"We validate on newstest17 and test on newstest18.",
"For en-es we use 10M sentence pairs from the United Nations Parallel Corpus (Ziemski et al., 2016).",
"While still a large set, the UNCorpus exhibits far less diversity than the en-de training data.",
"We validate on newstest12 and test on newstest13.",
"For en-he we use 185K sentence pairs from the multilingual TED talks corpus (Cettolo et al., 2014).",
"This is both a specialized domain and a much smaller training set.",
"We validate on the IWSLT 2012 test set and test on IWSLT 2014.",
"Table 1 summarises the sizes of datasets used, including their proportion of gendered sentences and ratio of sentences in the English source data containing male and female stopwords.",
"A gendered sentence contains at least one English gendered stopword as used by Zhao et al. (2018).",
"Interestingly all three datasets have about the same proportion of gendered sentences: 11-12% of the overall set.",
"While en-es appears to have a much more balanced gender ratio than the other pairs, examining the data shows this stems largely from sections of the UNCorpus containing phrases like empower women' and violence against women', rather than gender-balanced professional entities.",
"For en-de and en-es we learn joint 32K BPE vocabularies on the training data (Sennrich et al., 2016).",
"For en-he we use separate source and target vocabularies.",
"The Hebrew vocabulary is a 2k-merge BPE vocabulary, following the recommendations of Ding et al. (2019) for smaller vocabularies when translating into lower-resource languages.",
"For the en-he source vocabulary we experimented both with learning a new 32K vocabulary and with reusing the joint BPE vocabulary trained on the largest set en-de which lets us initialize the en-he system with the pre-trained en-de model.",
"The latter resulted in higher BLEU and faster training.",
"For all models we use a Transformer model (Vaswani et al., 2017) with the base' parameter settings given in Tensor2Tensor (Vaswani et al., 2018).",
"We train baselines to validation set BLEU convergence on one GPU, delaying gradient updates by factor 4 to simulate 4 GPUs (Saunders et al., 2018).",
"During fine-tuning training is continued without learning rate resetting.",
"Normal and lattice-constrained decoding is via SGNMT 3 with beam size",
"4. BLEU scores are calculated for cased, detokenized output using SacreBLEU (Post, 2018) 3.3 Lattice rescoring with debiased models For lattice rescoring we require a transducer T containing gender-inflected forms of words in the target vocabulary.",
"To obtain the vocabulary for German we use all unique words in the full target training dataset.",
"For Spanish and Hebrew, which have smaller and less diverse training sets, we use 2018 3 https://github.com/ucam-smt/sgnmt OpenSubtitles word lists 4 .",
"We then use DEMorphy (Altinok, 2018) for German, spaCy (Honnibal and Montani, 2017) for Spanish and the small set of gendered suffixes for Hebrew (Schwarzwald, 1982) to approximately lemmatize each vocabulary word and generate its alternately-gendered forms.",
"While there are almost certainly paths in T containing non-words, we expect these to have low likelihood under the debiasing models.",
"For lattice compositions we use the efficient OpenFST implementations (Allauzen et al., 2007).",
"In Table 2 we compare our three baselines to commercial systems on WinoMT, using results quoted directly from Stanovsky et al. (2019).",
"Our baselines achieve comparable accuracy, mascu-line/feminine bias score G and pro/anti stereotypical bias score S to four commercial translation systems, outscoring at least one system for each metric on each language pair.",
"The S for our en-es baseline is surprisingly small.",
"Investigation shows this model predicts male and female entities in a ratio of over 6:1.",
"Since almost all entities are translated as male, proand anti-stereotypical class accuracy are both about 50%, making S very small.",
"This highlights the importance of considering S in the context of G and M:F prediction ratio.",
"Table 3 compares our baseline model with the results of unregularised fine-tuning on the counterfactual",
"counterfactual sets described in Section 2.2.2.",
"Fine-tuning for one epoch on original , a subset of the original data with gendered English stopwords, gives slight improvement in WinoMT accuracy and G for all language pairs, while S worsens.",
"We suggest this set consolidates examples present in the full dataset, improving performance on gendered entities generally but emphasizing stereotypical roles.",
"On the FTrans original set G increases sharply relative to the original set, while S decreases.",
"We suspect this set suffers from bias amplification (Zhao et al., 2017) introduced by the baseline system during forward-translation.",
"The model therefore over-predicts male entities even more heavily 4 Accessed Oct 2019 from https://github.com/ hermitdave/FrequencyWords/ than we would expect given the gender makeup of the adaptation data's source side.",
"Over-predicting male entities lowers S artificially.",
"Adapting to FTrans swapped increases accuracy and decreases both G and S relative to the baseline for en-de and en-es.",
"This is the desired result, but not a particularly strong one, and it is not replicated for en-he.",
"The balanced set has a very similar effect to the FTrans swapped set, with a smaller test BLEU difference from the baseline.",
"We do find that the largest improvement in WinoMT accuracy consistently corresponds to the model predicting male and female entities in the closest ratio (see Appendix A).",
"However, the best ratios for models adapted to these datasets are 2:1 or higher, and the accuracy improvement is small.",
"The purpose of EWC regularization is to avoid catastrophic forgetting of general translation ability.",
"This does not occur in the counterfactual experiments, so we do not apply EWC.",
"Moreover, WinoMT accuracy gains are small with standard fine-tuning, which allows maximum adaptation: we suspect EWC would prevent any improvements.",
"Overall, improvements from fine-tuning on counterfactual datasets ( FTrans swapped and balanced ) are present.",
"However, they are not very different from the improvements when fine-tuning on equivalent non-counterfactual sets ( original and FTrans original ).",
"Improvements are also inconsistent across language pairs.",
"Results for fine-tuning on the handcrafted set are given in lines 3-6 of Table 4.",
"These experiments take place in minutes on a single GPU, compared to several hours when fine-tuning on the counterfactual sets and far longer if training from scratch.",
"Fine-tuning on the handcrafted sets gives a much faster BLEU drop than fine-tuning on counterfactual sets.",
"This is unsurprising since the handcrafted sets are domains of new sentences with consistent sentence length and structure.",
"By contrast the counterfactual sets are less repetitive and close to subsets of the original training data, slowing forgetting.",
"We believe the degradation here is limited only by the ease of fitting the small handcrafted sets.",
"Line 4 of Table 4 adapts to the handcrafted set, stopping when validation BLEU degrades by 5% on each language pair.",
"This gives a WinoMT accuracy up to 19 points above the baseline, far more improvement than the best counterfactual result.",
"Difference in gender score G improves by at least a factor of 4.",
"Table 2 reports WinoMT accuracy, masculine/feminine bias score G, and pro/anti-stereotypical bias score S for our baselines compared to commercial systems, with scores quoted directly from Stanovsky et al. (2019) (Acc/G/S): en-de - Microsoft 74.1/0.0/30.2, Google 59.4/12.5/12.5, Amazon 62.4/12.9/16.7, SYSTRAN 48.6/34.5/10.3, Baseline 60.1/18.6/13.4; en-es - Microsoft 47.3/36.8/23.2, Google 53.1/23.4/21.3, Amazon 59.4/15.4/22.3, SYSTRAN 45.6/46.3/15.0, Baseline 49.6/36.7/2.0; en-he - Microsoft 48.1/14.9/32.9, Google 53.7/7.9/37.8, Amazon 50.5/10.3/47.3, SYSTRAN 46.6/20.5/24.5, Baseline 51.3/15.1/26.4.",
"Stereotyping score S also improves far more than for counterfactual fine-tuning.",
"Unlike the Table 3 results, the improvement is consistent across all WinoMT metrics and all language pairs.",
"The model adapted to no-overlap handcrafted data (line 3) gives a similar drop in BLEU to the model in line 4.",
"This model also gives stronger and more consistent WinoMT improvements over the baseline compared to the balanced counterfactual set, despite the implausibly strict scenario of no English profession vocabulary in common with the challenge set.",
"This demonstrates that the adapted model does not simply memorise vocabulary.",
"The drop in BLEU and improvement on WinoMT can be explored by varying the training procedure.",
"The model of line 5 simply adapts to handcrafted data for more iterations with no regularisation, to approximate loss convergence on the handcrafted set.",
"This leads to a severe drop in BLEU, but even higher WinoMT scores.",
"In line 6 we regularise adaptation with EWC.",
"There is a trade-off between general translation performance and WinoMT accuracy.",
"With EWC regularization tuned to balance validation BLEU and WinoMT accuracy, the decrease is limited to about 0.5 BLEU on each language pair.",
"Adapting to convergence, as in line 5, would lead to further WinoMT gains at the expense of BLEU.",
"In lines 7-9 of Table 4 we consider lattice-rescoring the baseline output, using three models debiased on the handcrafted data.",
"Line 7 rescores the general test set hypotheses (line 1) with a model adapted to handcrafted data that has no source language profession vocabulary overlap with the test set (line 3).",
"This scheme shows no BLEU degradation from the baseline on any language and in fact a slight improvement on en-he.",
"Accuracy improvements on WinoMT are smaller than those of the directly adapted model in line 3. (Table 5: we generate gender-inflected lattices from commercial system translations collected by Stanovsky et al. (2019) (1: Microsoft, 2: Google, 3: Amazon, 4: SYSTRAN) and rescore them; Acc/G/S after rescoring, original scores in parentheses: en-de - 1: 82.0 (74.1) / -3.0 (0.0) / 4.0 (30.2); 2: 80.0 (59.4) / -3.0 (12.5) / 2.7 (12.5); 3: 81.8 (62.4) / -2.6 (12.9) / 4.3 (16.7); 4: 78.4 (48.6) / -4.0 (34.5) / 5.3 (10.3); en-es - 1: 65.8 (47.3) / 3.8 (36.8) / 1.9 (23.2); 2: 68.9 (53.1) / 0.6 (23.4) / 4.6 (21.3); 3: 71.1 (59.4) / 0.7 (15.4) / 6.7 (22.3); 4: 66.0 (45.6) / 4.2 (46.3) / -2.1 (15.0); en-he - 1: 63.9 (48.1) / -2.6 (14.9) / 23.8 (32.9); 2: 64.6 (53.7) / -1.8 (7.9) / 21.5 (37.8); 3: 62.8 (50.5) / -1.1 (10.3) / 26.9 (47.3); 4: 62.5 (46.6) / -2.0 (20.5) / 10.2 (24.5).)",
"In line 8, lattice rescoring with the non-converged model adapted to handcrafted data (line 4) likewise leaves general BLEU unchanged or slightly improved.",
"When lattice rescoring the WinoMT challenge set, 79%, 76% and 49% of the accuracy improvement is maintained on en-de, en-es and en-he respectively.",
"This corresponds to accuracy gains of up to 30% relative to the baselines with no general translation performance loss.",
"In line 9, lattice-rescoring with the converged model of line 5 limits BLEU degradation to 0.2 BLEU on all languages, while maintaining 85%, 82% and 58% of the WinoMT accuracy improvement from the converged model for the three language pairs.",
"Lattice rescoring with this model gives accuracy improvements over the baseline of 36%, 38% and 24% for en-de, en-es and en-he.",
"Rescoring en-he maintains a much smaller proportion of WinoMT accuracy improvement than en-de and en-es.",
"We believe this is because the en-he baseline is particularly weak, due to a small and non-diverse training set.",
"The baseline must produce some inflection of the correct entity before lattice rescoring can have an effect on gender bias.",
"Finally, in Table 5, we apply the gender inflection transducer to the commercial system translations listed in Table 2.",
"We find rescoring these lattices with our strongest debiasing model (line 5 of Table 4) substantially improves WinoMT accuracy for all systems and language pairs.",
"One interesting observation is that WinoMT accuracy after rescoring tends to fall in a fairly narrow range for each language relative to the performance range of the baseline systems.",
"For example, a 25.5% range in baseline en-de accuracy becomes a 3.6% range after rescoring.",
"(The raw commercial system translations are provided by the authors of Stanovsky et al. (2019) at https://github.)",
"This suggests that our rescoring approach is not limited as much by the bias level of the baseline system as by the gender-inflection transducer and the models used in rescoring.",
"Indeed, we emphasise that the large improvements reported in Table 5 do not require any knowledge of the commercial systems or the data they were trained on; we use only the translation hypotheses they produce and our own rescoring model and transducer.",
"We treat the presence of gender bias in NMT systems as a domain adaptation problem.",
"We demonstrate strong improvements under the WinoMT challenge set by adapting to tiny, handcrafted gender-balanced datasets for three language pairs.",
"While naive domain adaptation leads to catastrophic forgetting, we further demonstrate two approaches to limit this: EWC and a lattice rescoring approach.",
"Both allow debiasing while maintaining general translation performance.",
"Lattice rescoring, although a two-step procedure, allows far more debiasing and potentially no degradation, without requiring access to the original model.",
"We suggest small-domain adaptation as a more effective and efficient approach to debiasing machine translation than counterfactual data augmentation.",
"We do not claim to fix the bias problem in NMT, but demonstrate that bias can be reduced without degradation in overall translation quality.",
"This work was supported by EPSRC grants EP/M508007/1 and EP/N509620/1 and has been performed using resources provided by the Cambridge Tier-2 system operated by the University of Cambridge Research Computing Service 6 funded by EPSRC Tier-2 capital grant EP/P020259/1."
] | [
"abstain",
"abstain",
"method",
"result",
"method",
"abstain",
"objective",
"result",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"other",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"method",
"method",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"objective",
"other"
] |
[
"Automatic transfer of text between domains has become popular in recent times.",
"One of its aims is to preserve the semantic content of text being translated from source to target domain.",
"However, it does not explicitly maintain other attributes between the source and translated text, e.g., text length and descriptiveness.",
"Maintaining constraints in transfer has several downstream applications, including data augmentation and de-biasing.",
"We introduce a method for such constrained unsupervised text style transfer by introducing two complementary losses to the generative adversarial network (GAN) family of models.",
"Unlike the competing losses used in GANs, we introduce cooperative losses where the discriminator and the generator cooperate and reduce the same loss.",
"The first is a contrastive loss and the second is a classification loss aiming to regularize the latent space further and bring similar sentences across domains closer together.",
"We demonstrate that such training retains lexical, syntactic, and domain-specific constraints between domains for multiple benchmark datasets, including ones where more than one attribute change.",
"We show that the complementary cooperative losses improve text quality, according to both automated and human evaluation measures.",
"1 Introduction Modern neural network methods are capable of mapping data from one domain to another.",
"Prominent examples include translation of text between languages (Vaswani et al., 2017; Artetxe et al., 2018; Lample et al., 2017), emoji creation from human faces (Taigman et al., 2017), and stylistic transfer of speech (Yuan et al., 2021).",
"In Natural Language Processing (NLP), the umbrella term attribute transfer (Jin et al., 2020b) (or domain transfer) refers to similar methods. (The first two authors contributed equally; code at https://github.com/abhinavkashyap/dct. Figure 1: Illustrative example showing transfer of text from books to movies while maintaining constraints of identity. Plain text style transfer maps 'I really loved Murakami's book' to 'Loved the movie' (smaller length, no personal pronoun, no proper noun), whereas transfer with constraints yields 'I absolutely enjoyed Spielberg's direction' (similar length, similar personal pronoun, domain-appropriate proper noun).)",
"The aim is to maximally preserve the semantics of the source sentence (content) but change other properties (attributes), such as sentiment (Jin et al., 2020b), expertise (Cao et al., 2020), formality (Rao and Tetreault, 2018) or a combination of them (Subramanian et al., 2018).",
"Text style transfer , a popular form of attribute transfer, regards style as any attribute that changes between datasets (Jin et al., 2020a).",
"Building on the progress of supervised transfer models, recent works have focused on unsupervised style transfer that avoids costly annotation of parallel sentences.",
"However, models built using unsupervised methods perform poorly when compared to supervised (parallel) training (Artetxe et al., 2020).",
"These methods, while capable of achieving the target domain characteristics, often fail to maintain the invariant content.",
"Figure 1 illustrates one such example, where a sentence from the BOOKS domain is translated to the MOVIE domain.",
"While the translated sentence Loved the movie has correctly transferred the attribute (style), it does not have the same length, does not retain the personal noun ( I ), nor use a domain-appropriate proper noun.",
"Comparatively, the higher-fidelity transfer I absolutely enjoyed Spielberg's direction , maintains such constraints of identity , in addition to being apt.",
"While the literature primarily utilizes the term style transfer, we adopt the more general term attribute as suggested by Jin et al. (2020a).",
"Constraints of identity matter in commercial applications of text transfer, as enforcing them can help maintain brand identity when product descriptions are mapped from one commercial product to another.",
"They can also help in data augmentation for downstream domain adaptation NLP applications (Section 5).",
"Constraints of identity are explored extensively in the computer vision task of cross-domain image generation",
"(Taigman et al., 2017), but these issues, to the best of our knowledge, are unexplored in NLP.",
"In this paper, we improve unsupervised attribute transfer by enforcing invariances via explicit constraints.",
"Current methods in text attribute transfer lack mechanisms to explicitly enforce such constraints between the source and the transferred sentence.",
"To this end, we build upon unsupervised text style transfer work by introducing an additional explicit regularization component in the latent space of a GAN-based seq2seq network through two complementary losses (Section 3).",
"Unlike the adversarial losses in the GAN framework, our proposed losses cooperatively reduce the same objective.",
"The first loss is a contrastive loss (Le-Khac et al., 2020) that brings sentences that have similar constraints closer and pushes sentences that are dissimilar farther away.",
"The second loss is a classification loss that helps maintain the sentence identity via constraints from the latent vectors (Odena et al., 2017).",
"Our approach, while simple and aimed at maintaining constraints, improves the overall performance of the generation.",
"We demonstrate these gains over three datasets: YELP (Zhao et al., 2018b), IMDB (Dai et al., 2019) and POLITICAL (Prabhumoye et al., 2018), covering six constraints, including lexical, syntactic and domain-specific constraints.",
"The introduced cooperative losses satisfy the constraints more effectively compared against strong baselines.",
"Since multiple attributes can change between two domains (Subramanian et al., 2018), we test our method on one such dataset and show that the constraints of identity are maintained more effectively (Section 4.4.2).",
"To the best of our knowledge, our approach is the first to introduce cooperative losses in a GAN-like setup for NLG.",
"Task Setup: We consider two sets of sentences (or corpora) S = {x_src^1, x_src^2, ..., x_src^m} and T = {x_trg^1, x_trg^2, ..., x_trg^n}, as the source and target domains, respectively.",
"Each corpus, which we interpret as a domain, contains discernible attributes, ranging from sentiment (e.g., positive vs. negative), topics, and political slant (e.g., democratic vs. republican), to some combination of these (Li et al., 2018; Lample et al., 2019).",
"The overall task is to rewrite a piece of text s_i ∈ S to t_i ∈ T, such that the translation changes the attributes varying across the two domains but retains the remaining content.",
"While content retention is not explicitly defined in the literature, we design this new task of constrained unsupervised attribute transfer that assigns explicit constraints C = {c_1, c_2, ..., c_|C|} to be retained.",
"These constraints can be defined at various levels of a sentence: lexical, syntactic and domain-specific.",
"Adversarially Regularized Autoencoder ( ARAE ): To perform unsupervised attribute transfer, we consider seq2seq models that encode source sentences to a latent space and then decodes them to the target sentences.",
"ARAE s (Zhao et al., 2018b) are the auto-encoder variants of the Generative Adversarial Network (GAN) (Goodfellow et al., 2014) framework.",
"They learn smooth latent spaces (by imposing implicit priors) to ease the sampling of latent sentences.",
"ARAE s have been widely adopted in tasks like unsupervised text generation (Huang et al., 2020), topic modeling (Hu et al., 2020), among others, and form the backbone of our proposed model.",
"ARAE consists of an auto-encoder with a deterministic encoder enc : X → Z that encodes sentences into a latent space, i.e., z = enc(x) ∼ P_z, and a conditional decoder p(x | z) that generates a sentence given a latent code.",
"ARAE regularizes this latent space utilizing a GAN-like setup that includes an implicit prior obtained from a parameterized generator network enc~ : N(0, I) → Z~.",
"Here, enc~ maps a noise sample s ∼ N(0, I) to the corresponding prior latent code z~ = enc~(s) ∼ P_z~.",
"A critic crc : Z → R then learns to distinguish between real and generated samples, whereas both enc and enc~ are adversarially trained to fool the critic.",
"This results in a minimax optimization which implicitly minimizes the JS-divergence between the two distributions P_z and P_z~: min max E_{z ∼ P_z}[crc(z)] - E_{z~ ∼ P_z~}[crc(z~)] (1). The training involves three optimizations: i) reducing the auto-encoder loss L_ae, which tries to reconstruct the input, encourages copying behavior,",
"and maintains semantics similar to the original text (Eq. 2); ii) optimizing the critic's loss L_crc to distinguish between real and fake samples (Eq. 3); iii) training the encoder and generator with L_adv to fool the critic (Eq. 4): L_ae = -E_{z ∼ P_z}[log p(x | z)] (2); L_crc = -E_{z ∼ P_z}[crc(z)] + E_{z~ ∼ P_z~}[crc(z~)] (3); L_adv = E_{z ∼ P_z}[crc(z)] - E_{z~ ∼ P_z~}[crc(z~)] (4). 3 Proposed Method 3.1 Base Model (ARAE seq2seq): While ARAE is an auto-encoder that recreates its input x → x, our requirement is to translate sentences from one domain to another.",
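A minimal numerical sketch of the losses in Eqs. 2-4, assuming a WGAN-style scalar critic and toy arrays in place of real encoder outputs (the linear `critic` and all tensor shapes are our own illustrative choices, not the paper's implementation):

```python
import numpy as np

def critic(z, w):
    # Toy linear critic crc : Z -> R, scoring each latent code.
    return z @ w

def arae_losses(z_real, z_fake, w, log_p_x_given_z):
    """Batch estimates of L_ae (Eq. 2), L_crc (Eq. 3) and L_adv (Eq. 4)."""
    l_ae = -np.mean(log_p_x_given_z)                                  # reconstruction NLL
    l_crc = -np.mean(critic(z_real, w)) + np.mean(critic(z_fake, w))  # critic separates real/fake
    l_adv = -l_crc                                                    # encoders try to fool the critic
    return l_ae, l_crc, l_adv

rng = np.random.default_rng(0)
z_real, z_fake = rng.normal(size=(8, 4)), rng.normal(size=(8, 4))
l_ae, l_crc, l_adv = arae_losses(z_real, z_fake, rng.normal(size=4),
                                 log_p_x_given_z=rng.normal(size=8))
assert np.isclose(l_adv, -l_crc)  # Eq. 4 is Eq. 3 with the sign flipped
```

The sign relationship makes the min-max of Eq. 1 explicit: the critic minimizes L_crc while the encoders minimize its negation.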
"Given this, we modify the ARAE to a seq2seq variant such that we can translate input sentences between source and target domains; i.e., x_src → x_tgt and x_tgt → x_src.",
"To achieve this, we utilize enc to encode x_src and repurpose enc~ to encode x_tgt.",
"We obtain their latent codes (z, z~), which we name (z_s, z_t), i.e., z_s = enc(x_src) and z_t = enc~(x_tgt).",
"Next, to generate sentences, we consider two decoders x_src ∼ p(x | z) and x_tgt ∼ p(x | z).",
"Here, z can be either z_s or z_t based on whether we auto-encode (e.g., p(x | z_s = enc(x_src))) or translate (e.g., p(x | z_t = enc~(x_tgt))).",
"Unlike ARAE 's single decoder, we incorporate two decoders to enable bi-directional translation.",
"In the above process, instead of sampling s from a noise distribution like N(0, I) and passing it through a generator enc~, we feed enc~ text from the target domain T and add a decoder that decodes text in T.",
"This is inspired from Cycle-GAN (Zhu et al., 2017), where instead of matching the noise distribution N , we match the distribution of T .",
"In addition, we tie the weights of the encoders from both domains, so that the encoders learn to encode domain-agnostic information.",
"Tying encoder weights has also been used in unsupervised machine translation (Artetxe et al., 2018; Lample et al., 2017) and multiple other works (Mai et al., 2020; Huang et al., 2020; Hu et al., 2020).",
"While the latent space in ARAE seq2seq learns to match S and T sentences, there is no guarantee on translations maintaining the content.",
"This issue is particularly pronounced in unsupervised attribute transfer due to lack of parallel sentences between S and T .",
"To alleviate the issue, we propose to learn a structured latent space which embodies notions of our constraints in its embedded latent codes.",
"This ensures that instances with similar constraints are closer in the latent space.",
"In particular, we propose two types of optimization, self-supervised and discriminative, to better maintain the constraints.",
"We use contrastive representation learning to regularize the latent space, such that encoders bring two sentences sharing similar constraints closer together (positive pairs), and force dissimilar ones away (negative pairs).",
"For example, sentences of similar lengths (irrespective of their domains) should be closer together.",
"Among many self-supervised metric losses, such as the Triplet loss (Hoffer and Ailon, 2015) and the NT-Xent loss (Chen et al., 2020), we use one that is amenable to multiple positive instances (Khosla et al., 2020). (We tried separate encoders and decoders, but encoders with tied weights work best.) Algorithm 1 (ARAE seq2seq + CLF + CONTRA), for each training iteration: 1) Train the auto-encoders: sample x_src ∈ S, x_trg ∈ T; compute z_s = enc(x_src), z_t = enc(x_trg); backprop the auto-encoder losses L_ae. 2) Train the critic: sample x_src ∈ S, x_trg ∈ T; compute z_s, z_t and their critic representations z_s^crc = crc_hid(z_s), z_t^crc = crc_hid(z_t); set l_crc ← L_crc; 2a) critic co-op training: backprop l_crc + λ_1 L_con + λ_2 L_clf. 3) Adversarial training: sample x_src ∈ S, x_trg ∈ T; compute z_s, z_t; backprop L_adv; 3a) encoder co-op training: backprop λ_1 L_con + λ_2 L_clf.",
"Given a sentence s i S in a mini-batch of size B , we mine P positive sentences each from S and T that share the same constraints with s i .",
"This contrastive loss is given by: L_con = -(1/|P|) Σ_{j=1}^{|P|} log( e^{z_i · z_j} / Σ_{k=1, k ≠ i}^{B} e^{z_i · z_k} ) (5), where the z's are representations obtained from the encoders of S and T, or from the last layer of the critic crc.",
"Here, C_i denotes the set of constraints for sentence i; positive pairs share the same constraints.",
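Eq. 5 can be sketched in numpy as follows (our own minimal implementation of the multi-positive loss of Khosla et al. (2020); temperature scaling is omitted to mirror the formula as written):

```python
import numpy as np

def contrastive_loss(z, i, positives):
    """Multi-positive contrastive loss (Eq. 5) for anchor i over batch z of shape (B, d).

    positives: indices j of sentences sharing anchor i's constraints C_i.
    """
    sims = z @ z[i]                      # z_i . z_k for every k in the batch
    mask = np.ones(len(z), dtype=bool)
    mask[i] = False                      # the denominator sums over B \ {i}
    log_denom = np.log(np.sum(np.exp(sims[mask])))
    # -(1/|P|) * sum_j [ z_i . z_j - log sum_{k != i} exp(z_i . z_k) ]
    return -np.mean(sims[positives] - log_denom)

# With 4 orthonormal codes all cross-similarities are 0, so each positive
# contributes log(B - 1) = log(3).
assert np.isclose(contrastive_loss(np.eye(4), i=0, positives=[1, 2]), np.log(3))
```

Moving a positive's code onto the anchor lowers the loss, which is exactly the pressure that pulls same-constraint sentences together across domains.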
"Recently, Kang and Park (2020) introduced a cooperative loss in the adversarial setup, where contrastive losses are added to both the critic and generator of GANs.",
"Unlike the normal opposing losses of the generator and the critic, both of them cooperatively reduce the contrastive loss.",
"We follow a similar principle and add the loss to both the encoders and the critic (Algorithm 1, lines 12 and 18).",
"Contrastive learning might be sub-optimal if we do not mine good quality positive and negative",
"samples (Tian et al., 2020).",
"To address this, we propose another way to regularize the latent space.",
"Similar to ACGAN (Odena et al., 2017), we encourage the encoders and the critic to cooperatively reduce a classification loss.",
"We include a classifier D : Z → R^|C| that predicts the different constraints C of the sentences, and reduce the binary cross-entropy loss: L_clf = -(1/|C|) Σ_{c=1}^{|C|} [ y_c log σ(l_c) + (1 - y_c) log(1 - σ(l_c)) ],",
"where |C| is the number of constraints per sentence, σ is the sigmoid function, and l_c are the logits produced by the classifier for z_i.",
"As in contrastive loss, the z i can be produced by encoders of S , T or from the hidden layers of the critic.",
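The classification loss is thus a standard multi-label binary cross-entropy over the constraint logits (a minimal sketch in our own code; the paper's classifier D is a small MLP on the latent codes):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def clf_loss(logits, targets):
    """Mean binary cross-entropy over the |C| constraint predictions for one z_i.

    logits: l_c produced by the classifier for z_i; targets: 0/1 constraint labels.
    """
    p = sigmoid(logits)
    return -np.mean(targets * np.log(p) + (1 - targets) * np.log(1 - p))

# A confident, correct classifier drives the loss toward 0; a confident,
# wrong one is heavily penalised.
assert clf_loss(np.array([10.0, -10.0]), np.array([1.0, 0.0])) < 1e-3
assert clf_loss(np.array([-10.0]), np.array([1.0])) > 5.0
```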
"Datasets.",
"We use three datasets with single attribute changes: i ) Yelp Reviews : business reviews listed on Yelp, labeled as either a positive or negative sentiment.",
"ii ) IMDb Movie Reviews : consists of movie reviews (Dai et al., 2019) also labelled as positive or negative.",
"iii ) Political Slant : consists of Facebook posts from the politicians of the United States Senate and the House of Representatives (Prabhumoye et al., 2018), labeled with either democratic/republican slant.",
"We provide a summary of the dataset statistics in Table 1.",
"We include datasets of varied length and complexity.",
"Apart from having different topics, the IMDB dataset is more formal compared to the more colloquial YELP .",
"We fix the maximum vocabulary size for YELP, IMDB and POLITICAL at 30K, which is also the default maximum vocabulary size used by Zhao et al. (2018b).",
"ii) Syntactic: presence of personal pronouns (binarized to indicate the presence of a personal pronoun); number of adjectives (categorical up to 5); number of proper nouns (categorical up to 3); syntactic tree height (categorical up to 10).",
"iii) Domain-specific: number of domain-specific attributes (Li et al., 2018) (categorical up to 5).",
"Further, we label the sentence with a constraint-specific, catch-all label if the bounds are beyond what we mention above.",
"Since the distribution of the labels may be different, we report the F1 score on our constraints.",
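The capped categorical labelling with a catch-all bucket described above can be sketched as (a hypothetical helper of our own; the caps follow the numbers in the text, e.g. 5 for adjectives, 3 for proper nouns, 10 for tree height):

```python
def constraint_label(count, cap):
    """Map a raw count to a categorical label, folding everything beyond
    the cap into a single catch-all bucket."""
    return str(count) if count <= cap else f">{cap}"

assert constraint_label(2, cap=5) == "2"   # within the categorical range
assert constraint_label(7, cap=5) == ">5"  # catch-all label beyond the cap
```

Since every out-of-range count collapses into one bucket, label frequencies are skewed, which is why F1 rather than accuracy is reported on the constraints.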
"For the encoders, we use a one-layer LSTM network with 300 hidden dimensions for all the datasets.",
"For the critics and classification loss, we use a two-layer multi-layer perceptron with 100 hidden units.",
"Training Hyper-parameters: For all our experiments we set the learning rate of the auto-encoder ( lr ae ) to 1e-3 and ( lr disc ) to 1e-4.",
"The number of discriminator steps ( n dis ) is set to 5.",
"The Adam optimizer parameters are β_1 = 0.5",
"and β_2 = 0.9,",
"which ensures a more conservative optimization and is known to improve stability.",
"We also add a gradient penalty to the loss function of the discriminator that stabilizes training.",
"Most of the suggestions for stabilizing training are obtained from Arjovsky and Bottou (2017).",
"Automatic Evaluation: Our automatic evaluation considers the following three prominent criteria: i ) Semantic Similarity ( SIM ): Measured between source and translated target sentences using encoders (Wieting et al., 2019), instead of n-gram metrics like BLEU (Papineni et al., 2002) which have weak correlations with human judgments.",
"ii ) Transfer Accuracy ( ACC ): The transferred sentence should belong to the target domain and a classifier is trained to distinguish between the source and the target sentence.",
"We use fastText classifiers (Joulin et al., 2017) for every dataset.",
"We achieve classifier accuracy of",
"97.9 for YELP,",
"96.9 for IMDB,",
"and 97.1 for POLITICAL.",
"iii ) Fluency ( FL ): A transferred sentence should be grammatically correct.",
"We fine-tune a RoBERTa-large model on the COLA (Warstadt et al., 2018) dataset to indicate whether a sentence is linguistically acceptable.",
"Finally, we combine the three scores into an aggregate, following the criteria suggested by Krishna et al. (2020): AGG = (1/|S|) Σ_{s ∈ S} ACC(s) · SIM(s) · FL(s). Human Evaluation: We also perform an indicative human evaluation where we randomly sample 100 samples from each of the three datasets and hire three researchers to rate every sentence for FL, SIM and ACC on a 3-point scale (Krishna et al., 2020).",
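The aggregate follows directly from the formula (a sketch assuming per-sentence ACC, SIM and FL scores in [0, 1]):

```python
def agg(scores):
    """AGG = (1/|S|) * sum over sentences s of ACC(s) * SIM(s) * FL(s)."""
    return sum(a * s * f for a, s, f in scores) / len(scores)

# One perfect sentence and one that fails transfer (ACC = 0) average to 0.5:
# a single failed criterion zeroes out that sentence's entire contribution.
assert agg([(1.0, 1.0, 1.0), (0.0, 0.9, 0.8)]) == 0.5
```

The multiplicative form is what makes AGG punish the ACC/SIM trade-offs discussed in the results.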
"We compare ARAE seq2seq with the following baselines:",
"a) DRG : The Delete, Retrieve, Generate method that deletes domain specific attributes, retrieves a template and generates the target domain text (Li et al., 2018).",
"We use the stronger, entire system rather than the weaker DELETEONLY and RETRIEVEONLY baselines;",
"b) ARAE : Adversarially regularized autoencoders our system is based on (Zhao et al., 2018b);",
"c) ARAE seq2seq : Our model without the contrastive learning or cooperative classifier;",
"d) ARAE seq2seq + CONTRA : Our model with the contrastive learning;",
"e) ARAE seq2seq + CLF : Our model with the cooperative classifier;",
"f) ARAE seq2seq + CLF + CONTRA : Our model with both the cooperative losses.",
"The closest model to ours is that of Huang et al. (2020).",
"However, we were not able to reproduce the results.",
"4.4 Results 4.4.1 Overall Results: ARAE seq2seq + CONTRA and ARAE seq2seq + CLF consistently perform better than DRG and ARAE on the AGG score (Table 2).",
"The AGG for YELP is 20.6 (vs. 19.8), for IMDB it is 28.1 (vs. 19.9) and for POLITICAL 25.5 (vs. 11.0).",
"Although cooperative loss reduction aims to satisfy the constraints between two domains, our results show that further regularization of the latent space not only brings advantages in satisfying the constraints but also improves overall performance (Lavoie-Marchildon et al., 2020). (Footnote 4: repeated attempts to obtain the original source code failed.)",
"Effect of Cooperative Loss Reduction on ACC, FL and SIM: Across datasets, reducing the cooperative losses improves ACC, FL and SIM relative to ARAE.",
"Although DRG produces sentences with high SIM as most of the text from the original sentence is retained after the delete step, there is a large tradeoff with ACC resulting in low AGG scores.",
"Also, compared to ARAE , adding cooperative losses significantly increases the SIM , with the highest increase observed for POLITICAL .",
"The reasons for this could be two-fold: i ) since we mine positive sentences from a corpus that is grounded in real world events, most lexically-similar sentences may also be semantically similar (Guu et al., 2018), and ii ) since we tie the encoders from the source and target domain, we extract domain-agnostic information before generation, which retains content.",
"Fluency ( FL ) also improves over all datasets.",
"We hypothesize that reducing cooperative losses regularizes the latent space bringing fluent sentences closer together, enabling the decoder to produce semantically similar and linguistically acceptable sentences.",
"The improvement for POLITICAL is less; we find these source sentences themselves are less fluent and contain many U.S. political acronyms, and that our system produces many out-of-vocabulary words affecting fluency.",
"Nucleus Sampling: Our system achieves the highest AGG score with greedy decoding.",
"We also experiment with nucleus sampling (Holtzman et al., 2019) with different p values.",
"We report results only for p = 0.6",
"in Table 2, as it produced the best result.",
"With p =0.6, the results are more diverse, increasing ACC as expected.",
"However, we find that with higher values of p there is a trade-off with SIM, resulting in a lower AGG score overall, similar to Krishna et al. (2020).",
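Nucleus (top-p) sampling keeps the smallest set of tokens whose cumulative probability exceeds p and renormalizes (Holtzman et al., 2019); below is a minimal sketch of the filtering step in our own code, not the decoding implementation used in the paper:

```python
import numpy as np

def nucleus_filter(probs, p=0.6):
    """Zero out the low-probability tail outside the top-p nucleus and renormalize."""
    order = np.argsort(probs)[::-1]       # tokens sorted by descending probability
    cum = np.cumsum(probs[order])
    cutoff = np.searchsorted(cum, p) + 1  # include the token that crosses p
    keep = order[:cutoff]
    out = np.zeros_like(probs)
    out[keep] = probs[keep] / probs[keep].sum()
    return out

filtered = nucleus_filter(np.array([0.5, 0.3, 0.15, 0.05]), p=0.6)
assert np.isclose(filtered.sum(), 1.0)  # still a distribution
assert filtered[3] == 0.0               # the long tail is pruned
```

Larger p admits more of the tail, increasing diversity (which helps ACC) at the cost of SIM, consistent with the trade-off observed above.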
"Effect of the Number of Positives: The number of positive and negative samples used for contrastive learning (Eq. 5) has a significant effect on the overall performance (Khosla et al., 2020; Chen et al., 2020; Henaff, 2020).",
"Table 3 (rows |P| ∈ {1, 2, 5, 10}) shows the AGG scores on IMDB (for one of the runs) for different numbers of positives.",
"We find that AGG is the highest with 2 positives per sample as also used by Khosla et al. (2020).",
"Although increasing the number of negatives is beneficial for contrastive learning, when more than one positive example is available, using them brings further improvements (Khosla et al., 2020).",
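The multi-positive contrastive objective referenced above can be sketched as follows; this is a minimal pure-Python version of the supervised contrastive loss of Khosla et al. (2020), where each anchor averages over all positives sharing its label. The function name, embedding format, and temperature value are illustrative assumptions, not the paper's implementation.

```python
import math

def sup_con_loss(z, labels, tau=0.1):
    """Supervised contrastive loss over L2-normalized embeddings z:
    for each anchor i, average -log p(positive) over all positives
    P(i), with the denominator summing over all other samples."""
    n = len(z)
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    total, anchors = 0.0, 0
    for i in range(n):
        pos = [j for j in range(n) if j != i and labels[j] == labels[i]]
        if not pos:
            continue  # anchors without positives are skipped
        denom = sum(math.exp(dot(z[i], z[a]) / tau) for a in range(n) if a != i)
        total += -sum(math.log(math.exp(dot(z[i], z[p]) / tau) / denom)
                      for p in pos) / len(pos)
        anchors += 1
    return total / anchors
```

When embeddings of same-label samples are already well separated from the rest, the loss approaches zero; adding more positives per anchor changes the averaging in the numerator, which is the knob varied in Table 3.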
"Cooperative Losses are Important on Both the Generator and Critic: Table 3 shows the importance of adding the cooperative losses on the generator and critic.",
"First, we see that adding the cooperative losses on both the generator and the critic is crucial for the overall performance.",
"Adding the cooperative contrastive loss to both the generator and critic increases FL and ACC while maintaining similar levels of SIM, whereas adding the cooperative classification loss improves SIM, which shows the complementary nature of the losses.",
"Human Evaluation: We average the results and present them in Table 4.",
"DRG produces marginally better semantically similar sentences.",
"Compared to ARAE, our model performs well except on YELP.",
"This may be because we use nucleus sampling with p = 0.9, which optimizes for diversity rather than similarity.",
"On other metrics we perform on par or better than our competing systems.",
"(See Appendix B)",
"Qualitative Examples: Table 5 shows examples of the quality of transferred examples (see Appendix A for more).",
"Mistakes made by the model can be attributed to poor understanding of the original semantics, lack of diversity, and not producing attribute-specific words.",
"Figure 3 shows that the models with the cooperative losses significantly outperform DRG and ARAE in maintaining constraints.",
"Specifically the ARAE seq2seq + CLF model performs better than ARAE seq2seq + CONTRA .",
"One reason could be that finding appropriate positives and strong negatives can be problematic for contrastive learning.",
"On the other hand, the classifier's objective is simpler and forces the encoder to produce representations that satisfy the different constraints effectively.",
"A seemingly easy-to-maintain constraint is the length of the sentence.",
"However, seq2seq systems have difficulty maintaining appropriate lengths (Murray and Chiang, 2018).",
"With no additional regularization ARAE does not maintain the length as well as ARAE seq2seq + CLF .",
"On the other hand, compared to the lexical constraints, syntactic attributes like descriptiveness, tree height and domain specific constraints present challenges, with significantly lower F scores.",
"ARAE seq2seq + CLF produces significantly better results in maintaining them.",
"This shows that obtaining improvements on the overall AGG does not necessarily translate to producing outputs that satisfy constraints.",
"DRG maintains the proper noun for IMDB effectively, because it contains a wide variety of actor and movie names.",
"They are retained verbatim after the delete operation.",
"Multiple Attribute Datasets: To test whether our model can satisfy constraints across domains where multiple attributes change, we use the multi-attribute dataset released by Lample et al. (2019).",
"We chose ASIAN and MEXICAN as the two domains.",
"Each of these domains can have multiple attributes like positive and negative sentiment text, different gender attributions to sentences, etc.",
"We compare our ARAE seq2seq + CLF model with the ARAE seq2seq and ARAE in Figure 4.",
"The results are more pronounced in this case, with ARAE seq2seq + CLF having a clear advantage over ARAE seq2seq.",
"This shows that even with multiple attributes changing between domains, cooperatively reducing losses can satisfy different constraints more effectively.",
"Qualitative Examples: Table 6 shows examples of our model maintaining constraints compared to ARAE .",
"Sometimes, ARAE hallucinates and adds personal pronouns like my to the text even when there are no personal pronouns ( row 1).",
"Also, our model produces sentences where the number of proper nouns is retained (Chris Klein vs. Robert De Niro), whereas ARAE does not.",
"Cycle Consistency Loss:",
"a) In Latent Spaces Cycle consistency in latent spaces has been shown to improve word-level tasks, such as cross-lingual dictionary construction (Mohiuddin and Joty, 2019) and topic modeling (Hu et al., 2020).",
"A recent work by Huang et al. (2020) claims to improve unsupervised style transfer using such losses.",
"In our experiments, however, it did not result in any noticeable performance improvement (repeated attempts to obtain the original source code failed).",
"Given this, we hypothesize that cycle consistency might be too restrictive for sentence-level tasks.",
"b) Using Back-Translation Back-translation is another alternative to ensure semantic consistency between source and the target sentence (Prabhumoye et al., 2018; Artetxe et al., 2018; Lample et al., 2017).",
"However, in our case, since we are training an ARAE, it would involve an additional inference and auto-encoder training step, which is expensive; we defer exploring this to future work.",
"Using Transformers: We also replace our LSTM auto-encoders with both pre-trained and randomly initialized transformer encoder-decoders (Rothe et al., 2020).",
"Although we found an increase in the AGG , it was mostly because of very high SIM and very low ACC .",
"Reducing the number of layers and attention heads would still result in a large model that is prone to copying text.",
"This reveals a potential limitation of our method; training with transformers is left for future work.",
"Transferred sentences as Adversarial Examples: We demonstrate an important application of our proposed constrained transfer by considering them as adversarial examples for domain adaptation.",
"Domain Adversarial Neural Network (DANN) (Ganin et al., 2017) is an unsupervised domain adaptation method that improves performance of an end-task (e.g, sentiment analysis) on a target domain considering only supervised data from source domain.",
"We train DANN for sentiment analysis on the Amazon reviews dataset (He and McAuley, 2016) with DVD as the source and ELECTRONICS as the target domain.",
"Next, we train the best variant of ARAE seq2seq to transfer a separate set of DVD reviews to ELECTRONICS reviews and use them as adversarial examples to test the DANN model.",
"We find that the accuracy of DANN on the ELECTRONICS domain reduces by 3 points.",
"This shows the potential application of domain transferred sentences as adversarial examples.",
"Similar ideas have been tried for image style transfer (Xu et al., 2020), but they need more investigation in NLP.",
"Text attribute transfer has a vast literature (Jin et al., 2020a) with deep learning methods becoming popular.",
"The methods are either supervised (requiring parallel data) or unsupervised.",
"Supervised methods re-purpose Sequence to Sequence models used in machine translation to achieve the goals (Rao and Tetreault, 2018).",
"However, obtaining parallel data is cumbersome and thus unsupervised methods that consider pseudo-parallel data have become popular.",
"Disentanglement is the prevalent approach to unsupervised attribute transfer: attributes and content are separated in the latent space.",
"To disentangle the attributes, adversarial methods maximize the loss of a pre-trained attribute classifier (Li et al., 2020; Fu et al., 2018; Zhao et al., 2018a; John et al., 2019).",
"However, the literature has paid little attention to defining and preserving content.",
"Cycle consistency losses, which impose that the reconstruction from the target-style sentence should resemble the source sentence, are the most prevalent (Prabhumoye et al., 2018; Logeswaran et al., 2018; Dai et al., 2019; Huang et al., 2020; Yi et al., 2020).",
"However, this is expensive and non-differentiable, thus requiring reinforcement learning techniques to enforce it.",
"Our work defines the different constraints that should be preserved and adds simple differentiable contrastive learning losses to preserve them.",
"In recent times, text style transfer models are moving away from disentanglement approaches (Subramanian et al., 2018).",
"Recent works that use transformers for style transfer have also adopted this approach (Dai et al., 2019; Krishna et al., 2020).",
"Since each of DVD and ELECTRONICS contains positive and negative reviews, we test whether the transferred sentences maintain the appropriate sentiment and find the accuracy to be 79%.",
"However, these methods do not explicitly maintain the constraints between the two styles, which is the main aim of our work.",
"Work on text style transfer focuses on retaining content while changing the style of sentences, but does not maintain other desirable constraints.",
"We address this by introducing two cooperative losses to the GAN-inspired Adversarially Regularized Autoencoder (ARAE) that further regularize the latent space.",
"While satisfying the constraints, our method brings significant improvements in the overall score.",
"While we focus on simple constraints at the sentence and word level, future work can add phrase-level and more fine-grained constraints.",
"Potential future work may explore reinforcement learning losses to directly optimize the constraints.",
"We would like to thank the anonymous reviewers for their useful suggestions.",
"We would also like to acknowledge the support of the NExT research grant funds, supported by the National Research Foundation, Prime Minister's Office, Singapore under its IRC@SG Funding Initiative, and to gratefully acknowledge the support of NVIDIA Corporation with the donation of the GeForce GTX Titan X GPU used in this research.",
"The work is also supported by project no. T2MOE2008, titled CSK-NLP: Leveraging Commonsense Knowledge for NLP, awarded by Singapore's Ministry of Education under its Tier-2 grant scheme."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"result",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"result",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"method",
"result",
"abstain",
"result",
"abstain",
"abstain",
"result",
"method",
"result",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"abstain",
"objective",
"abstain",
"method",
"result",
"method",
"abstain",
"other",
"other",
"other",
"other"
] |
[
"Pre-trained language models have shown stellar performance in various downstream tasks.",
"But, this usually comes at the cost of high latency and computation, hindering their usage in resource-limited settings.",
"In this work, we propose a novel approach for reducing the computational cost of BERT with minimal loss in downstream performance.",
"Our method dynamically eliminates less contributing tokens through layers, resulting in shorter lengths and consequently lower computational cost.",
"To determine the importance of each token representation, we train a Contribution Predictor for each layer using a gradient-based saliency method.",
"Our experiments on several diverse classification tasks show speedups up to 22x during inference time without much sacrifice in performance.",
"We also validate the quality of the selected tokens in our method using human annotations in the ERASER benchmark.",
"In comparison to other widely used strategies for selecting important tokens, such as saliency and attention , our proposed method has a significantly lower false positive rate in generating rationales.",
"Our code is freely available at https://github.com/amodaresi/ AdapLeR .",
"While large-scale pre-trained language models exhibit remarkable performances on various NLP benchmarks, their excessive computational costs and high inference latency have limited their usage in resource-limited settings.",
"In this regard, there have been various attempts at improving the efficiency of BERT-based models (Devlin et al., 2019), including knowledge distillation (Hinton et al., 2015; Sanh et al., 2019; Sun et al., 2019, 2020; Jiao et al., 2020), quantization (Gong et al., 2014; Shen et al., 2020; Tambe et al., 2021), weight pruning (Han et al., 2016; He et al., 2017; Michel et al., 2019; Sanh et al., 2020), and progressive module replacing (Xu et al., 2020).",
"Despite providing significant reduction in model size, these techniques are generally static at inference time, i.e., they dedicate the same amount of computation to all inputs, irrespective of their difficulty.",
"A number of techniques have also been proposed in order to make efficiency enhancement sensitive to inputs.",
"Early exit mechanism (Schwartz et al., 2020b; Liao et al., 2021; Xin et al., 2020; Liu et al., 2020; Xin et al., 2021; Sun et al., 2021; Eyzaguirre et al., 2021) is a commonly used method in which each layer in the model is coupled with an intermediate classifier to predict the target label.",
"At inference, a halting condition is used to determine whether the model allows an example to exit without passing through all layers.",
"Various halting conditions have been proposed, including Shannon's entropy (Xin et al., 2020; Liu et al., 2020), softmax outputs with temperature calibration (Schwartz et al., 2020b), trained confidence predictors (Xin et al., 2021), or the number of agreements between predictions of intermediate classifiers (Zhou et al., 2020).",
"Most of these input-adaptive techniques compress the model from the depth perspective (i.e., reducing the number of involved encoder layers).",
"However, one can view compression from the width perspective (Goyal et al., 2020; Ye et al., 2021), i.e., reducing the length of hidden states.",
"This is particularly promising as recent analytical studies showed that there are redundant encoded information in token representations (Klafka and Ettinger, 2020; Ethayarajh, 2019).",
"Among these redundancies, some tokens carry more task-specific information than others (Mohebbi et al., 2021), suggesting that only these tokens could be considered through the model.",
"Moreover, in contrast to layer-wise pruning, token-level pruning does not come at the cost of reducing the model's capacity in complex reasoning (Sanh et al., 2019; Sun et al., 2019).",
"PoWER-BERT (Goyal et al., 2020) is one of the first such techniques which reduces inference time by eliminating redundant token representations through layers based on self-attention weights.",
"Several studies have followed (Kim and Cho, 2021; Wang et al., 2021); however, they usually optimize a single token elimination configuration across the entire dataset, resulting in a static model.",
"In addition, their token selection strategies are based on attention weights which can result in a suboptimal solution (Ye et al., 2021).",
"In this work, we introduce Adap tive Le ngth R eduction ( AdapLeR ).",
"Instead of relying on attention weights, our method trains a set of Contribution Predictors (CP) to estimate tokens' saliency scores at inference.",
"We show that this choice results in more reliable scores than attention weights in measuring tokens' contributions.",
"The most related study to ours is TR-BERT (Ye et al., 2021) which leverages reinforcement learning to develop an input-adaptive token selection policy network.",
"However, as pointed out by the authors, the problem has a large search space, making it difficult for RL to solve.",
"To mitigate this, they resorted to extra heuristics such as imitation learning (Hussein et al., 2017) for warming up the training of the policy network, action sampling for limiting the search space, and knowledge distillation for transferring knowledge from the intact backbone fine-tuned model.",
"All of these steps significantly increase the training cost.",
"Hence, they only perform token selection at two layers.",
"In contrast, we propose a simple but effective method to gradually eliminate tokens in each layer throughout the training phase using a soft-removal function which allows the model to be adaptable to various inputs in a batch-wise mode.",
"It is also worth noting that, in contrast to our approach, the above studies are based on top-k operations for identifying the k most important tokens during training or inference, which can be expensive without a specific hardware architecture (Wang et al., 2021).",
"In summary, our contributions are threefold: We couple a simple Contribution Predictor (CP) with each layer of the model to estimate tokens' contribution scores to eliminate redundant representations.",
"Instead of an instant token removal, we gradually mask out less contributing token representations by employing a novel soft-removal function.",
"We also show the superiority of our token selection strategy over the other widely used strategies by using human rationales.",
"Self-attention is a core component of the Transformers (Vaswani et al., 2017) which looks for the relation between different positions of a single sequence of token representations ( x 1 , ..., x n ) to build contextualized representations.",
"To this end, each input vector x i is multiplied by the corresponding trainable matrices Q , K , and V to respectively produce query ( q i ), key ( k i ), and value ( v i ) vectors.",
"To construct the output representation z i , a series of weights is computed by the dot product of q i with every k j in all time steps.",
"Before applying a softmax function, these values are divided by a scaling factor and then added to an attention mask vector m, which is zero for positions we wish to attend to and a large negative value (in practice, −10000) for padded tokens (Vaswani et al., 2017).",
"Mathematically, for a single attention head, the attention weight from token x_i to token x_j in the same input sequence can be written as: α_{i,j} = softmax_{x_j ∈ X}((q_i · k_j)/√d + m_j) (1) The time complexity is O(n²) given the dot products q_i · k_j over all pairs, where n is the input sequence length.",
"This impedes the usage of self-attention based models in low-resource settings.",
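The masked, scaled dot-product attention weights described above can be sketched for a single query; the function name and plain-list representation are illustrative assumptions, not a library API.

```python
import math

def attention_weights(q, keys, mask, d):
    """Attention weights for one query vector q: softmax over
    (q . k_j / sqrt(d) + m_j), where m_j is 0 for real tokens and a
    large negative value (e.g., -10000) for padded positions."""
    scores = [sum(a * b for a, b in zip(q, k)) / math.sqrt(d) + m
              for k, m in zip(keys, mask)]
    # Numerically stable softmax.
    mx = max(scores)
    exps = [math.exp(s - mx) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]
```

Note how the additive mask drives padded positions to essentially zero weight after the softmax, without any branching in the computation.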
"While self-attention is one of the most white-box components in transformer-based models, relying on raw attention weights as an explanation could be misleading given that they are not necessarily responsible for determining the contribution of each token in the final classifier's decision (Jain and Wallace, 2019; Serrano and Smith, 2019; Abnar and Zuidema, 2020).",
"This is based on the fact that raw attentions faithfully reflect only the local mixture of information in each layer and are unable to provide a global perspective of the information flow through the entire model (Pascual et al., 2021).",
"Gradient-based methods provide alternatives to attention weights for computing the importance of a specific input feature.",
"Despite having been widely utilized in other fields earlier (Ancona et al., 2018; Simonyan et al., 2013; Sundararajan et al., 2017; Smilkov et al., 2017), they have only recently become popular in NLP studies (Bastings and Fil-ippova, 2020; Li et al., 2016; Yuan et al., 2019).",
"These methods are based on computing the first-order derivative of the output logit y_c w.r.t. the input embedding h^0_i (initial hidden states), where c could be the true class label (to find the most important input features) or the predicted class (to interpret the model's behavior).",
"After taking the norm of output derivatives, we get sensitivity (Ancona et al., 2018), which indicates the changes in model's output with respect to the changes in specific input dimensions.",
"Instead, by multiplying gradients with input features, we arrive at gradient input (Bastings and Filippova, 2020), also known as saliency , which also considers the direction of input vectors to determine the most important tokens.",
"Since these scores are computed for each dimension of the embedding vectors, an aggregation method such as the L2 norm or mean is needed to produce one score per input token (Atanasova et al., 2020a): S_i = ‖(∂y_c/∂h^0_i) ⊙ h^0_i‖_2 (2)",
"As shown in Figure 1, our approach relies on dropping low-contributing tokens in each layer and passing only the more important ones to the next.",
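The gradient×input (saliency) aggregation can be sketched as follows, assuming the per-dimension gradients of the target logit w.r.t. each input embedding have already been computed by a framework's autograd; the function name and list-of-lists format are illustrative.

```python
import math

def saliency_scores(grads, embeds):
    """Gradient-x-input saliency: element-wise product of each token's
    gradient vector with its embedding, aggregated per token with an
    L2 norm to yield one score per input token."""
    return [math.sqrt(sum((g * e) ** 2 for g, e in zip(grad, emb)))
            for grad, emb in zip(grads, embeds)]
```

Taking the product with the input (rather than the gradient norm alone) also accounts for the direction and magnitude of the embedding itself, which is the distinction drawn above between sensitivity and saliency.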
"Therefore, one important step is to measure the importance of each token.",
"To this end, we opted for saliency scores which have been recently shown as a reliable criterion in measuring token's contributions (Bastings and Filippova, 2020; Pascual et al., 2021).",
"In Section 5.1 we will show results for a series of quantitative analyses that support this choice.",
"In what follows, we first describe how we estimate saliency scores at inference time using a set of Contribution Predictors (CPs) and then elaborate on how we leverage these predictors during inference (Section 3.2) and training (Section 3.3).",
"Computing gradients during inference is problematic as backpropagation computation prolongs inference time, which is contrary to our main goal.",
"To circumvent this, we simply add a CP after each layer in the model to estimate the contribution score S_i of each token representation.",
"The model then decides which tokens should be passed to the next layer based on the values of S_i.",
"The CP computes S_i for each token using an MLP followed by a softmax activation function.",
"We argue that, despite being limited in learning capacity, the MLP is sufficient for estimating scores that are more generalized and relevant than vanilla saliency values.",
"We will present a quantitative analysis on this topic in Section 5.",
"Most BERT-based models consist of L encoder layers.",
"The input sequence of n tokens is usually passed through an embedding layer to build the initial hidden states of the model, h^0.",
"Each encoder layer then produces the next hidden states from those of the previous layer: h^ℓ = Encoder^ℓ(h^{ℓ−1}) (3) In our approach, we eliminate less contributing token representations before delivering the hidden states to the next encoder.",
"Tokens are selected based on the contribution scores S^ℓ obtained from the CP of the corresponding layer ℓ.",
"As the sum of these scores is equal to one, a uniform level indicates that all tokens contribute equally to the prediction and should be retained.",
"On the other hand, the lower-scoring tokens could be viewed as unnecessary tokens if the contribution scores are concentrated only on a subset of tokens.",
"Given that the final classification head uses the last hidden state of the [CLS] token, we preserve this token's representation in all layers.",
"Despite preserving this, other tokens might be removed from a layer when [CLS] has a significantly higher estimated contribution score than the others.",
"Based on this intuition, we define a cutoff threshold based on the uniform level as δ = γ/n, with 0 < γ ≤ 1, to distinguish important tokens.",
"Tokens are considered important if their contribution score exceeds δ (a value equal to or smaller than the uniform score).",
"Intuitively, a larger γ provides a higher cutoff level, thereby dropping a larger number of tokens and yielding more speedup.",
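The thresholding step can be sketched as follows; the function name, the symbol gamma for the threshold multiplier, and the assumption that [CLS] sits at index 0 are illustrative choices, not the paper's code.

```python
def select_tokens(scores, gamma=1.0):
    """Keep tokens whose estimated contribution exceeds the cutoff
    delta = gamma / n (a fraction of the uniform level 1/n).
    Position 0 ([CLS]) is always retained for the classifier head."""
    n = len(scores)
    delta = gamma / n
    return [i for i, s in enumerate(scores) if i == 0 or s > delta]
```

Because the scores form a distribution summing to one, a near-uniform distribution keeps almost everything, while a peaked distribution (e.g., mass concentrated on [CLS]) prunes aggressively.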
"The value of γ determines the extent to which we can rely on the CP's estimations.",
"In case the estimations of the CP are deemed inaccurate, its impact can be reduced by lowering γ.",
"We train each layer's γ using an auxiliary training objective, which allows the model to adjust the cutoff value to control the speedup-performance tradeoff.",
"Also, since each input instance has a different computational path during the token removal process, the batch size at inference time should be equal to one (single-instance usage), similarly to other dynamic approaches (Zhou et al., 2020; Liu et al., 2020; Ye et al., 2021; Eyzaguirre et al., 2021; Xin et al., 2020).",
"Training consists of three phases: initial fine-tuning, saliency extraction, and adaptive length retraining.",
"In the first phase, we simply fine-tune the backbone model (BERT) on a given target task.",
"We then extract the saliencies of the three top-performing checkpoints from the fine-tuning process and compute their average to mitigate potential inconsistencies in the saliency scores (cf. Section 2.2).",
"The final step is to train a pre-trained model using an adaptive length reduction procedure.",
"In this phase, a non-linear function gradually fades out the representations throughout the training process.",
"Each CP is jointly trained with the rest of the model using the saliencies extracted in the previous phase alongside with the target task labels.",
"We also define a speedup tuning objective to determine the thresholds (via tuning ) to control the performance-speedup trade-off.",
"In the following, we elaborate on the procedure.",
"Soft-removal function.",
"During training, if tokens are immediately dropped similarly to the inference mode, the effect of dropping tokens cannot be captured using a gradient backpropagation procedure.",
"Using batch-wise training in this scenario will also be problematic as the structure will vary with each example.",
"Hence, inspired by the padding mechanism of self-attention models (Vaswani et al., 2017) we introduce a new procedure that gradually masks out less contributing token representations.",
"In each layer, after predicting the contribution scores, instead of instantly removing the token representations we accumulate a negative mask into the attention mask vector M using a soft-removal function: m̃_i(S_i) = β_adj (S_i − δ) − λ if S_i < δ, and λ (S_i − 1)/(1 − δ) if S_i ≥ δ (4) This function consists of two main zones (Figure 2).",
"In the first term, the less important tokens, with scores lower than the threshold δ, are assigned higher negative masking as they get more distant from δ.",
"The slope is determined by β_adj = η/δ, where η is a hyperparameter that is increased exponentially after each epoch (e.g., ×10 after finishing each epoch).",
"Increasing η makes the soft-removal function stronger and more decisive in masking the representations.",
"To avoid zero gradients during training, we define 0 < λ < 0.1 to construct a small negative slope (similar to the well-known Leaky ReLU of Maas et al. 2013) for those tokens with contribution scores above the threshold.",
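The soft-removal function can be sketched numerically; the piecewise form and symbol names (delta for the cutoff, eta for the sharpening hyperparameter, lam for the small slope) are a reconstruction from the description above, chosen so the two zones join continuously at the cutoff and reach zero at a score of one.

```python
def soft_removal(s, delta, eta, lam=0.05):
    """Soft-removal mask value for a contribution score s:
    below the cutoff delta, a steep negative slope eta/delta pushes
    the mask strongly negative; above it, a small slope lam keeps
    gradients non-zero (Leaky-ReLU-like). Continuous at s = delta."""
    if s < delta:
        return (eta / delta) * (s - delta) - lam
    return lam * (s - 1.0) / (1.0 - delta)
```

Raising eta over epochs makes the left branch steeper, so the soft mask gradually approximates the hard removal used at inference.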
"Consider a scenario in which δ sharply drops, causing most of the S_i to exceed the threshold.",
"In this case, the non-zero value of the second term of Equation 4 facilitates optimizing δ.",
"Training the Contribution Predictors.",
"The CPs are trained by an additional term based on the inclusive KL-divergence between the extracted saliencies S^ℓ and each layer's CP output Ŝ^ℓ.",
"The main training objective is the minimization of the following loss: L = L_CE + α L_CP (5) where α is a hyperparameter that specifies the amount of emphasis on the CP training loss: L_CP = Σ_{ℓ=0}^{L−1} (ℓ/L) D_KL(S^ℓ ‖ Ŝ^ℓ) = Σ_{ℓ=0}^{L−1} (ℓ/L) Σ_{i=1}^{N} S^ℓ_i log(S^ℓ_i / Ŝ^ℓ_i) (6) Since S is based on the input embeddings, the [CLS] token usually shows a low contribution, due to not having any contextualization in the input.",
"As we leverage the representation of the [CLS] token in the last layer for classification, this token acts as a pooler and gathers information about the context of the input.",
"In other words, the token can potentially have more contribution as it passes through the model.",
"To this end, we amplify the contribution score of [CLS] and renormalize the distribution S with a trainable parameter φ: S′_i = (φ S_1 · 1[i = 1] + S_i · 1[i > 1]) / (φ S_1 + Σ_{j=2}^{n} S_j) (7) By this procedure, the next objective (discussed in the next paragraph) will be capable of tuning the amount of pooling, consequently controlling the amount of speedup.",
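The [CLS] amplification and renormalization step can be sketched as follows; the function name and the choice of phi as the symbol for the trainable amplification factor are illustrative.

```python
def amplify_cls(s, phi):
    """Amplify the [CLS] score (index 0) by a factor phi and
    renormalize so the scores again sum to one."""
    z = phi * s[0] + sum(s[1:])
    return [phi * s[0] / z] + [x / z for x in s[1:]]
```

With phi > 1, mass shifts toward [CLS] and away from the other tokens, which in turn lets the thresholding step prune more of them.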
"A larger φ pushes the CPs to shift the contribution towards the [CLS] token, to gather most of the task-specific information and to avoid carrying redundant tokens through the model.",
"Speedup Tuning.",
"In the speedup tuning process, we combine the cross-entropy loss of the target classification task with a length loss which is the expected number of unmasked token representations in all layers.",
"Considering that we have a non-positive and continuous attention mask M, the length loss of a single layer is the summation of the exponentials of the mask values, exp(m̃_i), mapping the masking range (−∞, 0] to a [0 (fully masked/removed), 1 (fully retained)] bound.",
"L_SPD./PERF. = L_CE + L_LENGTH, where L_LENGTH = Σ_{l=1}^{L} Σ_{i=1}^{n} exp(m̃^l_i) (8)",
"Equation 8 demonstrates how the length loss is computed inside the model and how it is added to the main classification loss.",
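The length loss can be sketched as follows; the function name and nested-list mask layout are illustrative assumptions.

```python
import math

def length_loss(masks):
    """Expected number of retained token representations across all
    layers: sum exp(m) over every mask value, so m = 0 contributes 1
    (fully retained) and a very negative m contributes ~0 (removed)."""
    return sum(math.exp(m) for layer in masks for m in layer)
```

Minimizing this quantity alongside the cross-entropy loss pressures the model toward masks that retain fewer tokens, realizing the speedup-performance trade-off.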
"During training, we assign a separate optimization process that tunes γ and φ to adjust the thresholds and the amount of [CLS] pooling alongside the CP training (since γ is not in the computational DAG, we employed a dummy variable inside the model).",
"The reason this objective is treated as a separate problem, instead of being merged with the previous one, is that in the latter case the CPs could be influenced by the length loss and try to manipulate the contribution scores of some tokens regardless of their real influence.",
"In other words, the first objective is to solve the task and make it explainable with the CPs, and the secondary objective builds the speedup by tuning the threshold levels and the amount of pooling in each layer.",
"To verify the effectiveness of AdapLeR on inference speedup, we selected eight various text classification datasets.",
"In order to incorporate a variety of tasks, we utilized SST-2 (Socher et al., 2013) and IMDB (Maas et al., 2011) for sentiment, MRPC (Dolan and Brockett, 2005) for paraphrase, AG's News (Zhang et al., 2015) for topic classification, DBpedia (Lehmann et al., 2015) for knowledge extraction, MNLI (Williams et al., 2018) for NLI,",
"QNLI (Rajpurkar et al., 2016) for question answering, and HateXplain (Mathew et al., 2021) for hate speech (see the statistics of the datasets in Table 5 in the Appendix).",
"Evaluations are based on the test split of each dataset.",
"For those datasets that are in the GLUE Benchmark (Wang et al., 2018), test results were acquired by submitting the test predictions to the evaluation server.",
"As our baseline, we report results for the pre-trained BERT model (base-uncased) (Devlin et al., 2019) which is also the backbone of AdapLeR.",
"We also compare against three other approaches: DistilBERT (uncased) (Sanh et al., 2019) as a static compression method, and PoWER-BERT and TR-BERT as two strong length reduction methods (cf. Sec. 1).",
"We used the provided implementations and suggested hyperparameters to train these baselines.",
"To fine-tune the backbone model, we used the same hyperparameters over all tasks (see Section D for details).",
"The backbone model and our model implementation are based on HuggingFace's Transformers library (Wolf et al., 2020).",
"Trainings and evaluations were conducted on a dual 2080Ti 11GB GPU machine with multiple runs.",
"Hyperparameter Selection.",
"Overall, we introduced four hyperparameters which are involved in the training process.",
"Among these, two are the primary terms that have considerable effects on AdapLeR's downstream performance and speedup.",
"This makes our approach comparable to existing techniques (Goyal et al., 2020; Ye et al., 2021) which usually have two or three hyperparameters adjusted per task.",
"We used grid search to find the optimal values for these two terms, while keeping the other hyperparameters constant over all datasets.",
"See the statistics of the datasets in Table 5 in the Appendix.",
"Since some of the datasets were not used in the original works, we had to search for hyperparameters within the suggested ranges.",
"Note that two of the hyperparameters are trainable terms that are tuned by the model during training.",
"Hyperparameter selection is further discussed in Section D. FLOPs Computation.",
"We followed Ye et al. (2021) and Liu et al. (2020) and measured computational complexity in terms of FLOPs, i.e., the number of floating-point operations (FLOPs) in a single inference procedure.",
"This allows us to assess models' speedups independently of their operating environment (e.g., CPU/GPU).",
"The total FLOPs of a given model is a summation of the measured FLOPs over all test examples.",
"Then, a model's speedup can be defined as the total FLOPs measured on BERT (our baseline) divided by the corresponding model's total FLOPs.",
"To have a fair comparison, we also computed FLOPs for PoWER-BERT in a single instance mode, as described in Section C. 4.3 Results. Table 1 shows performance and speedup for AdapLeR and other comparison models across eight different datasets.",
"While preserving the same level of performance, AdapLeR outperforms other techniques in terms of speedup across all tasks (ranging from +0.2x to +7.4x compared to the best model in each dataset).",
"It is noteworthy that the results also reveal some form of dependency on the type of task.",
"Some tasks may need a lesser degree of contextualism during inference and could be classified using only a fraction of the input tokens.",
"For instance, in AG's News, the topic of a sentence might be identifiable from a single token (e.g., soccer → Topic: Sports; see Figure 6 in the Appendix for an example).",
"PoWER-BERT adopts attention weights in its token selection, which requires at least one layer of computation to be determined. (Figure 3: Accuracy-speedup trade-off curves for AdapLeR and two other state-of-the-art reduction methods, TR-BERT (TR) and PoWER-BERT (PoWER), on SST-2 and HateXplain.)",
"TR-BERT applies token elimination only in two layers to reduce the training search space.",
"In contrast, our procedure performs token elimination for all layers of the model, enabling a more effective removal of redundant tokens.",
"On the other hand, we observe that TR-BERT and PoWER-BERT lack any speedup gains for tasks such as QNLI, MNLI, and MRPC which need a higher degree of contextualism during inference.",
"However, AdapLeR can offer some speedups even for these tasks.",
"Speedup-Performance Tradeoff.",
"To provide a closer look at the efficiency of AdapLeR in comparison with the other state-of-the-art length reduction methods, we illustrate speedup-accuracy curves in Figure 3.",
"We provide these curves for two tasks in which other length reduction methods show speedups comparable to AdapLeR.",
"For each curve, the points were obtained by tuning the most influential hyperparameters of the corresponding model.",
"As we can see, AdapLeR significantly outperforms the other two approaches in both tasks.",
"An interesting observation here is that the curves for TR-BERT and AdapLeR are much higher than that of PoWER-BERT.",
"This can be attributed to the input-adaptive procedure employed by the former two methods for determining the number of reduced tokens (whereas PoWER-BERT adopts a fixed retention configuration in token elimination).",
"In this section, we first conduct an experiment to support our choice of saliency scores as a supervision in measuring the importance of token representations.",
"Next, we evaluate the behavior of Contribution Predictors in identifying the most important tokens in the AdapLeR.",
"A natural question that arises when dealing with token pruning is that of importance measure : what is the most appropriate criterion for assessing the relative importance of tokens within a sentence?",
"We resort to human rationale as a reliable upper bound for measuring token importance.",
"To this end, we used the ERASER benchmark (DeYoung et al., 2020), which contains multiple tasks for which important spans of the input text have been highlighted as supporting evidence (aka rationales) by humans.",
"Among the tasks in the benchmark, we opted for two diverse classification tasks: Movie reviews (Zaidan and Eisner, 2008) and MultiRC (Khashabi et al., 2018).",
"In the former task, the model predicts the sentiment of the passage.",
"The latter contains a passage, a question, and multiple candidate answers, and is cast as a binary classification task over passage/question/answer triplets in the ERASER benchmark.",
"In order to verify the reliability of human rationales, we fine-tuned BERT based on the rationales only, i.e., by excluding those tokens that are not highlighted as being important in the input.",
"In Table 2, the first two rows show the performance of BERT on the two tasks with full input and with human rationales only.",
"(Figure 4: Illustration of the contribution scores obtained by CPs in three different layers of the model for two input examples, from SST-2 (sentiment; label: Negative) and QNLI (question-answering NLI; label: Entailment).)",
"We see that fine-tuning merely on rationales not only yields less computation cost, but also results in better performance when compared with the full input setting.",
"Obviously, human annotations are not available for a whole range of downstream NLP tasks; therefore, this criterion is infeasible in practice and can only be viewed as an upper bound for evaluating different strategies in measuring token importance.",
"We investigated the effectiveness of saliency and self-attention weights as two commonly used strategies for measuring the importance of tokens in pre-trained language models.",
"To compute these, we first fine-tuned BERT with all tokens in the input for a given target task.",
"We then obtained saliency scores with respect to the tokens in the input embedding layer.",
"This brings about two advantages.",
"Firstly, representations in the embedding layer are non-contextualized, allowing us to measure the importance of each token independently from the others.",
"Secondly, the backpropagation of gradients through layers to the beginning of the model provides us with aggregated values for the relative importance of each token based on the entire model.",
"Similarly, we aggregated the self-attention weights across all layers of the model using a post-processed variant of attentions called attention rollout (Abnar and Zuidema, 2020), a popular technique in which the attention weight matrix in each layer is multiplied with the preceding ones to form aggregated attention values.",
"To assign an importance score to each token, we examined two different interpretations of attention weights.",
"The first strategy is the one adopted by PoWER-BERT (Goyal et al., 2020) in which for each token we accumulate attention values from other tokens.",
"Additionally, we measured how much the [CLS] token attends to each token in the sentence, a strategy which has been widely used in interpretability studies around BERT (Abnar and Zuidema, 2020; Chrysostomou and Aletras, 2021; Jain et al., 2020, inter alia ).",
"For a fair comparison, for each sentence in the test set, we selected the top-k most salient and most attended words, with k being the number of words annotated as rationales.",
"Results in Table 2 show that fine-tuning on the most salient tokens outperforms that based on the most attended tokens.",
"This denotes that saliency is a better indicator for the importance of tokens.",
"Nonetheless, recent length reduction techniques (Goyal et al., 2020; Kim and Cho, 2021; Wang et al., 2021) have mostly adopted attention weights as their criterion for selecting important tokens.",
"Computing these weights is convenient as they are already computed during the forward pass, whereas computing saliency requires an additional backpropagation step.",
"Note that in our approach, saliency scores are easily estimated at inference time by the trained CPs.",
"In this section, we validate our Contribution Predictors in selecting the most contributing tokens.",
"Figure 4 illustrates two examples from the SST-2 and QNLI datasets in which CPs identify and gradually drop the irrelevant tokens through layers, finally focusing mostly on the most important token representations; pedestrian (adjective) in SST-2 and tesla coil in the passage part of QNLI (both of which are highly aligned with human rationale).",
"(Figure 5: Agreement with human rationales in terms of mean Average Precision and False Positive Rate for the Contribution Predictor (CP) and three alternative techniques: saliency, attention, and attention rollout.)",
"In order to quantify the extent to which AdapLeR's CPs can preserve rationales in an unsupervised manner, without requiring direct human annotations, we carried out the following experiment.",
"To investigate the effectiveness of trained CPs in predicting human rationales we computed the output scores of CPs in AdapLeR for each token representation in each layer.",
"We also fine-tuned a BERT model on the Movie Review dataset and computed layer-wise raw attention, attention rollout, and saliency scores for each token representation.",
"Since human rationales are annotated at the word level, we sum the scores across tokens corresponding to each word to arrive at word-level importance scores.",
"In addition to these soft scores, we used a uniform-level threshold (i.e., 1/n) to obtain binary scores indicating the tokens selected in each layer.",
"As for evaluation, we used the Average Precision (AP) and False Positive Rate (FPR) metrics by comparing the remaining tokens to the human rationale annotations.",
"The first metric measures whether the model assigns higher continuous scores to those tokens that are annotated by humans as rationales.",
"The intuition behind the second metric is to capture how many irrelevant tokens are selected by the model to be passed to subsequent layers.",
"We used soft scores for computing AP and binary scores for computing FPR.",
"Figure 5 shows the agreement between human rationales and the selected tokens based on the two metrics.",
"In comparison with the other widely used strategies for selecting important tokens, such as saliency and attention, our CPs have a significantly lower false positive rate in preserving rationales through layers.",
"Despite having similar FPRs at the final layer, CP is preferable to attention in that it can better identify rationales at the earlier layers, allowing the model to combine the most relevant token representations when building the final one.",
"This in turn results in better performance, as was also shown in the previous experiment in Section 5.2.",
"Also, we see that the mAP curve for saliency is consistently higher than those of the other strategies in terms of alignment with human rationales, which supports our choice of saliency as a measure of token importance.",
"Finally, we note that there is a line of research that attempts at guiding models to perform humanlike reasoning by training rationale generation simultaneously with the target task that requires human annotation (Atanasova et al., 2020b; Zhao et al., 2020; Li et al., 2018).",
"As a by-product of the contribution estimation process, our trained CPs are able to generate these rationales at inference without the need for human-generated annotations.",
"In this paper, we introduced AdapLeR, a novel method that accelerates inference by dynamically identifying and dropping less contributing token representations through layers of BERT-based models.",
"Specifically, AdapLeR accomplishes this by training a set of Contribution Predictors based on saliencies extracted from a fine-tuned model and applying a gradual masking technique to simulate input-adaptive token removal during training.",
"Empirical results on eight diverse text classification tasks show considerable improvements over existing methods.",
"Furthermore, we demonstrated that contribution predictors generate rationales that are highly in line with those manually specified by humans.",
"As future work, we aim to apply our technique to more tasks and see whether it can be adapted to those tasks that require all token representations to be present in the final layer of the model (e.g., question answering).",
"Additionally, combining our width-based strategy with a depth-based one (e.g., early exiting) might potentially yield greater efficiency, something we plan to pursue as future work.",
"Using our proposed method, pre-trained language models can use fewer FLOPs, reducing energy use and carbon emissions (Schwartz et al., 2020a)."
] | [
"abstain",
"abstain",
"objective",
"objective",
"method",
"result",
"method",
"objective",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"objective",
"result",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"method",
"other",
"other",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"other",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"objective",
"method",
"abstain"
] |
[
"Natural language processing models often have to make predictions on text data that evolves over time as a result of changes in language use or the information described in the text.",
"However, evaluation results on existing data sets are seldom reported by taking the timestamp of the document into account.",
"We analyze and propose methods that make better use of temporally-diverse training data, with a focus on the task of named entity recognition.",
"To support these experiments, we introduce a novel data set of English tweets annotated with named entities.",
"We empirically demonstrate the effect of temporal drift on performance, and how the temporal information of documents can be used to obtain better models compared to those that disregard temporal information.",
"Our analysis gives insights into why this information is useful, in the hope of informing potential avenues of improvement for named entity recognition as well as other NLP tasks under similar experimental setups.",
"Natural language processing models are now deployed on a large scale in many applications and used to drive automatic analyses or for making predictions.",
"The usual setup is that these models are trained and evaluated on the data available at model building time, but are used to make inferences on data coming in at a future time, making models susceptible to data drift.",
"The data distribution of the test set used to measure the model's performance after training may be different from the distribution of data from future time periods (Huang and Paul, 2018).",
"This temporal drift in data often results in lower performance during inference.",
"Drift is especially prevalent in information extraction tasks (footnote: work done during an internship at Bloomberg; our data set is available at https://github.com/shrutirij/temporal-twitter-corpus),",
"such as named entity recognition (NER), where the context and the target entities differ across time as a result of changes in language use or the events being discussed (Derczynski et al., 2016).",
"Despite its intuitive value, there has been little research on using the temporal information contained in text documents to inform modeling of a task (Huang and Paul, 2018; He et al., 2018), and no past research on modeling sequence labeling tasks in particular.",
"Since sequence labeling models are currently trained and evaluated by randomly splitting the available data, performance is measured in an artificially temporal drift-free scenario that is not realistic or similar to how models are used in practice (Dredze et al., 2010).",
"When splitting the available training data temporally and testing on the data from the most recent time period, we formulate the following hypotheses:",
"a) models trained on data from a closer time to the test set obtain better results, assuming the same model and data size are used;",
"b) models trained on the combined data from all time periods outperform models trained on subsets of the data, as more data usually leads to better models.",
"In these cases, the commonly used setup of pooling all the data for training while disregarding temporal information may lead to sub-optimal performance.",
"In this paper, we study the temporal aspects of text data, focusing on the information extraction task of named entity recognition in the Twitter domain.",
"We make the following contributions:",
"a) a new data set for Twitter Named Entity Recognition consisting of 12,000 English tweets evenly distributed across six years;",
"b) experimental results that demonstrate the performance drift of models trained on data from different time periods and tested on data from a future interval;",
"c) extensive analysis of the data that highlights temporal drift in the context of named entities and illustrates future modeling opportunities;",
"d) simple extensions to state-of-the-art NER models that leverage temporal information associated with the training data, which results in an improvement in F1 score over standard pooling methods.",
"Language change is a popular topic of research in linguistics (Stephen, 1962).",
"In natural language processing, using data from online platforms such as Twitter or discussion fora, language change and adoption have been studied at the community level (Danescu-Niculescu-Mizil et al., 2013; Eisenstein et al., 2014; Goel et al., 2016; Stewart and Eisenstein, 2018) and at the individual level (Zhang et al., 2019).",
"In some cases, the senses of the same word are known to shift over time (Wijaya and Yeniterzi, 2011), and modeling such changes in word semantics has been explored using diachronic word embeddings (Kulkarni et al., 2015; Hamilton et al., 2016; Kutuzov et al., 2018).",
"Temporal information has been used to create topic models of better quality, usually by adding smoothing properties (Blei and Lafferty, 2006; Wang et al., 2008).",
"For text classification, the temporal periodicity of Twitter hashtags was modeled in Preotiuc-Pietro and Cohn (2013) and used as a prior for text classification models for predicting hashtags on future data, which resulted in performance improvements.",
"Most similar to our experimental setup, Huang and Paul (2018) study the impact of temporal data splits in text classification, finding that performance worsens on data from future periods, and use standard domain adaptation techniques to incorporate time information and improve results.",
"He et al. (2018) introduce a method for training neural networks on data from multiple time intervals while enforcing temporal smoothness between representations.",
"Temporal information has also been used to improve named entity disambiguation on a data set of historical documents (Agarwal et al., 2018).",
"Finally, Huang and Paul (2019) present a model that uses diachronic word embeddings combined with a method inspired by domain adaptation to improve document classification.",
"A related, but distinct, task built on the assumption of language change over time is the automatic prediction of the date on which a document was written (Kanhabua and Nørvåg, 2008; Chambers, 2012; Niculae et al., 2014).",
"Named entity recognition (NER) is the task of identifying entities such as organizations, persons, and locations in natural language text.",
"NER has been a well-studied NLP task over the past 20 years (Nadeau and Sekine, 2007; Yadav and Bethard, 2018) and is a key information extraction task, as it is used in various downstream applications such as named entity linking (Cucerzan, 2007), relation extraction (Culotta and Sorensen, 2004) and question answering (Krishnamurthy and Mitchell, 2015).",
"On social media text, such as tweets, the performance lags far behind that of standard news corpora (Derczynski et al., 2015b), with data drift as one of the suggested causes (Derczynski et al., 2015a).",
"Agarwal et al. (2020) show that NER models decay substantially on entity mentions from a different distribution than those seen in training.",
"NER systems struggle to generalize over diverse genres with limited training data (Augenstein et al., 2017).",
"Domain adaptation for NER (Chiticariu et al., 2010; Lin and Lu, 2018; Wang et al., 2020) is related to our task of improving performance over temporal drift, as the data from a future time period can be considered as a target domain with an unknown distribution.",
"However, the relationship between domains is implied from temporal similarity, and temporal information is very fine-grained in contrast to the standard single source to single target domain adaptation setup.",
"In this paper, we focus on the task of named entity recognition on English tweets as a case study for our hypotheses and analysis regarding model drift with time.",
"Twitter data represents an ideal testbed for our analysis as it contains readily accessible timestamp information for each tweet.",
"Further, users on social media post about current events, which are likely to include entities that change over time.",
"Social media also reflects changes in language use more timely than other sources of data (e.g., newswire), resulting in the potentially rapid evolution of the contexts and ways in which named entities are discussed in natural language.",
"This drift in Twitter data has previously been demonstrated qualitatively in the context of named entity recognition (Derczynski et al., 2015a).",
"(Table 1: Number of tweets from each year in the BTC data set and in the data set introduced in this paper. Broad Twitter Corpus: 2010: 5, 2011: 127, 2012: 2,414, 2013: 275, 2014: 6,022; current data set: 2,000 per year from 2014 to 2019.)",
"Previous research has introduced data sets of tweets annotated with named entities, including the data sets from Finin et al. (2010), Ritter et al. (2011), Liu et al. (2011), the WNUT-17 Corpus (Derczynski et al., 2017), the Microposts NEEL Challenge Corpora (Rowe et al., 2013; Cano et al., 2014; Rizzo et al., 2015; Cano et al., 2016) and the Broad Twitter Corpus (Derczynski et al., 2016).",
"However, these data sets usually consist of tweets collected within a limited time period, making them unsuitable for our proposed work.",
"Of note is the Broad Twitter Corpus (Derczynski et al., 2016), which contains tweets collected over several years, from 2009 to 2014.",
"However, the majority of tweets are from either 2012 or 2014, with fewer than 300 tweets from the other years (details in Table 1).",
"Further, combining existing data sets is challenging, because of the different entity tagging schemes, annotation guidelines and sampling strategies used.",
"Therefore, we create a new collection of tweets annotated with named entities that attempts to alleviate the lack of temporal diversity in existing Twitter data sets as well as provide us with a suitable experimental setup to study our research questions about temporal entity drift and NER model performance.",
"In this section, we present the details of our data set, including the collection and annotation methodology, as well as an analysis of the named entity mentions in the corpus.",
"The data set can be downloaded at https://github.com/shrutirij/temporal-twitter-corpus.",
"The primary goal of creating a new data set is ensuring wide-enough temporal diversity for our work as well as future directions that can leverage timestamp information.",
"We use the public Twitter Search API (https://developer.twitter.com/en/docs/tweets/search/overview) to sample tweets spanning six years: 2014, 2015, 2016, 2017, 2018 and 2019.",
"We aim to ensure that the data set is representative of multiple English-speaking locales and a variety of topics, as well as to make it comparable to existing data sets.",
"Thus, we follow the same sampling strategy for corpus diversity used by the creators of the Broad Twitter Corpus (Derczynski et al., 2016).",
"Specifically, we collect tweets across six English-speaking regions (the United States, the United Kingdom, New Zealand, Ireland, Canada, and Australia), and focus on two contrasting sets of Twitter handles:",
"a) the twitterati, i.e., individuals from an array of domains including musicians, journalists and celebrities;",
"b) Twitter accounts for mainstream news organizations, covering both larger networks like CNN and ABC, as well as local news outlets.",
"The Twitter handles correspond to users from the segments F and G of the Broad Twitter Corpus (Derczynski et al., 2016).",
"Overall, to maintain uniformity across time, we annotated 2,000 tweets for each year from 2014 to 2019 by randomly subsampling tweets from each year.",
"This resulted in a temporally varied and balanced corpus of 12,000 tweets.",
"Table 1 illustrates the temporal data distribution of our data set, as compared to the Broad Twitter Corpus.",
"In annotating our data with entities, we use a tagset consisting of three entity classes Organizations (ORG), Persons (PER), and Locations (LOC).",
"This scheme is consistent with some existing data sets for the task (Finin et al., 2010; Derczynski et al., 2016), overlapping with the majority of other general NER datasets in the social media domain (Liu et al., 2011; Rowe et al., 2013) and beyond (Tjong Kim Sang and De Meulder, 2003a).",
"We use the annotation guidelines used in standard NER data sets (Tjong Kim Sang and De Meulder, 2003a), supplemented with examples that are specific to Twitter data.",
"Further, we observe in other data sets that usernames are some of the most frequent tokens classified as entities (Ritter et al., 2011; Derczynski et al., 2016).",
"For our experiments, we consider all usernames as non-entities, as otherwise, identifying these using character features would be trivial, and typing entities would be similar to the task of Twitter handle classification (McCorriston et al., 2015; Wood-Doughty et al., 2018), which is outside the scope of the current paper.",
"We preprocess the data set by normalizing URLs, usernames, and Twitter-specific tokens (e.g., RT).",
"We leave hashtags intact as these are often used as words in the context of the tweet, and can be or contain named entities.",
"We use Twokenizer (O'Connor et al., 2010), a Twitter-specific tokenizer to split the tweets into tokens.",
"To limit the impact of imperfect tokenization on the performance of the NER models, especially in the case of hashtags containing multiple tokens (Maddela et al., 2019), we expanded sub-token annotations to their closest matching token.",
"If multiple sub-token entity annotations match the same token, then we select the label of the first sub-entity in order of appearance.",
"The data was annotated by multiple annotators that have experience with named entity recognition annotation tasks.",
"Specifically, we used 15 annotators in total, with two annotations per tweet.",
"The inter-annotator agreement is 78.34% on full tweets (same entity types and spans).",
"If the annotators disagree on a tweet in their tagging, we adjudicate in favor of the annotator with the highest confidence on the task, as judged by measuring their agreement with our annotations on a set of test questions (10% of the total).",
"In our experiments, we use temporal splits of the data from 2014-2018 for training, and the most recent data (i.e., the tweets from 2019) to evaluate our models, to simulate a future time period setup.",
"Thus, we wanted to ensure that the model performance is evaluated on data that has as few annotation errors as possible.",
"Hence, each tweet was checked by either of the authors of the paper, both with significant experience in linguistic annotations, and corrected if needed to ensure additional consistency.",
"This process had the effect of reducing the measurement error of the model performance but ultimately did not affect the conclusions of the experimental results.",
"The type-wise distribution of named entities for each year in our data set, after annotator adjudication and correction, is shown in Table 2. (Table 2: Year-wise number of named entities of each type (PER/ORG/LOC, total): 2014: 371/454/350, 1,175; 2015: 363/479/393, 1,235; 2016: 435/501/320, 1,256; 2017: 432/516/314, 1,262; 2018: 468/597/395, 1,460; 2019: 725/881/475, 2,081.) 4 Base Model Architecture.",
"This section describes the base model architecture we use to perform named entity recognition experiments throughout the paper.",
"We use the same underlying architecture to provide a controlled experimental setup and isolate temporal modeling aspects from other model-related factors.",
"We use the neural architecture based on a stacked BiLSTM-CRF model introduced in Huang et al. (2015), which is the core model architecture for several state-of-the-art NER results over the past years (Lample et al., 2016; Peters et al., 2018; Akbik et al., 2018).",
"For each sentence, the token representations are fed into two different LSTM layers, each processing the sentence in different directions (one forward and one backward).",
"The output of these two layers are concatenated and passed through a feed-forward layer that produces a distribution over the output tag space.",
"Finally, a Conditional Random Field is applied to the class predictions with the role of jointly assigning predictions to the entire sequence.",
"This also has the function of ensuring that the output tag sequence takes into account the constraints of the IOB2 entity tagging scheme (e.g. I-LOC cannot follow B-ORG) (Tjong Kim Sang and De Meulder, 2003b).",
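The IOB2 constraint that the CRF layer enforces can be stated as a small validity check; this helper is illustrative only and is not part of the described model:

```python
def iob2_valid(tags):
    """Check the IOB2 constraint: an I-X tag may only follow a B-X or
    I-X tag of the same entity type X (so e.g. I-LOC cannot follow B-ORG)."""
    prev = "O"
    for tag in tags:
        if tag.startswith("I-") and prev not in ("B-" + tag[2:], "I-" + tag[2:]):
            return False
        prev = tag
    return True
```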
"A key component in the base architecture is how the tokens are represented as inputs.",
"Initial research (Lample et al., 2016) on LSTM-CRF models used static pre-trained word embeddings, such as GloVe (Pennington et al., 2014), to initialize the inputs, which are subsequently fine-tuned on the NER training data.",
"More recently, contextual word embeddings, which represent each token differently based on its context, were shown to obtain an improvement of 2-3 F1 points on the English news CoNLL data set (Peters et al., 2018; Akbik et al., 2018; Devlin et al., 2019).",
"In this paper, we conduct experiments with both the static GloVe embeddings (Pennington et al., 2014) and the state-of-the-art contextual Flair embeddings (Akbik et al., 2018) to test the robustness of our findings to different input representations.",
"All embeddings were trained outside of the time range of our data: the GloVe embeddings were trained on Twitter data before 2014, while the Flair embeddings were trained on the 1-billion word corpus (Chelba et al., 2013) which contains data up to 2012.",
"Exploiting embeddings trained on data more recent than the NER corpus is an avenue of future work.",
"In addition to token embeddings, we use character embeddings to model subword information that may be indicative of named entities and better represent out-of-vocabulary tokens.",
"We use a character-level BiLSTM with randomly initialized character embeddings to produce the character-based word representations (Lample et al., 2016).",
"These are concatenated to the token embeddings described above and then used as input to the token-level BiLSTM.",
"We split the data temporally for our experiments.",
"We use the data authored in 2019 as the test data, as this is the most recent data available and best replicates the scenario of making predictions on text from future time periods.",
"We use a random sample of 500 tweets (25%) from the 2019 data as the validation set.",
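A minimal sketch of this temporal split (the helper, its field names, and the seed are our own assumptions):

```python
import random

def temporal_split(tweets, test_year=2019, val_frac=0.25, seed=0):
    """Split tweets (dicts with a 'year' key) temporally: pre-test_year
    data for training, a random fraction of the test-year data for
    validation, and the rest of the test-year data for test."""
    train = [t for t in tweets if t["year"] < test_year]
    recent = [t for t in tweets if t["year"] == test_year]
    random.Random(seed).shuffle(recent)
    n_val = int(len(recent) * val_frac)
    return train, recent[:n_val], recent[n_val:]
```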
"We use the PyTorch framework (Paszke et al., 2017) for the implementation of the models.",
"For the model using the GloVe embeddings, we use the same hyperparameter settings as the original creators of the base models (Lample et al., 2016; Akbik et al., 2018) and ensure the correctness of our implementation by replicating their results on the CoNLL-2003 English NER data set (Tjong Kim Sang and De Meulder, 2003a).",
"Specifically, the character embeddings are of size 32, the character-level LSTM hidden size is 64, and the word-level LSTM has a hidden size of 256.",
"We also use a dropout of 0.5 on the input word embeddings and replace singleton words in the training set with an out-of-vocabulary symbol with a probability of 0.5 to improve robustness to unseen words.",
"(Figure 1: Evaluating the effect of temporal distance: the model is trained on each year individually.)",
"We use the flairNLP library (Akbik et al., 2019) for the contextual Flair embedding experiments, using the same hyperparameters as the state-of-the-art result in Akbik et al. (2018).",
"For each experimental setting, we use the training checkpoint with the best performance on the validation set (i.e., early stopping).",
"Following the recommendation from Reimers and Gurevych (2017), who study the variance of LSTM-CRF models with different random seeds, we report all experimental results as the mean of five runs.",
"The main metric we use for evaluation is span-level named entity F1 score, reported using the official CoNLL evaluation script (Tjong Kim Sang and De Meulder, 2003a).",
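Span-level F1 credits a prediction only when both the entity type and the exact span match. A simplified sketch (assuming well-formed IOB2 input; the paper uses the official CoNLL script, not this code):

```python
def extract_spans(tags):
    """Extract (type, start, end) entity spans from a well-formed IOB2
    tag sequence; `end` is exclusive."""
    spans, start = [], None
    for i, tag in enumerate(tags + ["O"]):  # sentinel closes a trailing span
        if start is not None and not tag.startswith("I-"):
            spans.append((tags[start][2:], start, i))
            start = None
        if tag.startswith("B-"):
            start = i
    return spans

def span_f1(gold, pred):
    """Span-level F1: a predicted entity counts only if its type and
    exact span both match a gold entity."""
    g, p = set(extract_spans(gold)), set(extract_spans(pred))
    tp = len(g & p)
    if tp == 0:
        return 0.0
    prec, rec = tp / len(p), tp / len(g)
    return 2 * prec * rec / (prec + rec)
```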
"To determine the utility of temporal information, we first attempt to evaluate whether temporal drift in the data affects the performance of NER models.",
"To this end, we conduct experiments to answer the following research questions: 1) What is the effect of the temporal distance between the training and target data sets on NER performance?",
"2) How do the size and temporal distribution of the training data affect NER performance?",
"We empirically study the effect of temporal distance between the training and test data sets by training the base model on each year, from 2014 to 2018, individually.",
"Based on the design of our data set, each model has access to the same number of training tweets (2,000 per year). (Figure 2: NER F1 score against the number of tweets in the training data, for the Random and Temporal training set compositions.)",
"The results are shown in Figure 1. We observe that the temporal distance between the training and test sets seems to affect NER performance.",
"The F1 score increases as we move temporally closer to the target data, for both the GloVe and Flair embeddings, apart from a slight decrease when moving from 2016 to 2017 when using GloVe embeddings.",
"When using the contextual Flair embeddings, the performance numbers are overall higher, which is consistent with past research (Akbik et al., 2018), as contextual embeddings are more expressive.",
"We now study how the number of instances in the training data and their temporal distribution impact the performance of the model.",
"We first train models on cumulative random samples from the combined training data set (all tweets from 2014-2018), adding 2,000 tweets at each step.",
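A sketch of how such nested cumulative random training sets could be built (the helper is illustrative, not the paper's code):

```python
import random

def cumulative_random_samples(tweets, step=2000, seed=0):
    """Build nested training sets by repeatedly adding `step` random
    tweets from the combined pool; each set contains the previous one."""
    pool = list(tweets)
    random.Random(seed).shuffle(pool)
    return [pool[:n] for n in range(step, len(pool) + 1, step)]
```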
"Then, we train models starting with the 2,000 tweets from 2014 and cumulatively adding the tweets from subsequent years, from 2015 up to 2018.",
"(Figure 3: Type distribution across years in our data set.)",
"The NER F1 scores are shown in Figures 2a and 2b, with both Random and Temporal cumulative compositions of the training data set.",
"Looking at the Random sampling strategy, we see that the performance steadily increases as we add more tweets to the training set as we would expect for most supervised machine learning models.",
"We see that the Temporal model with only the 2014 data (2,000 tweets) has a lower performance than randomly selecting 2,000 tweets across all years.",
"This is indicative of the data drift across time, as training on a random sample of tweets from all the years is more informative and leads to a better NER model than using just the 2014 data.",
"Moreover, as we add tweets temporally closer to the target into the training data set, the Temporal strategy converges with the Random strategy.",
"This observation strengthens the hypothesis that temporal information can potentially play an important role while selecting training data and designing model architectures.",
"To understand why the temporal distribution of the training data impacts the performance of an NER model, we analyze the distribution of entity mentions in our data set to uncover the extent to which data drift occurs at the lexical level.",
"Type Distribution Figure 3 shows the distribution of entity types across years in our data set.",
"The distribution looks approximately even, with minor differences in the fraction of location (LOC) entities.",
"Since similar types of entities occur in the data set year-wise, this likely does not cause the change in performance across time indicated in the previous sections.",
"Mention Overlap Figure 4 presents the overlap of unique entity mentions with respect to the test data (2019).",
"There is a clear increase in surface-form overlap as we get temporally closer to the target data, which is potentially an important factor in the F1 score improvement we see in our empirical analysis.",
"Type-wise Mention Overlap Figure 5 shows the surface-form overlap of entity mentions by type, for the years 2014 to 2018, with respect to the data from 2019.",
"The figure adds further evidence of temporal data drift at the mention level.",
"For all three entity types (LOC, PER, ORG) in our data set, smaller temporal distance leads to a greater percentage of overlap.",
"Interestingly, the PER overlap is much lower than that of the other types. (Table 3: type-wise F1 when testing on data from 2019; GloVe PER/ORG/LOC and Flair PER/ORG/LOC by training year: 2014: 74.45/41.63/52.78 and 79.66/53.78/56.90; 2015: 73.39/45.97/52.14 and 81.91/52.23/58.77; 2016: 78.42/49.12/57.60 and 81.58/58.19/60.85; 2017: 74.63/51.23/52.97 and 81.82/60.10/58.41; 2018: 79.40/56.29/59.25 and 83.47/61.83/64.90.)",
"This confirms the overlap analysis above.",
"This is expected, as the people discussed on social media rapidly change with developments in current events (Derczynski et al., 2017).",
"We see that the 2017 data set has a lower overlap for LOC than both 2016 and 2018, which could explain the off-trend performance of the 2017 model in our empirical results (Figure 1).",
"Type-wise Model Performance Table 3 shows the NER performance by entity type, to gain more insight into which types are affected by data drift.",
"First, we notice that the improved performance of Flair embeddings seen in previous analyses is caused by better performance across all types.",
"Overall, the PER type obtains the best performance for both models, with an F1 of around 20 points higher than the other two types.",
"This is despite the fact that the PER type has the lowest overall overlap between training and test, which indicates that the model is adequately learning the contexts that PER entities appear in.",
"ORG and LOC show similar absolute performance in both setups.",
"Next, we study the temporal differences in performance by type.",
"When using GloVe embeddings, the smallest gap between training on different data splits is for PER (4.95 F1), while ORG suffers from substantial drift in performance, resulting in a 14.66 F1 drop on ORG performance.",
"When using Flair embeddings, the most notable difference in performance when training across different years is still for the ORG type (up to 8.05 F1).",
"However, the gap has proportionally tightened the most as compared to when using GloVe embeddings.",
"These observations correspond with the analysis from Figure 5, where we see the largest increase in overlap between mentions from the training data and the test data over the five years (an 8% increase for ORG, compared to a 3-4% increase for LOC and PER).",
"We also observe that the slight drop in performance of the model using GloVe embeddings trained on the 2017 data is caused primarily by a decline in performance on the LOC type which holds across both models.",
"Mentions Unseen in the Training Data In addition to the increase in surface-form overlap across years, we investigate whether mentions unseen in the training data are impacted by the temporal distance between the training and test data.",
"Table 4 shows the recall for these mentions using both the GloVe and Flair embeddings.",
"Notably, for GloVe, the performance steadily improves as the temporal distance decreases, with an almost 5 point improvement in recall when moving from 2017 to 2018.",
"Although less pronounced, there is a similar trend with the Flair embeddings.",
"This indicates that surface-form overlap is not the only factor determining temporal data drift.",
"The model is potentially able to learn more relevant context from the training data of temporally close years, perhaps due to changes in language use over time.",
"Supported by the analysis that temporal drift in the training data can impact the performance of NER systems, in this section, we experiment with techniques to account for temporal information while training the NER model.",
"We look at leveraging temporality in two broad ways:",
"a) by altering the architecture of the base model;",
"b) by modifying how the training data set is constructed.",
"These methods are intended to be an initial exploration of using temporal information, with a focus on techniques that do not require significant modification to the base model.",
"We present these in the hope that they will inspire future research on models robust to temporal drift.",
"The specific methods are discussed below, followed by experimental results.",
"Sequential Temporal Training Our analysis from Section 5 showed that using more data is beneficial, irrespective of temporal distance from the target, but individually, the closest data is most useful.",
"Based on this analysis, we attempt to train our model by ordering our training data year-wise such that the model is trained on the temporally closest data last.",
"Specifically, we start with training on the year temporally furthest away from the target data and repeatedly tune the model on the chronological sequence of years (i.e., first train on 2014 data, then 2015 data, and so on up to 2018).",
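The schedule above can be sketched as follows (`train_fn` is a hypothetical single-stage training routine; this is not the paper's implementation):

```python
def sequential_temporal_training(model, data_by_year, train_fn):
    """Train on years in chronological order, so that the data temporally
    closest to the target is seen last."""
    for year in sorted(data_by_year):
        model = train_fn(model, data_by_year[year])
    return model
```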
"Temporal Fine-tuning The analysis showed that training the model on the data temporally closest to the target data set obtains the best overall performance.",
"Based on this observation, we decide to train the base model on the entire data set of tweets from the years 2014-2018.",
"Then, we fine-tune the trained model on the data from the year temporally closest to the target (2018).",
"The fine-tuning process is simply retraining the model on the 2018 data with the same hyperparameter settings.",
"Instance Weighting Previous work in domain adaptation shows that giving higher weights to training instances similar to the target domain can improve performance (Wang et al., 2017).",
"Similarly, we decide to assign a higher weight to tweets temporally closer to the test data (i.e., the 2018 tweets are up-weighted).",
"In our experiments, we up-weight the tweets by a factor of 2. We note that the above methods do not require any change to the model, making integration of these methods for existing systems very practical.",
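A sketch of this up-weighting applied to per-instance losses (the helper and its normalization are our own assumptions; in practice the weighting would sit inside the training loss):

```python
def weighted_loss(losses, years, upweight_year=2018, factor=2.0):
    """Weighted mean of per-instance losses, with instances from the
    year temporally closest to the test data up-weighted by `factor`."""
    total, norm = 0.0, 0.0
    for loss, year in zip(losses, years):
        w = factor if year == upweight_year else 1.0
        total += w * loss
        norm += w
    return total / norm
```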
"Year Prediction as an Auxiliary Task Finally, we aim to guide the model to learn temporal features in training.",
"Inspired by related work in domain adaptation (Chen et al., 2018), we enhance the architecture with a multi-task learning component that models an auxiliary task.",
"While training the model for NER, this component uses the LSTM hidden states to predict the year that the tweet was created in.",
"Since the input embeddings and the LSTM are shared between the NER task and the year prediction task, the intuition is that the learned parameters will retain a notion of temporality. (Table 5: performance of the proposed methods of using temporal information in NER modeling compared to the base model; F1 with GloVe/Flair: Base Model 70.80/74.72; Sequential Temporal Training 68.47/74.42; Temporal Fine-tuning 71.93/74.95; Instance Weighting 70.59/75.54; Year Prediction 71.01/74.70.)",
"Table 5 presents the experimental results.",
"The base model combines the training data (20142018) without using any temporal information, the current standard setup for most NLP systems.",
"The results show that we can overall obtain a better performance over the base model by using simple techniques to incorporate temporal information.",
"The margin of improvement is overall lower when using Flair embeddings than with GloVe (+0.82 compared to +1.13).",
"This potentially indicates that semantic drift can be captured partially through contextual embeddings.",
"Fine-tuning the model on the temporally closest data (i.e., 2018) leads to the best F1 scores when using GloVe embeddings, reaching a 1.13 increase in F1.",
"For the Flair embeddings, we observe that up-weighting the training instances from the year 2018 leads to the best result, a 0.82 improvement in F1 over the base model.",
"We highlight that these straightforward methods that improve over the base model do not involve any architecture changes, other than a change in how the data is fed to the model.",
"These methods thus have the potential both to be readily applicable to existing NER implementations and to generalize to other NLP tasks.",
"Finally, we find that using an auxiliary task for predicting the year improves the performance slightly when using GloVe embeddings, but has the opposite effect when using Flair embeddings.",
"This is likely because the GloVe embeddings are fine-tuned during the model training and are therefore influenced by the auxiliary loss, while the contextual Flair embeddings are not.",
"This paper studies and models text data drift in the information extraction task of named entity recognition.",
"We introduce a new data set of 12,000 English tweets stratified by time, which allows us to study the effects of drift and evaluate named entity recognition models in a realistic scenario of performing inference on temporally unseen data.",
"By analyzing the data, we quantify the temporal drift in named entity type and mention usage and identify that, as expected, the data distribution is more similar when drawn from closer time intervals.",
"We then use current state-of-the-art approaches for named entity recognition and demonstrate that, through modeling of temporal information, performance can be improved when testing on future data.",
"We expect our data, results, and error analysis to inform the design of similar experimental setups for other NLP tasks beyond NER, such as part-of-speech tagging or relation extraction.",
"We would like to thank Leslie Barrett, Liang-Kang Huang, Prabhanjan Kambadur, Mayank Kulkarni, Amanda Stent, Umut Topkara, Jing Wang, Chuck-Hou Yee and the other members of the Bloomberg AI group.",
"They provided invaluable feedback on the experiments and the paper.",
"We also thank the anonymous reviewers for their valuable suggestions.",
"Shruti Rijhwani is supported by a Bloomberg Data Science Ph.D.",
"Fellowship."
] | [
"abstain",
"abstain",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"objective",
"objective",
"method",
"objective",
"objective",
"objective",
"objective",
"objective",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"other",
"method",
"method",
"other",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"abstain",
"abstain",
"objective",
"method",
"objective",
"result",
"other",
"other",
"other",
"other",
"other"
] |
[
"The current state-of-the-art in neural graph-based parsing uses only approximate decoding at the training phase.",
"In this paper, we aim to understand this result better.",
"We show how recurrent models can carry out projective maximum spanning tree decoding.",
"This result holds for both current state-of-the-art models for shift-reduce and graph-based parsers, projective or not.",
"We also provide the first proof on the lower bounds of projective maximum spanning tree, DAG, and digraph decoding.",
"For several years, the NLP field has seen widespread investigation into the application of neural networks to NLP tasks, and, with this, much progress that remains rather inexplicable.",
"A string of very recent work (for example, Chen et al. (2018); Weiss et al. (2018); Peng et al. (2018)) has attempted to delve into the formal properties of neural network topology choices, in an attempt to motivate, predict, and explain associated research in the field.",
"This paper aims to further contribute along this line of research.",
"We present the results of our study into the ability of state-of-the-art first-order neural graph-based parsers, with seemingly simple architectures, to explicitly forego structured learning and prediction.",
"In particular, this is not due to a significantly faster, simpler algorithm for projective maximum spanning tree (MST) decoding than Eisner (1996)'s algorithm, which we formally prove to be impossible, given the Exponential Time Hypothesis.",
"But rather, this is due to the capacity of recurrent components of these architectures to implicitly discover a projective MST.",
"We prove this formally by showing how these recurrent components can intrinsically simulate exact projective decoding.",
"For the remainder of this paper, all decoding algorithms discussed are first-order.",
"The context.",
"The current state-of-the-art for graph-based syntactic dependency parsing is a seemingly basic neural model by Dozat and Manning (2017).",
"The parser's performance is an improvement on the first, even simpler, rather engineering-free, neural graph-based parser by Kiperwasser and Goldberg (2016).",
"This latter parser updates with respect to an output structure: projective decoding over a matrix of arc scores coupled with hinge loss between predicted and gold arcs, reporting parser performance of, for example, 93.32% UAS and 91.2% LAS on the converted Penn Treebank.",
"Remarkably, the former parser by Dozat and Manning (2017) forgoes entirely any structural learning, employing simple cross-entropy at training time and saving (unconstrained) maximum spanning tree decoding for test time.",
"We further optimised Kiperwasser and Goldberg (2016)'s parser (Varab and Schluter, 2018) and extended it for cross-entropy learning, as is done by Dozat and Manning (2017).",
"At test time, instead of any explicit decoding algorithm over the arc score matrix, we simply take the maximum weighted incoming arc for each word; that is, the parser is highly streamlined, without any heavy neural network engineering, but now also without any structured learning, nor without any structural decoding at test time.",
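This structure-free decoding step can be sketched directly over an arc score matrix (the indexing convention, row = head and column = dependent with node 0 as the artificial root, is our own assumption):

```python
def greedy_heads(scores):
    """Structure-free decoding: for each word, take the maximum weighted
    incoming arc. scores[h][d] is the score of head h -> dependent d;
    node 0 is the root and takes no head. No tree constraint is enforced."""
    n = len(scores)
    heads = [-1]  # root has no head
    for dep in range(1, n):
        best = max(range(n),
                   key=lambda h: scores[h][dep] if h != dep else float("-inf"))
        heads.append(best)
    return heads
```

Note that, unlike maximum spanning tree decoding, the output may contain cycles or be disconnected.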
"The resulting neural parser still achieves an impressively competitive UAS of 92.61% evaluated on the converted Penn Treebank data, without recourse to any pre-trained embeddings, unlike the systems by Kiperwasser and Goldberg (2016) and Dozat and Manning (2017).",
"(Training is on Sections 2-21, development on Section 22, and testing on Section 23, converted to dependency format following the default configuration of the Stanford Dependency Converter, version 3.5.2.)",
"Using 100-dimensional GloVe embeddings pretrained on the Wikipedia and Gigaword corpus (6 billion tokens), without updates, but linearly projected through a single linear dense layer to the same dimension, the structure-less parser achieves 93.18% UAS.",
"With this paper, we shed light on these surprising results from seemingly simple architectures.",
"The insights we present here apply to any neural architecture that first encodes the input words of a sentence using some type of recurrent neural network, i.e., all current state-of-the-art graph-based or shift-reduce neural parsers.",
"Our contributions.",
"This paper presents results for understanding the surprisingly superior performance of structure-free learning and prediction in syntactic (tree) dependency parsing.",
"1. We provide a formal proof that there will never be an algorithm that carries out projective MST decoding in sub-cubic time, unless a widely believed assumption in computational complexity theory, the Exponential Time Hypothesis (ETH), is false.",
"Hence, computationally, we provide convincing evidence that these neural parsing architectures cannot be as simple as they appear.",
"These results are then extended to projective maximum spanning DAG and digraph decoding.",
"2. In particular, we then show how to simulate Eisner's algorithm using a single recurrent neural network.",
"This shows how, in particular, the LSTM stacked architectures for graph-based parsing by Dozat and Manning (2017), Cheng et al. (2016), Hashimoto et al. (2017), Zhang et al. (2017), and Kiperwasser and Goldberg (2016), are capable of intrinsically decoding over arc scores.",
"This therefore provides one practical application where RNNs do not need supplementary approximation considerations (Chen et al., 2018).",
"The Exponential Time Hypothesis (ETH) and k -Clique.",
"Our structure-less but optimised implementation of the Kiperwasser and Goldberg (2016) graph-based parser uses 100-dimensional generated word embeddings, 50-dimensional generated POS-tag embeddings, a stack of 3 BiLSTMs with an output dimension of 225 each (total 450 concatenated), no dropout, and MLP mappings of dimension 400 for arc nodes and 100 for labels.",
"We use DyNet 2.1 (Neubig et al., 2017), and the parser code is freely available at https://github.com/natschluter/MaxDecodeParser .",
"The Exponential Time Hypothesis is a widely held though unproven computational hardness assumption stating that 3-SAT (or any of the several related NP-complete problems) cannot be solved in sub-exponential time in the worst case (Impagliazzo and Paturi, 1999).",
"According to ETH, if 3-SAT were solvable in sub-exponential time, then also P = NP .",
"But the ETH assumption is stronger than the assumption that P ≠ NP, so the converse is not necessarily true.",
"ETH can be used to show that many computational problems are equivalent in complexity, in the sense that if one of them has a subexponential time algorithm then they all do.",
"The k -Clique problem is the parameterised version of the NP-hard Max-Clique problem.",
"This canonical intractable problem in parameterised complexity asks, given an input graph, whether there exists a clique of size k .",
"A naive algorithm for this problem running in O(n^k) time checks all n-choose-k combinations of nodes and verifies each combination in O(k^2) time to see if they form a clique.",
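The naive algorithm can be sketched as follows (adjacency-matrix input; a minimal illustration, not an efficient implementation):

```python
from itertools import combinations

def has_k_clique(adj, k):
    """Naive k-Clique check: enumerate every k-subset of nodes (O(n^k)
    subsets) and verify all pairs are adjacent (O(k^2) per subset)."""
    nodes = range(len(adj))
    for subset in combinations(nodes, k):
        if all(adj[u][v] for u, v in combinations(subset, 2)):
            return True
    return False
```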
"However, Chen et al. (2006) showed that the problem has no n^{o(k)}-time algorithm; that is, assuming ETH, the problem has no algorithm that runs in time subexponential in the exponent k.",
"Recurrent neural networks.",
"Recurrent neural networks (Rumelhart et al., 1986), as we generally use them in practice in NLP, take as input a matrix x containing a sequence of n vectors x = x_1, x_2, ..., x_n, and apply the following equation recursively, with h_0 the initial state: h_t = g(b + W h_{t-1} + U x_t), where g is the activation function.",
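As an illustration of this recurrence (a minimal sketch, here with the ReLU activation used later in the paper):

```python
import numpy as np

def rnn_forward(xs, W, U, b, h0, g=lambda z: np.maximum(z, 0.0)):
    """Unrolled Elman RNN: h_t = g(b + W h_{t-1} + U x_t), applied over
    the input sequence xs, starting from the initial state h0."""
    h, states = h0, []
    for x_t in xs:
        h = g(b + W @ h + U @ x_t)
        states.append(h)
    return states
```

With W and b zero and U the identity, each state is simply the ReLU of the corresponding input.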
"Typically this activation function is tanh , however the computational power of the model is theoretically maintained with any so-called squashing function (Siegelmann, 1996).",
"The choice of g , on the other hand, has been shown to affect the power of the recurrent model in general, depending on the restrictions involved in the formal investigation.",
"For the purposes of this paper, the activation function is a rectified linear unit, or ReLU .",
"The general computational power of such RNNs has recently been formally explored by Chen et al. (2018) (given infinite precision) and Weiss et al. (2018) (given finite precision), and empirically investigated for practical considerations of convergence under training by Le et al. (2015).",
"LSTMs (Hochreiter and Schmidhuber, 1997) are RNNs with weighted self-loops (so-called gates).",
"The recurrence equations take the form: f_t = g_1(b_f + W_f h_{t-1} + U_f x_t); i_t = g_1(b_i + W_i h_{t-1} + U_i x_t); o_t = g_1(b_o + W_o h_{t-1} + U_o x_t); c_t = f_t ∘ c_{t-1} + i_t ∘ g_1(b_c + W_c h_{t-1} + U_c x_t); h_t = o_t ∘ g_2(c_t), where g_1, g_2 are activation functions and ∘ denotes element-wise multiplication.",
"Setting all of W_f, W_i, W_c, U_f, U_i, U_c to be zero matrices, b_f to be a zero vector, b_i and b_c to be all-ones vectors, and the activation function g_2 to be ReLU, we see that, in terms of hidden states, the LSTM model includes that of the RNN.",
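To see the claim concretely, a numeric sketch (the parameter-dictionary layout is our own; with the stated settings, f_t = 0 and i_t = 1, c_t is a constant all-ones vector, and h_t reduces to the RNN state o_t = g_1(b_o + W_o h_{t-1} + U_o x_t)):

```python
import numpy as np

relu = lambda z: np.maximum(z, 0.0)

def lstm_step(x_t, h_prev, c_prev, p, g1=relu, g2=relu):
    """One LSTM step following the gate equations above."""
    f = g1(p["bf"] + p["Wf"] @ h_prev + p["Uf"] @ x_t)
    i = g1(p["bi"] + p["Wi"] @ h_prev + p["Ui"] @ x_t)
    o = g1(p["bo"] + p["Wo"] @ h_prev + p["Uo"] @ x_t)
    c = f * c_prev + i * g1(p["bc"] + p["Wc"] @ h_prev + p["Uc"] @ x_t)
    h = o * g2(c)
    return h, c

# The reduction: W_f, W_i, W_c, U_f, U_i, U_c zero, b_f = 0, b_i = b_c = 1.
d = 2
Z = np.zeros((d, d))
p = {"Wf": Z, "Uf": Z, "bf": np.zeros(d),
     "Wi": Z, "Ui": Z, "bi": np.ones(d),
     "Wc": Z, "Uc": Z, "bc": np.ones(d),
     "Wo": np.eye(d), "Uo": np.eye(d), "bo": np.zeros(d)}
x_t, h_prev = np.array([1.0, -2.0]), np.array([0.5, 0.5])
h, c = lstm_step(x_t, h_prev, np.zeros(d), p)
rnn_h = relu(p["bo"] + p["Wo"] @ h_prev + p["Uo"] @ x_t)  # plain RNN state
```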
"In this paper, all activation functions are ReLUs.",
"State-of-the-art in neural syntactic dependency parsing.",
"The graph-based neural architectures we refer to here have important commonalities.",
"We focus our discussion on the key contributions by Kiperwasser and Goldberg (2016) (the simplest architecture, and the first), and by Dozat and Manning (2017) (the state-of-the-art).",
"1. Word representation generation : Both architectures generate word embeddings and POS-tag embeddings.",
"Pretrained embeddings, if they are being used, are added to the trained embeddings and concatenated to the corresponding POS-tag embedding.",
"The embeddings are sent through a stacked BiLSTM.",
"Output embeddings are projected to two further vector representations: as head node or as dependent node (specialised representations).",
"2. Arc scoring : All (head, dependent) combinations are scored.",
"3. Decoding : By some decoding process, the arc score matrix yields a (possibly disconnected) graph representation of the input sentence: (n-1) arcs, where no word has more than one head, as well as their probabilities.",
"We show in this paper how the second and third components can be carried out implicitly within the BiLSTM layers of the first component.",
"Since currently state-of-the-art shift-reduce parsers also encode input words of a sentence using some type of recurrent neural network, this insight also applies to these non-graph-based models.",
"Related computational hardness results.",
"To date there is no known truly sub-cubic algorithm for Boolean Matrix Multiplication (BMM), nor for Context-Free Grammar (CFG) parsing.",
"Adapting Satta (1994)'s lower bound proof for Tree Adjoining Grammar parsing, Lee (1997) proved that BMM can be reduced to finding a valid derivation of a string of length O(n^{1/3}) with respect to a CFG of size Θ(n^2).",
"Lee (1997)'s reduction shows that there can be no O(|G| n^{3-ε})-time (sub-cubic) algorithm for CFG parsing, for any constant ε > 0, without implying a significant breakthrough in BMM, which is widely believed not to be possible.",
"However, the construction required the grammar size |G| = Θ(n^6) to be dependent on the input size n, which, as Lee (1997) points out, is unrealistic in most applications.",
"Abboud et al. (2015), on the other hand, present a proof of the unlikelihood of a sub-cubic algorithm for CFG-parsing using ETH and specifically the k -Clique problem.",
"Given an instance of the 3 k -Clique problem (i.e., an undirected graph and the parameter 3 k ), they construct a string w of length n k and a CFG, G of constant size (for any 3 k ) such that if G derives w in sub-cubic time, then there is an algorithm running in time n o (3 k ) for the 3 k -Clique problem, which, as we explained in Section 2, is impossible, assuming ETH.",
"To date, no truly sub-cubic algorithm for projective maximum spanning tree decoding is known.",
"In the next section, we present a proof similar in spirit to Abboud et al. (2015)'s that also shows that such an algorithm most likely cannot be found.",
"Projective Dependency Decoding Current state-of-the-art neural graph-based parsers forego structural learning and do not even seem to require structured prediction.",
"In this section, we provide evidence that this is indeed not because the parsers are so seemingly simple.",
"Computationally it is unlikely that some simpler and faster decoding method alone is achieving such a competitive performance.",
"We show this with the following theorem.",
"Theorem 1. Under the assumption of ETH, there is no algorithm that carries out projective MST decoding in time significantly faster than O ( n 3 ) ; that is, there is no sub-cubic ( O ( n 3 (cid:15) ) for some constant (cid:15) > 0 ) time algorithm for finding the maximally weighted projective spanning tree, T , over a weighted digraph input.",
"Notation and special remarks.",
"We denote by [ n ] the set { 1 , . . . , n } .",
"For lack of a better symbol, we use (cid:12) here to signify iterative string concatenation, which otherwise is signified by just writing symbols beside each other, or by the symbol .",
"Rather than working over words of a sentence, given the formal nature of the proof, the projective MST algorithm must work over symbols of the input word w .",
"Hence the input is a weighted digraph over the symbols of w and the output is a projective MST, T , over these symbols.",
"The reduction, makes use of the weight of T .",
"Proof (of Theorem 1).",
"Let G = ( V, E ) be an arbitrary simple undirected graph.",
"We place an arbitrary order on the nodes from V and fix it, so V := { v 1 , . . . , v n } .",
"As in Abboud et al. (2015)'s reduction from 3 k clique to CFG-parsing, we first generate a string of length O ( n k ) to represent the graph for the task at hand; we do so in O ( n k ) time.",
"The string contains a representation of all of the possible k -cliques in the graph.",
"We can create a listing of all of these k -cliques using exhaustive search in at most O ( n k ) time and space.",
"Let K := {{ v i 1 , . . . , v i k } a k -clique in G | i j [ n ] , v i j V } correspond to the set of k -cliques from G , and place an arbitrary order on K := { k 1 , . . . , k | K | } .",
"So, | K | O ( n k ) .",
"We define 6 k | K | sets of symbols with respect to V , each with n (= | V | ) elements: Unmarked symbols : A i,t := { a i,j,t | j [ n ] } for i [ k ] , t [ | K | ] where a i,j,t corresponds to node v j V .",
"Similarly for the sets B i,t and C i,t .",
"Marked symbols : A i,t := { a i,j,t | a i,j,t A i } .",
"Similarly for the sets B i,t and C i,t .",
"We let A = i [ k ] ,t [ | K | ] ( A i,t A i,t ) and similarly for B and C .",
"Then the vocabulary for constructing our input word is U := A B C .",
"Constructing the input word w .",
"We now construct a word w over the vocabulary U such that if the projective maximum spanning tree has weight | w | + 2 k 2 + | K | , then the graph G has a 3 k clique.",
"We do this by defining the weights of possible arcs between carefully selected pairs of symbols from the vocabulary.",
"The entire construction of the word w takes time O ( n k ) (coinciding with the upper bound on the word's length).",
"The input word is made up of a series of gadgets.",
"For each k -clique, we have three types of gadgets: A-, B-, and C-gadgets.",
"Aand C-gadgets each correspond both to a particular k -clique in G , as well as all k -cliques in G .",
"B-gadgets, on the other hand, only correspond to particular k -cliques in G .",
"Let k t = { v ( t, 1) , . . . , v ( t,k ) } K be the t th k -clique.",
"Even if each v ( t,q ) is a node in V ( G ) the notation for indices is useful to refer to the q th node of the t th k -clique.",
"Also, in what follows, we use the middle index of symbols to simultaneously refer to the k -clique membership: j ( t,q ) [ n ] , and simultaneously allows us to refer to the q th node in the t th k -clique from K , for q [ k ] , t [ | K | ] .",
"A-gadgets: A ( t ) := (cid:12) i [ k ] ( a i,j ( t, 1) ,t a i,j ( t, 2) ,t a i,j ( t,k ) ,t ) (cid:12) i [ k ] ( a i,j ( t, 1) ,t a i,j ( t, 2) ,t a i,j ( t,k ) ,t ) C-gadgets: C ( t ) :=( (cid:12) i [ k ] c i,j ( t, 1) ,t c i,j ( t, 2) ,t c i,j ( t,k ) ,t ) c k,j ( t,k ) ,t c k 1 ,j ( t,k 1) ,t c 1 ,j ( t, 1) ,t and B-gadgets: B ( t ) := L t b k,j ( t,k ) ,t b 2 ,j ( t, 2) ,t b 1 ,j ( t, 1) ,t H t b k,j ( t,k ) ,t b k 1 ,j ( t,k 1) ,t b 2 ,j ( t, 2) ,t b 1 ,j ( t, 1) ,t R t , We call the symbol H t the head of the gadget B ( t ) , and L t and R t the gadget's left and right boundary symbols respectively.",
"We then set the word w to be (cid:0) (cid:12) t [ | K | ] A ( t ) (cid:1) (cid:0) (cid:12) t [ | K | ] B ( t ) (cid:1) (cid:0) (cid:12) t [ | K | ] C ( t ) (cid:1) consisting of an A-gadget region followed by a B-gadget region , and then a C-gadget region .",
"Idea of the proof.",
"The idea of the proof is to allow an optimal projective MST, T , to be built that matches up one distinct (with respect to the k clique) gadget from each region, each representing different k -cliques whenever there is a 3 k clique in G. We will deduce the existence of such a clique by the weight of T .",
"Essentially, a projective spanning tree of weight | w | 1 will always be present, but T having weight superior to this will indicate a matching up of gadgets.",
"Now, suppose we have a sub-cubic projective MST algorithm A .",
"By our construction, if A returns a T with weight | w | +2 k 2+ | K | , then there is a 3 k clique.",
"Otherwise, there is no 3 k -clique.",
"On input of length n k , the sub-cubic time algorithm runs in time O (( n k ) 3 (cid:15) ) = O ( n 3 k k(cid:15) ) n o (3 k ) for some constant (cid:15) > 0 .",
"Thus A will have solved 3 k -clique in time n o (3 k ) , which is impossible under the ETH assumption.",
"Note that by the definition of a 3 k -clique, a 3 k clique can be partitioned arbitrarily into 3 equal sized sub-graphs over k nodes that must each form a k -clique.",
"So, if | K | < k , then there trivially cannot be any 3 k -clique in G .",
"We therefore only consider without loss of generality the argumentation for the case where | K | k , since our algorithm can simply return a negative answer about the existence of a 3 k -clique in G after enumerating the set K and before computing any projective MST.",
"The projective MST algorithm takes as input the description of a weighted digraph, D , whose nodes are defined by symbols of the input word w .",
"The digraph need not be explicitly constructed, since the algorithm can simply use the description of the digraph that follows instead to check for the existence of arcs between symbols.",
"This description has constant length.",
"A description of the input weighted graph D over w .",
"For the input digraph, arcs can (1) be missing from the fully complete digraph, (2) have weight 1, or (3) have weight 2. To construct D , weights are assigned to arcs by the following rules.",
"Weight 1 arcs.",
"The following arcs of our input graph have weight 1. 1. Region connectivity arcs.",
"These arcs ensure connectivity is possible within respective gadget regions.",
"(a) All arcs ( a 1 ,j (cid:48) ,t , a i,j,t 1 ) and ( a 1 ,j (cid:48) ,t , a i,j,t 1 ) , i.e., the first symbol of the t th A-gadget attaches to all symbols of the previous ( t 1 th) A-gadget.",
"(b) All arcs ( c 1 ,j (cid:48) ,t , c i,j,t +1 ) and ( c 1 ,j (cid:48) ,t , c i,j,t +1 ) , i.e., the last symbol of the t th C-gadget attaches to all symbols of the next ( t + 1 th) C-gadget gadget.",
"(c) All arcs ( b i,j,t , b i +1 ,j (cid:48) ,t ) and ( b i +1 ,j,t , b i,j (cid:48) ,t ) for i [ k 1] .",
"(d) All arcs ( b k,j,t , L t ) , ( b 1 ,j (cid:48) ,t , R t ) .",
"(e) All arcs ( H t , b 1 ,j,t ) and ( H t , b k,j (cid:48) ,t ) making H t a possible head of the respective B-gadget ( B-gadget heads ) for any MST.",
"(f) All arcs ( L t +1 , H t ) , ( R t , H t +1 ) for all t [ | K | 1] .",
"(g) All arcs ( c k,j ( t,k ) ,t , c k,j,t ) , i.e., arcs from the last nonmarked symbol to the first marked symbol, in every C-gadget.",
"Also, all arcs ( c i +1 ,j,t , c i,j,t ) for i [ k 1] , i.e., together forming a path of marked symbols within each C-gadget.",
"The following arcs are the reversals of (1c) through (1e).",
"( b i,j,t , b i +1 ,j (cid:48) ,t ) for i [ k 1] .",
"(i) All arcs ( L t , b k,j,t ) , ( R t , b 1 ,j (cid:48) ,t ) .",
"(j) All arcs ( b 1 ,j,t , H t ) and ( b k,j (cid:48) ,t , H t ) making H t the head of the respective B-gadget ( B-gadget heads ) for any MST.",
"2. Boundary connectivity arcs.",
"These arcs ensure that the boundaries of regions are connected.",
"(a) The arcs ( L 1 , a i,j, | K | ) and ( L 1 , a i,j, | K | ) , i.e., all symbols from the last of the A-gadgets attach to the first symbol of the B-gadget region.",
"(b) The arcs ( R | K | , c i,j, 1 ) and ( R | K | , c i,j, 1 ) , i.e., all symbols from the first of the C-gadgets attach to the last symbol of the B-gadget region.",
"3. G-induced arcs.",
"These arcs reflect the connections of the original graph G , and ultimately the existence of a 3 k -clique.",
"(a) All arcs ( b i,j,t , a i,j (cid:48) ,t (cid:48) ) , for each i [ k 1] , t (cid:54) = t (cid:48) , if v j v j (cid:48) E ( G ) (i.e., not for i = k , which has a weight of 2 rather).",
"(b) All arcs ( b i,j,t , c i,j (cid:48) ,t (cid:48) ) , for each i { 2 , . . . , k } , t (cid:54) = t (cid:48) , if v j v j (cid:48) E ( G ) (i.e., not for i = 1 , which has a weight of 2 rather).",
"(c) All arcs ( c i,j,t , a i,j (cid:48) ,t (cid:48) ) for all i [ k ] , t (cid:54) = t (cid:48) , if v j v j (cid:48) E ( G ) (i.e., this time also for i = 1 ).",
"As we show in Lemma 1.1, with the region connectivity arcs (1a-1g) and boundary connectivity arcs (2), we ensure that the algorithm can always return a projective MST with weight at least | w | 1 .",
"The G -induced arcs and region connectivity arcs (1h-1j, 4) on the other hand will be triggered to use by the algorithm's prioritisation of the following arcs.",
"Weight 2 arcs.",
"We have the following arcs of weight 2. 4. Region connectivity arcs.",
"( L t +1 , L t ) and ( R t , R t +1 ) for t [ | K | 1] .",
"5. G-induced arcs.",
"(a) All arcs ( b k,j,t , a k,j (cid:48) ,t (cid:48) ) , for each t (cid:54) = t (cid:48) , if v j v j (cid:48) E ( G )",
"(b) All arcs ( b 1 ,j,t , c 1 ,j (cid:48) ,t (cid:48) ) , for each t (cid:54) = t (cid:48) , if v j v j (cid:48) E ( G ) There are no other arcs in the input digraph D .",
"Proof.",
"The A-region, together with the symbol L 1 from the B-region can form a tree rooted in L 1 using region connectivity arcs (1a) with boundary connectivity arcs (2a)all weight 1 arcs.",
"Similarly for the C-region with the symbol R | K | from the B-region (arcs (1b) and (2b)).",
"Moreover, these regional sub-trees are trivially projective.",
"If we construct a projective subtree out of the B-region, in which L 1 and R | K | are leaf nodes, then we have the result.",
"The combination of weight-1 arcs from (1c), (1d), and (1e) results in each B-gadget B(t) being a projective subtree headed by its head node H t made up of a combination of two paths H t , b 1 ,j 1 ,t , . . . , b k,j k ,t , L t and H t , b k,j k ,t , . . . , b 1 ,j 1 ,t , R t .",
"To make a projective subtree out of the entire B-region, we choose some arbitrary H t node as the root and take further weight-1 arcs described in (1f): ( L p , H p 1 ) if p i and ( R p , H p + 1) otherwise, for i [2 , | K | 1] .",
"In all these possible B-regional projective subtrees, both L 1 and R | K | are leaf nodes, which gives the result.",
"Lemma 1.2.",
"Let T be a projective MST over D .",
"There are at most 2 k +( | K | 1) arcs of weight 2 in T : k from the Bto the A-region, k from the Bto the C-region, and the rest internal to the B-region.",
"The number of arcs of weight 2, internal to or originating from the B-region, will be maximised if arcs exiting the B-region all originate from the same B-gadget (instead of 2+ distinct ones).",
"Moreover, suppose distinct t 1 , t 2 , t 3 [ | K | ] .",
"If T includes an arc of weight 2 from gadget B ( t 2 ) to gadget A ( t 1 ) and from gadget B ( t 2 ) to gadget C ( t 3 ) , then T must also include arcs characterised by the following 1. all non-marked nodes in A ( t 1 ) have nonmarked heads in B ( t 2 ) , 2. all non-marked nodes in C ( t 3 ) have marked heads in B ( t 2 ) , and 3. all marked nodes in A ( t 1 ) have marked heads in C ( t 3 ) .",
"Proof.",
"There are only arcs of weight 2 in D from the B-region to both the Aand the C-regions, and internally in the B-region.",
"We show that there are at most k weight 2 arcs connecting the Aand B-regions and Cand B-regions.",
"Then we show that the maximal number of weight 2 edges internal to the B-region is ( | K | 1) .",
"Suppose there are more than k arcs of weight 2 from the B-region to the A-region in T .",
"Then there are at least two of these arcs entering different A-gadgets: ( b 1 ,j (cid:48) ,t (cid:48) , a 1 ,j,t ) and ( b 1 ,i (cid:48) ,p (cid:48) , a 1 ,i,p ) , with p < t .",
"Consider the barred symbols in the t th A-gadget.",
"There are only two possible heads: (1) the symbol following the gadget (region connectivity arcs (1a) or boundary connectivity arcs (2a)), which by projectivity is excluded because these arcs would cross ( b 1 ,j (cid:48) ,t (cid:48) , a 1 ,j,t ) , or (2) symbols from the C-region, which by projectivity is also excluded because they would cross the arc ( b 1 ,i (cid:48) ,p (cid:48) , a 1 ,i,p ) .",
"The proof that there are at most k arcs of weight 2 from the B-region to the C-region in T is analogous.",
"For the maximal number of arcs of weight 2, internal to the B-region, we first consider the maximum number of weight 2 region connectivity arcs (4).",
"By projectivity, a B-gadget with arcs entering an Aor C-gadget cannot have any entering weight 2 region connectivity arc.",
"Also, by projectivity, a single B-gadget can have at most 1 weight 2 region connectivity arc.",
"Thus, the number of weight 2 arcs would be maximised by ensuring arcs exiting the B-region originate from the same B-gadget, so only one B-gadget does not have weight 2 entering arcs.",
"Since there are | K | B-gadgets in total, this means there are at most | K | 1 weight 2 B-region internal arcs.",
"The rest of the proof follows by the similar projectivity arguments.",
"Proof.",
"( )",
"Suppose there is a 3 k -clique in G consisting of the three k -cliques k 1 , k 2 , and k 3 , and such that k 1 k 2 k 3 is a 3 k -clique.",
"In w , there must be corresponding gadgets in each of its gadget regions.",
"We consider A (1) the gadget for k 1 in A, by B (2) the gadget for k 2 in B, and by C (3) the gadget for k 3 in C. We build up a set S of arcs based on these three gadgets.",
"The set S consists of all the possible G-induced (weight 1 and 2) arcs between these three regionsa disconnected set where no two arcs cross, by Lemma 1.2.",
"By the same lemma, S includes exactly 2 k + ( | K | 1) arcs of weight 2. We will add arcs to S to connect the rest of the symbols in w until we form a tree, and by Lemma 1.2 again we cannot add any further weight 2 arcs.",
"We must now supplement S to make a tree.",
"We first connect the B-region.",
"For t < 2 , we connect B-gadgets internally by making the path from the R t to L t , using weight 1 region connectivity arcs.",
"We make paths in the opposite direction, from L t to R t for t > 2 .",
"All other A-gadgets are connected as in the proof of Lemma 1.1.",
"Similarly for the C-gadgets before and after C (3) .",
"The only nodes that still lack a head node are the marked nodes from C (3) .",
"We connect these using region connectivity arcs from (1g).",
"We have now constructed a projective tree of weight | w | 1 + 2 k + ( | K | 1) .",
"We cannot have a higher weighted projective tree by Lemma 1.2.",
"Hence tree is an optimal T .",
"( )",
"Suppose T has weight | w | 1 + 2 k + ( | K | 1) .",
"By Lemma 1.2, T has exactly k arcs of weight 2 from the B-region to the A-region, and k from the B-region to the C-region, and that in this case all possible G-induced arcs between the three corresponding gadgets are in T .",
"Moreover, internally to the B-region, there are | K | 1 weight 2 edges.",
"Let the gadgets be w.l.o.g., A(1), B(2), and C(3).",
"Each unmarked b symbol in B(2) corresponds to a node in V , and is the head of an unmarked symbol from A(1) corresponding to every node in k -clique k 1 .",
"This means that in G , all possible connections between nodes in k 1 and k 2 exist.",
"The same holds for B(2) with C(3) and C(3) with A(1).",
"Hence there is a 3 k -clique in G .",
"With Theorem 1, we have shown that the nonstructural graph-based neural parsing systems cannot be carrying out explicit exact decoding in with a significantly simpler algorithm.",
"As we show in the next section, in fact, the LSTM stacks of these systems alone are powerful enough to simulate all components.",
"In our proof, the algorithm consistently makes a choice between edges of weight 1 and edges of weight 2 for the result to preserve projectivity.",
"Possibly more edges of weight 1 may end up in a maximum spanning projective DAG or digraph, so we cannot necessarily use the weight in the same way to deduce the result.",
"The number of edges in D is less than n 2 .",
"Hence if we replace the weights of weight 1 arcs in D by weight 1 / ( n 2 ) , then an output maximum spanning projective digraph or DAG with weight superior to 2 k +( | K | 1) would indicate a 3 k -clique.",
"By the algorithms to do this from (Schluter, 2015) in cubic time, we therefore have the same lower bound for finding a maximum spanning projective DAG or digraph.",
"Corollary 1.1.",
"Under the assumption of ETH, there is no algorithm that carries out projective maximum spanning DAG or digraph decoding in sub-cubic time.",
"Eisner (1996)'s algorithm on an input sentence of length n uses an n n table M and dynamic programming to compute for table cell M i,j the highest weighted sub-trees over the span ( i, j ) of the input sentence.",
"The algorithm iterates over spans of increasing length.",
"For M i,j , the weights of all possible combinations of sub-spans are considered as candidate sub-trees over the span, and the maximum of these is retained in M i,j .",
"For our purposes, the problem with this version of the algorithm is that the RNN cannot compute the maximum of the corresponding O ( n ) values in either constant space nor in one time-step, and the corresponding sub-tree weight is required in the computation of maximum sub-trees over the span j i + 1 at the next recursive step.",
"In Algorithm 1, we precompute enough of the comparisons required for finding the maximum spanning sub-tree combination before the algorithm arrives in that table cell (from line 5).",
"Thus, instead of taking the maximum across k O ( n ) values, we only ever take the maximum across 2 values at a time.",
"We now explain this algorithm.",
"A sub-tree over the span ( i, j ) is said to be complete if it includes some arc between i and j .",
"Otherwise the sub-tree is called incomplete .",
"We use seven weight matrices (which we extend to 25 matrices later): S an n n matrix of arc scores, where S [ i, j ] is the score of the ( i, j ) arc.",
"I an n n matrix of incomplete sub-tree scores, where I [ i, j, h ] is the incomplete sub-tree for the span ( i, j ) with head h 0 , 1 .",
"If h = 0 , then i is the root of the sub-tree, and if h = 1 , then j is the root.",
"C is defined in the same way as I but for complete sub-trees.",
"I r [ i, j, h ] (resp. C r ) stores the current row-maximum value for I [ i, j, h ] across the span combinations ( i, k ) , ( k + 1 , j ) for k i > ( k + 1) j (resp.",
"( i, k ) , ( k, j ) for k i > k j ).",
"These are the cases where the span ( i, k ) is the largest of the two sub-spans ( i, j ) .",
"These table values are adjusted while the algorithm visits cells ( i, k ) .",
"I c [ i, j, h ] (resp. C c ) stores the current column-maximum value for I [ i, j, h ] across the span combinations ( i, k ) , ( k + 1 , j ) for k i k + 1 j (resp.",
"( i, k ) , ( k, j ) for k i > k j ).",
"These are the cases where the span ( k + 1 , j ) (resp.",
"( k, j ) ) is larger or equal to the other sub-span of the partitioned span ( i, j ) .",
"These table values are adjusted while the algorithm visits cells ( k + 1 , j ) (resp.",
"( k, j ) ).",
"The pseudocode for this algorithm, which we refer to as streaming-max-eisner is presented in Algorithm 1. The main difference with the original version is that the internal loop partitioning of a span is separated in Algorithm 1 over several previous iterations of the loop, so that once the algorithm visits cell ( i, j ) , all that needs to be computed is the maximum of the two rowand column-maximum values, from I r and I c , or from C r and C c .",
"It is straightforward to show the correctness of this algorithm, which we state as Theorem 2. We omit the proof due to space constraints.",
"The algorithm can also be easily adapted for backtracking.",
"Theorem 2. Algorithm 1 returns the weight of T .",
"We make a final adjustment to the algorithm before stating the simulation construction.",
"For the simulation, we only have RNN operations at our disposal: linear combinations and a ReLU activation function, but no explicit max operation.",
"In order to use only RNN operations, we replace the explicit max function.",
"Replacing the explicit max function.",
"We note that to find the maximum of the two positive num-bers a and b , we can use the ReLU function.",
"Without loss of generality, suppose that a > b , then ( ReLU ( a b ) + ReLU ( b a ) + a + b ) 2 = ( a b + a + b ) 2 = 2 a 2 = a = max( a, b ) .",
"(1) In fact, since all weights are assumed positive, Equation 1 can be rewritten as max( a, b ) = 1 2( ReLU ( a b ) + ReLU ( b a ) + ReLU ( a ) + ReLU ( b )) .",
"(2) We therefore make a final adjustment to the original Eisner algorithm, over the version Algorithm 1, replacing all max functions using Equation 2. Instead of storing only one value for each matrix I r , I c , C r , C c , I, C , we store four, denoted by the fields a, b, ab, ba corresponding the four values we need to store: ReLU ( a ) , ReLU ( b ) , ReLU ( a b ) , and ReLU ( b a ) respectively.",
"For instance, for the matrix I , we have I a , I b , I ab , I ba .",
"Then, for example, line 6 becomes a ReLU (12 ( I r [ i, j, 0] .a + I r [ i, j, 0] .b + I r [ i, j, 0] .ab + I r [ i, j, 0] .ba )) (3) b ReLU (12 ( I c [ i, j, 0] .a + I c [ i, j, 0] .b + I c [ i, j, 0] .ab + I c [ i, j, 0] .ba )) (4) I [ i, j, 0]",
".a ReLU ( a ) I [ i, j, 0]",
".b ReLU ( b ) I [ i, j, 0]",
".ab ReLU ( a b ) I [ i, j, 0]",
".ba ReLU ( b a ) where Equations 3 and 4 are wrapped in an extra ReLU operation which yields no difference to the parameter, but which will be convenient for our simulation in Section 5. Lines 11-14 and 16-19 are adapted in the same way.",
"We provide the adaption of line 11 to make this precise: a ReLU (12 ( I [ i, p, 1] .a + I [ i, p, 1] .b + I [ i, p, 1] .ab + I [ i, p, 1] .ba )) b ReLU (12 ( C [ i, j, 0] .a + C [ i, j, 0] .b + C [ i, j, 0] .ab + C [ i, j, 0] .ba + C [ j + 1 , p, 1] .a + C [ j + 1 , p, 1] .b + C [ j + 1 , p, 1] .ab + C [ j + 1 , p, 1] .ba + S [ p, i ])) I r [ i, p, 1]",
".a ReLU ( a ) I r [ i, p, 1]",
".b ReLU ( b ) I r [ i, p, 1]",
".ab ReLU ( a b ) I r [ i, p, 1]",
".ba ReLU ( b a ) .",
"Simulating Algorithm 1. The projective dependency parsing architecture M (cid:48) to be simulated first sends word embeddings x i , i [ n ] through a forward (and backward) LSTM with output word representations o (cid:48) i (and o (cid:48) i ) of dimension d .",
"The concatenated result [ o (cid:48) i ; o (cid:48) i ] is further specialised through two unrelated nonlinear dense layers: one for dependents and one for heads.",
"Then all resulting pairs (dependent,head) of word representations are sent through a scoring function to generate a score matrix as input to projective MST decoding (Kiperwasser and Goldberg, 2016).",
"The architecture M to simulate M (cid:48) consists of two components, each being a recurrent layer: a BiLSTM (for contextual word representations and word specialisations) and an RNN (for scoring and to simulate Algorithm 1).",
"M starts by feeding word embeddings x i into its first component, the BiLSTM.",
"In the forward direction, at the t th time step, the contextual representation o (cid:48) t is generated, o (cid:48) t 1 is specialised to o ht 1 (head) and o dt 1 (dependent), and the previously specialised word representations in o t (i.e., corresponding to o (cid:48) 1 , . . . , o (cid:48) t 2 ) are copied over.",
"We add a single extra ( n + 1) th time step to each direction, so M can finish specialising con-textualised word representations within this first component.",
"Similarly for the backward direction.",
"There is one single input to M 's second component, an RNN, which also works in n + 1 time steps.",
"We refer to the inputs for this component as z 1 , . . . , z ( n +1) , where z 2 . . . z ( n +1) are all dummy inputs.",
"z 1 is the concatenation of the final output vectors from each direction of M 's BiLSTM.",
"In the first time step of this component, M computes the score matrix and stores it in the hidden state h 1 .",
"The hidden state has a dimension large enough to house the 25 tables ( O ( n 2 ) space) required by Algorithm 1 for subtree score bookkeeping and computing the maximum of two values using linear combinations and a ReLU .",
"The outer loop (the span for-loop with variable t ) of the algorithm corresponds to each time-step t of the RNN.",
"For the first internal for-loop (the diagonal for-loop with variable i ), we note that, in lines 6-9, no cells ( i, i + t ) whose values are being computed require information from each other at this time-step t .",
"The streaming-row and streaming-column for loops (lines 11-14, 16-19) on the other hand sometimes requires maximal values ( i, i + t ) from lines 6-9 to be computed.",
"This problem is simply solved by replacing the corresponding expressions appearing as left-hand sides in lines 6-9 by the righthand sides.",
"The output h n +1 contains the desired maximum value.",
"Recent state-of-the art neural graph-based parsers comprising, among other components, a short stack of BiLSTMs, seem to obviate any explicit structural learning or prediction.",
"In this paper, under the assumption of ETH, we showed that this is not due to any possible indirect discovery of a faster algorithm for finding a projective maximum spanning tree and extended the result to projective maximimum spanning DAGs and digraphs.",
"We further showed how these architectures allow for simulating decoding, implying that they are indeed carrying out implicit structured learning and prediction.",
"We gratefully acknowledge the careful remarks the anonymous NAACL reviewers."
] | [
"abstain",
"objective",
"result",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"result",
"objective",
"objective",
"result",
"method",
"method",
"objective",
"result",
"objective",
"objective",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"abstain",
"other",
"method",
"other",
"abstain",
"other",
"other",
"abstain",
"method",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"other",
"method",
"other",
"other",
"method",
"other",
"other",
"other",
"abstain",
"method",
"other",
"method",
"method",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"method",
"other",
"method",
"other",
"method",
"other",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"method",
"abstain",
"objective",
"method",
"abstain",
"other",
"other",
"other",
"method",
"method",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"other"
] |
[
"Human languages are full of metaphorical expressions.",
"Metaphors help people understand the world by connecting new concepts and domains to more familiar ones.",
"Large pretrained language models (PLMs) are therefore assumed to encode metaphorical knowledge useful for NLP systems.",
"In this paper, we investigate this hypothesis for PLMs, by probing metaphoricity information in their encodings, and by measuring the cross-lingual and cross-dataset generalization of this information.",
"We present studies in multiple metaphor detection datasets and in four languages (i.e., English, Spanish, Russian, and Farsi).",
"Our extensive experiments suggest that contextual representations in PLMs do encode metaphorical knowledge, and mostly in their middle layers.",
"The knowledge is transferable between languages and datasets, especially when the annotation is consistent across training and testing sets.",
"Our findings give helpful insights for both cognitive and NLP scientists.",
"Pre-trained language models (PLMs) (Peters et al., 2018; Devlin et al., 2019), are now used in almost all NLP applications, e.g., machine translation (Li et al., 2021), question answering (Zhang et al., 2020), dialogue systems (Ni et al., 2021), and sentiment analysis (Minaee et al., 2020).",
"They have sometimes been referred to as foundation models (Bommasani et al., 2021) due to their significant impact on research and industry.",
"Metaphors are important aspects of human languages.",
"In conceptual metaphor theory (CMT) (Lakoff and Johnson, 2008), metaphor is defined as a cognitive phenomenon associating two different concepts or domains.",
"This phenomenon is built in cognition and expressed in language.",
"The creativity and problem solving (i.e., generalization to new",
"problems) depend on the analogies and metaphors a cognitive system, like our brain, relies on.",
"Modeling metaphors is therefore essential in building human-like computational systems that can relate emerging concepts to the more familiar ones.",
"So far, there has been no comprehensive analysis of whether and how PLMs represent metaphorical information.",
"We intuitively assume that PLMs must encode some information about metaphors due to their great performance in metaphor detection and other language processing tasks.",
"Confirming that experimentally is a question that we address here.",
"Specifically, we aim to know whether generalizable metaphorical knowledge is encoded in PLM representations or not.",
"The outline of our work is presented in Figure 1. We first do probing experiments to answer questions such as:",
"(i) with which accuracies and extractabilities do different PLMs encode metaphorical knowledge?",
"(ii) how deep is the metaphorical knowledge encoded in PLM multi-layer representations?",
"We take two probing methods, edge probing (Tenney et al., 2019b) and minimum description length (Voita and Titov, 2020), and apply them to four metaphor detection datasets, namely LCC (Mohler et al., 2016), TroFi (Birke and Sarkar, 2006), VUA POS, and VUA Verbs (Steen, 2010).",
"To better estimate the generalization of metaphorical knowledge in PLMs, we design two setups in which testing comes from a different distribution than training data: cross-lingual and cross-dataset metaphor detection.",
"Each setup can reveal important information on whether or not the metaphorical knowledge is encoded consistently in PLMs.",
"Four languages (English, Farsi, Russian and Spanish) and four datasets (LCC, TroFi, VUA POS, and VUA Verbs) are considered in these generalization experiments.",
"In summary, this paper makes the following contributions: For the first time, and through careful probing analysis, we confirm that PLMs do encode metaphorical knowledge.",
"We show that metaphorical knowledge is encoded better in the middle layers of PLMs.",
"We evaluate the generalization of metaphorical knowledge in PLMs across four languages and four dataset sources, and find out that there is considerable transferability for the pairs with consistent data annotation even if they are in different languages.",
"Related Work: Metaphor detection using PLMs.",
"The metaphor detection task (Mason, 2004; Birke and Sarkar, 2007; Shutova et al., 2013) is a good fit for analyzing the metaphorical knowledge.",
"Using PLMs for metaphor detection has been common in recent years, setting new state-of-the-art results, indicating implicitly that PLMs represent metaphorical information.",
"Choi et al. (2021) introduce a new architecture that integrates metaphor detection theories with BERT.",
"They use the definitions as well as example usages of words jointly with PLM representations.",
"Similarly, Song et al. (2021) present a new perspective on the metaphor detection task by framing it as relation classification, focusing on verbs.",
"These approaches beat the earlier work of using PLMs (Su et al., 2020; Chen et al., 2020; Gong et al., 2020), RNN-based (Wu et al., 2018; Mao et al., 2019) and feature-based approaches (Turney et al., 2011; Shutova et al., 2016).",
"Note that our goal is not to compete with these models, but to probe and analyze the relevant knowledge in PLMs.",
"Tsvetkov et al. (2014) present cross-lingual metaphor detection models using linguistic features and word embeddings.",
"Bilingual dictionaries are used to map between the different languages.",
"Their datasets are quite small (1000 training and 200 testing examples), making them unsuitable for a robust evaluation.",
"However, this paper still remains as the only cross-lingual evaluation of metaphor detection, to the best of our knowledge.",
"Here, using multilingual PLMs, we perform zero-shot cross-lingual transfer for metaphor detection.",
"Our goal is to test if PLMs represent metaphorical knowledge transferable across languages.",
"Probing methods in NLP.",
"Probing is an analytical tool used for assessing linguistic knowledge in language representations.",
"In probing, the information richness of the representations is inspected by the quality of a supervised model in predicting linguistic properties based only on the representations (Kohn, 2015; Gupta et al., 2015; Yaghoobzadeh and Schutze, 2016; Conneau et al., 2018; Tenney et al., 2019b,a; Yaghoobzadeh et al., 2019; Hewitt and Manning, 2019; Zhao et al., 2020; Belinkov, 2022).",
"Here, we apply probing to perform our study on whether metaphorical knowledge is present in PLM representations, and whether that is generalizable across languages and datasets.",
"A popular probing method introduced by Tenney et al. (2019b) is edge probing (Figure 2).",
"They propose a suite of span-level tasks, including POS tagging and coreference resolution.",
"Despite the widespread use of edge probing and other conventional probes, the question of whether the probing classifier is learning the task itself rather than identifying the linguistic knowledge raises concerns.",
"An information-theoretic view can solve this issue (Voita and Titov, 2020) by reformulating probing as a data transmission problem.",
"They consider the effort needed to extract linguistic knowledge in addition to the final quality of the probe, showing that this approach is more informative and robust than normal probing methods.",
"We employ both edge and MDL probing in this work.",
"Probing multilingual PLMs.",
"The application of probing methods in NLP is extended to multilingual PLMs as well (Pires et al., 2019; Eichler et al., 2019; Ravishankar et al., 2019a,b; Choenni and Shutova, 2020).",
"Choenni and Shutova (2020) introduce probing tasks for typological features of multiple languages in multilingual PLMs.",
"Ravishankar et al. (2019a,b) extend the probing tasks of Conneau et al. (2018) to a few other languages.",
"Pires et al. (2019) study the generalization of multilingual-BERT across languages when performing cross-lingual downstream tasks.",
"Here, as part of our study, we probe the generalization of metaphorical knowledge in XLM-R (Conneau et al., 2020), a notable multilingual PLM.",
"There has been no earlier work on studying or evaluating out-of-distribution generalization in metaphor detection.",
"This generalization refers to scenarios where testing and training sets come from different distributions (Duchi and Namkoong, 2018; Hendrycks et al., 2021, 2020).",
"Here, we have scenarios where testing and training data are in different languages or domains / datasets.",
"These are challenging evaluation scenarios for the generalization of encoded information (metaphoricity in our case).",
"Metaphors are used frequently in our everyday language to convey our thoughts more clearly.",
"There are related theories in linguistics and cognitive science.",
"Following linguistic theories, metaphoricity is mostly annotated using metaphor identification procedure (MIP).",
"MIP identifies a word in a given context as a metaphor if it has a basic or literal meaning that contrasts with its context words.",
"Based on conceptual metaphor theory (CMT) (Lakoff and Johnson, 2008), one target domain (e.g., ARGUMENT) is explained using a source domain (e.g., WAR).",
"The source domain is usually more concrete or physical, while the target is more abstract.",
"Relating these two theories, metaphors are expressed in language connecting two contrasting domains.",
"For example, in We won the argument, the domain of ARGUMENT is linked to the domain of WAR by using the word won.",
"The word won is a metaphor here since its primary domain contrasts with its contextual domain.",
"The same word won in a sentence like The Allies won the war refers to its literal meaning and therefore is not a metaphor.",
"The task of metaphor detection is defined as performing this classification of literal vs. metaphorical usage.",
"Accordingly, when designing a metaphor detection system, to figure out if a token is a metaphor in a particular context, we assume following a process like:",
"(i) finding if the token has multiple meanings in different domains, including a more basic, concrete, or body-related meaning.",
"(Figure 2: Probing architecture for metaphors employed in edge probing and MDL probing.)",
"For example, fight, win and mother satisfy this condition.",
"(ii) finding if the source domain of the token contrasts with the target domain.",
"Here the contrast is important and finding the exact domains might not be necessary.",
"The source domain, in which its literal / basic meaning resides, is a non-contextual attribute, while the target domain is mainly found using the contextual clues (WAR and ARGUMENT for won in the above example).",
"Here, we use the metaphor detection datasets annotated based on these theories and analyze the PLM representations to see if they encode metaphorical knowledge and if the encoding is generalizable.",
"To do so, we first probe PLMs for their metaphorical information, generally and also across layers.",
"This gives us intuition on how well metaphoricity is encoded and how local or contextual that is.",
"Then, we test if the knowledge of metaphor detection can be transferred across languages and if multilingual PLMs capture that.",
"Finally, the generalization of metaphorical knowledge across datasets is examined to see if the theories and annotations followed by different datasets are consistent, and if PLMs learn generalizable knowledge rather than dataset artifacts.",
"Here, we aim to answer general questions about metaphors in PLMs: do PLMs encode metaphorical information and, if so, how it is distributed in",
"their layers.",
"We do not attempt to achieve the best metaphor detection results but to analyze layers of PLMs to test if they contain the necessary information to perform this task.",
"In trying to answer this question, we apply probing methods, discussed as follows, to focus on the representation itself by freezing the PLM parameters and training classifiers on top.",
"We hypothesize that metaphorical information does exist in PLM layers and more in the middle layers.",
"As we discussed earlier, metaphor detection depends on contrast prediction between source and target domains of a token.",
"We assume that this prediction is made mainly based on the initial layers of PLM representations of either the token itself or its context or both.",
"In higher layers of PLMs, the representations are dominated by contextual information, making it hard to retrieve the source domain, and so, reasoning about the contrast of the source and target domains becomes difficult.",
"Methods We employ edge probing (Tenney et al., 2019b) and MDL (Voita and Titov, 2020).",
"Edge probing consists of a classifier in which word representations obtained from PLMs are fed as inputs after projecting to 256-dimensional vectors first.",
"The quality of the classifier illustrates how well the representations encode a specific linguistic knowledge.",
"This method is designed for span-level tasks, i.e., the classifier can only access the representations of a limited part of the input sentence specified in the dataset.",
"Edge probing has two pooler sections for making fixed-sized vectors; one pools representations across the words in the span and the other pools representations across the layers.",
"The Minimum Description Length (MDL) probing is based on information theory and combines the quality of the classifier and the amount of effort needed to achieve this quality.",
"Voita and Titov (2020) propose two methods for computing MDL: variational coding and online coding. The former computes the complexity of the classifier with a Bayesian model.",
"In the latter, the classifier is trained gradually on different portions of the dataset, and the code length will be the sum of the cross-entropies, each for a data portion.",
"Voita and Titov (2020) show that the two methods' results are consistent with each other.",
"Accordingly, we opted for the online coding method since it is more straightforward in implementation.",
"Since the code length is related to the size of the dataset N, we report the compression, which is equal to 1 for a random classifier and larger for better models, and is defined as: compression = (N * log2(K)) / MDL. See extra details in Voita and Titov (2020).",
"To see if PLMs encode generalizable metaphorical knowledge, we evaluate them in settings where testing and training data are in different distributions.",
"We explore transferability analysis across languages and datasets as two sources of distribution.",
"We explain each in the following sections.",
"Multilingual encoders project the representations in multiple languages into a shared space so that semantically similar words and sentences across languages end up close to each other.",
"If we use a multilingual PLM model, and our classifier shows that representations in language S are informative about metaphoricity, what happens if we apply this classifier to the representations in language T ?",
"We hypothesize that if the representation is rich in both languages, the annotation of metaphor is consistent, and the concept of metaphor is transferable across languages, then the classifier would be able to predict metaphoricity in language T from what it learns in S .",
"When testing cross-lingual generalization, the linguistic and cultural differences of metaphoricity are important as well.",
"We assume that metaphors are conceptualized in a similar process across languages, and metaphor detection is defined consistently.",
"The lexicalization is, of course, different, but that is something that multilingual PLMs are supposed to handle to some extent.",
"When training and testing on the same distribution, any learning model often uses heuristics and annotation biases.",
"The consequence is the recurring overestimation of the capabilities of PLMs in doing hard tasks.",
"This might be the case for our probing experiments as well.",
"Therefore, another generalization dimension we consider is cross-dataset transfer, i.e., training on dataset S and testing on dataset T .",
"S and T could be annotated by different people with possibly different goals in mind, and their raw sentences could come from different domains.",
"However, they must be annotated for the same task of metaphor detection.",
"The datasets also differ in their covered POS types (e.g., TroFi is only verbs, but LCC is not).",
"Further, the annotation process is different as each follows its own guidelines.",
"However, the essential task of metaphor detection, i.e., distinguishing metaphor and literal usages, is the same for all.",
"Therefore, we expect some transferability across datasets but with differences aligned with their mismatches.",
"Datasets We use four metaphor detection datasets in our study.",
"The annotations of LCC (Mohler et al., 2016) are done mostly on web crawled data as well as news corpora.",
"It provides metaphoricity scores including 0 as no, 2 as conventional, and 3 as clear metaphor.",
"We use the examples with score 0 as literal, and others as metaphor.",
"TroFi (Birke and Sarkar, 2006) consists of metaphoric and literal usages of 51 English verbs from WSJ.",
"VUA (Steen, 2010) corpus consists of words in the academic, fiction, and news subdomains of the British National Corpus (BNC).",
"The authors published two versions: VUA POS and VUA Verbs.",
"LCC contains annotations in four languages: English, Russian, Spanish, and Farsi.",
"The other three datasets, TroFi, VUA Verbs and VUA POS, are in English only.",
"We have label-balanced all the datasets to get a more straightforward interpretation of results (the accuracy of a fair-coin random baseline is 50% in all cases) and have split the datasets to train / dev / test sets with ratios of 0.7 / 0.1 / 0.2.",
"The statistics of the datasets are shown in Table 2, and example sentences with the corresponding annotations can be seen in Table 1. Setup: In implementing the edge probe, we use batch size = 32 and learning rate = 5e-5 and train for five epochs in all experiments.",
"For the MDL probe, the same structure of edge probing is employed.",
"We apply a logarithm to the base two instead of the natural logarithm in cross-entropy loss to have all the obtained code lengths in bits (see extra details in Voita and Titov 2020).",
"Our experiments are done using the GPUs provided by Google Colab free and pro.",
"Here, BERT (Devlin et al., 2019), RoBERTa (Liu et al., 2019), and ELECTRA (Clark et al., 2020) represent our PLMs.",
"Due to our resource limitations, we conduct all experiments on the base version of the models (12 layers, 768 hidden size, 110M parameters) implemented in HuggingFace's Transformers (Wolf et al., 2020).",
"We employ edge probing for evaluating overall metaphorical",
"knowledge in our selected PLMs, and MDL for the layer-wise comparisons.",
"MDL is shown to be more effective for layer-wise probing (Fayyaz et al., 2021).",
"Table 3 shows the edge probing accuracy and MDL probing compression results for our three PLMs.",
"Accordingly, RoBERTa and ELECTRA are shown to encode metaphorical knowledge better than BERT on both metrics.",
"This is consistent with their better performance on various tasks, acquired by having better pre-training objectives and / or enjoying more extensive pre-training data.",
"The higher probing quality of ELECTRA's representations is also consistent with Fayyaz et al. (2021) results on various linguistic knowledge tasks, including dependency labeling, named entity recognition, semantic role labeling, and coreference resolution.",
"MDL probing compression across layers is demonstrated in Figure 3.",
"We see the numbers increase mostly at the first 3 to 6 layers, depending on the dataset, but decrease afterwards.",
"In other words, metaphorical information is more concentrated in the middle layers, where the representations are relatively contextualized but not as much as higher layers.",
"To put this in perspective, we can consider Tenney et al. (2019a) and Fayyaz et al. (2021), where the best layers for various linguistic knowledge tasks in BERT are between 4 and 9.",
"This shows that metaphor detection in PLM representations can be resolved earlier than some basic linguistic tasks.",
"In 3.1, we elaborated a hypothesis that the process of detecting metaphors is not very deep since what it needs to do is mainly contrast prediction between source and target domains, and the deep layers do not represent the source domain well.",
"Our reported probing results confirm that metaphor detection is not deep in PLM layers.",
"To further evaluate our reasoning, we probe the domain knowledge in PLM representations across layers.",
"We employ LCC's annotation of source and target domains, and run a similar MDL probing on different PLMs but for domain prediction.",
"The obtained results, shown in Figure A.1 in appendix, demonstrate that the source domain information is represented in the initial layers (2-6), confirming that the source domain is dominated by other information in higher layers.",
"On the other hand, target domain information generally increases across layers.",
"Therefore, the middle layers can be the best place for contrasting source and target domains.",
"For RoBERTa, and in the case of TroFi and VUA Verbs, we see exceptional increases in the last layers.",
"As our PLMs, we use XLM-R (Conneau et al., 2020) for cross-lingual and BERT for cross-dataset experiments.",
"To compare the cross-lingual and cross-dataset transferability, in 4.3.3, we employ the same setup, including using XLM-R as PLM for both.",
"The results in 4.3.1 and 4.3.2 are not comparable.",
"We apply the same edge probing architecture as in the probing experiments.",
"We sometimes refer to both language and dataset as distribution .",
"We run two experiments for each case of a source distribution S and a target distribution T : one with the PLM and one with a randomized version of the PLM where weights are set to random values.",
"Randomly initialized Transformers with the same architecture as PLMs are common baselines in the community.",
"The difference between the two gives evidence about the helpfulness of the encoded knowledge gained during pre-training in doing the task.",
"When S = T, this effect is measured for in-distribution and when S ≠ T, for out-of-distribution generalization.",
"Comparing results of in-distribution (e.g., training and testing on English data) and out-of-distribution (e.g., training on Spanish and testing on English) setups demonstrates how generalizable the metaphorical knowledge in PLM is and how consistent the annotations are.",
"The four LCC datasets corresponding to four languages are used here.",
"We subsample from the datasets to have the same number of examples in the training sets, i.e., 12,238, which is the size of the Russian training set.",
"The results are shown in Table 4. The random baseline is acquired using a randomly initialized XLM-R.",
"The results show that the metaphorical knowledge learned during the pre-training is transferable across languages.",
"This considerable transferability can be attributed to the ability of XLM-R to build language-universal representations useful for metaphoricity transfer.",
"Moreover, the innate similarities of metaphors in distinct languages can contribute to higher transferability, despite the lexicalization differences.",
"E.g., analogizing a concept to a tool (en) occurs the same way in other languages like instrumento (es), (fa) and (ru).",
"Finally, the constraints of the dataset producers in, for instance, keeping the languages in relatively similar target and source domains, could be influential.",
"(See Figures A.2 and A.3).",
"An interesting observation is that training on Russian shows the best out-of-distribution results when testing on other languages.",
"We analyze this further.",
"First, we observe that LCC(ru) has almost the closest target domain distribution to all other languages (See Table A.2 in Appendix).",
"Second, the reported results can also be influenced by the amount of data from each of these languages in the pre-training data of XLM-R.",
"Russian has the second largest size after English (Conneau et al., 2020).",
"Finally, for English, the higher-resource language with closer target domain distribution, we find that there is a considerable number of examples in LCC(en) related to GUNS and CONTROL OF GUNS.",
"These domains are not covered in other LCC datasets (See Figure A.3 in Appendix).",
"Similar to the cross-lingual evaluations, here we have four datasets used as sources and targets.",
"We set the train size of each to the minimum of all, i.e., 3,838.",
"For each pair, we run two experiments: one with randomized and one with pre-trained BERT as our PLM.",
"Results are shown in Table 5 (cross-dataset edge probing accuracy on BERT, reported in pairs: the pre-trained model and, in parentheses, the randomly initialized model). With training datasets ordered LCC(en), TroFi, VUA POS, VUA Verbs, testing on LCC(en) gives 84.26 (54.93), 62.04 (50.05), 70.35 (50.69), 70.37 (50.14); on TroFi, 59.49 (50.58), 68.73 (64.96), 55.38 (49.45), 59.67 (53.68); on VUA POS, 62.23 (51.47), 55.29 (50.47), 76.86 (56.01), 71.6 (53.47); and on VUA Verbs, 60.20 (50.88), 54.55 (51.73), 72.6 (56.01), 75.21 (60.03).",
"PLM is much better than random in all out-of-distribution cases, suggesting the presence of generalizable metaphorical information.",
"As expected, VUA Verbs and POS achieve the best results when mutually tested, because, apart from the POS, they have the same distribution.",
"VUA datasets and LCC(en) show good transferability, but the gap with in-distribution results is still considerable ( > 13% absolute).",
"VUA Verbs is the best source for TroFi, likely because of the POS match between them.",
"Overall, apart from the two VUA datasets, the gap between in- and out-of-distribution performance is large.",
"The random PLM accuracies range from about 54%-64% and 50%-56% for in- and out-of-distribution cases, respectively.",
"We hypothesize that this drop in the out-of-distribution is related to the annotation biases, which a randomly initialized classifier can leverage better when testing and training sets are from the same distribution.",
"When the sets have different distributions, the biases do not transfer well.",
"Here, we compare cross-lingual and cross-dataset transfer using XLM-R and evaluating different training sources on the LCC(en) test set.",
"We make the size of each train set to be the same (3,838).",
"The results are shown in Table 6, where the first and second rows belong to cross-lingual and cross-dataset, respectively.",
"To contextualize our results, we include the in-distribution result of training on LCC(en), i.e., 82.31%.",
"Clearly, there is a substantial gap between cross-lingual and cross-dataset accuracies.",
"The annotation guideline is consistent across the LCC language datasets, while for the cross-dataset settings, we have datasets that differ in many aspects, including annotation procedure and definitions, covered parts of speech (e.g., TroFi and VUA Verbs vs. LCC and VUA POS), and sentence lengths (LCC: 25.9, VUA: 19.4, TroFi: 28.3).",
"Metaphors are important in human cognition, and if we seek to build cognitively inspired or plausible language understanding systems, we need to work more on their best integration in the future.",
"Therefore, any work in this regard is impactful.",
"Our probing experiments showed that PLMs do in fact represent the information necessary to do the task of metaphor detection.",
"We assume this information is related to metaphorical knowledge learned during pre-training.",
"Further, the layer-wise analysis confirmed our hypothesis that middle layers are more informative.",
"Even though our probing experiments did show that metaphorical knowledge is present in PLMs, it was still unclear if this knowledge is generalizable beyond the training data.",
"So, to probe the probe and evaluate generalization, we ran cross-lingual and cross-dataset experiments.",
"Our results showed that the transferability across languages works quite well for the four languages in LCC annotation.",
"However, when the definitions and annotations were inconsistent across different datasets, the cross-dataset results were not satisfactory.",
"Overall, we conclude that metaphorical knowledge does exist in PLM representations and in middle layers mainly, and it is transferable if the annotation is consistent across training and testing data.",
"In the future, we will further explore the cross-lingual transfer of metaphors and the impact of cross-cultural similarities.",
"The application of metaphorical knowledge to text generation is also an important direction that we will address.",
"We would like to thank the anonymous reviewers and action editors who helped us greatly in improving our work with their comments."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"objective",
"objective",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"objective",
"result",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"result",
"method",
"method",
"result",
"method",
"result",
"abstain",
"abstain",
"objective",
"abstain",
"other"
] |
[
"Pretrained multilingual models are able to perform cross-lingual transfer in a zero-shot setting, even for languages unseen during pretraining.",
"However, prior work evaluating performance on unseen languages has largely been limited to low-level, syntactic tasks, and it remains unclear if zero-shot learning of high-level, semantic tasks is possible for unseen languages.",
"To explore this question, we present AmericasNLI, an extension of XNLI (Conneau et al., 2018) to 10 Indigenous languages of the Americas.",
"We conduct experiments with XLM-R, testing multiple zero-shot and translation-based approaches.",
"Additionally, we explore model adaptation via continued pretraining and provide an analysis of the dataset by considering hypothesis-only models.",
"We find that XLM-R's zero-shot performance is poor for all 10 languages, with an average performance of 38.48%.",
"Continued pretraining offers improvements, with an average accuracy of 43.85%.",
"Surprisingly, training on poorly translated data by far outperforms all other methods with an accuracy of 49.12%.",
"Pretrained multilingual models such as XLM (Lample and Conneau, 2019), multilingual BERT (mBERT; Devlin et al., 2019), and XLM-R (Conneau et al., 2020) achieve strong cross-lingual transfer results for many languages and natural language processing (NLP) tasks.",
"However, there exists a discrepancy in terms of zero-shot performance between languages present in the pretraining data and those that are not: performance is generally highest for well-represented languages and decreases with less representation.",
"Yet, even for unseen languages, performance is generally above chance, and model adaptation approaches have been shown to yield (Table 1 lists the languages in AmericasNLI, along with their ISO codes, language families, and dev/test sizes: Aymara, aym, Aymaran, 743/750; Asháninka, cni, Arawak, 658/750; Bribri, bzd, Chibchan, 743/750; Guaraní, gn, Tupí-Guaraní, 743/750; Nahuatl, nah, Uto-Aztecan, 376/738; Otomí, oto, Oto-Manguean, 222/748; Quechua, quy, Quechuan, 743/750; Rarámuri, tar, Uto-Aztecan, 743/750; Shipibo-Konibo, shp, Panoan, 743/750; Wixárika, hch, Uto-Aztecan, 743/750)",
"further improvements (Muller et al., 2020; Pfeiffer et al., 2020a,b; Wang et al., 2020).",
"Importantly, however, there are currently no datasets for high-level, semantic tasks which focus solely on low-resource languages.",
"As these languages are most likely to be unseen to commonly used pretrained models, practically all work evaluating unseen language performance and language adaptation methods has been limited to low-level, syntactic tasks such as part-of-speech tagging, dependency parsing, and named-entity recognition (Muller et al., 2020; Wang et al., 2020).",
"This largely limits our ability to draw more general conclusions with regards to the zero-shot learning abilities of pretrained multilingual models for unseen languages.",
"In this work, we introduce AmericasNLI, an extension of XNLI (Conneau et al., 2018), a natural language inference (NLI; cf. Section 2.3) dataset covering 15 high-resource languages, to 10 Indigenous languages spoken in the Americas: Asháninka, Aymara, Bribri, Guaraní, Nahuatl, Otomí, Quechua, Rarámuri, Shipibo-Konibo, and Wixarika.",
"All of them are truly low-resource languages: they have little to no digitally available labeled or unlabeled data, and they are not typically studied by the mainstream NLP community.",
"The goal of this work is two-fold: First, we hope to increase the visibility of these languages by providing a portion of the resources necessary for NLP research.",
"Second, we aim to allow for a more comprehensive study of multilingual model performance on unseen languages, where improvements will help extend the reach of NLP techniques to a larger set of languages.",
"We are specifically interested in the following research questions: (1) Do pretrained multilingual models still perform above random chance for a high-level, semantic task in an unseen language?",
"(2) Do methods aimed at adapting models to unseen languages previously exclusively evaluated on low-level, syntactic tasks also increase performance on NLI?",
"(3) Are translation-based approaches effective for truly low-resource languages, where translation quality is typically very poor?",
"We experiment with XLM-R, both with and without model adaptation via continued pretraining on monolingual corpora in the target language.",
"Our results show that the performance of XLM-R out-of-the-box is moderately above chance, and model adaptation leads to improvements of up to 5.86 percentage points.",
"Training on machine-translated training data, however, results in an even larger performance gain of 11.13 percentage points over the corresponding XLM-R model without adaptation.",
"We further perform an analysis via experiments with hypothesis-only models, to examine potential artifacts which may have been inherited from XNLI and find that performance is above chance for most models, but still below that for using the full example.",
"AmericasNLI is publicly available, and we hope that it will serve as a benchmark for measuring the zero-shot natural language understanding abilities of multilingual models for unseen languages.",
"Additionally, we hope that our dataset will motivate the development of novel pretraining and model adaptation techniques which are suitable for truly low-resource languages.",
"Prior to the widespread use of pretrained transformer models, cross-lingual transfer was mainly achieved through word embeddings (Mikolov et al., 2013; Pennington et al., 2014; Bojanowski et al., 2017), either by aligning monolingual embeddings into the same embedding space (Lample et al., 2018b,a; Grave et al., 2018) or by training multilingual embeddings (Ammar et al., 2016; Artetxe and Schwenk, 2019).",
"Pretrained multilingual models represent the extension of multilingual embeddings to pretrained transformer models.",
"These models follow the standard pretraining-finetuning paradigm: they are first trained on unlabeled monolingual corpora from various languages (the pretraining languages) and later finetuned on target-task data in a usually high-resource source language.",
"Having been exposed to a variety of languages through this training setup, cross-lingual transfer results for these models are competitive with the state of the art for many languages and tasks.",
"Commonly used models are mBERT (Devlin et al., 2019), which is pretrained on the Wikipedias of 104 languages with masked language modeling (MLM) and next sentence prediction (NSP), and XLM, which is trained on 15 languages and introduces the translation language modeling objective, which builds on MLM but uses pairs of parallel sentences.",
"XLM-R has improved performance over XLM, and trains on data from 100 different languages with only the MLM objective.",
"Common to all models is a large shared subword vocabulary created using either BPE (Sennrich et al., 2016) or SentencePiece (Kudo and Richardson, 2018) tokenization.",
"Just as in the monolingual setting, where benchmarks such as GLUE (Wang et al., 2018) and SuperGLUE (Wang et al., 2019) provide a look into the performance of models across various tasks, multilingual benchmarks (Hu et al., 2020; Liang et al., 2020) cover a wide variety of tasks involving sentence structure, classification, retrieval, and question answering.",
"Additional work has been done examining what mechanisms allow multilingual models to transfer across languages (Pires et al., 2019; Wu and Dredze, 2019).",
"Wu and Dredze (2020) examine transfer performance dependent on a language's representation in the pretraining data.",
"For languages with low representation, multiple methods have been proposed to improve performance, including extending the vocabulary, transliterating the target text, and continuing pretraining before finetuning (Lauscher et al., 2020; Chau et al., 2020; Muller et al., 2020; Pfeiffer et al., 2020a,b; Wang et al., 2020).",
"In this work, we focus on continued pretraining to analyze the performance of model adaptation for a high-level, semantic task.",
"Given two sentences, the premise and the hypothesis , the task of NLI consists of determining whether the hypothesis logically entails, contradicts, or is neutral to the premise.",
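The task as defined above amounts to three-way sentence-pair classification. A toy sketch (the integer label mapping is an assumed convention, and the sentence pairs are invented, not taken from XNLI):

```python
# Toy illustration of the three-way NLI decision: given a premise and a
# hypothesis, assign one of three labels. The 0/1/2 mapping and the
# example sentences are assumptions for illustration only.
LABELS = {0: "entailment", 1: "neutral", 2: "contradiction"}

examples = [
    ("A man is playing a guitar.", "A person is making music.", 0),
    ("A man is playing a guitar.", "The man is on a stage.", 1),
    ("A man is playing a guitar.", "Nobody is playing an instrument.", 2),
]

for premise, hypothesis, label in examples:
    print(f"P: {premise}  H: {hypothesis}  -> {LABELS[label]}")
```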
"The most widely used datasets for NLI in English are SNLI (Bowman et al., 2015) and MNLI (Williams et al., 2018).",
"XNLI (Conneau et al., 2018) is the multilingual expansion of MNLI to 15 languages, providing manually translated evaluation sets and machine-translated training sets.",
"While datasets for NLI or the similar task of recognizing textual entailment exist for other languages (Bos et al., 2009; Alabbas, 2013; Eichler et al., 2014; Amirkhani et al., 2020), their lack of similarity prevents a generalized study of cross-lingual zero-shot performance.",
"This is in contrast to XNLI, where examples for all 15 languages are parallel.",
"To preserve this property of XNLI, when creating AmericasNLI, we choose to translate Spanish XNLI as opposed to creating examples directly in the target language.",
"However, NLI datasets are not without issue: Gururangan et al. (2018) show that artifacts from the creation of MNLI allow for models to classify examples depending on only the hypothesis, showing that models may not be reasoning as expected.",
"Motivated by this, we provide further analysis of AmericasNLI in Section 6 by comparing the performance of hypothesis-only models to models trained on full examples.",
"AmericasNLI is the translation of a subset of XNLI (Conneau et al., 2018).",
"As translators between Spanish and the target languages are more frequently available than those for English, we translate from the Spanish version.",
"Additionally, some translators reported that code-switching is often used to describe certain topics, and, while many words without an exact equivalence in the target language are worked in through translation or interpretation, others are kept in Spanish.",
"To minimize the amount of Spanish vocabulary in the translated examples, we choose sentences from genres that we judged to be relatively easy to translate into the target languages: face-to-face, letters, and telephone.",
"We choose up to 750 examples from each of the development and test sets, with exact counts for each language in Table 1. We now discuss the languages in AmericasNLI.",
"For additional background on previous NLP research on Indigenous languages of the Americas, we refer the reader to Mager et al. (2018).",
"A summary of this information can be found in Table C.1.",
"Aymara Aymara is a polysynthetic Amerindian language spoken in Bolivia, Chile, and Peru by over two million people (Homola, 2012).",
"Aymara follows an SOV word order and has multiple dialects: Northern Aymara is spoken on the southern Peruvian shore of Lake Titicaca and around La Paz, while Southern Aymara is spoken in the eastern half of the Iquique province in northern Chile, the Bolivian department of Oruro, northern Potosí, and southwest Cochabamba.",
"AmericasNLI examples are translated into the Central Aymara variant, specifically Aymara La Paz.",
"Asháninka Asháninka is an Amazonian language from the Arawak family, spoken by 73,567 people 3 in Central and Eastern Peru, in a geographical region located between the eastern foothills of the Andes and the western fringe of the Amazon basin (Mihas, 2017).",
"Asháninka is an agglutinating and polysynthetic language with a VSO word order.",
"Bribri Bribri is a Chibchan language spoken by 7,000 people in Southern Costa Rica (INEC, 2011).",
"It has three dialects, and while it is still spoken by children, it is currently a vulnerable language (Moseley, 2010; Sánchez Avendaño, 2013).",
"Bribri is a tonal language with SOV word order.",
"There are several orthographies which use different diacritics for the same phenomena; moreover, even among researchers who use the same orthography, the Unicode encoding of similar diacritics differs between authors.",
"Furthermore, the dialects of Bribri differ in their exact vocabularies, and there are phonological processes, like the deletion of unstressed vowels, which also change the tokens found in texts.",
"3 https://bdpi.cultura.gob.pe/pueblos/ashaninka",
"As Bribri has only been a written language for about 40 years, existing materials have a large degree of idiosyncratic variation.",
"These variations are standardized in AmericasNLI, which is written in the Amubri variant.",
"Guaraní Guaraní is spoken by 6 to 10 million people in South America, and roughly 3 million people use it as their main language, including more than 10 native nations in Paraguay, Brazil, Argentina, and Bolivia, along with Paraguayan, Argentinian, and Brazilian peoples.",
"According to the Paraguayan Census, in 2002 there were around 1.35 million monolingual speakers, a number which has since increased to around 1.5 million (Dos Santos, 2017; Melià, 1992).",
"Although the use of Guaraní as a spoken language is much older, the first written record dates to 1591 (a catechism), followed by the first dictionary in 1639 and linguistic descriptions in 1640.",
"The official grammar of Guaraní was approved in 2018.",
"Guaraní is an agglutinative language, with ample use of prefixes and suffixes.",
"Nahuatl Nahuatl belongs to the Nahuan subdivision of the Uto-Aztecan language family.",
"There are 30 recognized variants of Nahuatl spoken by over 1.5 million speakers across Mexico, where Nahuatl is recognized as an official language (SEGOB, 2020b).",
"Nahuatl is polysynthetic and agglutinative, and many sentences have an SVO word order or, for contrast and focus, a VSO order, and for emphasis, an SOV order (MacSwan, 1998).",
"4 https://www.ine.gov.py/news/25-de-agosto-dia-del-Idioma-Guarani.php",
"The translations in AmericasNLI belong to the Central Nahuatl (Náhuatl de la Huasteca) dialect.",
"As there is a lack of consensus regarding the orthographic standard, the orthography is normalized to a version similar to Classical Nahuatl.",
"Otomí Otomí belongs to the Oto-Pamean language family and has nine linguistic variants with different regional self-denominations.",
"Otomí is a tonal language following an SVO order, and there are around 307,928 speakers spread across seven Mexican states.",
"In the state of Tlaxcala, the yuhmu or uhmu variant is spoken by fewer than 100 speakers, and we use this variant for the Otomí examples in AmericasNLI.",
"Quechua Quechua, or Runasimi , is an Indigenous language family spoken primarily in the Peruvian Andes.",
"It is the most widely spoken pre-Columbian language family of the Americas, with around 8-10 million speakers.",
"Approximately 25% (7.7 million) of Peruvians speak a Quechuan language, and it is the co-official language in many regions of Peru.",
"There are multiple subdivisions of Quechua, and AmericasNLI examples are translated into the standard version of Southern Quechua, Quechua Chanka (also known as Quechua Ayacucho), which is spoken in different regions of Peru and can be understood in different areas of other countries, such as Bolivia or Argentina.",
"In AmericasNLI, the apostrophe and pentavocalism from other regions are not used.",
"Rarámuri Rarámuri belongs to the Taracahitan subgroup of the Uto-Aztecan language family (Goddard, 1996), and is polysynthetic and agglutinative.",
"Rarámuri is an official language of Mexico, spoken mainly in the Sierra Madre Occidental region by a total of 89,503 speakers (SEGOB, 2020c).",
"AmericasNLI examples are translated into the Highlands variant (INALI, 2009), and translation orthography and word boundaries are similar to Caballero (2008).",
"Shipibo-Konibo Shipibo-Konibo is a Panoan language spoken by around 35,000 native speakers in the Amazon region of Peru.",
"Shipibo-Konibo uses an SOV word order (Faust, 1973) and postpositions (Vasquez et al., 2018).",
"The translations in AmericasNLI make use of the official alphabet and standard writing supported by the Ministry of Education in Peru.",
"Wixarika The Wixarika, or Huichol, language, meaning the language of the doctors and healers (Lumholtz, 2011), is a language in the Corachol subgroup of the Uto-Aztecan language family (Campbell, 2000).",
"Wixarika is a national language of Mexico with four variants, spoken by a total of around 47,625 speakers (SEGOB, 2020a).",
"Wixarika is a polysynthetic language and follows an SOV word order.",
"Translations in AmericasNLI are in Northern Wixarika and use an orthography common among native speakers (Mager-Hois, 2017).",
"In this section, we detail the experimental setup we use to evaluate the performance of various approaches on AmericasNLI.",
"Pretrained Model We use XLM-R (Conneau et al., 2020) as the pretrained multilingual model in our experiments.",
"The architecture of XLM-R is based on RoBERTa (Liu et al., 2019), and it is trained using MLM on web-crawled data in 100 languages.",
"It uses a shared vocabulary consisting of 250k subwords, created using SentencePiece (Kudo and Richardson, 2018) tokenization.",
"We use the Base version of XLM-R for our experiments.",
"Adaptation Methods To adapt XLM-R to the various target languages, we continue training with the MLM objective on monolingual text in the target language before finetuning.",
"To keep a fair comparison with other approaches, we only use target data which was also used to train the translation models, which we describe in Section 4.2.",
"However, we note that one benefit of continued pretraining for adaptation is that it does not require parallel text, and could therefore benefit from text which could not be used for a translation-based approach.",
"For continued pretraining, we use a batch size of 32 and a learning rate of 2e-5.",
"We train for a total of 40 epochs.",
"Each adapted model starts from the same version of XLM-R, and is adapted individually to each target language, which leads to a different model for each language.",
"We denote models adapted with continued pretraining as +MLM .",
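The core of the MLM objective used for continued pretraining is a token-masking step. A minimal, dependency-free sketch of that step is below; the 15% selection rate and the 80/10/10 mask/random/keep split follow the standard BERT recipe and are assumptions for illustration, not details stated in this paper:

```python
import random

# Sketch of BERT-style token masking for masked language modeling (MLM).
# Positions with target -100 are ignored by the loss; masked positions
# keep the original token as the prediction target.
def mask_tokens(token_ids, vocab_size, mask_id, rng, mask_prob=0.15):
    inputs, targets = [], []
    for tok in token_ids:
        if rng.random() < mask_prob:
            targets.append(tok)              # model must recover this token
            r = rng.random()
            if r < 0.8:
                inputs.append(mask_id)       # replace with [MASK]
            elif r < 0.9:
                inputs.append(rng.randrange(vocab_size))  # random token
            else:
                inputs.append(tok)           # keep the original token
        else:
            inputs.append(tok)
            targets.append(-100)             # position ignored by the loss
    return inputs, targets

rng = random.Random(0)
inp, tgt = mask_tokens(list(range(20)), vocab_size=1000, mask_id=999, rng=rng)
print(inp)
print(tgt)
```

In practice this masking is applied on the fly by the pretraining framework; the sketch only shows the data transformation the objective relies on.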
"Finetuning To finetune XLM-R, we follow the approach of Devlin et al. (2019) and use an additional linear layer.",
"We train on either the English MNLI data or the machine-translated Spanish data, and we call the final models XLM-R (en) and XLM-R (es), respectively.",
"Following Hu et al. (2020), we use a batch size of 32 and a learning rate of 2e-5.",
"We train for a maximum of 5 epochs, and evaluate performance every 2500 steps on the XNLI development set.",
"We employ early stopping with a patience of 15 evaluation steps and use the best performing checkpoint for the final evaluation.",
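The evaluation-and-patience loop described above can be sketched in a few lines. Each "step" below stands for one evaluation point (every 2500 training steps in the paper's setup), and the dev accuracies are dummy values, not real results:

```python
# Sketch of early stopping with patience: stop after `patience`
# consecutive evaluations without improvement, and keep the best
# checkpoint seen so far.
def train_with_early_stopping(eval_scores, patience=15):
    best_score, best_step, since_best = float("-inf"), -1, 0
    for step, score in enumerate(eval_scores):
        if score > best_score:
            best_score, best_step, since_best = score, step, 0
        else:
            since_best += 1
            if since_best >= patience:
                break                      # patience exhausted
    return best_step, best_score

# Dummy dev accuracies at successive evaluation points:
scores = [0.40, 0.55, 0.62, 0.61, 0.60] + [0.59] * 20
print(train_with_early_stopping(scores, patience=3))  # → (2, 0.62)
```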
"All finetuning is done using the Huggingface Transformers library (Wolf et al., 2020) with up to two Nvidia V100 GPUs.",
"Using the calculator of Lacoste et al. (2019), we estimate total carbon emissions to be 75.6 kg CO2eq.",
"We also experiment with two translation-based approaches, translate-train and translate-test, detailed below along with the translation model used.",
"Translation Models For our translation-based approaches, we train two sets of translation models: one to translate from Spanish into the target language, and one in the opposite direction.",
"We use transformer sequence-to-sequence models (Vaswani et al., 2017) with the hyperparameters proposed by Guzmán et al. (2019).",
"Parallel data used to train the translation models can be found in Table B.1.",
"We employ the same model architecture for both translation directions, and we measure translation quality in terms of BLEU (Papineni et al., 2002) and ChrF (Popović, 2015); cf. Table 3.",
"We use fairseq (Ott et al., 2019) to implement all translation models.",
"Translate-train For the translate-train approach, the Spanish training data provided by XNLI is translated into each target language.",
"It is then used to finetune XLM-R for each language individually.",
"Along with the training data, we also translate the Spanish development data, which is used for validation and early stopping.",
"We discuss the effects of using a translated development set in Section F.1.",
"Notably, we find that the finetuning hyperparameters defined above do not reliably allow the model to converge for many of the target languages.",
"5 The code for the translation models can be found at https://github.com/AmericasNLP/americasnlp2021",
"To find suitable hyperparameters, we tune the batch size and learning rate by conducting a grid search over {5e-6, 2e-5, 1e-4} for the learning rate and {32, 64, 128} for the batch size.",
"In order to select hyperparameters which work well across all languages, we evaluate each run using the average performance on the machine-translated Aymara and Guaraní development sets, as these languages have moderate and high ChrF scores, respectively.",
"We find that decreasing the learning rate to 5e-6 and keeping the batch size at 32 yields the best performance.",
"Other than the learning rate, we use the same approach as for zero-shot finetuning.",
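The selection procedure above can be expressed compactly: score every (learning rate, batch size) pair by its average dev accuracy over the two languages and keep the best pair. The accuracy numbers below are invented stand-ins for real finetuning runs:

```python
import itertools

# Sketch of grid search with selection by average dev accuracy across
# two machine-translated development sets. The scores are invented for
# illustration; in practice each entry comes from a full finetuning run.
def select_config(scores_per_config):
    # scores_per_config: {(lr, bs): [dev_acc_lang1, dev_acc_lang2]}
    def avg(cfg):
        accs = scores_per_config[cfg]
        return sum(accs) / len(accs)
    return max(scores_per_config, key=avg)

grid = itertools.product([5e-6, 2e-5, 1e-4], [32, 64, 128])
scores = {cfg: [0.40, 0.42] for cfg in grid}
scores[(5e-6, 32)] = [0.45, 0.47]  # pretend this pair did best
print(select_config(scores))  # → (5e-06, 32)
```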
"Translate-test For the translate-test approach, we translate the test sets of each target language into Spanish.",
"This allows us to apply the model finetuned on Spanish, XLM-R (es), to each test set.",
"Additionally, a benefit of translate-test over translate-train and the adapted XLM-R models is that we only need to finetune once overall, as opposed to once per language.",
"For evaluation, we use the checkpoint with the highest performance on the Spanish XNLI development set.",
"Zero-shot Models We present our results in Table 4. Results for the development set are presented in Table E.1.",
"Zero-shot performance is low for all 10 languages, with an average accuracy of 38.48% and 37.99% for the English and Spanish model, respectively.",
"However, in all cases the performance is higher than the majority baseline.",
"As shown in Table E.3 in the appendix, the same models achieve an average of 74.20% and 75.35% accuracy, respectively, when evaluated on the 15 XNLI languages.",
"Interestingly, even though code-switching with Spanish is encountered in many target languages, finetuning on Spanish labeled data on average slightly underperforms the model trained on English; however, performance is better for 3 of the languages.",
"The English model achieves its highest accuracy, 42.59%, when evaluated on Nahuatl, while the Spanish model achieves its highest accuracy, 39.51%, when evaluated on Quechua.",
"The lowest performance is achieved when evaluating on Aymara and Rarámuri, for the English and Spanish model, respectively.",
"We find that model adaptation via continued pretraining improves both models, with an average gain of 5.22 percentage points for English and 5.86 percentage points for Spanish.",
"Notably, continued pretraining increases performance for Quechua by 24.53 percentage points when finetuning on English, and 22.89 points when finetuning on Spanish.",
"Performance decreases for Bribri and Otomí when finetuning on English; however, performance for all languages improves when using Spanish.",
"Translate-test Performance of the translate-test model improves over both zero-shot baselines.",
"We see the largest increase in performance for Guaraní and Quechua, with gains of 7.16 and 11.87 points, respectively, over the best-performing zero-shot model without adaptation.",
"Considering the translation metrics in Table 3, the models for Guaraní and Quechua achieve the two highest scores for both metrics.",
"On average, translate-test does worse when compared to the adapted zero-shot models, and in all but two cases, both adapted models perform better than translate-test.",
"We hypothesize that translate-test is more sensitive to noise in the translated data; sentences may lose too much of their original content, preventing correct classification.",
"Translate-train The most surprising result is that of translate-train, which considerably outperforms the performance of translate-test for all languages, and outperforms the zero-shot models for all but two languages.",
"Compared to the best non-adapted zero-shot model, the largest performance gain is 20.40 points for Quechua.",
"For the language with the lowest performance, Otomí, translate-train performs 2.32 points worse than zero-shot; however, it still outperforms translate-test.",
"When averaged across all languages, translate-train outperforms the English zero-shot model by 10.64 points, and translate-test by 8.9 points.",
"It is important to note that the translation performance from Spanish to each target language is not particularly high: when considering ChrF scores, the highest is 0.33, and the highest BLEU score is 3.26.",
"Performance of both translation-based models is correlated with ChrF scores, with a Pearson correlation coefficient of 0.82 and 0.83 for translate-train and translate-test.",
"Correlations are not as strong for BLEU, with coefficients of 0.37 and 0.59.",
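Correlation coefficients like those above come from a plain Pearson correlation between translation quality and downstream accuracy. A self-contained sketch with invented ChrF/accuracy pairs (not the paper's actual numbers):

```python
import math

# Plain-Python Pearson correlation coefficient, as used to relate
# translation quality (ChrF/BLEU) to downstream NLI accuracy. The data
# points below are invented for illustration.
def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

chrf = [0.10, 0.20, 0.25, 0.33]   # invented per-language ChrF scores
acc = [0.38, 0.42, 0.45, 0.49]    # invented per-language accuracies
print(round(pearson(chrf, acc), 3))
```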
"The sizable difference in performance between translate-train and the other methods suggests that translation-based approaches may be a valuable asset for cross-lingual transfer, especially for low-resource languages.",
"While the largest downsides to this approach are the requirement for parallel data and the need for multiple models, the potential performance gain over other approaches may prove worthwhile.",
"Additionally, we believe that the performance of both translation-based approaches would improve given a stronger translation system, and future work detailing the necessary level of translation quality for the best performance would offer great practical usefulness for NLP applications for low-resource languages.",
"As shown by Gururangan et al. (2018), SNLI and MNLI, the datasets AmericasNLI is based on, contain artifacts created during the annotation process which models exploit to artificially inflate performance.",
"To analyze whether similar artifacts exist in AmericasNLI and if they can also be exploited, we train and evaluate models using only the hypothesis, and present results in Table 5. We can see that the average performance across languages is better than chance for all models except for XLM-R without adaptation.",
"Translate-train obtains the highest result with 44.23% accuracy, and as shown in Table E.2, hypothesis-only performance of translate-test is higher than standard performance for 5 languages.",
"Thus, as with SNLI and MNLI, artifacts in the hypotheses can be used to predict, to some extent, the correct labels.",
"However, all but one of the zero-shot and translate-train models perform better in the standard setting, indicating that the models are learning something beyond exploiting artifacts in the hypotheses, even with the additional challenge of unseen languages.",
"Following Conneau et al. (2018), AmericasNLI was created by translating sentences individually, in order to prevent additional context being added into the hypotheses.",
"However, this strategy may break the original semantic relationship between the premise and the hypothesis.",
"Furthermore, for some examples the logical relationship may be dependent on context or subtext which can be lost through translation, or simply not make sense in the target language.",
"To verify the validity of the labels of AmericasNLI, we conduct a human evaluation experiment, focusing on examples translated to Bribri.",
"We create a balanced, random sample of 450 examples taken from the Bribri development set.",
"An annotator familiar with the task was then asked to classify the pairs of sentences.",
"For comparison, we also annotate parallel examples taken from the English and Spanish development sets.",
"For Bribri, we recover the original XNLI label for 76.44% of examples.",
"For English and Spanish, we achieve 81.78% and 71.56% accuracy, respectively.",
"Due to the relatively small differences in performance across languages, we conclude that translation to Bribri has a minimal effect on the semantic relationship between the premise and the hypothesis.",
"While the case study above provides strong evidence for the validity of our Bribri examples, we cannot currently generalize this claim to the remaining languages.",
"For future work, we plan on extending our human evaluation to more languages and provide a more detailed analysis.",
"Additionally, due to the limited availability of annotators and the difficulties of translation for languages that are less frequently studied, the size of the AmericasNLI test set is relatively small.",
"As such, care must be taken when evaluating conclusions drawn using the dataset; following Card et al. (2020), we present a power analysis of our results in Section D.1.",
"Future work expanding the dataset size will help create a stronger baseline.",
"Furthermore, while we do not make any model-specific assumptions in our experiments, our results are based on only one pretrained model and adaptation method.",
"Methods using vocabulary extension or adapters may offer additional improvements.",
"Similarly, other pretrained models could perform differently, depending on, e.g., the model size or the set of languages in their pretraining data.",
"In Table F.3, we present results using XLM-R Large, and find that, while the relationship between the approaches differs from the main experiments, the overall highest average performance is still achieved by the translate-train approach with XLM-R Base.",
"We provide a longer discussion in Section F.3.",
"To better understand the zero-shot abilities of pretrained multilingual models for semantic tasks in unseen languages, we present AmericasNLI, a parallel NLI dataset covering 10 low-resource languages indigenous to the Americas.",
"We conduct experiments with XLM-R, and find that the model's zero-shot performance, while better than a majority baseline, is poor.",
"However, it can be improved by model adaptation via continued pretraining.",
"Additionally, we find that translation-based approaches outperform a zero-shot approach, which is surprising given the low quality of the employed translation systems.",
"We hope that this work will not only spur further research into improving model adaptation to unseen languages, but also motivate the creation of more resources for languages not frequently studied by the NLP community.",
"In this work, we present a new dataset created through the translation of an existing resource, XNLI (Conneau et al., 2018).",
"While this allows for results that are directly comparable, it also means that this dataset inherits any biases and flaws which are contained in the previous dataset.",
"Furthermore, research involving languages spoken by Indigenous communities raises ethical concerns regarding the exploitation of these languages and communities: it is crucial that members of the community are able to directly benefit from the research.",
"Translation for AmericasNLI was done by either paper authors or translators who were compensated at a rate based on the average rate for translation and the minimum wage in their country of residence.",
"Additionally, many authors are members of, and/or have a record of close work with communities who speak a language contained in AmericasNLI.",
"We thank the following people for their work on the translations: Francisco Morales for Bribri, Feliciano Torres Ríos for Asháninka, Perla Alvarez Britez for Guaraní, Silvino González de la Cruz for Wixarika, Giovany Martínez Sebastián, Pedro Kapoltitan, and José Antonio for Nahuatl, José Mateo Lino Cajero Velázquez for Otomí, Liz Chávez for Shipibo-Konibo, and María del Carmen Sotelo Holguín for Rarámuri.",
"We would also like to thank Dallas Card for his help with power analysis.",
"This work would not have been possible without the financial support of Facebook AI Research, Microsoft Research, Google Research, the Institute of Computational Linguistics at the University of Zurich, the NAACL Emerging Regions Fund, Comunidad Elotl, and Snorkel AI."
] | [
"abstain",
"abstain",
"abstain",
"method",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"objective",
"objective",
"method",
"abstain",
"abstain",
"method",
"result",
"abstain",
"result",
"method",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"objective",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"result",
"other",
"method",
"result",
"method",
"method",
"method",
"method",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"result",
"abstain",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other"
] |
[
"Metaphor is not only a linguistic phenomenon but also a cognitive phenomenon structuring human thought, which makes understanding it challenging.",
"As a means of cognition, metaphor is rendered by more than texts alone, and multimodal information in which vision/audio content is integrated with the text can play an important role in expressing and understanding metaphor.",
"However, previous metaphor processing and understanding has focused on texts, partly due to the unavailability of large-scale datasets with ground truth labels of multimodal metaphor.",
"In this paper, we introduce MultiMET, a novel multimodal metaphor dataset to facilitate understanding metaphorical information from multimodal text and image.",
"It contains 10,437 text-image pairs from a range of sources with multimodal annotations of the occurrence of metaphors, domain relations, sentiments metaphors convey, and author intents.",
"MultiMET opens the door to automatic metaphor understanding by investigating multimodal cues and their interplay.",
"Moreover, we propose a range of strong baselines and show the importance of combining multimodal cues for metaphor understanding.",
"MultiMET will be released publicly for research.",
"Metaphor is frequently employed in human language and its ubiquity in everyday communication has been established in empirical studies (Cameron, 2003; Steen, 2010; Shutova et al., 2010).",
"Since Lakoff and Johnson (1980) introduced conceptual metaphor theory (CMT), metaphor has been regarded as not only a linguistic, but also a cognitive phenomenon for structuring human thought.",
"Individuals use one usually concrete concept in metaphors to render another usually abstract one for reasoning and communication.",
"For example,",
"According to CMT, metaphor involves the mapping process by which a target domain is conceptualized or understood in terms of a source domain.",
"As a means of cognition and communication, metaphor can occur in more modes than text alone.",
"Multimodal information in which vision/audio content is integrated with the text can also contribute to metaphoric conceptualization (Forceville and Urios-Aparisi, 2009; Ventola et al., 2004).",
"A multimodal metaphor is defined as a mapping of domains from different modes such as text and image, text and sound, or image and sound (Forceville and Urios-Aparisi, 2009).",
"For example, in Figure 1",
"(a), the metaphorical message of fire in the sky is conveyed by a mapping between the target domain sky (sunset) and the source domain fire from two modalities.",
"Figure 1",
"(b) offers another example with the metaphor of lungs made from cigarettes so a relation is triggered between two different entities, lung and cigarette, with the perceptual idea that smoking causes lung cancer.",
"The source domain cigarette comes from the image, while the target domain lung appears in both text and image.",
"Understanding multimodal metaphor requires decoding metaphorical messages and involves many cognitive efforts such as identifying the semantic relationship between two domains (Coulson and Van Petten, 2002; Yang et al., 2013), interpreting authorial intent from multimodal messages (Evan Nelson, 2008), analyzing the sentiment metaphors convey (Ervas, 2019), which might be difficult for computers to do.",
"Qualitative studies have investigated the interplay between different modes underlying the understanding of multimodal metaphors in communicative environments such as advertisements (Forceville et al., 2017; Urios-Aparisi, 2009), movies (Forceville, 2016; Kappelhoff and Muller, 2011), songs (Forceville and Urios-Aparisi, 2009; Way and McKerrell, 2017), and cartoons (Refaie, 2003; Xiufeng, 2013).",
"In particular, with the development of mass communication, texts nowadays are often combined with other modalities such as images and videos to achieve a vivid, appealing, persuasive, or aesthetic effect for the audience.",
"This rapidly growing trend toward multimodality requires a shift to extend metaphor studies from monomodality to multimodality, as well as from theory-driven analysis to data-driven empirical testing for in-depth metaphor understanding.",
"Despite the potential and importance of multimodal information for metaphor research, there has been little work on the automatic understanding of multimodal metaphors.",
"While a number of approaches to metaphor processing have been proposed with a focus on text in the NLP community (Shutova et al., 2010; Mohler et al., 2013; Jang et al., 2015, 2017; Shutova et al., 2017; Pra-manick et al., 2018; Liu et al., 2020), multimodal metaphors have not received the full attention they deserve, partly due to the severe lack of multimodal metaphor datasets with their challenging and time-and labor-consuming creation.",
"To overcome the above limitations, we propose a novel multimodal metaphor dataset (MultiMET) consisting of text-image pairs (text and its corresponding image counterparts) manually annotated for metaphor understanding.",
"MultiMET will expand metaphor understanding from monomodality to multimodality and help to improve the performance of automatic metaphor comprehension systems by investigating multimodal cues.",
"Our main contributions are as follows: We create a novel multimodal dataset consisting of 10,437 text-image pair samples from a range of resources including social media (Twitter and Facebook), and advertisements.",
"MultiMET will be released publicly for research.",
"We present fine-grain manual multimodal annotations of the occurrence of metaphors, metaphor category, what sentiment metaphors evoke, and author intent.",
"The quality control and agreement analyses for multiple annotators are described.",
"We quantitatively show the role of textual and visual modalities for metaphor detection; whether and to what extent metaphor affects the distribution of sentiment and intention, which quantitatively explores the mechanism of multimodal metaphor.",
"We propose three tasks to evaluate fine-grained multimodal metaphor understanding abilities, including metaphor detection, sentiment analysis, and intent detection in multimodal metaphor.",
"A range of baselines with benchmark results are reported to show the potential and usefulness of the MultiMET for future research.",
"Although datasets of multimodal metaphors are scarce, a variety of monomodal datasets for metaphor studies have been created in recent years.",
"Table 1 lists these datasets with their properties.",
"Numerous text metaphor datasets have been published for metaphor processing in the NLP community including several popular ones, e.g., the VU Amsterdam Metaphor Corpus (VUAMC) (Steen, 2010), TroFi Example Base (Birke and Sarkar, 2006), and MOH-X (Mohammad et al., 2016).",
"The largest one, VUAMC, consists of over 10,000 samples spread across 16,000 sentences, while others contain less than 5,000 samples.",
"However, most existing metaphor datasets contain only textual data.",
"Image metaphor datasets are few and they are pretty limited in the size and the scope of the data, such as VisMet (Steen, 2018), which is a visual metaphor online resource consisting of only 353 image samples.",
"Although Shutova et al. (2016) constructed both text and image samples, their images were obtained by using a given phrase and queried Google Metaphor Dataset Sample Size ( % Metaphor) Modality Data Source Annotation TroFi (Birke and Sarkar, 2006) 3,737 (44 % ) Text WSJ metaphor (metaphoricity) VUAMC (Steen, 2010) 16,000 (12.5 % ) Text BNC Baby metaphor TSV (Tsvetkov et al., 2014) 3,334 (50 % ) Text Web metaphor, affect LCC (Mohler et al., 2016) 16,265 (19 % ) Text ClueWeb09 metaphor MOH (Mohammad et al., 2016) 1,639 (25 % ) Text WordNet metaphor Zayed's Tweets (Zayed et al., 2019) 2,500 (54 % ) Text Twitter metaphor Visual Met (Steen, 2018) 353 (100 % ) Image Adv, Arts, Cartoons metaphor Shutova et al. (2016) 2,415 (50 % ) Text,Image WordNet metaphor MultiMET (Ours) 10,437 (58 % ) Text,Image Social Media, Adv metaphor, sentiment, intent Table 1: Comparison of various metaphor datasets images.",
"In that way, words and images in their work may be not suitably presented by each other.",
"The cognitive nature of metaphor implies that not only one modal isolation, but rather integrated multimodal information may contribute to metaphor expression and understanding, which makes our dataset MultiMET, which is large scale and contains both natural text and image messages and their annotations, different from existing datasets and more important for metaphor studies.",
"Automatic metaphor understanding requires accomplishing certain tasks to decode metaphorical messages.",
"In this paper, we focus on three important tasks for NLP in understanding metaphor: metaphor detection, sentiment analysis, and author intent detection.",
"There has been increasing interest in NLP in various approaches to metaphor detection based on monomodal text.",
"Early metaphor studies have focused on hand-constructed knowledge and machine learning techniques (Mason, 2004; Turney et al., 2011; Tsvetkov et al., 2014; Hovy et al., 2013).",
"Others have also used distributional clustering (Shutova et al., 2013) and unsupervised approaches (Shutova et al., 2017; Mao et al., 2018).",
"More recently, deep learning models have been explored to understand metaphor.",
"However, little has been explored in multimodal metaphor detection except by Shutova et al. (2016), who are among the very few to explore the fusion of textual and image modalities to detect multimodal metaphor.",
"Their results demonstrate the positive effect of combining textual and image features for metaphor detection.",
"However, in their work, image features are extracted from a small size of constructed examples rather than natural samples of texts integrated with images, like MultiMET in our work.",
"In addition, apart from multimodal metaphor detection, the tasks related to metaphor understanding like sentiment detection and author intent detection in multimodal metaphor also have rarely been studied, although there exist similar multimodal studies in different tasks (Wang et al., 2017; Zadeh et al., 2017; Kruk et al., 2019).",
"With the goal of creating a large-scale multimodal metaphor dataset to support research on understanding metaphors, we collect data that contains both text and image from a range of sources including social media (Twitter and Facebook), and advertisements.",
"Table 2 shows an overview for the statistics of the dataset.",
"Social Media.",
"To collect potential metaphorical samples from Twitter and Facebook, we retrieved posts by querying hashtags metaphor or metaphorical.",
"We collected publicly available Twitter and Facebook posts using Twitter and Facebook APIs complying with Twitter and Facebook's terms of service.",
"What the author labels as metaphorical is not always aligned with the actual definition of metaphor in our study.",
"To collect metaphors whose nature accorded with what we define as multimodal metaphors, we re-annotated metaphorical or literal in the below section to potential Twitter and Facebook posts that other authors annotated as metaphor with hashtags.",
"Advertisements.",
"Based on our review of linguistic literature on multimodal metaphor, we focused on an important source that is the main context of study: advertisements.",
"Metaphorical messages abound in advertisements , which offer a natural and rich resource of data on metaphor and how textual and visual factors combine and interact (So-brino, 2017; Forceville et al., 2017).",
"We collected",
"highway.",
"(b) Sometimes, without knowing why, your heart beats faster.",
"New Beetle.",
"(c) A kitten is kissing a flower.",
"Butterflies are not insects.",
"To obtain the textual information, we extracted inside text from images using the API provided by Baidu AI.",
"After that, human annotators rectified the extracted inaccurate text, removed any blurred text, and obtained text + image pairs from advertisements.",
"For text data, we removed external links and mentions (@username); we removed non-English text using the LANGID (Lui and Baldwin, 2012) library to label each piece of data with a language tag; we removed strange symbols such as emo-jis; we removed metaphor or metaphoric when they were regular words rather than hashtags, because explicit metaphorical expressions are not our interest (e.g., This metaphor is very appropriate); we removed text with fewer than 3 words or more than 40 words.",
"For image data, we removed text-based images (all the words are in the image), as well as images with low resolution.",
"Because this task is about multimodal metaphor, it is necessary to maintain consistency of data between models.",
"In other words, either both the image data and the text data should be removed, or neither.",
"In addition, in the de-duplication step, we considered removal only when both text and images were repeated.",
"We annotated the text-image pairs with the occurrence of metaphors (literal or metaphorical); (if metaphorical) relations of target and source domain (target/source: target/source vocabulary in text or verbalized target/source vocabulary in im-age); target/source modality (text, image, or text + image), metaphor category (text-dominant, image-dominant, or complementary); sentiment category (the sentiment metaphors evoke, namely very negative, negative, neutral, positive, or very positive), and author intents (descriptive, expressive, persuasive, or other).",
"The annotation model was Anno-tationModel = (Occurrence, Target, Source, Tar-getModality, SourceModality, MetaphorCategory, SentimentCategory, Intent, DataSource).",
"Figure 3 is an annotation example.",
"There are a variety of ways in which texts and images are combined in multimodal content (Hendricks et al., 2016; Chen et al., 2017).",
"Based on our review of the literature and observation of the samples in our dataset, we follow Tasic and Stamenkovic (2015) and divide multimodal metaphor into three categories: text dominant, image dominant, and complementary.",
"Sometimes metaphors are expressed through texts with a mapping between source and target domains while the accompanying images serve as a visual illustration of the metaphors in the text, which is text dominant.",
"As in Figure 2",
"(a), the text itself is sufficient to convey metaphorical information and can be identified as metaphorical expressions.",
"High-way is a visual illustration of the source domain in a textual modality.",
"By contrast, in the image dominant category, images play the dominant role in conveying metaphorical information and they provide sufficient information for readers to understand the metaphors.",
"In Figure 2",
"(b), where we see the metaphorical message Beetle (cars) are blood cells, the text enriches the understanding of metaphorical meaning by adding an explanation your heart beats faster to the visual manifestation.",
"The complementary category involves a roughly equal role of texts and images in rendering metaphorical information.",
"The understanding of metaphor depends on the interaction of and balance between different modalities.",
"If texts and images are interpreted separately, metaphors cannot be understood.",
"In Figure 2",
"(c), when people read the text, A kitten is kissing a flower, and the inside text Butterflies are not insects, they do not realize the metaphorical use until they observe the butter-fly in the corresponding image and infer that the target butterfly is expressed in term of the source flower.",
"Metaphorical or literal.",
"Our annotations focus on the dimension of expression, which involves identification of metaphorical and literal expressions by verbal means and visual means (Forceville, 1996; Phillips and McQuarrie, 2004).",
"The metaphor annotation takes place at the relational level, which involves the identification of metaphorical relations between source and target domain expressions.",
"For text modality, source and target domain expressions mean source and target domain words used in metaphorical texts.",
"For image modality, source and target domain expressions mean words' verbalized source and target domain in the visual modality.",
"That is, the annotation of metaphorical relations represented in the modality of image involve the verbalization of the metaphor's domains.",
"Annotations involve naming and labeling what is linguistically familiar.",
"Unlike text modality, which relies on explicit linguistic cues, for image modality, metaphorical relations are annotated based on perceptions of visual unities, and they determine the linguistic familiarity of images as well as existing words in the metaphor's domains.",
"Following Sorm and Steen (2018), annotators identified the metaphorical text+image pairs by looking at the incongruous units and explaining one non-reversible A is B identity relation, where two domains were expressed by different modalites.",
"understanding metaphors.",
"As mentioned above, within CMT, the essence of metaphor is using one thing from a source domain to express and describe another from a target domain.",
"This implies that one important intent of creating metaphor could be to enable readers to understand the entities being described better.",
"Perceptual resemblance is a major means of triggering a metaphorical relation between two different entities (Forceville and Urios-Aparisi, 2009).",
"We name it descriptive intent, which involves visual and textual representations regarding the object, event, concept, information, action or character, etc.",
"Moreover, in modern times, the increasing ubiquity of multimodal metaphors means that people cannot ignore its power of persuasion (Urios-Aparisi, 2009).",
"People often leverage metaphor in communication environments such as advertisements and social media to persuade readers to buy or do things.",
"We name this intent as persuasive.",
"In addition, inspired by a variety of arousing, humorous, or aesthetic effects of metaphors (Christmann et al., 2011), the expressive is included in our intent annotation within the enlarged definition: expressing attitude, thought, emotion, feeling, attachment, etc.",
"Based on these factors as well as investigation of the samples in our datasets, we generalized their taxonomy and listed the categories of the author intent in metaphor as descriptive,persuasive, expressive, and others.",
"Numerous studies show that metaphorical language frequently expresses sentiments or emotions implicitly (Goatly, 2007; Kovecses, 1995, 2003).",
"Compared to literal expressions, metaphors elicit more emotional activation of the human brain in the same context (Citron and Goldberg, 2014).",
"Thus we also added the sentiment in our annotation, to test whether the sentiment impact of metaphors is stronger than literary messages from a multimodal perspective.",
"The sentiment was placed in one of the five categories of very negative, negative, neutral, positive, or very positive.",
"We took two independent annotation approaches for two different types of tasks: selecting types of sentiment and intent and the annotation of metaphor.",
"To select the options for sentiment and intent, we used a majority vote through Crowd-Flower, the crowdsourcing platform.",
"The participants were randomly presented with both the text and vision components with the instruction on the",
"The annotation of metaphors includes metaphor occurrence, metaphor category and domain relation annotation.",
"For metaphor annotation, we used expert annotators to complete the challenging annotation task, which required relatively deep understanding of metaphorical units and the complete task of verbalization of domains in image.",
"The annotator team comprised five annotators who are postgraduate student researchers majoring in computational linguistics with metaphor study backgrounds.",
"The annotators formed groups of two, plus one extra person.",
"Using cross-validation, the two-member groups annotated, and the fifth person intervened if they disagreed.",
"Annotations of multimodal metaphors rely on an-notators' opinions and introspection, which might be subjective.",
"Thus we took corresponding, different measures for different types of annotations to achieve high-quality annotation.",
"To select options, we established strict criteria for the choice of category.",
"Each text-image pair was annotated by at least 10 annotators and we used a majority vote through CrowdFlower, the crowdsourcing platform.",
"Following Shutova (2017), we chose the category of annotated options on which 70% or more annotators agreed as the answer to each question (final decision) to provide high confidence of annotation.",
"For metaphor annotation, we added a guideline course, detailed instruction, and many samples, and we held regular meetings to discuss annotation problems and matters that needed attention.",
"The guidelines changed three times when new problems emerged or good improvement methods were found.",
"The kappa score, , was used to measure inter-annotator agreements (Fleiss, 1971).",
"The agreement on the identification of literal or metaphorical was = 0 .",
"67 ; identification of text dominant, image dominant or complementary was = 0 .",
"79 ; the identification of source and target domain relation was = 0 .",
"58 , which means they are substantially reliable.",
"Metaphor Category.",
"We analyzed the role of textual and visual modalities to detect metaphors.",
"From Figure 4",
"(a), we can see a complementary category among the three kinds of multimodal metaphors, which requires the interplay of textual and visual modality to understand the metaphorical meaning.",
"It accounts for the largest proportion of metaphors, followed by the text-dominant and image-dominant categories.",
"It shows the contribution of visual factors, which are similarly important in detecting metaphors.",
"We therefore present a quantitative study of the role of textual and visual modalities in metaphor detection through human annotations and confirm the role and contribution of visuals in metaphor occurrence in natural language.",
"Author Intent.",
"Figure 4",
"(b) shows that expressive and persuasive intentions occur most frequently in the metaphorical data.",
"However, descriptive intention occurs most frequently in the non-metaphorical data.",
"This suggests that on the one hand, we are more likely to use metaphorical expressions when expressing our feelings, expressing emotions, or trying to persuade others.",
"On the other hand, we tend to use literal expressions to make relatively objective statements.",
"Sentiment.",
"Figure 4",
"(c) shows that there are some differences in the distribution of sentiment between the metaphorical data and non-metaphorical data.",
"In the non-metaphorical data, neutral sentiment accounted for the largest proportion of 51%, followed by positive sentiment (33%), strong positive sentiment (7%), negative sentiment (7%), and strong negative sentiment (2%).",
"In the metaphorical data, positive sentiment accounted for the largest proportion of 42%, followed by neutral sen-Hyper-Parameter Value Word embedding size 300 Hidden size of LSTM 256 Dropout 0.4 Text padding 30 Batch size 48 Learning rate 5e-4 Gradient clipping 10 Early stop patience 10 Table 3: Hyperparameters.",
"timents (39%), strong positive sentiment (8%), negative sentiment (8%), and strong negative sentiment (3%).",
"It turns out that there are more non-neutral sentiments in metaphor expression than in nonmetaphorical expression, and that metaphors are more frequently used to convey sentiments.",
"Our findings accord with the results of previous studies on monomodal textual metaphors that metaphors convey more sentiments or emotions than literary text (Mohammad et al., 2016).",
"We confirm the stronger emotional impact of metaphors than literary messages from a multimodal perspective.",
"In positive sentiment, the most common words in the source domain are person, face, and flower; the most common words in the target domain are love, life, and success.",
"In negative sentiment, heart, food, and smoke are the most common words in the source domain, and the world, disaster, and life are the most common words in the target domain.",
"This shows that sentiment tendency can influence the category in the source and target domains to some extent.",
"For the dataset constructed for this paper, we propose three tasks and provide their baselines, namely multimodal metaphor detection, multimodal metaphor sentiment analysis, and multimodal metaphor author intent detection.",
"We used the model shown in Figure 5 to detect metaphors, metaphorical sentiments, and metaphorical intentions.",
"For text input, we used a text encoder to encode the text and to get the feature vector of the text.",
"This paper used two different methods to encode the text, namely the pre-trained Bidirectional Encoder Representations from Transformers (BERT) model (Devlin et al., 2019) and Bidirectional Long-Short Term Memory (Bi-LSTM) networks (Medsker and Jain, 2001).",
"Similarly, for image input, we used an image encoder to extract image features.",
"We used three different image pretraining models: VGG16 (Simonyan and Zisser-man, 2014), ResNet50 (He et al., 2016), and EfficientNet (Tan and Le, 2019).",
"These methods have been widely used by researchers in feature extraction for various tasks.",
"After obtaining the text feature vector and the image feature vector, we used four different feature fusion methods to combine the vectors, namely concatenation (Suryawanshi et al., 2020), element-wise multiply (Mai et al., 2020), element-wise add (Cai et al., 2019), and maximum (Das, 2019).",
"Finally, we inputted the fusion vector into a fully connected layer and obtained the probabilities of different categories through the softmax activation function.",
"We used Pytorch (Paszke et al., 2019) to build the model.",
"The pre-trained models are available in Pytorch.",
"The word embeddings have been trained on a Wikipedia dataset by Glove (Pennington et al., 2014).",
"In the training process, we did not update the parameters in the pre-training models.",
"When the model gradually tended to converge, we updated the parameters of the pre-training models with training data to avoid overfitting.",
"We used the Adam optimizer (Kingma and Ba, 2014) to optimize the loss function, and the training method of gradient clipping (Zhang et al., 2019) to avoid gradient explosion.",
"Other hyper-parameter settings are shown in Table",
"3. 5.2 Results The classification results are shown in Table",
"4. Random means that random predictions were made using the data as a baseline.",
"In general, the model performed best on metaphor detection, followed by metaphor intention detection, and finally metaphor sentiment detection.",
"For image and Metaphor Sentiment Intention Type Text Image Validation Test Validation Test Validation Test Random -0.5063 0.4923 0.2222 0.2023 0.3416 0.3609 Text Bi-LSTM -0.7458 0.7434 0.5705 0.5714 0.6597 0.6593 BERT -0.7742 0.7736 0.5958 0.5927 0.6794 0.6720 Image VGG16 0.7315 0.7345 0.5953 0.5914 0.6672 0.6658 EfficientNet 0.7467 0.7405 0.5563 0.5548 0.6441 0.6324 ResNet50 0.7677 0.7646 0.5715 0.5714 0.6658 0.6653 Text + Image Bi-LSTM VGG16 0.7735 0.7658 0.6195 0.6157 0.6843 0.6812 Bi-LSTM EfficientNet 0.7832 0.7795 0.5723 0.5714 0.6672 0.6732 Bi-LSTM ResNet50 0.7988 0.7912 0.6263 0.6220 0.7036 0.6843 BERT VGG16 0.8033 0.8072 0.6289 0.6188 0.7012 0.7000 BERT EfficientNet 0.7975 0.8033 0.6152 0.6125 0.6833 0.6757 BERT ResNet 0.8276 0.8286 0.6462 0.6422 0.7278 0.7245 Table 4: Results on three tasks with a combination method of concatenate.",
"multimodal classification, the ResNet50 performed best, followed by VGG16, and finally EfficientNet.",
"Because ResNet solved the problem of gradient disappearance through the method of residual connection, the classification performance was better than VGG16 and EfficientNet.",
"For text and multimodal classification, BERT performed better than Bi-LSTM.",
"BERT has been fully trained in a large-scale corpus, using transfer learning technology to fine-tune our three tasks and data, so it can achieve better performance.",
"From the perspective of different features, multimodal features perform best, followed by text-only features, and finally image-only features.",
"Multimodal fusion helps to improve the classification performance by 6%.",
"This shows that the combination of image and text features is indeed helpful for the detection and understanding of metaphors, especially the detection of sentiments and intentions in metaphors.",
"In addition, the importance of text modal data is explained.",
"Without text description, it is difficult to detect metaphors correctly using only visual modal data.",
"To verify the influence of feature fusion on classification, we compared four different feature fusion methods.",
"The results are shown in Table",
"5. The concatenate method to merge image and text features produces the highest accuracy.",
"It shows that concatenate can make full use of the complementarity between different modal data, eliminate the noise generated by the fusion of different modal data, and improve the detection effect.",
"In contrast, the other three fusion methods cannot effectively eliminate the influence of noise introduced by different modal data, and it therefore interferes with the training of the model.",
"Overall, the multimode model that combines the BERT text function and the ResNet50 image function through the concatenation method performs best on our three tasks.",
"This paper presents the creation of a novel resource, a large-scale multimodal metaphor dataset, MultiMET, with manual fine-gained annotation for metaphor understanding and research.",
"Our dataset enables the quantitative study of the interplay of multimodalities for metaphor detection and confirms the contribution of visuals in metaphor occurrence in natural language.",
"It also offers a set of baseline results of various tasks and shows the importance of combining multimodal cues for metaphor understanding.",
"We hope MultiMET provides future researchers with valuable multimodal training data for the challenging tasks of multimodal metaphor processing and understanding ranging from metaphor detection to sentiment analysis of metaphor.",
"We also hope that MultiMET will help to expand metaphor research from monomodality to multimodality and improve the performance of automatic metaphor understanding systems and contribute to the in-depth understanding and research development of metaphors.",
"The dataset will be publicly available for research.",
"This research was granted ethical approval by our Institutional Review Board (Approval code: DU-TIEE190725 01).",
"We collected publicly available Twitter and Facebook data using Twitter and Facebook APIs complying with Twitter and Facebook's terms of service.",
"We did not store any personal data (e.g., user IDs, usernames) and we annotated the data without knowledge of individual identities.",
"We annotated all our data using two independent approaches (expert based and crowdsourcing based) for two different types of tasks: the annotation of metaphor and the selection of types of sentiment and intent.",
"For metaphor annotation, a deep understanding of metaphorical units was necessary.",
"This challenging task was completed by five researchers who involved in this project.",
"To annotate sentiment and intent, we used CrowdFlower, the crowdsourcing platform.",
"To ensure that crowd workers were fairly compensated, we paid them at an hourly rate of 15 USD per hour, which is a fair and reasonable rate of pay for crowdsourcing (Whiting et al., 2019).",
"We launched small pilots through CrowdFlower.",
"The pilot for sentiment options took on average 43 seconds, and crowd workers were thus paid 0.18 USD per judgment, in accordance with an hourly wage of 15 USD.",
"At the same time, the annotation of author intent took on average 23 seconds, and we thus paid 0.10 USD per judgment, corresponding to an hourly wage of 15 USD.",
"We would like to thank the anonymous reviewers for their insightful and valuable comments.",
"This work is supported by NSFC Programs (No.62076051)."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"objective",
"abstain",
"objective",
"objective",
"method",
"objective",
"objective",
"objective",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"other",
"other"
] |
[
"Retrieval-based methods have been shown to be effective in NLP tasks via introducing external knowledge.",
"However, the indexing and retrieving of large-scale corpora bring considerable computational cost.",
"Surprisingly, we found that REtrieving from the traINing datA (REINA) alone can lead to significant gains on multiple NLG and NLU tasks.",
"We retrieve the labeled training instances most similar to the input text and then concatenate them with the input to feed into the model to generate the output.",
"Experimental results show that this simple method can achieve significantly better performance on a variety of NLU and NLG tasks, including summarization, machine translation, language modeling, and question answering tasks.",
"For instance, our proposed method achieved state-of-the-art results on XSum, BigPatent, and CommonsenseQA.",
"Our code is released at https://github.com/microsoft/REINA.",
"In natural language processing, retrieval-based methods work by fetching textual information related to the input from large corpora.",
"The model then takes both the input and retrieved results as input to generate results.",
"This can often improve the performance as the model is exposed to related knowledge not present in the input.",
"As a result, retrieval-based methods have been successfully applied in many tasks such as open-domain question answering (Chen et al., 2017), language modeling (Guu et al., 2018; Khandelwal et al., 2020) and machine translation (Khandelwal et al., 2021).",
"However, these methods require building an index of large-scale corpus, and the retrieval leads to a significant computational burden.",
"For example, the kNN-MT model for machine translation has a generation speed two orders of magnitude slower than traditional MT models (Khandelwal et al., 2021).",
"On the other hand, in the supervised learning setting, the text most similar in distribution to the data at inference time is the training data.",
"Figure 1 shows the REINA pipeline of model training/inference with retrieval from the training data.",
"Thus, we explore whether retrieving from the training data, which is usually much smaller than a large-scale corpus, can help improve the performance.",
"Specifically, we first index a task's labeled training data as input-label pairs.",
"Then, during both training and testing, we retrieve the input-label pairs most similar to the current input.",
"Finally, we concatenate the retrieved training pairs with the input and feed it into the model.",
"An overview of our method is shown in Figure 1.",
"We note that our method is similar to recent works in prompt learning (Brown et al., 2020; Liu et al., 2021), where a set of labeled data is carefully chosen based on the input and then included in the prompt for few-shot learning.",
"Our method also bears a resemblance to non-parametric instance-based learning (Gu et al., 2018).",
"However, a critical difference is that we focus on the supervised learning setting, where the model parameters are fine-tuned to learn from given examples to achieve much higher performance than few-shot learning or non-parametric methods.",
"During training, we exclude the training instance itself from the retrieval results to avoid data leakage.",
"We conduct extensive experiments on four popular types of NLP tasks: summarization, language modeling, machine translation, and question answering.",
"We find that: i) after integrating REINA, we achieve significantly better performance on these tasks (11 datasets in total) than models initialized with various pre-trained models; ii) REINA leads to SOTA performance on the XSum, CommonsenseQA (leaderboard No. 1), and BigPatent datasets; iii) REINA can scale up more easily by leveraging more labeled data from other datasets via retrieval, outperforming baselines trained on the same set of data;",
"iv) the results on 3 summarization tasks show that BART-base with REINA rivals BART-large, which contains about twice as many parameters.",
"The effectiveness of our approach on summarization tasks provides insights into the core of supervised learning.",
"Even with hundreds of millions of parameters, a model cannot memorize all the patterns in the training data.",
"Thus, recapturing related training data as a side-by-side reminder can explicitly provide needed information to enhance the model's performance at inference.",
"It also points out that instead of building models of ever increasing sizes, we can make a decent-size model output high-quality results by leveraging those training data that resemble the instance at hand.",
"This can significantly reduce the computational cost while achieving a similar or better performance of a mega-sized model.",
"Retrieval-based Methods Even a pre-trained model as large as GPT-3 (Brown et al., 2020) cannot remember everything, and it is important to leverage information retrieval to collect external knowledge to solve different NLP tasks.",
"There are two types of representations for retrievers: bag-of-words (BOW) based sparse representations (Chen et al., 2017) and dense representations from neural networks (Karpukhin et al., 2020).",
"Since the sparse representation is based on BOW and typically uses a rule-based ranking score such as BM25, it can be easily adapted to general large-scale search.",
"This method has also been widely explored to solve open domain question answering (Chen et al., 2017; Wang et al., 2018; Lin et al., 2018) and Machine Translation (Gu et al., 2018).",
"This direction has been a widely explored area in recent years.",
"Dense representations come from encoders, such as Transformer, trained with task-specific data.",
"These methods can achieve better recall than sparse representations on various tasks, such as open-domain question answering (Karpukhin et al., 2020; Guu et al., 2020; Yu et al., 2021), knowledge-grounded generation (Zhang et al., 2021), and machine translation (Cai et al., 2021).",
"One drawback of dense passage retrieval (DPR) is that it cannot process longer documents; inputs are usually limited to fewer than 128 tokens (Karpukhin et al., 2020).",
"Another drawback is that it needs parallel data for model training on specific tasks.",
"Considering the generalization ability and efficiency of sparse representations, in this paper we use the BM25 score (Robertson and Zaragoza, 2009; Schütze et al., 2008) to retrieve from the training data; our method is also more flexible, with no requirement of parallel data for model training.",
"Compared to non-parametric systems guided by a search engine (Gu et al., 2018; Khandelwal et al., 2020), our proposed method is based on supervised learning and is more general.",
"Lewis et al. (2021) is related to our work in that it retrieves related questions from pre-built large-scale question-answer pairs.",
"However, our method does not need an additional data augmentation method, and we have successfully applied REINA to a wide range of downstream tasks, including summarization, question answering, machine translation, and language modeling.",
"Prompt Engineering With the success of large-scale language models (Brown et al., 2020) on few-shot learning, prompt engineering has become a popular research direction.",
"The idea is to prepend several labeled instances to the input sequence and then conduct the classification or generation.",
"Liu et al. (2021) propose prepending the most related labeled data as a prompt to help few-shot inference.",
"Li and Liang (2021) optimize the prompt in continuous space.",
"Motivated by these works, where a good labeled prompt can help few-shot learning, we also prepend/append the most similar labeled training data for all the data in the training, validation, and test sets.",
"However, different from prompt learning, we focus on the supervised learning setting.",
"In this section, we will introduce the details of our proposed method.",
"Briefly, given the input, we first retrieve the most similar labeled instances from the training data.",
"Figure 2 shows model training with retrieval from the training data (REINA).",
"We then concatenate them with the input sequence to feed into the model for generating the output.",
"An overview of the whole method is shown in Figure 2.",
"A retrieval-based method collects information most similar to the input from a corpus and then combines it with the input to feed into the NLP model.",
"Suppose we index the corpus into a list of key-value pairs, i.e., C = {(k_i, v_i)}.",
"Then, given the input x, the retrieval engine E matches it with all keys and returns the top-K keys most similar to the query together with their values: {(k_{i_1}, v_{i_1}), ..., (k_{i_K}, v_{i_K})} = E(x | C) (Eq. 1). In this work, we build the retrieval engine based on the widely used BM25 score (Schütze et al., 2008).",
"We choose BM25 over dense representation mainly for its faster speed.",
"Then, these retrieved results are combined with the input x and fed into the NLP model M to generate the output O: O = M(f(x, {(k_{i_1}, v_{i_1}), ..., (k_{i_K}, v_{i_K})})) (Eq. 2). Here, the combination function f can be concatenation, e.g., f(x, {(k_{i_1}, v_{i_1}), ..., (k_{i_K}, v_{i_K})}) = [x; v_{i_1}; ...; v_{i_K}].",
"As data in different tasks are organized in different formats with varying lengths, we introduce how we define the combination function f for various tasks in what follows.",
"As retrieval from a large corpus is computationally costly, we propose to retrieve from the labeled training data.",
"In other words, we directly adopt the training data T = { ( x 1 , y 1 ) , ..., ( x N , y N ) } as the indexed corpus C , where x i is the input and y i is the ground-truth label.",
"Given an input x, the top-K retrieved training instances with labels are combined with x as input to the model M, i.e., M(f(x, {(x_{i_1}, y_{i_1}), ..., (x_{i_K}, y_{i_K})})).",
"Both training and inference take this retrieve-combine-generate scheme.",
"Note that during training, as the input x is already indexed, we filter it from the retrieval results to avoid data leakage.",
"Now, we introduce how we define the keys, values, and the combination function for different NLP tasks.",
"Summarization is to generate a summary for a given document.",
"We first build an index for the document-summary pairs in the training data, where a document is the key and its summary is the value.",
"Given a document x , we search for the most similar documents in the index.",
"As documents are usually quite long, the combination function only keeps the values (summaries), i.e., f_summ(x, {(x_{i_1}, y_{i_1}), ..., (x_{i_K}, y_{i_K})}) = [x; y_{i_1}; ...; y_{i_K}].",
"Language Modeling (LM) generates the probability of a given sequence of words.",
"Typically, a Left-to-Right language model (Dong et al., 2020) is trained on chunked sequences with an attention mask.",
"In this paper, we use a Seq2Seq-based approach, i.e., given a context chunk, we predict the next chunk of text.",
"In detail, we first chunk all the text in the training data.",
"The IR index is built with one chunk C_i as the key x_i and its next chunk C_{i+1} as the value y_i.",
"Given a chunk x, we look for the most similar keys in the index and prepend their corresponding next chunks to x, i.e., f_LM(x, {(x_{i_1}, y_{i_1}), ..., (x_{i_K}, y_{i_K})}) = [y_{i_1}; ...; y_{i_K}; x].",
"Machine Translation is to translate text from the source language S to the target language T .",
"We define the key to be the sentence in S and the value to be its translation in T .",
"To keep the sequence short and speed up training, we only concatenate the retrieved text in the target language: f_MT(x, {(x_{i_1}, y_{i_1}), ..., (x_{i_K}, y_{i_K})}) = [x; y_{i_1}; ...; y_{i_K}].",
"Question Answering We mainly consider multiple-choice question answering, where commonsense knowledge is also required to reach the correct answer.",
"For each question x_i, there is a correct choice y_i and several distractor candidate choices.",
"We index the concatenation of the question and the corresponding ground-truth choice.",
"For a new question x, the model is given several choices c_1, ..., c_M.",
"We concatenate x with each choice c_i as the query and retrieve related training instances: {(x_{i_1}, y_{i_1}), ..., (x_{i_K}, y_{i_K})} = E(x; c_i | C).",
"The combination function f concatenates both the retrieved questions and answers with the input: f_QA((x, c_i), {(x_{i_1}, y_{i_1}), ..., (x_{i_K}, y_{i_K})}) = [x; c_i; x_{i_1}; y_{i_1}; ...; x_{i_K}; y_{i_K}].",
"Then, the model predicts a score representing how likely c i is the correct choice to x .",
"As the task requires commonsense knowledge, we build another version of index integrating commonsense knowledge.",
"We follow the strategy from Xu et al. (2021) and extract knowledge from ConceptNet (Speer et al., 2017) and Wiktionary (https://www.wiktionary.org/) for the concepts in the question and choices.",
"For each question x and choice c, we use string matching to find the corresponding entities in ConceptNet: E(x) = {e(x)_1, ..., e(x)_{n_x}} for entities appearing in the question, and E(c) = {e(c)_1, ..., e(c)_{n_c}} for entities appearing in the answer.",
"To find the most relevant concepts, we choose the concept with maximum length as the question concept and the answer concept, respectively.",
"We then look up the definitions of the chosen concepts in Wiktionary.",
"To find relations in ConceptNet, we collect the edges that connect question and answer concepts: R = {(e_1, r, e_2) | e_1 ∈ E(x), e_2 ∈ E(c), (e_1, e_2) ∈ KG}.",
"Here KG is ConceptNet and r is a relation (e.g., AtLocation ).",
"We concatenate the Wiktionary definitions and ConceptNet relations R to form the knowledge, K , for a question.",
"The knowledge K is included both in the query and index.",
"Thus, the retrieval process becomes: {(x_{i_1}, y_{i_1}, K_{i_1}), ..., (x_{i_K}, y_{i_K}, K_{i_K})} = E(x; c_i; K | C).",
"The combination function f concatenates the retrieved questions and answers with the input: f_QAK((x, c_i), E(x; c_i; K | C)) = [x; c_i; x_{i_1}; y_{i_1}; ...; x_{i_K}; y_{i_K}].",
"After concatenating the input with the retrieved data from the training corpus, we feed the new sequence into the Seq2Seq framework for generation tasks and the encoder-only framework for question answering tasks.",
"During training, since retrieval would also return the query instance itself with its gold label, we filter it out directly.",
"During inference, we do not filter any retrieved information, as all retrieved data come only from the training set.",
"In this section, we will introduce more details about experiments and the corresponding analysis.",
"We evaluate REINA on 4 different tasks with 12 datasets, as shown in Table 1.",
"Summarization: We evaluate our method on 5 summarization datasets: 1) XSum (Narayan et al., 2018), extreme summarization, is a task of one-sentence summarization of a single document.",
"The document comes from British Broadcasting Corporation (BBC) online articles.",
"2) NEWSROOM (Grusky et al., 2018) is a summarization dataset on a larger scale and the articles with human-written summaries come from 38 major news publications.",
"3) Multi-News (Fabbri et al., 2019) is a task of multi-document summarization on news articles from the site newser.com.",
"4) BigPatent (Sharma et al., 2019) is constructed on U.S. patent documents along with human written abstracts.",
"The documents cover broader areas in 9 different categories.",
"From another domain, 5) WikiHow (Koupaee and Wang, 2018) summarizes the steps of \"How to\" guides for solving a problem.",
"The dataset consists of articles in more diverse styles, written by ordinary people.",
"Besides the above datasets, we also introduce CNN/Dailymail (Nallapati et al., 2016) and the 160G BART pre-training corpus (Lewis et al., 2020), drawn from BOOKCORPUS, CC-NEWS, OPENWEBTEXT, and STORIES, to scale up the training corpus.",
"Language Modeling As our model is initialized by a pre-trained model, we select two language modeling datasets, the corpus of which is not used for model pre-training.",
"The text of both datasets, WikiText103 (Merity et al., 2017) and WikiText2 (Merity et al., 2017), are extracted from Wikipedia.",
"As the dataset's text is at a document level, the tasks focus on testing the model's ability to remember longer sequences.",
"Machine Translation We evaluate our method on the translation of English-German and English-Turkish in both directions from WMT16 (Bojar et al., 2016).",
"Question Answering We have 3 question answering datasets to evaluate our method: 1) CommonsenseQA (CSQA, Talmor et al., 2019) is a dataset for commonsense multi-choice question answering.",
"The questions are generated based on commonsense knowledge base, ConceptNet.",
"2) Physical IQA (PIQA, Bisk et al., 2020) is to answer questions requiring physical commonsense reasoning.",
"3) Abductive NLI (aNLI, Bhagavatula et al., 2020) is a multiple-choice question answering task for choosing the more likely explanation.",
"All these tasks are challenging, as they require commonsense knowledge to reach the correct answer.",
"For the task of summarization, instead of directly retrieving the most relevant summary (An et al., 2021), we find the most relevant documents by BM25 score and then leverage the corresponding summaries.",
"Compared to dense passage retrieval based methods, our method can handle long-document retrieval and does not require training.",
"Moreover, REINA is easier to scale up.",
"We also consider a joint-training baseline on summarization tasks.",
"Our setting is to test how other datasets can help improve XSum.",
"For REINA, we build index on summarization datasets from different sources.",
"During model training, we train models only on the XSum dataset, with the retrieved data appended to the documents.",
"For the language modeling task, instead of word-level retrieval by kNN (Khandelwal et al., 2020), we chunk all the training data.",
"Table 2: Summarization results (Rouge-1/2/L), in order BigPatent | XSum | WikiHow | Multi-News | NEWSROOM. Earlier SOTA: 37.5/10.6/22.7 | 45.1/22.2/37.2 | 28.5/9.2/26.5 | 43.4/14.8/17.4 | 39.9/28.3/36.8. PEGASUS: 53.6/33.2/42.3 | 47.2/24.6/39.3 | 43.1/19.7/34.8 | 47.5/18.7/24.9 | 45.2/33.5/41.3. PEGASUS: 38.4/13.5/26.3 | 46.6/23.9/38.6 | 35.9/15.3/30.3 | 43.1/15.4/22.6 | 41.7/30.7/37.8. REINA (PG): 44.6/21.5/33.0 | 48.2/26.0/40.2 | 36.8/16.7/31.0 | 45.0/17.1/23.8 | 41.4/30.5/37.5. BART-base: 44.2/16.9/28.4 | 41.0/18.2/33.3 | 43.3/18.1/33.9 | 44.8/16.4/23.3 | 41.3/29.1/37.5. REINA (B): 59.5/42.6/50.6 | 43.2/21.0/35.5 | 44.2/19.4/34.9 | 45.1/16.9/23.6 | 41.2/29.0/37.5. BART-large: 44.9/17.5/28.9 | 44.7/21.6/36.5 | 43.4/19.0/34.9 | 44.1/16.6/22.7 | 41.6/29.4/38.0. REINA (L): 60.7/43.3/51.3 | 46.5/24.1/38.6 | 44.2/20.4/35.8 | 46.9/17.7/24.0 | 42.5/30.2/38.7.",
"During training, besides the retrieved chunks, we also include the context of the query chunk when generating the next chunk.",
"Compared to kNN-LM (Khandelwal et al., 2020), REINA needs only one retrieval per chunk, which is much more efficient.",
"For multi-choice question answering, we build two types of indexes with or without external knowledge from ConceptNet and Wiktionary.",
"For the query, the concatenation of question and one candidate answer, we also have two versions, with or without knowledge.",
"After adding knowledge, there is more word overlap when key concept words are matched between questions.",
"The retrieved information is treated as either a prompt or additional knowledge, encoded together with the input, and the model then predicts the answer probability of each candidate.",
"Our information retrieval is based on the Lucene index (https://lucene.apache.org/pylucene/).",
"Our model training is based on the Transformers library (https://github.com/huggingface/transformers).",
"All our experiments are based on 8-GPU machines.",
"For summarization tasks, we initialized the model with three types of pre-trained models, PEGASUS-large (Zhang et al., 2020), BART-base, and BART-large (Lewis et al., 2020).",
"Optimization is based on AdamW (Loshchilov and Hutter, 2019).",
"We tune the learning rate over {2e-05, 5e-05, 7e-05}, and set dropout to 0.1 and batch size to 32.",
"For both baseline and our method, we set the maximal length of the input sequence to be 1024.",
"We use the original document to generate summary in baselines.",
"For REINA, we set the maximal length of the original document to 600 and then append the top-5 retrieved summaries from the training data.",
"For language modeling tasks, we initialized the model with BART-base and BART-large.",
"We set the number of words in each chunk to 128 for WikiText103 and 64 for WikiText2.",
"For each chunk generation, we set the context length of the baseline methods to 1024.",
"For our method, we set the context length to 512 and prepend the retrieved text.",
"The maximal length of the concatenated sequence is 1024.",
"We use optimizer Adam (Kingma and Ba, 2015) with learning rate 5e-05, dropout 0.1, batch size 32.",
"For machine translation tasks, we initialized the model with mBART-large (Liu et al., 2020).",
"Table 4: Question answering results (CSQA / aNLI / PIQA). Dev set: DeBERTa 84.0/88.8/85.6; REINA (w/o K) 88.8/88.6/85.5; REINA (w/ K) 86.8/89.6/86.9. Test set: CALM 71.8/82.4/76.9; UNICORN 79.3/87.3/90.1; DEKCOR 83.3/-/-; DeBERTa -/86.8/85.1; REINA 84.6/88.0/85.4.",
"We follow the hyper-parameter settings from the original paper: Adam optimizer, dropout 0.3, label smoothing 0.2, 2500 warm-up steps, maximum learning rate 3e-05, and 40K training updates in total.",
"For question answering datasets, our method is based on DeBERTa (He et al., 2021) with 1.5B parameters.",
"We use optimizer AdamW (Loshchilov and Hutter, 2019) with learning rate 3e-06, batch size 8.",
"As the datasets require commonsense reasoning, we also leverage the knowledge bases ConceptNet and Wiktionary in REINA.",
"Our experiment results on the summarization tasks are shown in Table 2.",
"Our evaluation metric is Rouge-1/2/L, the same as in PEGASUS (Zhang et al., 2020).",
"We experiment broadly on 5 datasets, ranging from single-document summarization (XSum) to multi-document summarization (Multi-News), and from the news domain to wiki knowledge (WikiHow) and patent (BigPatent) domains.",
"We re-run all of our baseline methods.",
"Table 5: Language modeling results (perplexity, WikiText103 / WikiText2): Transformer-XL 18.30/-; kNN-LM 15.79/-; GPT-2 17.48/18.34; BART-base 15.88/20.41; REINA (B) 14.76/20.78; BART-large 12.10/15.11; REINA (L) 11.36/15.62.",
"Based on the experiment results, we find that REINA significantly boosts baselines initialized with different pre-trained models, such as PEGASUS, BART-base, and BART-large, on all 5 datasets.",
"Besides, our method with BART-large can achieve state-of-the-art performance on XSum and BigPatent datasets.",
"Moreover, we find REINA can help base models beat larger models.",
"For example, REINA (BART-base) is better than both PEGASUS-LARGE and BART-large on BigPatent and WikiHow datasets.",
"We also evaluate the ability of REINA on learning from more related datasets.",
"Our experiment results are shown in Table 3.",
"The evaluation is conducted on the XSum test set, and we use three related data sources: CNN/Dailymail, NEWSROOM, and a 160G raw-text corpus.",
"Based on the experiments, we can see that simply training the model on the merged dataset (XSum + other sources) does not lead to any gains.",
"However, after adding one additional data source to build the index and applying REINA, there is a 1% improvement in Rouge scores.",
"(For the 160G data, we treat the first sentence as the summary and the rest as the document.)",
"Overall, our REINA can effectively leverage the most relevant data from additional datasets while being trained only on the target task.",
"For question answering tasks, our results are shown in Table 4.",
"We test REINA on three datasets where commonsense knowledge is usually required to answer the question.",
"Thus we first verify whether we need external knowledge during the retrieval.",
"According to the experiments, we find that directly retrieving the labeled data without knowledge works best for the CommonsenseQA dataset, while involving knowledge helps on the aNLI and PIQA datasets.",
"REINA significantly improves our DeBERTa baselines on all the datasets.",
"Moreover, after submitting our best results to the corresponding leaderboards, REINA achieves the state of the art on the CommonsenseQA dataset (leaderboard No. 1) and beats strong baselines on the aNLI and PIQA datasets.",
"Our evaluation of language modeling is shown in Table 5.",
"Our method achieves significant improvement on the WikiText103 dataset over both BART-base and BART-large baselines.",
"However, it cannot lead to better performance on WikiText2.",
"One reason may be that WikiText2 is a much smaller dataset, making it hard for REINA to retrieve closely related text.",
"Besides, we also find that the Seq2Seq model can be a very strong baseline, which means we can leverage more pre-trained models, such as PEGASUS, T5 (Raffel et al., 2020), and BART, for language modeling in future work.",
"(In our experiments, we follow Xu and Durrett (2021) in ignoring retrieved summaries that have more than three 7-gram overlaps with the golden summary.)",
"The Seq2Seq framework would also be more flexible in integrating external knowledge to further boost performance.",
"For machine translation, we make use of the datasets from WMT16.",
"We select one low-resource pair, Turkish-English, and one rich-resource pair, German-English, for REINA evaluation, as shown in Table 6.",
"We re-implement the mBART baseline for translation in both directions.",
"To make a fair comparison, REINA is also based on mBART.",
"We find that REINA further boosts performance under three settings: translating English to Turkish, Turkish to English, and English to German.",
"We show a case study on the data retrieved by REINA .",
"We list two cases from XSum and CommonsenseQA dev sets.",
"From the case on the summarization task, we can see that the first retrieved summary from the training set, REINA 1, makes the same point about \"security concerns\" as the golden summary.",
"In the other case, on multiple-choice question answering, REINA 1 suggests that the sun can warm up a place, which is the same commonsense knowledge needed to answer the question.",
"Although we cannot visualize how the neural encoders leverage the retrieved data, we have shown that the data retrieved by REINA correlate strongly with the golden labels.",
"Our proposed method is general and can be easily integrated into different models on different tasks.",
"We show that REINA can effectively improve baseline performance on 11 datasets covering summarization, language modeling, machine translation, and question answering tasks."
] | [
"abstain",
"abstain",
"result",
"method",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"objective",
"objective",
"method",
"method",
"result",
"method",
"abstain",
"result",
"result",
"abstain",
"result",
"abstain",
"method",
"abstain",
"abstain",
"result",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"abstain",
"method",
"method",
"other",
"other",
"other",
"other",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"other",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"result",
"result",
"result",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result"
] |
[
"Constituency parsing and nested named entity recognition (NER) are similar tasks since they both aim to predict a collection of nested and non-crossing spans.",
"In this work, we cast nested NER to constituency parsing and propose a novel pointing mechanism for bottom-up parsing to tackle both tasks.",
"The key idea is based on the observation that if we traverse a constituency tree in post-order, i.e., visiting a parent after its children, then two consecutively visited spans would share a boundary.",
"Our model tracks the shared boundaries and predicts the next boundary at each step by leveraging a pointer network.",
"As a result, it needs only linear steps to parse and thus is efficient.",
"It also maintains a parsing configuration for structural consistency, i.e., always outputting valid trees.",
"Experimentally, our model achieves the state-of-the-art performance on PTB among all BERT-based models (96.01 F1 score) and competitive performance on CTB7 in constituency parsing; and it also achieves strong performance on three benchmark datasets of nested NER: ACE2004, ACE2005, and GENIA 1 .",
"Constituency parsing is an important task in natural language processing, having many applications in downstream tasks, such as semantic role labeling (Fei et al., 2021), opinion mining (Xia et al., 2021), among others.",
"Named entity recognition (NER) is a fundamental task in information extraction and nested NER has been receiving increasing attention due to its broader applications (Byrne, 2007).",
"Constituency parsing and nested NER are similar tasks since they both aim to predict a collection of nested and non-crossing spans (i.e., if two spans overlap, one must be a subspan of the other).",
"Fig. 1 (footnote: our code is publicly available at https://github. )",
"shows example span representations of both tasks.",
"The difference between the two tasks is that the collection of spans form a connected tree in constituency parsing, whereas they form several tree fragments in nested NER.",
"However, we can add a node that spans the whole sentence to connect all tree fragments in nested NER to form a tree.",
"Because of the similarity, there are some previous studies adapting methods from the constituency parsing literature to tackle nested NER (Finkel and Manning, 2009; Wang et al., 2018; Fu et al., 2021).",
"In this work, we focus on constituency parsing, but our proposed method tackles nested NER as well.",
"The mainstream methods for constituency parsing are span-based and transition-based methods.",
"Span-based methods (Stern et al., 2017; Kitaev and Klein, 2018; Zhang et al., 2020; Xin et al., 2021, inter alia) decompose the score of a constituency tree into the scores of constituent spans and use chart-based algorithms for inference.",
"Built upon powerful neural encoders, they have obtained state-of-the-art results.",
"However, they suffer from the high inference time complexity of exact algorithms or error propagation of top-down approximate algorithms.",
"In contrast, transition-based methods (Dyer et al., 2016; Cross and Huang, 2016; Liu and Zhang, 2017, inter alia) conduct a series of local actions (e.g., shift and reduce) to build the final parse in linear steps, so they enjoy lower parsing time complexities.",
"However, they suffer from the error propagation and exposure bias problems.",
"Recently, Nguyen et al. (2021a) propose a sequence-to-sequence (seq2seq) model with pointer networks (Vinyals et al., 2015a).",
"They cast constituency parsing to a top-down splitting problem.",
"First, they use neural encoders to obtain span representations, similar to span-based methods.",
"Then they feed input parent span representations into the neural decoder recursively following the order shown in Fig.",
"2(a), which amounts to pre-order traversal, to output a series of splitting points (i.e., boundaries) via pointer networks, so that each parent span is split into two child spans.",
"Notably, Nguyen et al. (2020) propose a similar top-down pointing mechanism, but they design a chart-based parsing algorithm instead of adopting seq2seq modeling, and their method has been shown to underperform that of Nguyen et al. (2021a).",
"Thanks to seq2seq modeling, Nguyen et al. (2021a)'s model achieves a competitive parsing performance with a lower parsing complexity compared with span-based methods.",
"However, their model has two main limitations.",
"First, when generating each constituent, its subtree features cannot be exploited since its subspans have not been realized yet (Liu and Zhang, 2017).",
"Thus it is difficult for the model to predict the splitting point of a long span due to a lack of its subtree information, which exacerbates the error propagation problem and undermines the parsing performance.",
"Second, since each parent span can only be split into two, their parsing algorithm can only output binary trees, thus needing binarization.",
"In this work, we devise a novel pointing mechanism for bottom-up parsing using (almost) the same seq2seq backbone as Nguyen et al. (2021a).",
"Our model is able to overcome the two aforementioned limitations of Nguyen et al. (2021a).",
"The main idea is based on the observation that if we traverse a constituency tree in post-order (i.e., visiting a parent after its children), two consecutively visited constituent spans would share a boundary.",
"Fig.",
"2(b) shows an example: the right boundary of span 1 is also the left boundary of span 2, and the right boundary of span 5 is also the right boundary of span 6.",
"Based on this observation, we propose to use a cursor to track the shared boundaries and, at each step, leverage a pointer network to predict the next boundary, generating the next constituent span and updating the cursor to the right boundary of the new span.",
"Our model generates one span at each step, thus needing only linear steps to parse a sentence, which is efficient.",
"Besides, our model can leverage rich subtree features encoded in the neural decoder to generate parent constituent spans, which is especially helpful in predicting long spans.",
"Finally, our model can output n-ary trees, enabling direct modeling of the original non-binary parse tree structures in treebanks and eliminating the need for binarization.",
"We conduct experiments on the benchmarking PTB and CTB for constituency parsing.",
"On PTB, we achieve the state-of-the-art performance (96.01 F1 score) among all BERT-based models.",
"On CTB, we achieve competitive performance.",
"We also apply our method to nested NER and conduct experiments on three benchmark datasets: ACE2004, ACE2005, and GENIA.",
"Our method achieves comparable performance to many tailored methods of nested NER, beating previous parsing-based methods.",
"Our contributions can be summarized as the following: We propose a novel pointing mechanism for bottom-up n-ary tree parsing in linear steps.",
"Our model achieves the state-of-the-art result on PTB in constituency parsing.",
"We further show its application in nested NER where it achieves competitive results.",
"It is known that constituency parsing can be regarded as a top-down splitting problem where parent spans are recursively split into pairs of subspans (Stern et al., 2017; Shen et al., 2018; Nguyen et al., 2020, 2021a).",
"However, this formulation can only output binary trees.",
"We make an extension to cast constituency parsing as top-down segmentation, i.e., parent spans are segmented into ≥ 2 subspans recursively, for the sake of outputting n-ary trees.",
"To this end, we add some spans (we do not allow two adjacent added spans, to eliminate ambiguities) so that each span is either a bottommost span or can be segmented by its subspans.",
"For instance, in Fig. 2, span 3 is a bottom-most span, and span 7 can be segmented by spans 2, 3, and 6.",
"We always include the whole-sentence span in order to cast other tasks, e.g., nested NER, to constituency parsing.",
"We also collapse unary chains into atomic labels in constituency parsing, e.g., S->VP becomes S+VP.",
"A problem of seq2seq constituency parsers is how to maintain structural consistency, i.e., outputting valid trees.",
"To solve this problem, our pointing system maintains a parsing configuration , which is a quadruple ( c, A, p, S ) where: c : index of the cursor; A : the set of currently accessible boundary indices; S : the set of spans generated so far.",
"p : the left boundary of the most recently created span, which is needed to maintain A .",
"We can see from Fig. 3 that in the beginning, the cursor c lies at 0.",
"At each step, c points to another boundary a from A to form a span (min( c, a ) , max( c, a )) .",
"There are two cases: c < a : a new bottom-most span is generated.",
"a < c : several consecutive spans are merged into a larger span.",
"It is worth noting that we can merge ≥ 2 spans in a single step, which allows our model to perform n-ary tree parsing.",
"In the first case, the new bottom-most span can combine with the immediately preceding span to form a larger span whose left boundary is p , so we push p back into A (except when p = null ).",
"In the latter case, the immediately preceding span is a subspan of the new span and thus p cannot be pushed back.",
"In both cases, all indices min( c, a ) ≤ i < max( c, a ) are removed from A due to the post-order generation restriction; p is updated to min( c, a ) and c is updated to max( c, a ) .",
"The process stops when the whole-sentence span is generated.",
"Table 1 formalises this process.",
"Oracle.",
"The oracle pointing representations shown in Fig.1 can be generated by running a postorder traversal of the tree (e.g., Fig.2) and for each traversed span, pointing the cursor from its boundary shared with the previous span to its other boundary.",
"If we do not allow two consecutive spans, the oracle is unique under our pointing system (we give a proof in Appendix A.1 by contradiction).",
"Given a sentence w = w 1 , ..., w n , we add <bos> (beginning of sentence) as w 0 and <eos> (end of sentence) as w n+1 .",
"The oracle is { q i → p i , y i } i =1 ,...,m , where y i is the span label and we use l i = min( q i , p i ) and r i = max( q i , p i ) to denote the left and right boundaries of the i -th span, respectively.",
"Encoder.",
"We feed the sentence into BERT (Devlin et al., 2019) and, for each word w i , use the last subtoken embedding of the last layer as its dense representation x i .",
"Then we feed x 0 , . . . , x n+1 into a three-layer bidirectional [Table 1: Description of the parsing configuration. Initial configuration: ( c, A, p, S ) = (0 , { 1 , 2 , . . . , n } , null , ∅) ; goal: (0 , n ) labeled S ; action LEFT-POINT-a maps ( c, A, p, S ) to ( c, A \\ { a, . . . , c−1 } , a, S ∪ { ( a, c ) } ) with precondition 0 ≤ a < c ; action RIGHT-POINT-a maps ( c, A, p, S ) to ( a, ( A ∪ { p } ) \\ { c, . . . , a−1 } , c, S ∪ { ( c, a ) } ) with precondition c < a ≤ n .]",
"LSTM (Hochreiter and Schmidhuber, 1997) (BiLSTM) to obtain c 0 , . . . , c n+1 , where c i = [ f i ; g i ] and f i and g i are the forward and backward hidden states of the last BiLSTM layer at position i , respectively.",
"Boundary and span representation.",
"We use the fencepost representation (Cross and Huang, 2016; Stern et al., 2017) to encode the i -th boundary lying between x i and x i+1 : b i = [ f i ; g i+1 ] ; then we represent span ( i, j ) as h i,j = MLP span ( b j − b i ) . Decoder.",
"We use a unidirectional one-layer LSTM network as the decoder: d t = LSTM ( d t−1 , [ h l t−1 , r t−1 ; E y t−1 ] ) , t ≥ 2 (1), where d t is the hidden state of the LSTM decoder at time step t , E is the label embedding matrix, and ; is the concatenation operation.",
"For the first step, we feed a randomly initialized trainable vector d 0 and a special <START> embedding into the decoder to obtain d 1 .",
"Pointing score.",
"We use a deep biaffine function (Dozat and Manning, 2017) to estimate the pointing score s ti of selecting the i -th boundary at time step t : d′ t = MLP cursor ( d t ) , b′ i = MLP point ( b i ) , s ti = [ b′ i ; 1 ]ᵀ W point d′ t , where MLP cursor and MLP point are multi-layer perceptrons (MLPs) that project decoder states and boundary representations into k -dimensional spaces, respectively, and W point ∈ R^((k+1)×k) .",
"Label score.",
"For a newly predicted span, we feed the concatenation of the span representation and the decoder state into another MLP to calculate the label score e t : H = MLP label ([ d t ; b r t − b l t ]) , e t = H Eᵀ . Note that we reuse the label embedding matrix from Eq.",
"1 to facilitate parameter sharing.",
"Training objective.",
"The training loss is decomposed into the pointing loss and the labeling loss: L = L pointing + L labeling , with L pointing = − Σ_{t=1}^{m} log ( exp{ s t,p t } / Σ_{j=0}^{n} exp{ s t,j } ) and L labeling = − Σ_{t=1}^{m} log ( exp{ e t,y t } / Σ_{j=1}^{|L|} exp{ e t,j } ) , where | L | is the number of labels.",
"Note that in the pointing loss we normalize over all boundaries instead of only accessible boundaries, because we find it performs better in our preliminary experiments.",
"Parsing.",
"Our model follows the description in the previous subsection for parsing.",
"For each time step t , it selects the highest-scoring accessible boundary to generate the span, then selects the highest-scoring label of the generated span, and updates the parsing configuration (Table 1).",
"Constituency parsing.",
"We conduct experiments on Penn Treebank (PTB) 3.0 (Marcus et al., 1993) and Chinese Treebank (CTB) (Xue et al., 2005).",
"Many previous researchers report that the results on CTB5.1 are unstable and of high variance (Zhang et al., 2020; Yang and Deng, 2020).",
"So we follow the suggestion of Zhang et al. (2020) to conduct experiments on CTB7 instead of CTB5.1 for more robust evaluation as CTB7 has more test sentences and has a higher annotation quality.",
"We use the standard data splits for both PTB and CTB.",
"Nested NER.",
"We conduct experiments on three benchmark datasets: ACE2004 (Doddington et al., 2004), ACE2005 (Walker et al., 2006), and GENIA (Kim et al., 2003).",
"We use the same data preprocessing as Shibuya and Hovy (2020) 3 .",
"We report labeled recall/precision/F1 scores based on EVALB 4 for constituency parsing; span-level labeled recall/precision/F1 scores for nested NER.",
"All reported results are averaged over three runs with different random seeds.",
"We use \"bert-large-cased\" (Devlin et al., 2019) for PTB, ACE2004 and ACE2005; \"bert-base-chinese\" for CTB; and \"biobert-large-cased-v1.1\" (Lee et al., 2020) for GENIA.",
"We use no other external resources (e.g., predicted/gold POS tags, external static word embedding).",
"The hidden size of LSTM is set to 1000 for both the encoder and the decoder.",
"We add dropouts in LSTM/MLP layers.",
"The dropout rate is set to 0.33.",
"The hidden and output sizes of all MLPs are set to 500.",
"The value of gradient clipping is set to 5.",
"The number of training epochs is set to 10 for PTB, CTB, GENIA; 50 for ACE2004/2005.",
"We use Adam (Kingma and Ba, 2015) as the optimizer with β1 = 0 .",
"9 and β2 = 0 .",
"9 .",
"The maximal learning rate is set to 5e−5 for BERT and 2 .",
"5e−3 for all other components.",
"We use the first 10% of epochs to linearly warm up the learning rate of each component to its maximum value and then gradually decay it to zero over the remaining epochs.",
"We batch sentences of similar lengths to make full use of GPUs and the number of tokens in a single batch is set to 3000.",
"On both PTB and CTB, we find incorporating E y t−1 in Eq.",
"1 leads to a slightly inferior performance (-0.02 F1 score on PTB and -0.05 F1 score on CTB), so we report results without this input feature.",
"Table 2 shows the results on PTB test set.",
"Our method achieves 96.01 F1 score, outperforming the method of Nguyen et al. (2021a) by 0.31 F1 and having the same worst-case O ( n 2 ) parsing time complexity as theirs 5 .",
"(Footnote 3: https://github.com/yahshibu/nested-ner-tacl2020-transformers ; footnote 4: https://nlp.cs.nyu.edu/evalb ; footnote 5: In their paper, they claim an O ( n ) time complexity, which treats the complexity of a single pointing operation as O(1).",
"This calculation, however, assumes full GPU parallelization.",
"Without parallelization, their method has a worst-case O ( n 2 ) time complexity, the same as ours.)",
"It also outperforms all span-based methods, obtaining the state-of-the-art performance among all BERT-based models while enjoying a lower parsing complexity.",
"Table 3 shows the results on CTB7.",
"Our method obtains 91.49 F1 score, which is comparable to the method of Zhang et al. (2020) but has a lower complexity (worst-case O ( n 2 ) vs. O ( n 3 ) ).",
"Table 4 shows the results on three benchmark dataset on nested NER.",
"We find that incorporating E y t−1 is important, leading to +0.67 F1 score and +0.52 F1 score on ACE2004 and ACE2005, respectively.",
"Although our method underperforms two recent state-of-the-art methods: Shen et al. (2021) and Tan et al. (2021), we find it has a competitive performance to other recent works (Wang et al., 2021; Yan et al., 2021; Fu et al., 2021).",
"The most comparable one is the method of Fu et al. (2021), which belongs to parsing-based methods as ours.",
"They adapt a span-based constituency parser to tackle nested NER using the CYK algorithm for training and inference.",
"Our model outperforms theirs by 0.34 F1 and 0.13 F1 scores on ACE2004 and ACE2005 and has a similar performance to theirs on GENIA, meanwhile enjoying a lower inference complexity.",
"Error analysis.",
"As we discussed previously, bottom-up parsing can make use of the subtree features when predicting parent spans, so it is expected to have higher F1 scores on longer spans.",
"To verify this, we plot Fig. 4 to show the changes of F1 scores with different constituent span lengths on the PTB test set.",
"We can see that our method consistently outperforms the method of Nguyen et al. (2021a) on all span lengths, but our advantage is most prominent for spans of length >30, which verifies our conjecture.",
"In Fig. 5, we can see that when a constituent has multiple children (>3), our method performs much better than that of Nguyen et al. (2021a), which validates the benefit of n-ary tree parsing.",
"An intuitive explanation of this benefit is that our method predicts n-ary branching structures in a single step, whereas theirs needs multiple steps, which is more error-prone.",
"Effect of beam search.",
"We also tried beam search but observed very slight improvement or even worse performance (e.g., +0.05 F1 score on PTB and -0.03 F1 score on CTB when we use a beam size 20).",
"Hence we report all results using greedy decoding for simplicity.",
"This suggests that greedy decoding can yield near-optimal solutions, indicating that our model is less prone to the error propagation problem.",
"Effect of training loss.",
"As discussed in Sec. 2.3, we find that explicitly considering the structural consistency constraints when normalizing is harmful (-0.12 F1 score on PTB, -0.10 F1 score on CTB).",
"We speculate that not enforcing the constraints during training can help the model to learn the constraints implicitly, which is helpful for the model to generalize better on the unseen test set.",
"Notably, Nguyen et al. (2021a) also adopt this strategy, i.e., normalizing over all boundaries.",
"Speed.",
"Similar to Nguyen et al. (2021a), the training process (i.e., teacher forcing) can be fully parallelized without resorting to structured inference, which could be compute-intensive or hard to parallelize.",
"On PTB, it takes only 4.5 hours to train the model using BERT as the encoder with a single Titan V GPU.",
"As for parsing, our method has the same parsing complexity as Nguyen et al. (2021a), i.e., worst-case O ( n 2 ) .",
"Table 5 shows the speed comparison on parsing the PTB test set (we report values based on a single Titan V GPU and not using BERT as encoder following Nguyen et al. (2021a)).",
"We report the average number of pointing actions in Appendix A.2.",
"Constituency parsing.",
"There are many methods to tackle constituency parsing, such as transition-based methods (Dyer et al., 2016; Cross and Huang, 2016; Liu and Zhang, 2017; Yang and Deng, 2020), span-based methods (Stern et al., 2017; Kitaev and Klein, 2018; Kitaev et al., 2019; Zhang et al., 2020; Wei et al., 2020; Nguyen et al., 2020; Xin et al., 2021), sequence-to-sequence (seq2seq)-based methods (Vinyals et al., 2015b; Fernández-González and Gómez-Rodríguez, 2020), sequence-labeling-based methods (Gómez-Rodríguez and Vilares, 2018; Vilares et al., 2019; Kitaev and Klein, 2020), among others.",
"Our work belongs to the category of seq2seq-based methods.",
"Previous seq2seq models linearize constituency trees into bracket sequences (Vinyals et al., 2015b) or shift-reduce action sequences (Ma et al., 2017; Fernández-González and Gómez-Rodríguez, 2020).",
"However, they may produce invalid outputs and their performance lags behind span-based methods.",
"Recently, seq2seq models linearize constituency trees into sequences of spans in pre-order (Nguyen et al., 2021a) or in in-order (Wei et al., 2021).",
"Our method generates sequences of spans in post-order instead, which has the advantage of utilizing rich subtree features and performing direct n-ary tree parsing.",
"Binarization is the de facto standard in constituency parsing, but there is a recent trend toward n-ary parsing.",
"Previous span-based methods adopt either explicit binarization (Zhang et al., 2020) or implicit binarization (Stern et al., 2017; Kitaev and Klein, 2018).",
"Although the implicit binarization strategy eliminates the need for binarization in training, it can only output binary trees during decoding.",
"Xin et al. (2021) propose an n-ary-aware span-based method by defining semi-Markov processes on each parent span so that the transition scores of adjacent sibling child-spans are explicitly considered in parsing.",
"Fernández-González and Gómez-Rodríguez (2019); Yang and Deng (2020) propose novel transition systems to model n-ary trees.",
"Our method outputs n-ary trees without the need for binarization via a novel pointing mechanism.",
"Parsing with pointer networks.",
"Pointer networks (Vinyals et al., 2015a) were introduced into the parsing literature by Ma et al. (2018) and quickly became popular in various parsing subtasks because they are flexible enough to predict various trees/graphs and can achieve very competitive performance.",
"Ma et al. (2018) linearize a dependency tree in a top-down depth-first and inside-out manner and use a pointer network to predict the linearized dependency tree, which is then extended by Lin et al. (2019) to discourse parsing.",
"Liu et al. (2019) add shortcuts between the decoder states of the previously generated parents/siblings to the current decoder states in both dependency and discourse parsing.",
"Fernández-González and Gómez-Rodríguez (2019) propose a left-to-right dependency parser that predicts the head of each word autoregressively, and later, they propose right-to-left and outside-in variants (Fernández-González and Gómez-Rodríguez, 2021a).",
"They also adapt the left-to-right dependency parser to semantic dependency parsing (which predicts acyclic graphs instead of trees) (Fernández-González and Gómez-Rodríguez, 2020), discontinuous constituency parsing (by treating discontinuous constituency trees as augmented dependency trees) (Fernández-González and Gómez-Rodríguez, 2020), and joint dependency and constituency parsing (Fernández-González and Gómez-Rodríguez, 2020).",
"They use a pointer network to reorder the sentence to reduce discontinuous constituency parsing to continuous constituency parsing (Fernández-González and Gómez-Rodríguez, 2021b).",
"Nguyen et al. (2021a,b) cast (discourse) constituency/RST parsing as conditional splitting and use pointer networks to select the splitting points.",
"Zhou et al. (2021) propose an action-pointer network for AMR parsing.",
"Nested NER.",
"There are also many methods to tackle nested NER, such as hypergraph-based methods (Lu and Roth, 2015; Katiyar and Cardie, 2018; Wang and Lu, 2018), sequence-labeling-based methods (Shibuya and Hovy, 2020; Wang et al., 2021), parsing-based methods (Finkel and Manning, 2009; Wang et al., 2018; Fu et al., 2021), layered methods (Fisher and Vlachos, 2019; Wang et al., 2020; Luo and Zhao, 2020), span-based methods (Yu et al., 2020; Li et al., 2021), object-detection-based methods (Shen et al., 2021; Tan et al., 2021) etc.",
"Our work belongs to the category of parsing-based methods.",
"Finkel and Manning (2009) insert named entities into a constituency tree and use a discriminative parser (Finkel et al., 2008) for learning and prediction.",
"Wang et al. (2018) adapt a shift-reduce transition-based parser to output a constituency forest instead of a constituency tree for nested NER.",
"Fu et al. (2021) adapt a span-based neural TreeCRF parser, treat nested named entities as the observed parts of a partially-observed constituency tree and develop a masked inside algorithm to marginalize all unobserved parts for maximizing the probability of the observed named entities.",
"Our method has a better performance as well as a lower time complexity than Fu et al. (2021).",
"Recently, Lou et al. (2022) extend the work of Fu et al. (2021), casting nested NER to lexicalized constituency parsing for leveraging headword information.",
"They achieve a higher performance at the cost of a higher parsing complexity, i.e., O ( n 4 ) .",
"In the deep learning era, global optimization on trees becomes less important in both training and decoding.",
"Teng and Zhang (2018) show that a span-based model trained with a local span classification loss performs well in conjunction with CYK decoding.",
"Wei et al. (2020); Nguyen et al. (2020) show that top-down greedy decoding performs comparably.",
"In this work we have shown that greedy decoding works well.",
"Thus it would also be a fruitful direction to design more powerful neural decoders which can leverage more subtree information and can maintain structural consistency.",
"Also, it is a fruitful direction to devise more powerful span representations.",
"In this work we have presented a novel pointing mechanism and model for bottom-up constituency parsing, which allows n-ary tree parsing in linear steps.",
"Experiments on multiple datasets show the effectiveness of our methods in both constituency parsing and nested NER.",
"We thank the anonymous reviewers for their constructive comments.",
"This work was supported by the National Natural Science Foundation of China (61976139)."
] | [
"abstain",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"method",
"abstain",
"method",
"result",
"result",
"method",
"result",
"objective",
"result",
"result",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"other",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"result",
"result",
"abstain",
"result",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"result",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"abstain",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"objective",
"result",
"other",
"other"
] |
[
"Discourse signals are often implicit, leaving it up to the interpreter to draw the required inferences.",
"At the same time, discourse is embedded in a social context, meaning that interpreters apply their own assumptions and beliefs when resolving these inferences, leading to multiple , valid interpretations.",
"However, current discourse data and frameworks ignore the social aspect, expecting only a single ground truth.",
"We present the first discourse dataset with multiple and subjective interpretations of English conversation in the form of perceived conversation acts and intents.",
"We carefully analyze our dataset and create computational models to (1) confirm our hypothesis that taking into account the bias of the interpreters leads to better predictions of the interpretations, (2) and show disagreements are nuanced and require a deeper understanding of the different contextual factors.",
"We share our dataset and code at http://github.com/ elisaF/subjective_discourse .",
"Discourse, like many uses of language, has inherent ambiguity, meaning it can have multiple , valid interpretations.",
"Much work has focused on characterizing these genuine disagreements (Asher and Lascarides, 2003; Das et al., 2017; Poesio et al., 2019; Webber et al., 2019) and incorporating their uncertainty through concurrent labels (Rohde et al., 2018) and underspecified structures (Hanneforth et al., 2003).",
"However, prior work does not examine the subjectivity of discourse: how you resolve an ambiguity by applying your personal beliefs and preferences.",
"[Figure 1 example question: \"So do you adjust your algorithms to prevent individuals interested in violence from being connected with like-minded individuals?\"] Our work focuses on subjectivity in question-answer conversations, in particular how ambiguities of responses are resolved into subjective assessments of the conversation act , a speech act in conversation (Traum and Hinkelman, 1992), and the communicative intent , the intention underly-",
"ing the act (Cohen and Perrault, 1979).",
"We choose conversation acts (or more broadly, dialogue acts) as a challenge to the view that dialog act classification may be an easy task; the task has never been approached from a subjective perspective.",
"Moreover, they are a good fit for our question-answering setting and are intuitive for naive annotators to understand.",
"Our data consists of witness testimonials in U.S. congressional hearings.",
"In Figure 1, annotators give conflicting assessments of responses given by the witness Mark Zuckerberg (CEO of Facebook) who is being questioned by Congressman Eliot Engel.",
"To make sense of our setting that has speakers (witness, politicians) and observers (annotators), we are inspired by the game-theoretic view of conversation in Asher and Paul (2018).",
"The players (witness, politicians) make certain discourse moves in order to influence a third party, who is the judge of the game (the annotator).",
"Importantly, the judge makes biased evaluations about the type of the player (e.g., sincere vs. deceptive ), which leads to differing interpretations of the same response.",
"In our example, the two annotators are the biased judges with differing judgments on what type of player Zuckerberg is: the first assumes sincere and the second deceptive .",
"For Zuckerberg's first response, the conversation act is interpreted unambiguously: both annotators agree he is signaling he can't answer the question.",
"The intent, however, is ambiguous, where the cynical annotator interprets the clarification question as lying in order to stall vs. being honest .",
"The second response yields both diverging conversation acts and intents: the first judge interprets the conversation act as an answer with the intent to provide a direct response, whereas the second judge perceives the conversation act as a shift to answer a different question with the intent to dodge the original, unfavorable question.",
"We detail our full label set in Section 3.2.",
"We create the first discourse dataset with multiple, valid labels that are subjective.",
"They do not hold concurrently and vary depending on the annotator; we collect annotator sentiments towards the conversants as a rough proxy for annotator bias.",
"We further elicit annotator explanations for a window into their rationalization.",
"A careful annotation protocol and qualification process ensure high quality crowd-sourced annotators with a strong understanding of the task.",
"Our dataset contains 6k judgments over 1k question-response pairs, with disagreements in 53.5% of the data.",
"However, unlike our prior example, disagreements are not often trivially attributable to differing sentiments.",
"Uncooperative moves are sometimes warranted, regardless of annotator sentiment.",
"Interpretation of a response is further influenced by its question.",
"A qualitative analysis of annotator explanations reveals strikingly different uses of subjective language across diverging interpretations.",
"Identifying all the possible interpretations of a response is a useful way of analyzing discourse in a realistic setting with multiple observers, and could aid in uncovering sociolinguistic aspects relevant to variations in discourse comprehension.",
"With these goals in mind, we propose the task of predicting the complete set of annotator labels for a given response.",
"We find a transformer-based model outperforms other neural and linear models.",
"We confirm our assumption that incorporating the context of the judge helps the model make better predictions, but still leaves room for improvement.",
"In summary, the task together with the dataset present a valuable opportunity to understand perceptions of discourse in a non-cooperative environment.",
"More broadly, we show the need and value for considering the subjectivity of NLP tasks.",
"Our work introduces a framework for identifying, eliciting, and analyzing these subjective elements, to enable application for other tasks.",
"Asher and Paul (2016) apply their game-theoretic view of non-cooperative conversations to discourse moves in Segmented Discourse Representation Theory (Asher and Lascarides, 2003).",
"Our work is applied instead to conversation acts and their communicative intents, which are more amenable to untrained annotators.",
"Conversation acts are speech acts specific to conversation that can encompass entire turns in a conversation (Traum and Hinkelman, 1992).",
"Speech act theory describes performative actions, i.e., how we can do things with words (Austin, 1962; Searle, 1969), but fails to account for how the act is perceived by an observer (the annotator in our scenario).",
"Subsequent work in planning extends the theory to incorporate the cognitive context of an observer that includes the perceived communicative intent underlying a speech act (Cohen and Perrault, 1979; Pollack, 1986).",
"Speech act theory originally did not consider insincere speakers, but later work recognized that even in non-cooperative settings, conversants adhere to the conventions of dialogue, or discourse obligations , such as responding to a question (Traum and Allen, 1994; Potts, 2008).",
"For this reason, we explicitly separate judgments on conversation acts (that usually fulfill a specific obligation) from communicative intents, which can be perceived as deceptive (or sincere).",
"Prior work examines how writer intentions are often misaligned with reader perceptions (Chang et al., 2020), which further motivates our focus on the reader (our annotator).",
"While our work focuses on subjectivity, ambiguity is studied in many NLP tasks, including Natural Language Inference (Pavlick and Kwiatkowski, 2019; Nie et al., 2020), evaluation of NLG (Schoch et al., 2020), a recent SemEval 2021 shared task (https://sites.google.com/view/semeval2021-task12/home), as well as several discourse tasks (Asher and Lascarides, 2003; Versley, 2011; Webber and Joshi, 2012; Das et al., 2017; Poesio et al., 2019; Webber et al., 2019).",
"Only one study strives to understand how these ambiguities are resolved: Scholman (2019) shows different interpretations of ambiguous coherence relations can be attributable to different cognitive biases.",
"However, our work focuses more generally on subjectivity rather than cognitive processes.",
"Related NLP tasks include dialog act classification, intent detection, deception detection and argumentation, though we importantly note these predict only a single interpretation.",
"Dialog acts are similar to conversation acts but apply at the utterance level.",
"Classification models typically combine representations of linguistic units (word, utterance, conversation-level) (Chen et al., 2018).",
"In our work, we employ a hierarchical model to account for the levels in our label taxonomy.",
"Intent detection is traditionally applied to human-computer scenarios for task-specific goals such as booking a flight.",
"Our conversation data is not task-oriented, and we thus define our intents more closely aligned with beliefs in the sincerity of the speaker.",
"Detection of deception is, unlike many other NLP tasks, challenging even for humans (Ott et al., 2011).",
"Most datasets consist of instructed lies (where participants are told to lie).",
"Our work contains naturally-occurring deception where we include not just lying but other more covert mechanisms such as being deliberately vague or evasive (Clementson, 2018), both frequent in political discourse (Bull, 2008).",
"Argumentation mining analyzes non-cooperative conversations, but typically requires expert annotators.",
"Recent work decomposes the task into intuitive questions for crowdsourcing (Miller et al., 2019), inspiring our annotation schemes that assume little to no training.",
"Closer to our setting is argument persuasiveness, where Durmus and Cardie (2018) find prior beliefs of the audience play a strong role in their ability to be persuaded, which further motivates our focus on the annotator's bias.",
"We create the first dataset with multiple, subjective interpretations of discourse (summarized in Table 1).",
"Recalling our example in Figure 1, we focus on responses to questions: the conversation act , how the response is perceived to address the question (such as Zuckerberg saying he cant_answer ); and the communicative intent , the sincere or deceptive intent behind choosing that form of response (such as one annotator believing the intent was honest ).",
"As our source of data, we choose the question-answer portions of U.S. congressional hearings (all in English) for several reasons: they contain political and societal controversy identifiable by crowdsourced workers, they have a strong signal of ambiguity as to the form and intent of the response, and the data is plentiful.",
"Table 1: Statistics of our 20 U.S. congressional hearings. Questions: 4.1 sents/turn, 81.5 toks/turn, 4096 total sents, 82582 total toks, 91 total speakers; responses: 2.6 sents/turn, 47.0 toks/turn, 2634 total sents, 48831 total toks, 20 total speakers.",
"A dataset statement is in Appendix D. Congressional hearings are held by committees to gather information about specific issues before legislating policies.",
"Hearings usually include testimonies and interviews of witnesses.",
"We focus on hearings that interview a single witness and that exceed a certain length ( > 100 turns) as a signal of argumentative discourse.",
"To ensure a variety of topics and political leanings are included, we sample a roughly equal number of hearings from 4 Congresses (113th-116th) that span the years 2013-2019, for a total of 20 hearings.",
"For each hearing, we identify a question as a turn in conversation containing a question posed by a politician that is immediately followed by a turn in conversation from the witness, which is the response .",
"We thus extract the first 50 question-response pairs from each hearing.",
"Each data point consists of a question followed by a response.",
"Table 1 summarizes the dataset statistics.",
"We collect labels through the Amazon Mechanical Turk crowdsourcing platform.",
"In the task, we ask a series of nested questions feasible for untrained annotators (from which we derive question and response labels), then elicit annotator sentiment.",
"Each HIT consists of five question-response pairs in sequential order from the same hearing; we group them to preserve continuity of the conversation while not overloading the annotator.",
"We collect 7 judgments for each HIT.",
"Screenshots of the task and the introductory example with all annotations are in Appendix A.",
"For each question-response pair we collect three pieces of information: the question label, the response label, and an explanation (note that transcripts lack intonation and gestures, and thus a certain amount of information is lost from the original discourse).",
"During our pilot, we experimented with increasing the number of judgments (up to 11) but found the number of chosen labels remains stable.",
"At the end of each HIT, we collect two pieces of information: the annotator's sentiment towards the questioners, and sentiment towards the witness.",
"Question We collect judgments on the question as it can influence the response.",
"For example, an objective, information-seeking question lends itself to a direct answer (Table 2 example (1)).",
"A loaded question with presuppositions can instead result in an indirect answer when rejecting these presuppositions (Walton, 2003; Groenendijk and Stokhof, 1984), as in example (2) of Table 2.",
"Leading questions, often asked as declarative or tag questions, are conducive to a particular answer (Bolinger, 1957) and signal the questioner is making a commitment to that underlying proposition.",
"A pragmatic listener, such as our annotator, is inclined to believe the questioner has reliable knowledge to make this commitment (Gunlogson, 2008).",
"Challenging the commitment leads to indirect answers as in example (3) of Table 2.",
"To elicit the question intent without requiring familiarity with the described linguistic concepts, we ask the annotator a series of intuitive questions to decide if the question is an attack on the witness, favoring the witness, or neutral .",
"We use a rule-based classifier to determine the question type ( wh , polar , disjunctive , tag , declarative ).",
"Response For judging the response, we combine conversation acts with communicative intents as in Figure 2, in the spirit of the compositional semantic framework of Govindarajan et al. (2019).",
"The taxonomy is a result of a combination of expert involvement, data observation and user feedback.",
"We elicit sentiments at the end because we do not expect annotators to be familiar with the hearing or conversants.",
"Future annotations could elicit sentiments at the beginning to capture strong a priori biases in high-profile hearings.",
"We consulted with existing taxonomies (SWBD-DAMSL (Jurafsky et al., 1997), MRDA (Shriberg et al., 2004), DialogBank (Bunt et al., 2018), evasive rhetorical strategies in Gabrielsen et al. (2017), and dialogue acts paired with content features in Plüss and Piwek (2016)) and with researchers in the dialogue field to construct the initial taxonomy, then conducted internal pilots with linguists and non-linguists, and finally conducted several iterations of an external pilot with crowdworkers to further refine the taxonomy.",
"Figure 2: Hierarchical taxonomy of the perceived conversation act and intent for a response, forming the 6 response labels: answer+direct, answer+overanswer, cant_ans+lying, cant_ans+honest, shift+dodge, shift+correct.",
"We next describe the taxonomy and its theoretical motivations.",
"In accordance with the discourse obligations of a conversation, a witness must respond in some form to a question (Traum and Allen, 1994).",
"The function of the response is captured by the perceived conversation act , and is meant to be a more objective judgment (e.g., recognizing that Zuckerberg is using the 'can't answer' form of a response, regardless of whether you believe him).",
"This conversation act constitutes the top layer of the taxonomy.",
"The conversation acts include the standard answer and cant_answer .",
"Inspired by work on answerhood (Ginzburg et al., 2019; de Marneffe et al., 2009; Groenendijk and Stokhof, 1984) and evasion in political discourse (Gabrielsen et al., 2017), we also include a more nuanced view of answering the question where giving a partial answer or answering a different question is labeled as shift .",
"The bottom layer of the taxonomy is the perceived intent underlying that conversation act, and is meant to be subjective.",
"The intents hinge on whether the annotator believes the witness's conversation act is sincere or not.",
"For answer , the annotator may believe the intent is to give a direct answer, or instead an overanswer with the intent to sway the questioner (or even the public audience).",
"If shifting the question, the annotator may believe the responder is correcting the question (e.g., to reject a false presupposition) or is attempting to dodge the question.",
"If the witness says they cant_answer , the annotator may believe the witness is honest or is lying .",
"The annotation task implements a series of nested questions that mimic the hierarchy of the label taxonomy, which we map to conversation act and intent labels.",
"That is, we first ask how the witness responds to the question (conversation act), then what is the intent and combine these into a single response label.",
"Explanation We ask annotators for a free-form explanation of their choices in order to elicit higher quality labels (McDonnell et al., 2016) and for use in the qualifying task as explained later.",
"Sentiment At the end of the HIT, we ask the annotator to rate their sentiment towards the politicians and towards the witness on a 7-point scale (we later collapse these into 3 levels: negative, neutral, positive).",
"These ratings provide a rough proxy for annotator bias.",
"Because the task requires significant time and cognitive effort, we establish a qualification process.",
"In the qualifying task, we include question-response pairs already explained in the instructions, and unambiguous cases as far as the conversation act (e.g., a response of 'Yes' can only be construed as an answer).",
"The criteria for qualification are: correctly labeling the conversation act for the instruction examples and unambiguous cases, providing explanations coherent with the intent label, and response times not shorter than the reading time.",
"Overanswering with the intent to be helpful was included in our original taxonomy but then eliminated due to sparsity.",
"This is in addition to the requirements of > 95% approval rating, > 500 approved HITs, and living in the US for greater familiarity with the political issues.",
"This rigorous process yielded high quality data from 68 annotators who were genuinely engaged with the task.",
"On average, an annotator labeled 91 question-response pairs, with 4 superannotators who provided labels for half of the data.",
"During post-processing, we consider a label valid if it receives more than one annotator vote.",
"The annotated dataset consists of 1000 question-response pairs with 6,207 annotations (3-7 annotations per item) on the first 50 question-responses from each of 20 congressional hearings.",
"Here, we explore the annotated dataset to confirm its validity, focusing on the response labels (Figure 3) and sentiment towards the witness.",
"We then conduct a word association analysis that finds meaningful lexical cues for the conversation act, but not for the intent label.",
"Is there disagreement?",
"One initial question with collecting data on multiple interpretations is whether crowdworkers have sufficiently different viewpoints.",
"However, we do find there is sufficient disagreement: Figure 4(a) shows annotators disagree about the response label (the combined conversation act + intent) on roughly half the data (53.5%), though this trend can vary considerably from one hearing to the next as shown in (b) and (c).",
"We next examine the response label's inter-annotator agreement (IAA) and which labels are disagreed upon.",
"We do not expect high IAA for the response label as we are eliciting disagreement.",
"Overall, IAA is 0.494 in Krippendorff's α (considered 'moderate'; Artstein and Poesio (2008)), but importantly, we find higher agreement on the conversation act (0.652) compared to the intent (0.376).",
"This finding confirms annotator understanding that the top-level label is more objective than the bottom-level one.",
"We next group annotators with the same sentiments, expecting that when there is a disagreement, the same-sentiment groups will agree more with each other than with others.",
"We partly confirm this intuition in Figure 5: grouping annotators by their sentiment increases agreement, but not by much.",
"Sentiment is actually a more complicated signal, as we show in the following section.",
"Exploring annotator disagreements on the response label, we list the most frequent ones in Table 4.",
"We find the disagreements often have opposing intents, but agree on the conversation act (e.g., shift+correct vs. shift+dodge ).",
"This result is encouraging, showing annotators have a shared understanding of the label definitions and further motivating our label taxonomy (Figure 2).",
"Is sentiment predictive of intent?",
"We have pointed out how the annotator's sentiment towards the witness can help explain the label they choose.",
"Is annotator sentiment then an easy predictor of the intent label or is it a more complicated signal?",
"A correlation study shows they are in fact only weakly correlated (correlation ratio = 0.34 for coarse-grained sentiment).",
"There are two reasons for this result: (1) responses may have an unambiguous interpretation regardless of annotator sentiment, and (2) annotator sentiment towards the witness typically fluctuates throughout the hearing.",
"The most common unambiguous response is answer+direct (58%).",
"Direct answers often leave little room for interpretation (e.g., 'Yes, that is correct.').",
"More interestingly, annotators sometimes choose an intent that conflicts with their sentiment towards the witness (in 10% of unambiguous items).",
"We illustrate the two cases in Table 3.",
"In the first case, even the annotators with a negative view of the witness choose a sincere intent label.",
"Conversely, in the second case, even the annotators with a positive view of the witness choose a deceptive intent label. While these are small phenomena, they illustrate the nuances of signaling sincerity and how they interact with the annotator's sentiment towards the witness.",
"For the annotator's sentiment across a hearing, a simplifying assumption is that it remains constant (recall the sentiment is reported at the end of each HIT, and HITs are presented to annotators in almost the same order as the original hearing).",
"In practice it does not: 59% of annotators that label more than one HIT change their sentiment.",
"As one annotator explained, 'When he [the witness] said that, I got a different attitude towards him.'",
"Influence of question Earlier, we posited the question influences the response (Table 2).",
"We find the question intent and type are weakly correlated with the response label.",
"On a per-hearing basis, though, we observe stronger correlation for declarative question types in some hearings, partly confirming our hypothesis.",
"We find qualitative evidence in explanations that annotators consider the question ('it was a terrible question to begin with').",
"Lexical cues for labels To understand whether the response labels have lexical cues, we follow Schuster et al. (2019) to analyze the local mutual information (LMI) between labels and the response text n-grams (n=1,2,3).",
"[Table excerpt: annotator with positive sentiment vs. annotator with negative sentiment; R: 'Congresswoman, it might be useful to clarify what actually happened.']",
"Unlike PMI, LMI highlights high frequency words co-occurring with the label.",
"The top-scoring n-grams in Table 6 show most labels have a meaningful cue (the lower scoring words are not informative as they tend to be hearing-specific with much lower frequencies).",
"The ans+direct cues signal straight answers.",
"Dashes for both shift labels indicate the witness was interrupted (recall these include partial answers).",
"Both cant_answer labels have the same cues, which include negation (to indicate not being able to answer) and question mark for clarification questions.",
"We thus expect these cues may help identify conversation acts, but not the intents.",
"In summary, our analysis of the dataset shows there is ample and genuine disagreement.",
"Interestingly, these disagreements are only partly attributable to differences in annotator sentiment.",
"Furthermore, sentiment often fluctuates across a hearing, and can be influenced by what is said during the hearing.",
"The question labels are not a straightforward signal for the response labels, but can vary by hearing.",
"Finally, we find evidence of lexical cues for the conversation act label, but not for the intent.",
"The explanations are a rich source of data for understanding annotator interpretations, with evidence they are applying personal beliefs ('Bankers are generally evil') and experiences ('I have watched hearings in congress').",
"We conduct a qualitative analysis to gain insight into the differing interpretations.",
"Explanations are free-form, but annotators sometimes quote parts of the response.",
"Interestingly, multiple annotators can quote the same text, yet arrive at opposite labels, as in Table 5.",
"Studying these cases offers a window into what part of a discourse may trigger a subjective view, and how this view is expressed.",
"To this end, we examine the discourse and argumentative relations of the quoted text, and the linguistic devices used by the annotator to present the quote. We find the quoted text is often part of the response's supporting argument, serving as the background or motivation that underpins the main claim.",
"The annotator's presentation of the quote differs drastically depending on their slant.",
"Sincere labels use neutral or positive language ('state', 'say factually'), whereas deceptive labels use negative words and framing ('evades', 'goes off on a tangent').",
"Quotation marks in positive explanations become scare quotes in a negative one (first example in Table 5).",
"On the negative side, we also find hedging ('claim') and metaphors ('skirting the meaning', 'dances around').",
"Our qualitative analysis shows annotators consider the side arguments underpinning the main claims, and employ rich linguistic devices to reflect their judgments.",
"We propose the task of predicting all possible interpretations of a response (i.e., all perceived conversation act+intent labels) with the goals of analyzing discourse in a realistic setting and understanding sociolinguistic factors contributing to variations in discourse perception.",
"We frame this task as a multilabel classification setting where 6 binary classifiers predict the presence of each of the 6 labels.",
"We experimented with a set-valued classifier that predicts the label set from all observed combinations (27-way multi-class classification), but found this didn't work well.",
"We evaluate with macro-averaged F1, which gives equal weight to all classes, unlike micro-averaging, which in our imbalanced data scenario (Figure 3) would primarily reflect the performance of the large classes.",
"We experiment with pretrained language models with the intuition that a general language understanding module can pick up on patterns in the response to distinguish between the classes.",
"Training We split the data into 5 cross-validation folds, stratified by congressional hearing (to preserve the differing response distributions as seen in Figure 3).",
"We reserve one fold for hyperparameter tuning and use the remaining 4 folds for cross-validation at test time.",
"Baselines The ALLPOSITIVE baseline predicts 1 for all labels.",
"This baseline easily outperforms a majority baseline that predicts the most frequent label ( answer+direct ).",
"LOGREGRESSION performs logistic regression with bag-of-words representations.",
"CNN is a convolutional neural network as implemented in Adhikari et al. (2020).",
"Other baselines performing lower than CNN are in Appendix C.",
"Pretrained We experiment with several pretrained language models, and find ROBERTA (Liu et al., 2019) performs the best on the held-out development fold.",
"We use the implementation from Hugging Face.",
"We feed in the tokenized response text and truncate input to 512 word pieces (additional inputs used in the model variants we describe next are separated by the [SEP] token).",
"Hierarchical We use two classifiers to mimic the hierarchy of our taxonomy: the first classifier predicts the conversation act while the second predicts the complete label (conversation act+intent).",
"We train the classifiers independently, and condition the second classifier on the ground truth of the first classifier during training, only placing a distribution over intents consistent with that conversation act.",
"At test time, we use predictions from the first classifier instead of ground truth.",
"+Question Building on top of the hierarchical model, this model incorporates the context of the question by including all interrogative sentences.",
"+Annotator This model incorporates annotators' coarse-grained sentiment towards the witness (fed in as a space-separated sequence of numbers, where each number is mapped from {negative, neutral, positive} sentiment to {-1, 0, 1}).",
"The pretrained models easily outperform the baselines as seen in Table 7, where ROBERTA performs best.",
"We next report results on incorporating hierarchy and context.",
"Macro-F1 is calculated over the pooled results of the 4 folds; statistical significance is measured with the paired bootstrap test (Efron and Tibshirani, 1994) and p < 0.05.",
"Adding hierarchy As seen in Table 8, incorporating an additional classifier to predict the top-level conversation act helps, but not significantly.",
"The per-class performance shows it mainly helps the less-represented conversation acts shift and cant_answer , with a better false negative rate for these classes.",
"While the HIERARCHICAL model makes fewer errors of the kind intended to be corrected by the hierarchy as illustrated in Table 9 (by not predicting labels incompatible with the conversation act), the difference is very small.",
"Jointly training these two classifiers with an adaptive learning curriculum may yield better results, which we leave for future work.",
"Adding context As shown in Table 8, adding the question in +QUESTION actually hurts performance, in particular by overpredicting the smaller classes ans+overans and cant_ans+honest .",
"The lack of a benefit contradicts our expectations of the importance of the question and qualitative evidence, but is consistent with the weak correlation results.",
"We employ this truncation method because questions can be very lengthy (Table 1); we obtain poorer results using other forms of question context, including the entire question text or only the last question.",
"We nevertheless choose to build on this model as the subsequent models incorporating context exhibit more stable and significant differences.",
"We hypothesize a different representation of the question is needed for the model to exploit its signal, which we leave for future work.",
"Incorporating the annotator sentiments in +ANNOTATOR provides a statistically significant benefit that helps both the false positive and false negative rate of the smaller classes ans+overans and cant_ans+lying .",
"In the example of Table 9 which has mostly neutral sentiments, the model corrects the false positive made by the HIERARCHICAL model for cant_ans+lying .",
"From these results, we conclude that our task is heavily contextual with complex labels.",
"On the one hand, taking into account the sentiments of the annotator leads to better predictions.",
"On the other hand, we've shown annotator sentiment is not a simple reflection of intent.",
"Furthermore, questions qualitatively influence the response labels, but linguistic features and labels of the question are not strongly correlated with the response and our model is not able to make effective use of it.",
"The disagreements appear to reflect other axes, and this work begins to scratch the surface of understanding the subjective conversation acts and intents in conversational discourse.",
"In this paper, we tackle the subjectivity of discourse; that is, how ambiguities are resolved.",
"We present a novel English dataset containing multiple ground truths in the form of subjective judgments on the conversation acts and intents of a response in a question-response setting.",
"We show the dataset contains genuine disagreements which turn out to be complex and not easily attributable to a single feature, such as annotator sentiment.",
"The annotator rationales provide a window into understanding these complexities, and offer a rich source of linguistic devices.",
"We propose a task to predict all possible interpretations of a response, whose results are consistent with our data analysis: incorporating the annotator bias helps the model significantly improve.",
"We publicly release the dataset in hopes of spurring further research, e.g., exploring the sequential nature of the hearings to employ CRF-type losses, and other forms of aggregating annotator judgments.",
"We provide a detailed dataset statement in Appendix D. The data collected in this dataset is produced by the U.S. government and is freely available to the public.",
"The ids of the crowdsourced workers that contributed to the annotation are anonymized.",
"Workers were compensated an average of $1.20 per HIT (approximately $8/hour), using the U.S. federal minimum wage as a minimum bar.",
"We recognize that crowdsourced workers, and thus the collected judgments in our dataset, are not representative of the U.S. population (Difallah et al., 2018).",
"We thank the annotators that contributed to this dataset.",
"We thank reviewers and the first author's thesis committee for insightful feedback.",
"We acknowledge the Texas Advanced Computing Center for grid resources.",
"The first author was supported by the NSF Graduate Research Fellowship Program under Grant No. 2017247409."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"result",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"method",
"abstain",
"result",
"method",
"other",
"method",
"other",
"abstain",
"other",
"other",
"abstain",
"abstain",
"method",
"other",
"other",
"method",
"other",
"other",
"method",
"other",
"method",
"other",
"other",
"abstain",
"other",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"method",
"objective",
"result",
"abstain",
"objective",
"objective",
"method",
"abstain",
"abstain",
"method",
"other",
"other",
"other",
"other"
] |
[
"Evaluation for many natural language understanding (NLU) tasks is broken: Unreliable and biased systems score so highly on standard benchmarks that there is little room for researchers who develop better systems to demonstrate their improvements.",
"The recent trend to abandon IID benchmarks in favor of adversarially-constructed, out-of-distribution test sets ensures that current models will perform poorly, but ultimately only obscures the abilities that we want our benchmarks to measure.",
"In this position paper, we lay out four criteria that we argue NLU benchmarks should meet.",
"We argue most current benchmarks fail at these criteria, and that adversarial data collection does not meaningfully address the causes of these failures.",
"Instead, restoring a healthy evaluation ecosystem will require significant progress in the design of benchmark datasets, the reliability with which they are annotated, their size, and the ways they handle social bias.",
"A large and impactful thread of research on natural language understanding (NLU) has focused on improving results on benchmark datasets that feature roughly independent and identically distributed (IID) training, validation, and testing sections, drawn from data that were collected or annotated by crowdsourcing (Maas et al., 2011; Bowman et al., 2015; Rajpurkar et al., 2016; Wang et al., 2019b).",
"Recent methodological progress combined with longstanding issues in crowdsourced data quality has made it so state-of-the-art systems are nearing the maximum achievable values on most of these benchmarks and thus are unlikely to be able to measure further improvements (Devlin et al., 2019; Raffel et al., 2020).",
"At the same time, these apparently high-performing systems have serious known issues and have not achieved human-level competence at their tasks (Ribeiro et al., 2020).",
"1. Good performance on the benchmark should imply robust in-domain performance on the task.",
"(cid:44) We need more work on dataset design and data collection methods.",
"2. Benchmark examples should be accurately and unambiguously annotated.",
"(cid:44) Test examples should be validated thoroughly enough to remove erroneous examples and to properly handle ambiguous ones.",
"3. Benchmarks should offer adequate statistical power.",
"(cid:44) Benchmark datasets need to be much harder and/or much larger.",
"4. Benchmarks should reveal plausibly harmful social biases in systems, and should not incentivize the creation of biased systems.",
"(cid:44) We need to better encourage the development and use auxiliary bias evaluation metrics.",
"Progress suffers in the absence of a trustworthy metric for benchmark-driven work: Newcomers and non-specialists are discouraged from trying to contribute, and specialists are given significant freedom to cherry-pick ad-hoc evaluation settings that mask a lack of progress (Church and Hestness, 2019).",
"The plight of benchmark-driven NLU research has prompted widespread concern about the assumptions underlying standard benchmarks and widespread interest in alternative models of evaluation.",
"As an especially clear example, the documentation for the recent DynaBench benchmark suite argues that benchmarks saturate, benchmarks have artifacts, researchers overfit on bench-marks, and benchmarks can be deceiving and use these claims to motivate abandoning the IID paradigm in favor of benchmark data that is collected adversarially by asking a broad population of annotators to try to fool some reference neural network model.",
"1 1 https://dynabench.org/about The DynaBench approach falls into the broader category of adversarial filtering (Paperno et al., 2016; Zellers et al., 2018; Nie et al., 2020; Le Bras et al., 2020).",
"Adversarial filtering starts with a pipeline that produces candidate examples for the task, often through crowdsourcing, and then constructs a dataset by selecting those examples from the pipeline where one or more machine learning models fails to predict the correct label.",
"This approach is appealing in that it guarantees that, at least in the short term, existing approaches to dataset construction can be patched to keep producing data that will challenge current systems.",
"However, collecting examples on which current models fail is neither necessary nor sufficient to create a useful benchmark.",
"Among other points of concern, this approach can create a counterproductive incentive for researchers to develop models that are different without being better, since a model can top the leaderboard either by producing fewer errors than the adversary or by simply producing different errors, because the examples on which these new errors would be tested will not appear in the evaluation set.",
"One could attempt to do this by, for example, pretraining new models that deliberately avoid any data that was used to pretrain the original adversary model, in order to minimize the degree to which the idiosyncratic mistakes of the new model line up with those of the old one.",
"This incentive can slow progress and contribute to spurious claims of discovery.",
"This position paper argues that concerns about standard benchmarks that motivate methods like adversarial filtering are justified, but that they can and should be addressed directly, and that it is possible and reasonable to do so in the context of static, IID evaluation.",
"We propose four criteria that adequate benchmarks should satisfy: benchmarks should offer a valid test of the full set of relevant language phenomena, they should be built around consistently-labeled data, they should offer adequate statistical power, and they should disincentivize the use of systems with potentially harmful biases.",
"We then briefly survey some ongoing or promising research directions that could enable us to meet these challenges, including hybrid data collection protocols involving both crowdworkers and domain experts, larger-scale data validation, and auxiliary bias metric datasets attached to benchmarks.",
"The Problem Performance on popular benchmarks is extremely high, but experts can easily find issues with high-scoring models.",
"The GLUE benchmark (Wang et al., 2019b; Nangia and Bowman, 2019), a compilation of NLU evaluation tasks, has seen performance on its leaderboard approach or exceed human performance on all nine of its tasks.",
"The follow-up SuperGLUE benchmark project (Wang et al., 2019a) solicited dataset submissions from the NLP research community in 2019, but wound up needing to exclude the large majority of the submitted tasks from the leaderboard because the BERT model (Devlin et al., 2019) was already showing performance at or above that of a majority vote of human crowdworkers.",
"Of the eight tasks for which BERT did poorly enough to leave clear headroom for further progress, all are now effectively saturated (Raffel et al., 2020; He et al., 2020).",
"State-of-the-art performance on the highly popular SQuAD 2 English reading-comprehension leaderboard (Rajpurkar et al., 2018) has long exceeded that of human annotators.",
"Ample evidence has emerged that the systems that have topped these leaderboards can fail dramatically on simple test cases that are meant to test the very skills that the leaderboards focus on (McCoy et al., 2019; Ribeiro et al., 2020).",
"This result makes it clear that our systems have significant room to improve.",
"However, we have no guarantee that our benchmarks will detect these needed improvements when they're made.",
"Most were collected by crowdsourcing with relatively limited quality control, such that we have no reason to expect that perfect performance on their metrics is achievable or that the benchmark will meaningfully distinguish between systems with superhuman metric performance.",
"While the true upper bound on performance for any task (Bayes error) is not measurable, the fact that our systems have exceeded serious estimates of human performance leaves us with no reason to expect there to be much more headroom.",
"In addition, many of our best models display socially-relevant biases that render them inappropriate for deployment in many applications.",
"2 Our best current benchmarks do little or nothing to dis-2 The state-of-the-art T5 model, for example, shows far more sensitivity to irrelevant gender information than humans do when making coreference judgments, according to results on the SuperGLUE leaderboard with the DNC Winogender dataset (Rudinger et al., 2018; Poliak et al., 2018).",
"courage harmful biases and, by building largely on crowdsourced or naturally-occurring text data, they likely incentivize the development of models that reproduce problematic biases, at least to some degree.",
"The Goal This paper lays out four criteria that we would like our benchmarks to satisfy in order to facilitate further progress toward a primarily scientific goal: building machines that can demonstrate a comprehensive and reliable understanding of everyday natural language text in the context of some specific well-posed task, language variety, and topic domain.",
"Among language understanding tasks, we focus on those that use labeled data and that are designed to test relatively general language understanding skills, for which the design of benchmarks can be especially difficult.",
"We distinguish between a task and a benchmark : A task , in our terms, is a language-related skill or competency that we want a model to demonstrate in the context of a specific inputoutput format.",
"A benchmark attempts to evaluate performance on a task by grounding it to a text domain and instantiating it with a concrete dataset and evaluation metric.",
"As a rough example, multiple-choice reading-comprehension question answering is a task, which the Cosmos benchmark (Huang et al., 2019) attempts to test using an accuracy metric over a specific sample of passages and questions from the English personal narrative domain.",
"There is no general way to prove that a concrete benchmark faithfully measures performance on an abstract task.",
"Nevertheless, since we can only evaluate models on concrete benchmarks, we have no choice but to strengthen the correspondence between the two as best we can.",
"We set aside the evaluation of computational efficiency and data efficiency, despite its relevance to many specific applications of language technology.",
"We will not fully set aside issues of social bias.",
"Even though it is possible for the same system to demonstrate both adept language understanding and harmful social prejudices, 3 ethical concerns prompt us to argue that community-wide benchmarks should identify and disincentivize potentially harmful biases in models.",
"The widespread sharing of trained models among NLU researchers and en-3 The performance of models like RoBERTa (Liu et al., 2019) or T5 (Raffel et al., 2020) on benchmarks like SuperGLUE that include some coverage of social bias is a good example of this, and typical human behavior is an even better example.",
"gineers and the fast pace of NLP R&D work mean that it is easy for systems designed with scientific goals in mind to be deployed in settings where their biases can cause real harm.",
"While recent initiatives around data documentation should reduce the accidental deployment of models built on inappropriate data (Bender and Friedman, 2018; Gebru et al., 2018), we see room to do more.",
"We will also set aside few-shot learning, in which tasks are made artificially difficult by training models only on small subsets of the available training data (as was prominently used for GPT-3 by Brown et al., 2020).",
"This paper focuses instead on the case where one is interested in reaching excellent performance on some language task and is willing to collect data or otherwise expend resources to make that possible.",
"While few-shot learning represents a potentially impactful direction for engineering research, and success on some task in a few-shot setting is clear evidence of success more generally, artificial constraints on the use of training data do not fit the broad goals laid out above and do not fit many applied settings.",
"This paper focuses on four criteria, outlined in Figure 1, that we argue effective future benchmarks for NLU tasks should satisfy.",
"We believe that no current benchmark for any difficult broad-domain NLU task satisfies all four: 3.1 Validity If one system significantly outperforms another on some benchmark, then that result should be strong evidence that the higher-scoring system is actually better at the task tested by the benchmark.",
"In other words, benchmarks are only useful for language understanding research if they evaluate language understanding.",
"General-purpose benchmarks that are designed to cover tasks like paragraph reading comprehension over Wikipedia are only effective if they test the full range of skills that are required to understand and reason about paragraphs from Wikipedia.",
"This criterion is difficult to fully formalize, and we know of no simple test that would allow one to determine if a benchmark presents a valid measure of model ability.",
"Minimally, though, it requires the following: An evaluation dataset should reflect the full range of linguistic variationincluding words and higher-level constructionsthat is used in the relevant domain, context, and language variety.",
"An evaluation dataset should have a plausible means by which it tests all of the language-related behaviors that we expect the model to show in the context of the task.",
"An evaluation dataset should be sufficiently free of annotation artifacts (as in Si et al., 2019; Sugawara et al., 2020b; Niven and Kao, 2019) that a system cannot reach near-human levels of performance by any means other than demonstrating the required language-related behaviors.",
"If a benchmark fully meets this challenge, we should expect any clear improvement on the benchmark to translate to similar improvements on any other valid and reasonable evaluation data for the same task and language domain.",
"4 The rest of this section surveys common paradigms for constructing a benchmark dataset, and points to reasons that none offers a straightforward way to satisfy this criterion: Naturally-Occurring Examples It is intuitively appealing to, where possible, build benchmark datasets based on naturally-occurring data distributions.",
"This minimizes our effort in creating benchmarks and minimizes the risk that the benchmark is somehow skewed in a way that omits important phenomena.",
"However, this is often not viable.",
"For tasks like reading comprehension or natural language inference that require multiple related texts (such as a passage and a question) as input, there is often no natural distribution that ef-ficiently isolates the relevant task behaviors.",
"One can find naturally-occurring distributions over questions, like those used to construct Natural Questions (Kwiatkowski et al., 2019), but these will generally be tied to the use contexts of a specific NLP product and will thus be limited by users' perceptions of the current abilities of that product.",
"Even for single-input tasks like coreference resolution or Cloze, for which any text corpus can be the basis for a benchmark, naturalistic distributions do nothing to separate skills of interest from factual world knowledge and can be overwhelmingly dominated by the latter, making them poor 4 Though, of course, any model with non-zero test error could be presented with a potentiallyun reasonable benchmark entirely consisting of its own test errors.",
"metrics for incremental progress on NLU.",
"Credible existing NLU-oriented benchmarks for such tasks are generally heavily curated (Paperno et al., 2016; Levesque et al., 2012; Sakaguchi et al., 2019).",
"Expert-Authored Examples Expert-constructed datasets for language understanding like FraCaS (Cooper et al., 1996) and the Winograd Schema Challenge (Levesque et al., 2012) have been crucial for defining several new tasks and introducing them as objects of study.",
"However, expert example construction isn't desirable for the creation of benchmarks for the use cases we focus on here.",
"Setting aside the logistical challenges of creating sufficiently large and diverse datasets by expert labor alone, expert authorship generally gives members of the research community direct, fine-grained control over the data on which their systems will be evaluated.",
"Intentionally or unintentionally, this can produce data that is oriented toward linguistic phenomena that are widely studied and widely known to be important to the task at hand.",
"While this can be helpful when building diagnostic datasets that focus on specific types of model failure (Cooper et al., 1996; Naik et al., 2018; Wang et al., 2019b), it is counterproductive when our goal is to build a broad-coverage benchmark dataset to set priorities and guide progress toward the solution of some task.",
"Dunietz et al. (2020) and Sugawara et al. (2020a) work around this issue by leaning on taxonomies of required phenomena from outside NLP.",
"This is a direction worth pursuing, but it is not clear that appropriate taxonomies will be available for most NLU tasks of interest, or that these taxonomies will be broad and thorough enough to be straightforwardly implemented as datasets.",
"Crowdsourcing Most recent benchmarks for language understanding have been collected, at least in part, through crowdsourcing example construction, where non-expert annotators are given some freedom to construct examples based on a simple set of guidelines.",
"This has an obvious appeal: Using non-expert annotators significantly lowers costs and using simple guidelines significantly reduces the risk that the resulting data will be skewed artificially toward phenomena of interest to experts.",
"However, straightforward standard practice, as was used to collect datasets like SNLI (Bowman et al., 2015) and SQuAD, seem to be relatively poor at producing difficult datasets that test the intended phenomena.",
"Existing datasets focus heavily on repetitive, easy cases and often fail to isolate key behaviors (Jia and Liang, 2017; Tsuchiya, 2018; McCoy et al., 2019).",
"Adversarial Filtering Given a source of examples and a model, adversarial-filtering-style approaches build a benchmark based on samples from that source for which the model fails.",
"Adversarial filtering can remove examples that are easy due to trivial artifacts, but it does not ensure that the resulting dataset supports a valid test of model ability, and it can systematically eliminate coverage of linguistic phenomena or skills that are necessary for the task but already well-solved by the adversary model.",
"This mode-seeking (as opposed to mass covering) behavior by adversarial filtering, if left unchecked, tends to reduce dataset diversity and thus make validity harder to achieve.",
"In contrast with this benchmark data collection setting, adversarial competitions , in which one compares the difficulty of collecting valid task examples that are adversarial to each of several systems, could be part of a healthy evaluation ecosystem.",
"Such an ecosystem might involve frequent formative evaluations on a conventional non-adversarial benchmark in conjunction with periodic organized evaluations in an adversarial setting.",
"For our benchmarks to incentivize the development of sound new methods, the labels for their test examples should be reliably correct.",
"This means avoiding three failure cases:",
"(i) examples that are carelessly mislabeled,",
"(ii) examples that have no clear correct label due to unclear or underspecified task guidelines, and",
"(iii) examples that have no clear correct label under the relevant metric due to legitimate disagreements in interpretation among annotators.",
"The first two cases straightforwardly compromise the validity of the benchmark, but the third is somewhat subtler.",
"Legitimate disagreement emerges when an example can be labeled in multiple ways depending on an annotator's choice between reasonable interpretations of the text of an example.",
"Such disagreements might stem from dialectal variants in the interpretation of words or constructions or different reasonable interpretations of the actual state of the world.",
"As a toy example, consider the question: Does Ed ate a burrito entail Ed ate a sandwich ?",
"While most US English speakers would likely answer no , many pedants and regulatory officials have argued for yes (Florestall, 2008).",
"When a benchmark contains many instances of this kind of legitimate disagreement, a machine learning model will be able to study a benchmark dataset's training set for clues about typical human behavior that might allow it to perform better than any single human annotator.",
"This effect could contribute to misleading reports of super-human performance on such benchmarks, where human performance reflects the behavior of humans who are reporting their own judgments, rather than attempting to predict the most frequently assigned label, as the model does.",
"We observe evidence of this kind of ambiguity in existing benchmarks: For example, Pavlick and Kwiatkowski (2019) find that 20% of examples across several textual entailment datasets are significantly ambiguous, and Kwiatkowski et al. (2019) show that 36% of short answer annotations in Natural Questions differ significantly from the majority answer.",
"Benchmark evaluation datasets should be large and discriminative enough to detect any qualitatively relevant performance difference between two models.",
"This criterion introduces a trade-off: If we can create benchmark datasets that are both reliable and highly difficult for the systems that we want to evaluate, then moderate dataset sizes will suffice.",
"However, if our benchmark datasets contain many examples that are easy for current or near-future systems, then we will need dramatically larger evaluation sets to reach adequate power.",
"In the context of a reliable dataset that is difficult for current systems, a 1% absolute accuracy improvement, such as that from 80% to 81%, may be an acceptable minimum detectable effect.",
"In this case, an evaluation set of a few thousand examples would suffice under typical conditions seen in NLU (Card et al., 2020).",
"Many, though not all, popular benchmark datasets satisfy this size threshold.",
"Since our systems continue to improve rapidly, though, we should expect to be spending more time in the long tail of our data difficulty distributions: If we build reliable datasets, much of their future value may lie in their ability to measure improvements in accuracy among highly accurate systems.",
"For example, an improvement from 98% accuracy to 98.1% represents the same 5% relative improvement as we saw from 80% to 81%.",
"To reliably detect this smaller absolute improvement, though, requires two orders of magnitude more evaluation data (Card et al., 2020).",
"A benchmark should, in general, favor a model without socially-relevant biases over an otherwise equivalent model with such biases.",
"Many current benchmarks fail this test.",
"Because benchmarks are often built around naturally-occurring or crowdsourced text, it is often the case that a system can improve its performance by adopting heuristics that reproduce potentially-harmful biases (Rudinger et al., 2017).",
"Developing adequate methods to minimize this effect will be challenging, both because of deep issues with both the precise specification of what constitutes harmful bias and because of the limited set of tools that we have available to us.",
"There is no precise enumeration of social biases that will be broadly satisfactory across applications and cultural contexts.",
"This can be most easily illustrated with the example of biased associations between word representations for US English (as in Bolukbasi et al., 2016).",
"Associations between race or gender and occupation are generally considered to be undesirable and potentially harmful in most contexts, and are something that benchmarks for word representations should discourage, or at least carefully avoid rewarding.",
"If a set of word representations encodes typically Black female names like Keisha as being less similar to professional occupation terms like lawyer or doctor than typically white male names like Scott are, then a model using those representations is likely to reinforce harmful race or gender biases in any downstream content moderation systems or predictive text systems it gets used in.",
"Adequately enumerating the social attributes for which we might want to evaluate bias in some context can be difficult.",
"For example, Indian castes, like racial categories in the United States, are often signaled by names and are an axis on which managers sometimes discriminate in hiring.",
"Caste is a salient category of social bias in India that is subject to legal and institutional recognition.",
"However, this bias also arises in some cases within the United States, where it has no such recognition (Tiku, 2020), and where it could be easily overlooked by non-specialist bias researchers.",
"is also deeply political.",
"Within living memory, popular and legal attitudes have changed significantly in the United States about attributes like race, gender, gender expression, sexual orientation, and disability.",
"Attitudes on these issues continue to change, and new categories can gain recognition and protection over time.",
"In many cases, this means that choosing whether to include some attribute in a computational metric of bias means choosing which group of people to align oneself with on a political issue.",
"While there are clear ethical rules of thumb to follow when doing so, 5 making any particular choice is nonetheless likely to put researchers in conflict with established institutions in ways that can change quickly.",
"Any strategy for handling bias in the context of NLP benchmarks will have to grapple with this difficult reality.",
"Building new benchmarks that improve upon our four axes is likely to be quite difficult.",
"Below we attempt to sketch out some possible directions for improvement along each axis.",
"Building valid benchmarks will require significant new research into data collection methods, at least some of which will be specific to the task under study.",
"We suspect that much of this work will involve improvements in crowdsourcing and the use of non-experts, as most of the annotation behind the tasks we discuss requires no expertise other than fluent knowledge of the language variety under study.",
"One promising direction involves methods that start from relatively high-quality crowdsourced datasets, then use expert effort to augment them in ways that mitigate annotation artifacts.",
"The Build-it-Break-it challenge (Ettinger et al., 2017), the Open Reading Benchmark (Dua et al., 2019), and the Gardner et al. (2020) contrast sets, among their other features, allow expert annotators to add examples to a test set to fill perceived gaps in coverage or correct perceived artifacts in a starting set of crowdsourced examples.",
"To the extent that crowdsourcing with non-experts can produce data that has broad coverage and high difficulty but retains some measurable artifacts or flaws, this compro-5 The ACM code of ethics states, when the interests of multiple groups conflict, the needs of those less advantaged should be given increased attention and priority. mise approach may help to create usable benchmark datasets out of the results.",
"Another approach brings computational linguists directly into the crowdsourcing process.",
"This was recently demonstrated at a small scale by Hu et al. (2020) with OCNLI: They show that it is possible to significantly improve data quality issues by making small interventions during the crowdsourcing processlike offering additional bonus payments for examples that avoid overused words and constructionswithout significantly limiting anno-tators' freedom to independently construct creative examples.",
"Of course, implementing interventions like these in a way that offers convincing evidence of validity will be difficult.",
"The use of standard techniques from crowdsourcinggenerally involving multiple redundant annotations for each examplecan largely resolve the issue of mistaken annotations.",
"Careful planning and pilot work before data collection can largely resolve the issue of ambiguous annotation guidelines.",
"Handling legitimate annotator disagreements can take two fairly different approaches, depending on the goals of the benchmark.",
"The simplest approach treats ambiguously labeled examples in the same way as mislabeled examples, and systematically identifies and discards them during a validation phase.",
"For some tasks, it may still be possible to test models' handling of fundamentally ambiguous linguistic phenomena or domains using unambiguous examples: In the case of multiple-choice question answering, for example, one can construct examples where one answer candidates is only debatably correct, but all other candidates are unequivocally wrong.",
"Any sound model would then be expected to select the debatable choice.",
"Alternately, one can decline to assign single, discrete labels to ambiguous examples.",
"This can involve asking models to predict the empirical distribution of labels that trustworthy annotators assign (Pavlick and Kwiatkowski, 2019; Poesio et al., 2019), or allowing models to predict any of several answer choices that are supported by trustworthy annotators (as in the SQuAD benchmark).",
"This comes at the cost, though, of requiring many more annotator judgments per evaluation example.",
"In principle, achieving adequate statistical power is straightforward: we simply estimate the number of examples required to reach the desired statistical power for any plausible short-to-medium term system evaluation for the task, and collect that number of examples.",
"In practice, however, costs can become prohibitive.",
"For a relatively simple task like NLI, labeling an existing example likely requires a bare minimum of 45 seconds (Vania et al., 2020), and creating a new example requires at least one minute (Bowman et al., 2020).",
"Even if we use these very optimistic numbers to estimate annotation speed, a ten-way-annotated dataset of 500,000 examples will still cost over $1 million at a $15/hr pay rate.",
"6 Recruiting more experienced annotators or encouraging annotators to work more carefully could increase this figure dramatically.",
"While such an amount of money is not completely out of reach in a well-funded field like NLP, 7 investments of this kind will inevitably be rare enough that they help reinforce the field's concentration of data and effort on a few high-resource languages and tasks.",
"For settings in which large datasets are necessary, we see no clear way to avoid high costs.",
"Gamification, in the style of the ESP game or ZombiLingo (Von Ahn and Dabbish, 2004; Fort et al., 2014), promises to offer free human labor, but at the cost of the expert time needed to refine the task definition into a game that is widely enjoyable.",
"This approach also introduces severe constraints on the kinds of data collection protocols that can be used and raises tricky new ethical issues (Morschheuser and Hamari, 2019).",
"Ultimately, the community needs to compare the cost of making serious investments in better benchmarks to the cost of wasting researcher time and computational resources due to our inability to measure progress.",
"Because there is no one-size-fits-all definition of harmful social bias, there is little prospect of creating a benchmark for language understanding that is guaranteed to never reward the development of harmfully biased models.",
"6 This figure ignores platform fees and makes the additional optimistic assumption that only 10% of fully-annotated examples will be discarded because of annotator disagreement.",
"7 To put this number in context, public estimates of the cost of OpenAI's GPT-3 (Brown et al., 2020) exceed $10M (Wiggers, 2020), and in machine translation, Meng et al. (2019)'s use of 512 Nvidia V100 GPUs for three months would have cost over $1M USD on commodity cloud infrastructure.",
"This is not a compelling reason to accept the status quo, and we nonetheless have a clear opportunity to mitigate some of the potential harms caused by applied NLP systems before those systems are even developed.",
"Opting not to test models for some plausible and potentially-harmful social bias is, intentionally or not, a political choice.",
"While it would be appealing to try to guarantee that our evaluation data does not itself demonstrate evidence of bias, we are aware of no robust strategy for reliably accomplishing this, and work on the closely-related problem of model bias mitigation has been fraught with false starts and overly optimistic claims (Gonen and Goldberg, 2019).",
"A viable alternate approach could involve the expanded use of auxiliary metrics: Rather than trying to fully mitigate bias within a single general dataset and metric for some task, benchmark creators can introduce a family of additional expert-constructed test datasets and metrics that each isolate and measure a specific type of bias.",
"Any time a model is evaluated on the primary task test set in this setting, it would be evaluated in parallel on these additional bias test sets.",
"This would not prevent the primary metric from unintentionally and subtly rewarding biased models, but it would combat this effect by more directly highlighting and penalizing bias in models.",
"In addition, the fact that these metrics would target specific types of biases would make it easier for benchmark maintainers to adapt as changing norms or changing downstream applications demand coverage of additional potential harms.",
"For several tasks, metrics like this already exist, at least for gender in English, in the form of auxiliary test sets meant to be combined with a preexisting training set (Rudinger et al., 2018; Webster et al., 2018; Kiritchenko and Mohammad, 2018; Li et al., 2020).",
"Even so, refining these metrics and developing new ones will likely require us to face many of the same challenges that we highlight in this paper for benchmark design more generally.",
"The larger challenge in implementing this approach, however, is a matter of community structure and incentive design.",
"Methods papers dealing with tasks for which metrics already exist rarely report numbers on these metrics.",
"Even for the SuperGLUE benchmark, which requires users to compute test set metrics on the DNC Winogender test set in order to reveal test set results for any other target task, a large majority of papers that report test set numbers omit this metric and decline to report potentially unflattering bias numbers (Raffel et al., 2020; Pruksachatkun et al., 2020; Schick and Schütze, 2020; He et al., 2020).",
"The difficulty, then, is in developing community infrastructure to encourage the widespread reporting of metrics that address the full range of relevant likely harms.",
"This could plausibly involve peer review norms, explicit publication venue policies, stricter versions of the SuperGLUE approach for which users can only retrieve aggregate performance numbers, without a precise separation of the primary and bias-oriented metrics, or even the introduction of professional licensing standards.",
"Of course, ensuring that bias is measured and reported is not enough to prevent bias-related harms from emerging in practice: It is also necessary to ensure that those who build and deploy NLP products will take these metrics seriously and respond to them appropriately.",
"And, of course, even if a system encodes no social bias at all, it can still be deployed in ways that produce unfair or unjust outcomes.",
"These difficult issues are beyond the scope of a paper on benchmark design.",
"The NLP and ML research communities are increasingly interested in issues surrounding data and evaluation.",
"This section surveys relevant positions and issues that don't quite fit our schema.",
"Welty et al. (2019) advocate for the more precise reporting of the focus and abilities of test sets and metrics in ML broadly, with a focus on issues surrounding statistical power.",
"Bender and Friedman (2018) and Gebru et al. (2018) advocate for explicit freestanding datasheets documenting dataset releases of all kinds, with a focus on making potential harmful mismatches between data and application visible, and Hutchinson et al. (2021) argue along similar lines for a broader program of transparency and stakeholder engagement in data creation.",
"Dodge et al. (2019) lay out a set of best practices for results reporting, with a focus on the impact of hyperparameter tuning on model comparison.",
"Ethayarajh and Jurafsky (2020) advocate for the inclusion of efficiency considerations in leaderboard design.",
"Boyd-Graber and Börschinger (2020) describe ways that trivia competitions can provide a model for carefully-considered dataset design.",
"Church and Hestness (2019) revisit the arguments that motivated the NLP community's shift toward quantitative benchmarking in the early 1990s and warn that the overwhelming success of this shift has indirectly laid the groundwork for the widespread use of poor-quality benchmarks.",
"Blodgett et al. (2020) challenge researchers working on social bias in NLP to focus more precisely on specific types of harm to specific populations of users, a challenge that our broad position piece does not fully meet.",
"NLP has had longstanding debates over the types of tasks that best test substantial language understanding skills.",
"Many task-specific papers contribute to this debate, as does a prominent recent thread advocating for an increased focus on grounding of various kinds by Bender and Koller (2020), Bisk et al. (2020), Zellers et al. (2020), and others.",
"Benchmarking for NLU is broken.",
"We lay out four major criteria that benchmarks should fulfill to offer faithful, useful, and responsible measures of language ability.",
"We argue that departing from IID evaluation (as is seen with benchmark datasets collected by adversarial filtering) does not help to address these criteria, but lay out in broad strokes how each criterion might be addressed directly.",
"Nonetheless, important open research questions remain.",
"Most centrally, it is still unclear how best to integrate expert effort into crowdsourced data collection, and we do not yet see a clear institutional model by which to ensure that bias metrics are built and used when they are most needed.",
"This paper advocates for reforms to a set of benchmarking practices that have so far largely failed to address issues of social bias, and that have thereby helped create a false sense of security among those building applied systems.",
"While this paper offers no complete and satisfactory solutions, it proposes measures that should contribute to harm reduction.",
"We thank Emily Bender, Iacer Calixto, Haokun Liu, Kyunghyun Cho, Will Huang, Jamie Kiros, and audiences at CMU, Google, and Apple for feedback on these ideas.",
"This project has benefited from financial support to SB by Eric and Wendy Schmidt (made by recommendation of the Schmidt Futures program), Samsung Research (under the project Improving Deep Learning using Latent Structure), and Intuit.",
"This material is based upon work supported by the National Science Foundation under Grant No. 1922658.",
"Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation."
] | [
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"objective",
"objective",
"objective",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"objective",
"method",
"objective",
"other",
"other",
"other",
"method",
"abstain",
"abstain",
"objective",
"other",
"other",
"abstain",
"method",
"method",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"abstain",
"abstain",
"result",
"abstain",
"result",
"method",
"abstain",
"other",
"other",
"other",
"other"
] |
[
"In this paper, we propose a multi-granularity interaction network for extractive and abstractive multi-document summarization, which jointly learns semantic representations for words, sentences, and documents.",
"The word representations are used to generate an abstractive summary while the sentence representations are used to produce an extractive summary.",
"We employ attention mechanisms to interact between different granularity of semantic representations, which helps to capture multi-granularity key information and improves the performance of both abstractive and extractive summarization.",
"Experiment results show that our proposed model substantially outperforms all strong baseline methods and achieves the best results on the Multi-News dataset.",
"Document summarization aims at producing a fluent, condensed summary for given documents.",
"Single document summarization has shown promising results with sequence-to-sequence models that encode a source document and then decode it into a summary (See et al., 2017; Paulus et al., 2018; Gehrmann et al., 2018; Celikyilmaz et al., 2018).",
"Multi-document summarization requires producing a summary from a cluster of thematically related documents, where the given documents complement and overlap each other.",
"Multi-document summarization involves identifying important information and filtering out redundant information from multiple input sources.",
"There are two primary methodologies for multi-document summarization: extractive and abstractive .",
"Extractive methods, which are relatively simple, directly select important sentences from the original documents.",
"Cao et al. (2015) rank sentences with a recursive neural network.",
"Yasunaga et al. (2017) employ a Graph Convolutional Network (GCN) to incorporate sentence relation graphs to improve the performance for the extractive summarization.",
"Abstractive methods can generate new words and new sentences, but they are technically more difficult than extractive methods.",
"Some works on multi-document summarization simply concatenate multiple source documents into a long flat sequence and model multi-document summarization as a long sequence-to-sequence task (Liu et al., 2018; Fabbri et al., 2019).",
"However, these approaches do not take the hierarchical structure of document clusters into account, and overly long inputs often lead to degradation in document summarization (Cohan et al., 2018; Liu and Lapata, 2019).",
"Recently, hierarchical frameworks have shown their effectiveness on multi-document summarization (Zhang et al., 2018; Liu and Lapata, 2019).",
"These approaches usually use multiple encoders to model hierarchical relationships in the discourse structure, but other methods to incorporate the structural semantic knowledge have not been explored.",
"The combination of extractive and abstractive methods has been explored in single-document summarization.",
"Chen and Bansal (2018) use the extracted sentences as the input of the abstractive summarization.",
"Subramanian et al. (2019) concatenate the extracted summary to the original document as the input of the abstractive summarization.",
"In this work, we treat documents, sentences, and words as the different granularity of semantic units, and connect these semantic units within a three-granularity hierarchical relation graph.",
"With the multi-granularity hierarchical structure, we can unify extractive and abstractive summarization into one architecture simultaneously.",
"Extractive summarization operates on sentence-granularity and directly supervises the sentence representations while abstractive summarization operates on word-granularity and directly supervises the word representations.",
"We propose a novel multi-granularity interaction network to enable the supervisions to promote the learning of all granularity representations.",
"We employ the attention mechanism to encode the relationships between the same semantic granularity and hierarchical relationships between the different semantic granularity, respectively.",
"And we use a fusion gate to integrate the various relationships for updating the semantic representations.",
"The decoding part consists of a sentence extractor and a summary generator.",
"The sentence extractor utilizes the sentence representations to select sentences, while the summary generator utilizes the word representations to generate a summary.",
"The two tasks are trained in a unified architecture to promote the recognition of important information simultaneously.",
"We evaluate our model on the recently released Multi-News dataset and our proposed architecture brings substantial improvements over several strong baselines.",
"We explore the influence of semantic units with different granularity, and the ablation study shows that joint learning of extractive and abstractive summarization in a unified architecture improves the performance.",
"We establish multi-granularity semantic representations for documents, sentences, and words, and propose a novel multi-granularity interaction network to encode multiple input documents.",
"Our approach can unify the extractive and abstractive summarization into one architecture with interactive semantic units and promote the recognition of important information in different granularities.",
"Experimental results on the Multi-News dataset show that our approach substantially outperforms several strong baselines and achieves state-of-the-art performance.",
"Our code is publicly available at https://github.com/zhongxia96/MGSum.",
"The methods for multi-document summarization can generally be categorized as extractive and abstractive.",
"The extractive methods produce a summary by extracting and merging sentences from the input documents, while the abstractive methods generate a summary using arbitrary words and expressions based on the understanding of the documents.",
"Due to the lack of available training data, most previous multi-document summarization methods were extractive (Erkan and Radev, 2004; Christensen et al., 2013; Yasunaga et al., 2017).",
"Since neural abstractive models have achieved promising results on single-document summarization (See et al., 2017; Paulus et al., 2018; Gehrmann et al., 2018; Celikyilmaz et al., 2018), some works trained abstractive summarization models on a single-document dataset and adjusted the models to adapt them to the multi-document summarization task.",
"Zhang et al. (2018) added a document set encoder into the single document summarization framework and tuned the pre-trained model on the multi-document summarization dataset.",
"Lebanoff et al. (2018) combined an extractive summarization algorithm (MMR) for sentence extraction to reweight the original sentence importance distribution learned in the single document abstractive summarization model.",
"Recently, two large scale multi-document summarization datasets have been proposed, one for very long input, aimed at generating Wikipedia (Liu et al., 2018) and another dedicated to generating a comprehensive summarization of multiple real-time news (Fabbri et al., 2019).",
"Liu et al. (2018) concatenated multiple source documents into a long flat text and introduced a decoder-only architecture that can scalably attend to very long sequences, much longer than typical encoder-decoder architectures.",
"Liu and Lapata (2019) introduced intermediate document representations and simply add the document representations to word representations for modeling the cross-document relationships.",
"Compared with our proposed multi-granularity method, Liu and Lapata (2019) leans toward the traditional bottom-up hierarchical method and does not effectively utilize the hierarchical representations, while ignoring the hierarchical relationships of sentences.",
"Fabbri et al. (2019) incorporated MMR into a hierarchical pointer-generator network to address the information redundancy in multi-document summarization.",
"Our model consists of a multi-granularity encoder, a sentence extractor, and a summary generator.",
"Firstly, the multi-granularity encoder reads multiple input documents and learns the multi-granularity representations for words, sentences, and documents.",
"Self-attention mechanisms are employed for capturing semantic relationships of the representations with same granularity, while cross-attention mechanisms are employed for the information interaction between representations with different granularity.",
"Fusion gates are used for integrating the information from different attention mechanisms.",
"Then the sentence extractor scores sentences according to the learned sentence representations.",
"Meanwhile, the summary generator produces the abstractive summary by attending to the word representations.",
"In the following sections, we will describe the multi-granularity encoder, the sentence extractor, and the summary generator, respectively.",
"Given a cluster of documents, we establish explicit representations for documents, sentences, and words, and connect them within a hierarchical semantic relation graph.",
"The multi-granularity encoder is a stack of L_1 identical layers.",
"Each layer has two sub-layers: the first is the multi-granularity attention layer, and the second is multiple fully connected feed-forward networks.",
"The multi-granularity attention sub-layer transfers semantic information between the different granularity and the same granularity, while the feed-forward network further aggregates the multi-granularity information.",
"We employ multi-head attention to encode multi-granularity information and use a fusion gate to propagate semantic information to each other.",
"Figure 1 shows the overview of the multi-granularity encoder layer, and Figure 2 illustrates how the semantic representations are updated, taking the sentence representation as an example.",
"Let w_{i,j,k} be the k-th word of the sentence s_{i,j} in the document d_i.",
"At the bottom of the encoder stack, each input word w_{i,j,k} is converted into the vector representation e_{i,j,k} by learned embeddings.",
"We assign positional encodings to indicate the position of the word w_{i,j,k}; three positions need to be considered, namely i (the rank of the document), j (the position of the sentence within the document), and k (the position of the word within the sentence).",
"We concatenate the three position embeddings PE_i, PE_j, and PE_k to get the final position embedding p_{i,j,k}.",
"The input word representation can be obtained by simply adding the word embedding and the position embedding (Figure 1 gives the overview of the multi-granularity encoder layer).",
"p_{i,j,k} = [PE_i; PE_j; PE_k], h^0_{w_{i,j,k}} = e_{i,j,k} + p_{i,j,k}",
"where the definition of the positional encoding PE is consistent with the Transformer (Vaswani et al., 2017).",
"For convenience, we denote the output of l -th multi-granularity encoder layer as h l and the input for the first layer as h 0 .",
"Symbols with subscripts w i,j,k , s i,j and d i are used to denote word, sentence, and document granularities, respectively.",
"Both sentence representations h 0 s i,j and document representations h 0 d i are initialized to zeros.",
"In each multi-granularity attention sub-layer, the word representation is updated with information of word granularity and sentence granularity.",
"We perform multi-head self-attention across the word representations in the same sentence, h^{l-1}_{w_{i,j,*}} = {h^{l-1}_{w_{i,j,k}} | w_{i,j,k} ∈ s_{i,j}}, to get the context representation \\hat{h}^l_{w_{i,j,k}}.",
"In order to propagate semantic information from the sentence granularity to the word granularity, we duplicate the sentence-aware representation \\tilde{h}^l_{w_{i,j,k}} from the corresponding sentence s_{i,j} and employ a fusion gate to integrate \\hat{h}^l_{w_{i,j,k}} and \\tilde{h}^l_{w_{i,j,k}} to get the updated word representation f^l_{w_{i,j,k}}.",
"f^l_{w_{i,j,k}} = Fusion(\\hat{h}^l_{w_{i,j,k}}, \\tilde{h}^l_{w_{i,j,k}}), \\hat{h}^l_{w_{i,j,k}} = MHAtt(h^{l-1}_{w_{i,j,k}}, h^{l-1}_{w_{i,j,*}}), \\tilde{h}^l_{w_{i,j,k}} = h^{l-1}_{s_{i,j}} (2), where MHAtt denotes the multi-head attention proposed in Vaswani et al. (2017) and Fusion denotes the fusion gate.",
"h^{l-1}_{w_{i,j,k}} is the query and h^{l-1}_{w_{i,j,*}} are the keys and values for the attention.",
"The fusion gate works as z = σ([x; y] W_f + b_f), Fusion(x, y) = z · x + (1 - z) · y (3), where σ is the sigmoid function, with parameters W_f ∈ R^{2 d_model × 1} and b_f ∈ R.",
"The sentence representation is updated from three sources: (1) we take the sentence representation h^{l-1}_{s_{i,j}} as the query and the word representations h^{l-1}_{w_{i,j,*}} = {h^{l-1}_{w_{i,j,k}} | w_{i,j,k} ∈ s_{i,j}} as the keys and values to perform multi-head cross-attention, obtaining the intermediate word-aware representation \\hat{h}^l_{s_{i,j}}; (2) multi-head self-attention across the sentence representations h^{l-1}_{s_{i,*}} = {h^{l-1}_{s_{i,j}} | s_{i,j} ∈ d_i} is performed to get the context representation \\bar{h}^l_{s_{i,j}}; (3) in order to propagate document-granularity semantic information to the sentence, we duplicate the document-aware representation \\tilde{h}^l_{s_{i,j}} from the corresponding document d_i.",
"\\hat{h}^l_{s_{i,j}} = MHAtt(h^{l-1}_{s_{i,j}}, h^{l-1}_{w_{i,j,*}}), \\bar{h}^l_{s_{i,j}} = MHAtt(h^{l-1}_{s_{i,j}}, h^{l-1}_{s_{i,*}}), \\tilde{h}^l_{s_{i,j}} = h^{l-1}_{d_i} (4); semantic representations from the three sources are fused by two fusion gates to get the updated sentence representation f^l_{s_{i,j}}.",
"To update the document representation, multi-head self-attention across all document representations h^{l-1}_{d} = {h^{l-1}_{d_i}} is performed to get the context representation \\bar{h}^l_{d_i}.",
"Meanwhile, we take the document representation h^{l-1}_{d_i} as the query and the sentence representations {h^{l-1}_{s_{i,j}} | s_{i,j} ∈ d_i} as the keys and values to perform multi-head cross-attention, obtaining the intermediate sentence-aware representation \\hat{h}^l_{d_i}.",
"A fusion gate is used to aggregate the above outputs \\bar{h}^l_{d_i} and \\hat{h}^l_{d_i}.",
"f^l_{d_i} = Fusion(\\bar{h}^l_{d_i}, \\hat{h}^l_{d_i}), \\bar{h}^l_{d_i} = MHAtt(h^{l-1}_{d_i}, h^{l-1}_{d}), \\hat{h}^l_{d_i} = MHAtt(h^{l-1}_{d_i}, h^{l-1}_{s_{i,*}}) (6); the feed-forward network FFN is used to further transform the multi-granularity semantic information.",
"To construct deep network, we use the residual connection (He et al., 2016) and layer normalization (Ba et al., 2016) to connect adjacent layers.",
"where l ∈ [1, L_1]; FFN consists of two linear transformations with a ReLU activation in between.",
"Note that we use different FFN and LayerNorm parameters for the different granularities.",
"The final sentence representations h^{L_1}_s are fed to the sentence extractor while the word representations h^{L_1}_w are fed to the summary generator.",
"For convenience, we denote h^{L_1}_s as o_s and h^{L_1}_w as o_w.",
"We build a classifier to select sentences based on the sentence representations o_s from the multi-granularity encoder.",
"The classifier uses a linear transformation layer with the sigmoid activation function to get the prediction score for each sentence: \\hat{y}_s = σ(o_s W_o + b_o) (8), where σ is the sigmoid function, with parameters W_o ∈ R^{d_model × 1} and b_o ∈ R.",
"The summary generator in our model is also a stack of L_2 identical layers.",
"The layer consists of three parts: a masked multi-head self-attention mechanism, a multi-head cross-attention mechanism, and a fully connected feed-forward network.",
"As the input and output of multi-document summarization are generally long, the multi-head attention degenerates as the length increases (Liu and Lapata, 2019).",
"Following Zhao et al. (2019)'s idea, we adopt a sparse attention mechanism where each query only attends to the top-k values according to their weights calculated from the keys, rather than to all values as in the original attention (Vaswani et al., 2017).",
"Here k is a hyper-parameter.",
"This ensures that the generator focuses on critical information in the input and ignores much irrelevant information.",
"We denote the multi-head sparse attention as MSAttn .",
"Similar to the multi-granularity encoder, we add the positional encoding of words in the summary to the input embedding at the bottom of the decoder stack.",
"We denote the output of the l-th layer as g^l and the input for the first layer as g^0.",
"The self-attention sub-layer with masking mechanism is used to encode the decoded information.",
"The masking mechanism ensures that the prediction of the position t depends only on the known output of the position before t .",
"The cross-attention sub-layer takes the self-attention output g as the queries and the multi-granularity encoder output o_w as the keys and values to perform multi-head sparse attention.",
"The feed-forward network is used to further transform the outputs.",
"The generation distribution p^g_t over the target vocabulary is calculated by feeding the output g^{L_2}_t to a softmax layer.",
"The copy mechanism (Gu et al., 2016) is employed to tackle the problem of out-of-vocabulary (OOV) words.",
"We compute the copy attention α_t with the decoder output g^{L_2} and the input representations o_w, and further obtain the copy distribution p^c_t.",
"α_t = softmax(g^{L_2}_t o_w^⊤ + b), p^c_t = Σ_{i,j,k} α_t z^⊤_{i,j,k} (12), where z_{i,j,k} is the one-hot indicator vector for w_{i,j,k} and b ∈ R^{d_vocab}.",
"A gate is used over the decoder output g^{L_2} to control whether to generate words from the vocabulary or to copy words directly from the source text.",
"The final distribution p_t is the mixture of the two distributions p^g_t and p^c_t.",
"λ_t = σ(g^{L_2}_t W + b), p_t = λ_t p^g_t + (1 - λ_t) p^c_t (13), where σ is the sigmoid function, W ∈ R^{d_model × 1}, and b ∈ R.",
"We train the sentence extractor and the summary generator in a unified architecture in an end-to-end manner.",
"We use the cross entropy as both the extractor loss and the generator loss.",
"where y_s is the ground-truth extraction label, y_w is the ground-truth summary, and N is the number of samples in the corpus.",
"The final loss is L_mix = L_abs + γ L_ext (15), where γ is a hyper-parameter.",
"We experiment with the latest released Multi-News dataset (Fabbri et al., 2019), which is the first large scale multi-document news summarization dataset.",
"It contains 44,972 pairs for training, 5,622 pairs for development, and 5,622 pairs for testing.",
"Each summary, with an average length of 264 words, is paired with a cluster of documents averaging 2,103 words that discuss one topic.",
"The distribution of the number of source documents per summary is shown in Table 1.",
"While the dataset contains abstractive gold summaries, it is not readily suited to training extractive models.",
"So we follow the work of Zhou et al. (2018) on extractive summary labeling, constructing gold-label sequences by greedily optimizing ROUGE-2 F1 on the gold-standard summary.",
"We set our model parameters based on preliminary experiments on the development set.",
"We prune the vocabulary to 50k and use the word in the source documents with the maximum weight in the copy attention to replace unknown words in the generated summary.",
"We set the dimension of the word embeddings and hidden units d_model to 512, and the feed-forward units to 2048.",
"We set 8 heads for multi-head self-attention, masked multi-head sparse self-attention, and multi-head sparse cross-attention.",
"We set the number of multi-granularity encoder layers L_1 to 5 and the number of summary decoder layers L_2 to 6.",
"We set the dropout (Srivastava et al., 2014) rate to 0.1 and use the Adam optimizer with an initial learning rate of 0.0001, momentum β_1 = 0.9, β_2 = 0.999, and weight decay ε = 10^{-5}.",
"When the valid loss on the development set increases for two consecutive epochs, the learning rate is halved.",
"We use a mini-batch size of 10, and set the hyper-parameters k = 5 and γ = 2.",
"Given the salience score predicted by the sentence extractor, we apply a simple greedy procedure to select sentences.",
"We select one sentence based on the descending order of the salience scores and append to the extracted summary until the summary reaches 300 words.",
"We disallow repeating the same trigram (Paulus et al., 2018; Edunov et al., 2019) and use beam search with a beam size of 5 for summary generator.",
"We use ROUGE (Lin, 2004) to evaluate the produced summary in our experiments.",
"Following previous work, we report ROUGE F1 1 on Multi-News dataset.",
"We compare our model with several typical baselines and several baselines proposed in the latest years.",
"Lead-3 is an extractive baseline which concatenates the first 3 sentences of each source document as a summary.",
"1 The ROUGE evaluation option: -c 95 -2 4 -U -r 1000 -n 4 -w 1.2 -a. Table 2: ROUGE F1 evaluation results on the Multi-News test set. Model (R-1 / R-2 / R-SU4): Lead-3 39.41 / 11.77 / 14.51; LexRank (Erkan and Radev, 2004) 38.27 / 12.70 / 13.20; TextRank (Mihalcea and Tarau, 2004) 38.44 / 13.10 / 13.50; MMR (Carbonell and Goldstein, 1998) 38.77 / 11.98 / 12.91; HIBERT (Zhang et al., 2019) 43.86 / 14.62 / 18.34; PGN (See et al., 2017) 41.85 / 12.91 / 16.46; CopyTransformer (Gehrmann et al., 2018) 43.57 / 14.03 / 17.37; Hi-MAP (Fabbri et al., 2019) 43.47 / 14.89 / 17.41; HT (Liu and Lapata, 2019) 43.85 / 15.60 / 18.80; MGSum-ext 44.75 / 15.75 / 19.30; MGSum-abs 46.00 / 16.81 / 20.09; oracle ext 49.02 / 29.78 / 29.19.",
"LexRank (Erkan and Radev, 2004) is an unsupervised graph-based method for computing relative importance in extractive summarization.",
"TextRank (Mihalcea and Tarau, 2004) is also an unsupervised algorithm while sentence importance scores are computed based on eigenvector centrality within weighted-graphs for extractive sentence summarization.",
"MMR (Carbonell and Goldstein, 1998) extracts sentences from a ranked list of candidate sentences based on relevance and redundancy.",
"HIBERT (Zhang et al., 2019) first encodes each sentence using a sentence-level Transformer encoder, and then encodes the whole document using a document-level Transformer encoder.",
"It is a single document summarization model and cannot handle the hierarchical relationship of documents.",
"We migrate it to multi-document summarization by concatenating multiple source documents into a long sequence.",
"These extractive methods are set to give an output of 300 tokens.",
"PGN (See et al., 2017) is an RNN based model with an attention mechanism and allows the system to copy words from the source text via pointing for abstractive summarization.",
"CopyTransformer (Gehrmann et al., 2018) augments Transformer with one of the attention heads chosen randomly as the copy distribution.",
"Hi-MAP (Fabbri et al., 2019) expands the pointer-generator network model into a hierarchical network and integrates an MMR module to calculate sentence-level scores, which is trained on the Multi-News corpus.",
"The baselines above were compared and reported by Fabbri et al. (2019), who released the Multi-News dataset; we directly cite the results of these methods from that paper.",
"HT (Liu and Lapata, 2019) is a Transformer-based model with an attention mechanism for sharing information across documents in abstractive multi-document summarization.",
"It was originally used to generate Wikipedia articles, and we reproduce their method for multi-document news summarization.",
"Following previous work, we report ROUGE-1 (unigram), ROUGE-2 (bigram), and ROUGE-SU4 (skip bigrams with a maximum distance of 4 words) scores as the metrics for automatic evaluation (Lin and Hovy, 2003).",
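The ROUGE-N F1 computation underlying these metrics can be sketched as follows (an illustrative from-scratch version only; official evaluation uses the ROUGE toolkit with the options listed in footnote 1):

```python
from collections import Counter

def ngrams(tokens, n):
    # All word n-grams occurring in a token list (with repeats).
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def rouge_n_f1(candidate, reference, n=2):
    """Clipped n-gram overlap F1 between a candidate summary and
    a single reference summary."""
    c = Counter(ngrams(candidate.split(), n))
    r = Counter(ngrams(reference.split(), n))
    if not c or not r:
        return 0.0
    overlap = sum((c & r).values())   # clipped counts
    p, rec = overlap / sum(c.values()), overlap / sum(r.values())
    return 0.0 if p + rec == 0 else 2 * p * rec / (p + rec)
```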
"In Table 2, we report the results on the Multi-News test set and our proposed multi-granularity model (denoted as MGSum) outperforms various previous models.",
"Our abstractive method achieves scores of 46.00, 16.81, and 20.09 on the three ROUGE metrics, while our extractive method achieves scores of 44.75, 15.75, and 19.30.",
"We can also see that the abstractive methods perform better than the extractive methods.",
"We attribute this result to the observation that the gold summary of this dataset tends to use new expressions to summarize the original input documents.",
"Owing to the characteristics of news, Lead-3 is superior to all unsupervised extractive methods.",
"Our extractive method achieves about a 1.13-point improvement on ROUGE-2 F1 compared with HIBERT.",
"We attribute the improvement to two aspects: Firstly, the abstractive objective can promote the recognition of important sentences for the extractive model with the multi-granularity interaction network.",
"Secondly, since the extractive gold-label sequences are obtained by greedily optimizing ROUGE-2 F1 against the gold-standard summary, the gold labels may not be accurate.",
"Joint learning of two objectives may correct some biases for the extractive model due to the inaccurate labels.",
"We calculate the oracle result based on the gold-label extractive sequences, which achieves a score of 29.78 on ROUGE-2 F1 and is 14.03 points higher than the score of our extractive method.",
"While there is a big gap between our model and the oracle, more efforts can be made to improve extractive performance.",
"Among the abstractive baselines, CopyTransformer performs much better than PGN, achieving a 1.12-point improvement on ROUGE-2 F1, which demonstrates the superiority of the Transformer architecture.",
"Our abstractive model gains improvements of 2.78 points over CopyTransformer, 1.92 points over Hi-MAP, and 1.21 points over HT on ROUGE-2 F1, which verifies the effectiveness of the proposed multi-granularity interaction network for summary generation.",
"Figure 3: Human evaluation scores (fluency / informativeness / non-redundancy): PGN 2.81 / 2.89 / 2.73; CopyTransformer 2.98 / 3.05 / 2.95; Hi-MAP 2.82 / 2.97 / 2.96; HT 3.07 / 3.06 / 3.03; MGSum 3.22 / 3.38 / 3.29.",
"To evaluate the linguistic quality of generated summaries, we carry out a human evaluation.",
"We focus on three aspects: fluency , informativeness , and non-redundancy .",
"The fluency indicator focuses on whether the summary is well-formed and grammatical.",
"The informativeness indicator can reflect whether the summary covers salient points from the input documents.",
"The non-redundancy indicator measures whether the summary contains repeated information.",
"We sample 100 instances from the Multi-News test set and employ 5 graduate students to rate each summary.",
"Each human judgment evaluates all outputs of different systems for the same sample.",
"3 human judgments are obtained for every sample, and the final scores are averaged across different judges.",
"Results are presented in Figure 3.",
"We can see that our model performs much better than all baselines.",
"In the fluency indicator, our model achieves a high score of 3.22, which is higher than the 2.98 of CopyTransformer and the 3.07 of HT, indicating that our model can reduce grammatical errors and improve the readability of the summary.",
"In the informativeness indicator, our model scores 0.32 higher than HT.",
"It indicates that our model can effectively capture the salient information.",
"In the non-redundancy indicator, MGSum outperforms all baselines by a large margin, which indicates that the multi-granularity semantic information and joint learning with extractive summarization help avoid repeated information in the generated summary.",
"We perform an ablation study on the development set to investigate the influence of different modules in our proposed MGSum model.",
"Modules are tested in four ways: (1) we remove the sentence extractor and only train the generator to verify the effectiveness of joint learning on the abstractive summarization; (2) we remove the summary generator and only train the sentence extractor to verify the effectiveness of joint learning on the extractive summarization; (3) we remove the document representation and use only the sentence and word representations to verify the effectiveness of the document-granularity semantic information; (4) we remove the document and sentence representations and use only the word representation to further verify the importance of the sentence representation.",
"Since there are no interactions between the sentences of different documents without document representations, we establish connections between all sentences after the document representation is removed.",
"Furthermore, we also establish connections between all the words after the sentence representation is removed, and the model degenerates into Transformer at this time.",
"Table 3 presents the results.",
"We find that the ROUGE-2 F1 score of extractive summarization drops by 0.31 after the summary generator is removed.",
"This indicates that the joint learning method helps extractive summarization benefit from abstractive summarization.",
"The ROUGE-2 F1 score of abstractive summarization drops by 0.6 after the sentence extractor is removed.",
"This indicates that extractive summarization does help abstractive summarization identify important sentences during the interactive encoding phase.",
"The ROUGE-2 F1 score of extractive summarization drops by 0.4, while the ROUGE-2 F1 score of abstractive summarization drops by 0.3, after the document representation is removed.",
"It indicates that establishing the document representation to simulate the relationships between documents is necessary to improve the performance of both extractive and abstractive summarization.",
"The ROUGE-2 F1 score drops by 1.61 compared with MGSum and by 1.01 compared with the generator-only model after removing both the document representation and the sentence representation.",
"Moreover, there is no extractive summarization to co-promote the recognition of important information for abstractive summarization after the sentence representation is removed.",
"It indicates that the semantic information of sentence granularity is of great importance for encoding multiple documents.",
"Human: it's a race for the governor's mansion in 11 states today, and the gop could end the night at the helm of more than two-thirds of the 50 states.",
"the gop currently controls 29 of the country's top state offices; it's expected to keep the three republican ones that are up for grabs (utah, north dakota, and indiana), and wrest north carolina from the dems.",
"that brings its toll to 30, with the potential to take three more, reports npr.",
"races in montana, new hampshire, and washington are still too close to call, and in all three, democrat incumbents aren't seeking reelection.",
"the results could have a big impact on health care, since a supreme court ruling grants states the ability to opt out of obamacare's medicaid expansion.",
"a romney victory would dramatically empower republican governors, said one analyst.",
"click for npr's state-by-state breakdown of what could happen.",
"HT: delaware, new hampshire, and missouri are expected to notch safe wins in 11 states, reports npr.",
"the state's top state of the state has seen its top state offices, and it's expected to be more than two-thirds of the nation's state, reports the washington post.",
"the top 10: montana, montana, and rhode island.",
"indiana: missouri: the state is home to the top of the list of state offices.",
"new hampshire: montana: incumbent john kasich: he's the first woman to hold a state seat in the state, notes the huffington post.",
"north carolina: the only state to win gop-held seats in vermont and delaware.",
"new jersey: the biggest state in the history of the year has seen a population of around 40% of the population, reports ap.",
"montana: new hampshire and missouri: a state department of emergency has been declared a state of emergency.",
"click for the full list, or check out a list of the states that voted tonight.",
"MGSum-ext: gop eyes gains as voters in 11 states pick governors enlarge this image toggle caption jim cole/ap jim cole/ap voters in 11 states will pick their governors tonight, and republicans appear on track to increase their numbers by at least one, with the potential to extend their hold to more than two-thirds of the nation's top state offices.",
"and that's health care, says political scientist thad kousser, co-author of the power of american governors.",
"republicans currently hold 29 governorships, democrats have 20, and rhode island's gov.",
"lincoln chafee is an independent.",
"eight of the gubernatorial seats up for grabs are now held by democrats; three are in republican hands.",
"polls and race analysts suggest that only three of tonight's contests are considered competitive, all in states where incumbent democratic governors aren't running again: montana, new hampshire and washington.",
"MGSum-abs: voters in 11 states will pick their governors tonight, and republicans appear on track to increase their numbers by at least one, with the potential to extend their hold to more than two-thirds of the nation's top state offices.",
"republicans currently hold 29 governorships, democrats have 20, and rhode island's gov.",
"lincoln chafee is an independent.",
"the seat is expected to be won by former charlotte mayor walter dalton, who won his last election with 65% of the vote, reports the washington post.",
"democrats are expected to hold on to their seats in west virginia and missouri, and democrats are likely to hold seats in vermont and delaware, reports npr.",
"polls and race analysts say that only three of tonight's contests are considered competitive, and all in states where incumbent democratic governors aren't running again.",
"no matter who wins the presidency, national politics is going to be stalemated on the affordable care act, says one political scientist.",
"In Table 4, we present example summaries generated by the strong baseline HT and by our extractive and abstractive methods.",
"The output of our model has the highest overlap with the ground truth.",
"Moreover, our extractive and abstractive summaries show consistent behavior with high mutual overlap, which further indicates that the two methods can jointly promote the recognition of important information.",
"Compared with the extracted summary, the generated summary is more concise and coherent.",
"In this work, we propose a novel multi-granularity interaction network to encode semantic representations for documents, sentences, and words.",
"It can unify the extractive and abstractive summarization by utilizing the word representations to generate the abstractive summary and the sentence representations to extract sentences.",
"Experiment results show that the proposed method significantly outperforms all strong baseline methods and achieves the best result on the Multi-News dataset.",
"In the future, we will introduce more tasks like document ranking to supervise the learning of the multi-granularity representations for further improvement.",
"This work was supported by National Natural Science Foundation of China (61772036), Tencent AI Lab Rhino-Bird Focused Research Program (No.JR201953) and Key Laboratory of Science, Technology and Standard in Press Industry (Key Laboratory of Intelligent Press Media Technology).",
"We thank the anonymous reviewers for their helpful comments.",
"Xiaojun Wan is the corresponding author."
] | [
"objective",
"abstain",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"method",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"abstain",
"result",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"other",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"method",
"abstain",
"other",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other"
] |
[
"Correct natural language understanding requires computers to distinguish the literal and metaphorical senses of a word.",
"Recent neural models achieve progress on verb metaphor detection by viewing it as sequence labeling.",
"In this paper, we argue that it is appropriate to view this task as relation classification between a verb and its various contexts.",
"We propose the Metaphor-relation BERT (Mr-BERT) model, which explicitly models the relation between a verb and its grammatical, sentential and semantic contexts.",
"We evaluate our method on the VUA, MOH-X and TroFi datasets.",
"Our method gets competitive results compared with state-of-the-art approaches.",
"Metaphor is ubiquitous in our daily life for effective communication (Lakoff and Johnson, 1980).",
"Metaphor processing has become an active research topic in natural language processing due to its importance in understanding implied meanings.",
"This task is challenging, requiring contextual semantic representation and reasoning.",
"Various contexts and linguistic representation techniques have been explored in previous work.",
"Early methods focused on analyzing restricted forms of linguistic context, such as subject-verb-object type grammatical relations, based on hand-crafted features (Shutova and Teufel, 2010b; Tsvetkov et al., 2013; Gutirrez et al., 2016).",
"Later, word embeddings and neural networks were introduced to alleviate the burden of feature engineering for relation-level metaphor detections (Rei et al., 2017; Mao et al., 2018).",
"However, although grammatical relations provide the most direct clues, other contexts in running text are mostly ignored.",
"Later studies discovered that wider context can lead to better performance.",
"Do Dinh and Gurevych (2016) considered a fixed window surrounding each target token as context.",
"Gao et al. (2018) and Mao et al. (2018) argued that the full sentential context can provide strong clues for more accurate prediction.",
"Some recent work also attempted to design models motivated by metaphor theories (Mao et al., 2019; Choi et al., 2021).",
"Despite the progress of exploiting sentential context, there are still issues to be addressed.",
"First of all, a word's local context, its sentential context and other contexts should be all important for detecting metaphors; however, they are not well combined in previous work.",
"More importantly, as shown in Figure 1, most token-level metaphor detection methods formulate metaphor detection as either a single-word classification or a sequence labeling problem (Gao et al., 2018).",
"The context information is mainly used for learning contextual representations of tokens, rather than modeling the interactions between the target word and its contexts (Zayed et al., 2020).",
"In this paper, we focus on token-level verb metaphor detection, since verb metaphors are of the most frequent type of metaphoric expressions (Shutova and Teufel, 2010a).",
"As shown in Figure 1, we propose to formulate verb metaphor detection as a relation extraction problem, instead of token classification or sequence labeling formulations.",
"In analogy to identify the relations between entities, our method models the relations between a target verb and its various contexts, and determines the verb's metaphoricity based on the relation representation rather than only the verb's (contextual) representation.",
"We present a simple yet effective model, Metaphor-relation BERT (MrBERT), which is adapted from a BERT (Devlin et al., 2019) based state-of-the-art relation learning model (Baldini Soares et al., 2019).",
"Our model has three highlights, as illustrated in Figure 2. First, we explicitly extract and represent context components, such as a verb's arguments as the local context, the whole sentence as the global context, and its basic meaning as a distant context.",
"So multiple contexts can be modeled interactively and integrated together.",
"Second, MrBERT enables modeling the metaphorical relation between a verb and its context components, and uses the relation representation for determining the metaphoricity of the verb.",
"Third, the model is flexible to incorporate sophisticated relation modeling methods and new types of contexts.",
"We conduct experiments on the largest metaphor detection corpus VU Amsterdam Metaphor Corpus (VUA) (Steen, 2010).",
"Our method obtains competitive results on the large VUA dataset.",
"Detail analysis demonstrates the benefits of integrating various types of contexts for relation classification.",
"The results on relatively small datasets, such as MOH-X and TroFi, also show good performance and model transferability.",
"This section briefly summarizes the common formulations of token-level verb metaphor detection as a background, and discusses the relation between this paper and previous work.",
"The task: A given sentence contains a sequence of n tokens x = x_1, ..., x_n, and a target verb in this sentence is x_i.",
"Verb metaphor detection is to judge whether x_i has a literal or a metaphorical sense.",
"Basic formulations: Most neural network based approaches cast the task as a classification or sequence labeling problem (Do Dinh and Gurevych, 2016; Gao et al., 2018).",
"As shown in Figure 1, the classification paradigm predicts a single binary label to indicate the metaphoricity of the target verb, while the sequence labeling paradigm predicts a sequence of binary labels to all tokens in a sentence.",
"Based on the basic formulations, various approaches have tried to enhance feature representations by using globally trained contextual word embeddings (Gao et al., 2018) or incorporating wider context with powerful encoders such as BiLSTM (Gao et al., 2018; Mao et al., 2019) and Transformers (Dankers et al., 2019; Su et al., 2020).",
"Limitations and recent trends: However, the above two paradigms have some limitations.",
"First, contextual information is mostly used to enhance the representation of the target word, but the interactions between the target word and its contexts are not explicitly modeled (Zayed et al., 2020; Su et al., 2020).",
"To alleviate this, Su et al. (2020) proposed a new paradigm by viewing metaphor detection as a reading comprehension problem, which uses the target word as a query and captures its interactions with the sentence and clause.",
"A concurrent work to this work (Choi et al., 2021) adopted a pre-trained contextualized model based late interaction mechanism to compare the basic meaning and the contextual meaning of a word.",
"Second, exploiting wider context will bring in more noise and may lose the focus.",
"Fully depending on data-driven models to discover useful contexts is difficult, given the scale of available datasets for metaphor detection is still limited.",
"The grammar structures, such as verb arguments, are important for metaphor processing (Wilks, 1978), but are not well incorporated into neural models.",
"Stowe et al. (2019) showed that data augmentation based on syntactic patterns can enhance a standard model.",
"Le et al. (2020) adopted graph convolutional networks to incorporate dependency graphs, but did not explicitly model the interactions between the target verb and its contexts.",
"Figure 2: the marker-inserted input [CLS] [subj] He [/subj] [verb] absorbed [/verb] the [obj] costs [/obj] for the accident [SEP] is fed into a deep Transformer (BERT), followed by relation representation and prediction.",
"This paper presents a new paradigm for verb metaphor detection to overcome these limitations, by viewing the task as a relation extraction task.",
"We assume a target verb and its multiple contexts are entities, and metaphor detection is to determine whether a metaphorical relation holds between the verb and its contexts.",
"We will introduce the proposed model in Section 3. Before diving into details, we argue that viewing metaphor as a relation is reasonable and consistent with existing metaphor theories.",
"According to Wilks (1978), metaphors show a violation of selectional preferences in a given context.",
"The conceptual metaphor theory views metaphors as transferring knowledge from a familiar, or concrete domain to an unfamiliar, or more abstract domain (Lakoff and Johnson, 1980; Turney et al., 2011).",
"The metaphor identification procedure (MIP) theory (Group, 2007) aims to identify metaphorically used words in discourse based on comparing their use in particular context and their basic meanings.",
"All the theories care about a kind of relations between a target word and its contexts, which may help identify metaphors.",
"We propose the Metaphor-relation BERT (Mr-BERT) model to realize verb metaphor detection as a relation classification task.",
"Figure 2 shows the architecture of MrBERT.",
"We use the pre-trained language model BERT as the backbone model.",
"There are three main procedures: (1) extract and represent contexts; (2) model the contextual relations between the target verb and its contexts; (3) manipulate the contextual relations for predicting the verb's metaphoricity.",
"A metaphor can result when a target word interacts with a certain part in a sentence.",
"Previous work often explored individual context types, such as verb arguments through grammatical relations or the whole sentence/clause.",
"Little work has attempted to summarize and combine different contexts.",
"Global context: We view the whole sentence as the global context.",
"A metaphorically used word may seem divergent to the meaning or topic of the sentence.",
"Local context: We view the words that have a close grammatical relation to the target word as the local context, which is widely studied to capture selectional preference violations.",
"Distant context: Motivated by the MIP theory, the difference between the contextual usage of a word and its basic meaning may indicate a metaphor, so we view the basic meaning of the target verb as a distant context.",
"Then, we have to extract and represent these contexts.",
"3.1.2 Context Extraction and Representation: We call the target verb's contexts context components.",
"To get the contextual or basic meanings of these components, we use deep transformer models such as BERT.",
"We first use Stanford dependency parser (Chen and Manning, 2014) to parse each sentence and extract verb-subject and verb-direct object relations with VB head and NN dependent.",
"The nominal subjects and objects are used as the local context components.",
"Motivated by Baldini Soares et al. (2019), we introduce 6 component marker tokens, [subj], [/subj], [verb], [/verb], [obj] and [/obj], to explicitly label the boundaries of the target verb, its subject and object in each sentence.",
"We also use [CLS] and [SEP] to mark the whole sentence.",
"For example, the marker-inserted token sequence for the sentence He absorbed the costs for the accident is shown in Figure 2. The whole token sequence is fed into BERT's tokenizer, and then into the transformer layers.",
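The marker insertion step above can be sketched as follows (a minimal illustration; the function name and span format are our assumptions, not the authors' code):

```python
def insert_markers(tokens, spans):
    """Insert component marker tokens around (start, end_exclusive, name)
    spans with name in {subj, verb, obj}; spans are assumed non-overlapping.
    Also wraps the sequence in [CLS] ... [SEP]."""
    starts = {s: f"[{name}]" for s, e, name in spans}
    ends = {e: f"[/{name}]" for s, e, name in spans}
    out = ["[CLS]"]
    for i, tok in enumerate(tokens):
        if i in starts:
            out.append(starts[i])   # open marker before the component
        out.append(tok)
        if i + 1 in ends:
            out.append(ends[i + 1])  # close marker after the component
    out.append("[SEP]")
    return out
```

Applied to the example sentence with subject span (0, 1), verb span (1, 2) and object span (3, 4), this reproduces the marker-inserted sequence shown in Figure 2.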
"To get the contextual representations, we use the hidden states of the final transformer layer.",
"For each marked component, we use the start marker (e.g., [subj]) or the averaged embedding between the start and end markers (e.g., [subj] and [/subj]) as the component representation.",
"The contextual representation of the whole sentence is read from the final hidden state of [CLS].",
"To represent the basic meaning of the verb, we use the verb's context-independent representation from BERT's token embedding layer.",
"If word pieces exist, their averaged embedding is used.",
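The word-piece averaging step can be sketched as follows (a minimal numpy illustration; `basic_verb_embedding` is a hypothetical helper, not the authors' code):

```python
import numpy as np

def basic_verb_embedding(piece_embeddings):
    """Distant context: average the embeddings of the verb's word pieces
    to obtain a context-independent (basic-meaning) representation v_bsc."""
    return np.mean(np.asarray(piece_embeddings, dtype=float), axis=0)
```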
"Our purpose is to utilize the contextual relation(s) to determine the metaphoricity of the verb.",
"The representations of the verb and a context component are denoted as v ∈ R^d and c ∈ R^k, respectively.",
"We adopt three ways to explicitly define the form of the relation r for capturing the interactions between v and c .",
"Linear model: We use a parameter vector V_r ∈ R^(d+k) and a bias b_r to represent the relation r, and the probability of the relation being metaphorical is computed according to p(r|v,c) = σ(V_r^T [v; c] + b_r), (1) where σ is the sigmoid function.",
"Bilinear model: We use a parameter matrix A_r ∈ R^(d×k) and a bias b_r to represent the relation r: p(r|v,c) = σ(v^T A_r c + b_r). (2)",
"The components and the relation can interact more sufficiently with each other in this way.",
"Neural tensor model: We also exploit a simplified neural tensor model for relation representation: p(r|v,c) = σ(v^T A_r c + V_r^T [v; c] + b_r). (3)",
"3.3 Integrating Contextual Relations for Prediction: We focus on 3 types of contextual relations.",
"Verb-global relation: The relation between the contextual representations of the verb v and the whole sentence c_CLS.",
"Verb-local relation: The relation between the contextual representations of the verb v and its subject c_subj or object c_obj.",
"Verb-distant relation: The relation between the verb v and its basic meaning v_bsc.",
"The representations of c_subj, c_obj, c_CLS and v_bsc can be obtained as described in Section 3.1.2.",
"We try three ways to integrate the contextual relations.",
"The first two ways build a combined context c first. Context concatenation: We can concatenate the representations of the context components together as the combined context, i.e., c = [c_subj; c_obj; c_CLS; v_bsc].",
"Context average: Similarly, we can use the averaged representation of all context components as the combined context, i.e., c = average(c_subj, c_obj, c_CLS, v_bsc).",
"Then we compute the probability that the relation is metaphorical, i.e., p(r|v,c), where either the linear, bilinear or neural tensor model can be applied.",
"The other way is to choose the most confident single prediction. Context maxout: The prediction is based on max{p(r|v,c)}, where c ∈ {c_CLS, c_subj, c_obj, v_bsc}.",
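The three relation scorers (Eqs. 1-3) and the maxout integration can be sketched together as follows (a minimal numpy illustration under stated assumptions; the function names and parameter-passing convention are ours, not the authors' code):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def linear_score(v, c, V_r, b_r):
    # Eq. (1): sigma(V_r^T [v; c] + b_r)
    return float(sigmoid(V_r @ np.concatenate([v, c]) + b_r))

def bilinear_score(v, c, A_r, b_r):
    # Eq. (2): sigma(v^T A_r c + b_r)
    return float(sigmoid(v @ A_r @ c + b_r))

def tensor_score(v, c, A_r, V_r, b_r):
    # Eq. (3): bilinear term plus linear term
    return float(sigmoid(v @ A_r @ c + V_r @ np.concatenate([v, c]) + b_r))

def maxout_prediction(v, contexts, score_fn, params):
    # Context maxout: take the most confident single prediction over
    # the available context components {c_CLS, c_subj, c_obj, v_bsc}.
    return max(score_fn(v, c, *params) for c in contexts)
```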
"The relation-level loss is the binary cross-entropy L_0 = -(1/N) Σ_i [y_i log ŷ_i + (1 - y_i) log(1 - ŷ_i)], where N is the number of training samples; y_i is the gold label of a verb, with y_i = 1 indicating a metaphorical usage and y_i = 0 indicating a literal usage; and ŷ_i is the probability of being metaphorical predicted by our model.",
"We further combine relation-level and sequence-level metaphor detection via multi-task learning.",
"The sequence metaphor detection uses the hidden states of the final layer and a softmax layer for predicting the metaphoricity of each token.",
"We use cross-entropy as the loss function and denote the average loss over tokens in training samples as L_1.",
"The final loss of MrBERT is L = L_0 + L_1.",
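The combined multi-task loss can be sketched as follows (an illustration only; both terms are written as binary cross-entropy here for brevity, whereas the token-level model in the paper uses a softmax layer):

```python
import numpy as np

def bce(y_hat, y):
    # Average binary cross-entropy, clipped for numerical stability.
    y_hat = np.clip(y_hat, 1e-7, 1 - 1e-7)
    return float(-np.mean(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat)))

def mrbert_loss(rel_probs, rel_labels, tok_probs, tok_labels):
    """L = L0 + L1: relation-level metaphor loss plus token-level
    sequence labeling loss."""
    return bce(rel_probs, rel_labels) + bce(tok_probs, tok_labels)
```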
"VUA dataset: We mainly conduct experiments on the VUA (Steen, 2010) dataset.",
"It is the largest publicly available metaphor detection dataset and has been used in metaphor detection shared tasks (Leong et al., 2018, 2020).",
"This dataset has a training set and a test set.",
"Previous work utilized the training set in different ways (Neidlein et al., 2020).",
"We use the preprocessed version of the VUA dataset provided by Gao et al. (2018).",
"The first reason is that this dataset has a fixed development set so that different methods can adopt the same model selection strategy.",
"The second reason is that several recent important methods used the same dataset (Mao et al., 2018; Dankers et al., 2019; Stowe et al., 2019; Le et al., 2020).",
"Table 1 (partial VUA statistics): # tokens — Train 116,622; Dev 38,628; Test 50,175 (5,873).",
"Therefore it is convenient for us to compare the proposed method with previous work.",
"There are two tracks: Verb and All-POS metaphor detection.",
"Some basic statistics of the dataset are shown in Table 1. We focus on the Verb track since we mainly model metaphorical relations for verbs.",
"We use MrBERT's relation-level predictions for the verb track and use its sequence labeling module to deal with the All-POS track.",
"MOH-X and TroFi datasets MOH-X (Mohammad et al., 2016) and TroFi (Birke and Sarkar, 2006) are two relatively small datasets compared with VUA.",
"Only a single target verb is annotated in each sentence.",
"We will report the results on MOH-X and TroFi in three settings: zero-shot transfer, re-training and fine-tuning.",
"Metrics The evaluation metrics are accuracy (Acc), precision (P), recall (R) and F1-score (F1), which are most commonly used in previous work.",
"We compare with the following approaches.",
"Gao et al. (2018) use contextual embeddings ELMo to enhance word representations and use BiLSTM as the encoder.",
"It has two settings: classification (CLS) and sequence labeling (SEQ).",
"Mao et al. (2019) exploit two intuitions motivated by linguistic theory on the basis of (Gao et al., 2018).",
"This work motivates us to further explore contextual relation modeling with pre-trained language models.",
"Stowe et al. (2019) exploit grammatical relations for data augmentation to enhance (Gao et al., 2018).",
"Le et al. (2020) propose a multi-task learning approach with graph convolutional neural networks and use word sense disambiguation as an auxiliary task.",
"Neidlein et al. (2020) (BERT-SEQ) provide a detailed setup for a BERT-based sequence labeling model.",
"This method is used as a main pre-trained language model based baseline.",
"The above methods all used Gao et al. (2018)'s dataset for evaluation, so their results can be taken directly from their papers for comparison.",
"Su et al. (2020) (DeepMet) view metaphor detection as a reading comprehension problem with RoBERTa as the backbone model.",
"It obtained the best performance on the 2020 metaphor detection shared task.",
"Choi et al. (2021) (MelBERT) present a concurrent work to ours.",
"The method shares similar ideas and architecture with ours, but it does not consider grammatical relations.",
"Notice that the systems participating in the VUA metaphor detection shared tasks (Leong et al., 2018, 2020) can manipulate the training set in any way for model selection and ensemble learning, so the results in the task reports are not directly comparable to ours.",
"The results of DeepMet and MelBERT are based on the single model evaluation in (Choi et al., 2021).",
"The first four baselines do not utilize pre-trained language models, while the last three baselines use BERT or RoBERTa.",
"These baselines support comprehensive comparisons from multiple aspects.",
"During context component extraction, if the target verb does not have a subject or an object, we use a fixed zero vector instead.",
"We use the bert-base-uncased model and the standard tokenizer.",
"The values of the hyper-parameters are shown in Table 2. For MrBERT, we treat the choices of component representation (start marker or averaged embedding; see Section 3.1.2), relation modeling (linear, bilinear, and neural tensor (NT) models; see Section 3.2) and context integration (context concatenation, average and maxout; see Section 3.3) as hyper-parameters as well.",
"We run each model for 10 epochs, and choose the best combination according to the performance on the development set.",
"The best combination uses the averaged embeddings, the bilinear model and the context average strategy; this configuration represents MrBERT in the performance reports in Section 4.2.",
"Table 3 shows the results of the baselines and MrBERT.",
"Except for (Gao et al., 2018)-CLS, all methods use the annotation information of all tokens.",
"For the All-POS track, we report the performance on either all POS tags or 4 main POS tags for comparison with previous work.",
"We can see that MrBERT achieves superior or competitive performance compared with previous work on verb metaphor detection.",
"The use of pre-trained language models improves the performance in general, compared with several LSTM based methods.",
"Recent proposed models, such as DeepMet, MelBERT and MrBERT, gain further improvements compared with BERT-SEQ.",
"MrBERT outperforms (Stowe et al., 2019) and (Le et al., 2020) by a large margin.",
"The two baselines attempt to make use of grammar information, through data augmentation or graph neural networks.",
"In contrast, MrBERT provides a simple yet effective way to incorporate verb arguments and new contexts into a pre-trained language model.",
"MrBERT also has competitive performance compared with DeepMet and MelBERT.",
"We share a similar idea of enhancing interactions between the target verb and its contexts, but implement it in different ways.",
"DeepMet and MelBERT are built on the pre-trained model RoBERTa and use additional POS or FGPOS information.",
"Moreover, these two models are trained on every token, so their training may benefit from more supervision.",
"In contrast, we mainly model metaphorical relations for verbs.",
"This is perhaps also the reason that on the All-POS metaphor detection track, MrBERT has slightly worse results compared with MelBERT.",
"However, our model is flexible and can be applied to tokens with other POS tags as well.",
"We leave this as future work.",
"[Table 3: Results on the VUA dataset, reported as Acc/P/R/F1 for VUA Verb, VUA All-POS, and VUA All-POS (4 POS). Gao et al. (2018)-CLS: 69.1/53.4/65.6/58.9 (Verb only); Gao et al. (2018)-SEQ: 81.4/68.2/71.3/69.7, 93.1/71.6/73.6/72.6; Mao et al. (2019): 81.8/66.3/75.2/70.5, 93.8/73.0/75.7/74.3; Stowe et al. (2019): F1 69.5, F1 73.5; Le et al. (2020): 83.2/72.5/70.9/71.7, 93.8/74.8/75.5/75.1; Neidlein et al. (2020): 84.9/78.0/69.0/73.2, 94.5/83.0/71.9/77.0, 91.8/77.9/64.6/70.7; DeepMet (Su et al., 2020): 79.5/70.9/74.9 (P/R/F1), 82.0/71.3/76.3; MelBERT (Choi et al., 2021): 78.7/72.9/75.7 (P/R/F1), 80.1/76.9/78.5; MrBERT: 86.4/80.8/71.5/75.9, 94.7/82.7/72.5/77.2, 91.8/78.4/64.6/70.9.]",
"Relation modeling and context integration strategies Table 4 shows the results of different combinations of relation modeling and context integration strategies.",
"BERT-SEQ here refers to the re-trained baseline with model selection based on the performance on the development set, and surpasses the reported results in (Neidlein et al., 2020).",
"We can see that most combinations outperform BERT-SEQ, and have consistent performance.",
"The bilinear and neural tensor models perform better than the linear model.",
"This means that sophisticated relation modelling techniques can benefit the performance.",
"Context average and context maxout strategies perform better than context concatenation.",
"The reason may be that context concatenation is harder to train due to its larger number of parameters.",
"Effects of different contexts Table 5 shows the performance of MrBERT when it considers the global context (MrBERT-G), the global and the local contexts (MrBERT-GL), and the full model with the distant context (MrBERT-GLD).",
"Each model is trained separately, with the same model selection procedure.",
"We can see that integrating multiple contexts leads to better performance.",
"MrBERT explicitly incorporates verb arguments through grammatical relations as the local context, which differs from other methods.",
"We are interested in the effect of such information.",
"We analyze MrBERT-G and MrBERT-GL.",
"Table 6 shows the distribution of auto-extracted verb-subject and verb-direct object relations in the VUA test dataset.",
"The ΔF1 values indicate the improvement in F1 of MrBERT-G compared with BERT-SEQ.",
"We can see that MrBERT-G outperforms BERT-SEQ mainly when verb's arguments are incomplete.",
"For verbs with complete verb-subject and verb-direct object structures, little improvement is gained.",
"Table 7 shows the corresponding performance of MrBERT-GL.",
"Better performance is obtained for verbs with all status of grammatical relations.",
"The improvement on verbs in the lower right corner is obvious.",
"In these cases, the verbs are usually intransitive verbs or used as a noun or an adjective.",
"The benefit of involving grammatical relations may be that it helps keep a dynamic and balanced focus between the global and local contexts according to the signals expressed by the grammatical structure.",
"Intuitively, the effect of incorporating grammatical relations should be more obvious for metaphor detection in long sentences, since the local and global contexts are quite different.",
"To verify this, we divide sentences in the test dataset into bins according to the number of clauses.",
"[Table 6: The distribution of available syntactic patterns in the VUA-Verb test dataset and the improved F1 score (ΔF1) of MrBERT-G compared with BERT-SEQ. Rows give the verb-subject relation, columns the verb-direct object relation. Yes/Yes: 1,324 (36%), ΔF1 = 0.0; Yes/No: 2,035 (23%), ΔF1 = +0.57; No/Yes: 1,201 (38%), ΔF1 = +0.05; No/No: 1,313 (27%), ΔF1 = +1.51; row totals 3,359 and 2,514; column totals 2,525 and 3,348.]",
"Figure 3 confirms our hypothesis that MrBERT obtains larger improvements on sentences with more clauses, indicating that incorporating grammatical relations can help filter noisy information.",
"Finally, the use of distant context obtains a further improvement.",
"This observation is consistent with the conclusion of (Choi et al., 2021).",
"It also indicates that the BERT tokenizer's embedding can be used to approximate the representation of the target verb's basic meaning.",
"Table 8 shows the results on the MOH-X and TroFi datasets.",
"In the zero-shot transfer setting, MrBERT obtains better performance compared with DeepMet and MelBERT on both datasets.",
"The performance of DeepMet and MelBERT is read from (Choi et al., 2021).",
"[Figure 3: The F1 scores of MrBERT and BERT-SEQ for sentences with different numbers of clauses (1, 2, 3, 4, 4+).]",
"The results show that MrBERT has good zero-shot transferability, although these datasets have quite different characteristics.",
"In the 10-fold cross-validation setting, the retrained MrBERT can also obtain superior or competitive results compared with previous work.",
"If we continue to fine-tune the pre-trained MrBERT on the target datasets, better performance can be obtained, especially on the MOH-X dataset.",
"Metaphor detection is a key task in metaphor processing (Veale et al., 2016).",
"It is typically viewed as a classification problem.",
"The early methods were based on rules (Fass, 1991; Narayanan, 1997), while most recent methods are data-driven.",
"Next, we summarize data-driven methods from the perspective of context types that have been explored.",
"Grammatical relation-level detection This line of work determines the metaphoricity of a given grammatical relation, such as verb-subject, verb-direct object or adjective-noun relations (Shutova et al., 2016).",
"The key to this category of work is to represent semantics and capture the relation between the arguments.",
"Feature-based methods are based on handcrafted linguistic features.",
"Shutova and Teufel (2010b) proposed to cluster nouns and verbs to construct semantic domains.",
"Turney et al. (2011) and Shutova and Sun (2013) considered the abstractness of concepts and context.",
"Mohler et al. (2013) exploited Wikipedia and WordNet to build domain signatures.",
"Tsvetkov et al. (2014) combined abstractness, imageability, supersenses, and cross-lingual features.",
"Bulat et al. (2017) exploited attribute-based concept representations.",
"The above handcrafted features heavily rely on linguistic resources and expertise.",
"Recently, distributed representations have been exploited for grammatical relation-level metaphor detection.",
"Distributed word embeddings were used as features (Tsvetkov et al., 2014) or to measure semantic relatedness (Gutiérrez et al., 2016; Mao et al., 2018).",
"Visual distributed representations were also proven to be useful (Shutova et al., 2016).",
"Rei et al. (2017) designed a supervised similarity network to capture interactions between words.",
"Song et al. (2020) modeled metaphors as attribute-dependent domain mappings and presented a knowledge graph embedding approach for modeling nominal metaphors.",
"Zayed et al. (2020) identified verb-noun and adjective-noun phrasal metaphoric expressions by modeling phrase representations as a context.",
"Token-level detection Another line of work formulates metaphor detection as a single token classification or sequence labeling problem (Do Dinh and Gurevych, 2016; Gao et al., 2018; Mao et al., 2019).",
"These approaches are mostly based on neural network architectures and learn representations in an end-to-end fashion.",
"These approaches depend on token-level human annotated datasets, such as the widely used VUA dataset (Steen, 2010).",
"BiLSTM plus pre-trained word embeddings is one of the popular architectures for this task (Gao et al., 2018; Mao et al., 2019).",
"Recently, Transformer-based pre-trained language models have become the most popular architecture in the metaphor detection shared task (Leong et al., 2020).",
"Multitask learning (Dankers et al., 2019; Rohanian et al., 2020; Le et al., 2020; Chen et al., 2020) and discourse context (Dankers et al., 2020) have been exploited as well.",
"Discussion The grammatical relation-level and token-level metaphor detection consider different aspects of information.",
"Grammatical relations incorporate syntactic structures, which are well studied in selectional preferences (Wilks, 1975, 1978) and provide important clues for metaphor detection.",
"However, sentential context, which is also useful, is ignored.",
"In contrast, token-level metaphor detection explores wider context and gains improvements, but syntactic information is neglected and as discussed in (Zayed et al., 2020), interactions between metaphor components are not explicitly modeled.",
"This paper aims to combine the grammatical relation-level, token-level and semantic-level information through pre-trained language model based contextual relation modeling.",
"This paper presented the Metaphor-relation BERT (MrBERT) model for verb metaphor detection.",
"We propose a new view to formulate the task as modeling the metaphorical relation between the target verb and its multiple context components, i.e., contextual relations.",
"We propose and evaluate various ways to extract, model and integrate contextual relations for metaphoricity prediction.",
"We conduct comprehensive experiments on the VUA dataset.",
"The evaluation shows that MrBERT achieves superior or competitive performance compared with previous methods.",
"We also observe that incorporating grammatical relations can help balance local and global contexts, and the basic meaning of the verb as a distant context is effective.",
"Further experiments on small datasets MOH-X and TroFi also show good model transferability of MrBERT.",
"This work is supported by the National Natural Science Foundation of China (Nos. 61876113, 61876112), Beijing Natural Science Foundation (No. 4192017), Support Project of High-level Teachers in Beijing Municipal Universities in the Period of 13th Five-year Plan (CIT&TCD20170322).",
"Lizhen Liu is the corresponding author."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"method",
"method",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"method",
"objective",
"objective",
"method",
"abstain",
"result",
"abstain",
"other",
"other"
] |
[
"Commonsense reasoning is fundamental to natural language understanding.",
"While traditional methods rely heavily on human-crafted features and knowledge bases, we explore learning commonsense knowledge from a large amount of raw text via unsupervised learning.",
"We propose two neural network models based on the Deep Structured Semantic Models (DSSM) framework to tackle two classic commonsense reasoning tasks, Winograd Schema challenges (WSC) and Pronoun Disambiguation (PDP).",
"Evaluation shows that the proposed models effectively capture contextual information in the sentence and coreference information between pronouns and nouns, and achieve significant improvement over previous state-of-the-art approaches.",
"Commonsense reasoning is concerned with simulating the human ability to make presumptions about the type and essence of ordinary situations they encounter every day (Davis and Marcus, 2015).",
"It is one of the key challenges in natural language understanding, and has drawn increasing attention in recent years (Levesque et al., 2011; Roemmele et al., 2011; Zhang et al., 2017; Rashkin et al., 2018a,b; Zellers et al., 2018; Trinh and Le, 2018).",
"However, due to the lack of labeled training data or comprehensive hand-crafted knowledge bases, commonsense reasoning tasks such as Winograd Schema Challenge (Levesque et al., 2011) are still far from being solved.",
"In this work, we propose two effective unsupervised models for commonsense reasoning, and evaluate them on two classic commonsense reasoning tasks: Winograd Schema Challenge (WSC) and Pronoun Disambiguation Problems (PDP).",
"[Work done when the author was at Microsoft.]",
"Table 1: Examples from the Winograd Schema Challenge (WSC):",
"1. The city councilmen refused the demonstrators a permit because they feared violence.",
"Who feared violence?",
"A. The city councilmen B. The demonstrators",
"2. The city councilmen refused the demonstrators a permit because they advocated violence.",
"Who advocated violence?",
"A. The city councilmen B. The demonstrators",
"Compared to other commonsense reasoning tasks,",
"WSC and PDP better approximate real human reasoning, and can be more easily solved by native English-speaking adults (Levesque et al., 2011).",
"In addition, they are also technically challenging.",
"For example, the best reported result on WSC is only 20 percentage points better than random guess in accuracy (Radford et al., 2019).",
"Table 1 shows two examples from WSC.",
"In order to resolve the co-reference in these two examples, one cannot predict what they refers to unless she is equipped with the commonsense knowledge that demonstrators usually cause violence and city councilmen usually fear violence .",
"As no labeled training data is available for these tasks, previous approaches are based on either hand-crafted knowledge bases or large-scale language models.",
"For example, Liu et al. (2017) used existing knowledge bases such as ConceptNet (Liu and Singh, 2004) and WordNet (Miller, 1995) for external supervision to train word embeddings and solve the WSC challenge.",
"Recently, Trinh and Le (2018) first used raw text from books/news to train a neural Language Model (LM), and then employed the trained model to compare the probabilities of the sequences, where the pronouns are replaced by each of the candidate references, and to pick the candidate that leads to the highest probability as the answer.",
"Because none of the existing hand-crafted knowledge bases is comprehensive enough to cover all the world knowledge 1 , we focus on building unsupervised models that can learn commonsense knowledge directly from unlimited raw text.",
"Different from the neural language models, our models are optimized for co-reference resolution and achieve much better results on both the PDP and WSC tasks.",
"In this work we formulate the two commonsense reasoning tasks in WSC and PDP as a pairwise ranking problem.",
"As the first example in Table 1, we want to develop a pair-wise scoring model Score ( x i , y ) that scores the correct antecedent-pronoun pair ( councilmen , they ) higher than the incorrect one ( demonstrators , they ).",
"These scores depend to a large degree upon the contextual information of the pronoun (e.g., they) and the candidate antecedent (e.g., councilmen).",
"In other words, it requires to capture the semantic meaning of the pronoun and the candidate antecedent based on the sentences where they occur, respectively.",
"To tackle this issue, we propose two models based on the framework of the Deep Structured Semantic Model (DSSM) (Huang et al., 2013), as shown in Figure 1(a).",
"Formally, let S x be the sentence containing the candidate antecedent x i and S y the sentence containing the pronoun y in which we are interested.",
"DSSM measures the semantic similarity of a pair of inputs ( x i , y ) by 1) mapping x i and y , together with their context information, into two vectors in a semantic space using deep neural networks f 1 and f 2 , parameterized by ; and 2) computing cosine similarity 2 between them.",
"In our case, we need to learn a task-specific semantic space where the distance between two vectors measures how likely they co-refer.",
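A minimal sketch of the DSSM similarity step, with single-layer random maps standing in for the deep encoders f 1 and f 2 (all vectors and dimensions here are hypothetical stand-ins, not the paper's networks):

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two semantic-space vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

rng = np.random.default_rng(0)
d, l = 8, 4  # hypothetical input and semantic-space dimensions
W1 = rng.normal(size=(l, d))
W2 = rng.normal(size=(l, d))
f1 = lambda x: np.tanh(W1 @ x)  # stand-in encoder for x_i with its context
f2 = lambda y: np.tanh(W2 @ y)  # stand-in encoder for y with its context

x = rng.normal(size=d)  # candidate antecedent with its context
y = rng.normal(size=d)  # pronoun with its context
score = cosine(f1(x), f2(y))  # DSSM similarity, in [-1, 1]
```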
"Commonsense knowledge such as demonstrators usually cause violence can be implicitly captured in the semantic space through DSSM.",
"[Footnote 1: We don't believe it is possible to construct such a knowledge base given that the world is changing constantly.]",
"[Footnote 2: DSSMs can be applied to a wide range of tasks depending on the definition of ( x, y ). For example, ( x, y ) is a query-document pair for Web search ranking, a document pair in recommendation, a question-answer pair in QA, and so on. See Chapter 2 of (Gao et al., 2018) for a survey.]",
"DSSM requires labeled pairs for training. Since there is no labeled data for our tasks, we propose two unsupervised DSSMs, or UDSSMs.",
"As shown in Figures 1(b) and 1(c), ( S x , S y ) are encoded into contextual representations by deep neural networks f 1 and f 2 ; then we compute their pair-wise co-reference scores.",
"In what follows, we will describe two assumptions we propose to harvest training data from raw text.",
"Assumption I: A pronoun refers to one of its preceding nouns in the same sentence.",
"The sentences generated by this assumption will be used for training UDSSM-I.",
"Some examples will be shown in the data generation section.",
"Assumption II: In a sentence, pronouns of the same gender and plurality are more likely to refer to the same antecedent than other pronouns.",
"Similarly, the sentences following the assumption will be used for training UDSSM-II.",
"Note that the two models, UDSSM-I and UDSSM-II, are trained on different types of pairwise training data, and thus their model structures are different, as illustrated in Figures 1(b) and 1(c), respectively.",
"Experiments demonstrate that our methods outperform the state of the art on the WSC and PDP tasks.",
"As a key component of natural language understanding, commonsense reasoning has been included in an increasing number of tasks for evaluation: COPA (Roemmele et al., 2011) assesses commonsense causal reasoning by selecting an alternative, which has a more plausible causal relation with the given premise.",
"Story Cloze Test (ROCStories, Mostafazadeh et al. 2016) evaluates story understanding, story generation, and script learning by choosing the most sensible ending to a short story.",
"JOCI (Zhang et al., 2017) generalizes the natural language inference (NLI) framework (Cooper et al., 1996; Dagan et al., 2006; Bowman et al., 2015; Williams et al., 2018) and evaluates commonsense inference by predicting the ordinal likelihood of a hypothesis given a context.",
"Event2Mind (Rashkin et al., 2018b) models stereotypical intents and reactions of people, described in short free-form text.",
"SWAG (Zellers et al., 2018) frames commonsense inference as multiple-choice questions for follow-up events given some context.",
"Re Co RD (Zhang et al., 2018)",
"[Figure 1: (a) DSSM: input sequences S x and S y are encoded by DNNs f 1 and f 2 into contextual representations h x and h y , followed by a similarity measurement; (b) and (c) the proposed models, which replace the similarity measurement with co-reference scoring over contextual representations.]",
"Among all these commonsense reasoning tasks, the Winograd Schema Challenge (WSC) and Pronoun Disambiguation Problems (PDP) (Levesque et al., 2011) are known as the most challenging tasks for commonsense reasoning.",
"Although both tasks are based on pronoun disambiguation, a subtask of coreference resolution (Soon et al., 2001; Ng and Cardie, 2002; Peng et al., 2016), PDP and WSC differ from normal pronoun disambiguation in that they rely on commonsense to select the most likely antecedent from the candidates in the directly preceding context.",
"Previous efforts on solving the Winograd Schema Challenge and Pronoun Disambiguation Problems mostly rely on human-labeled data, sophisticated rules, hand-crafted features, or external knowledge bases (Peng et al., 2015; Bailey et al., 2015; Schüller, 2014).",
"Rahman and Ng (2012) hired workers to annotate supervised training data and designed 70K hand-crafted features.",
"Sharma et al. (2015); Schüller (2014); Bailey et al. (2015); Liu et al. (2017) utilized expensive knowledge bases in their reasoning processes.",
"Recently, Trinh and Le (2018) applied neural language models trained with a massive amount of unlabeled data to the Winograd Schema Challenge and improved the performance by a large margin.",
"In contrast, our unsupervised method based on DSSM significantly outperforms the previous state-of-the-art method, with the advantage of capturing more contextual information in the data.",
"As shown in Figure 1, we propose two unsupervised deep structured semantic models ( UDSSM-I and UDSSM-II ), which consist of two components: DNN encoding and co-reference scoring.",
"For the model UDSSM-I, the co-referred word pairs are automatically learned through an attention mechanism, where the attention weights are the co-reference scores for word pairs.",
"For the second model UDSSM-II, we will directly optimize the co-reference score during training.",
"Finally, we obtain the co-reference scoring function, Score( x i , y ), which is used to compare the candidate answers in the PDP/WSC tasks.",
"Next, we will show the details of our models trained in an unsupervised way.",
"In the following sections, we will use uppercase symbols in bold, e.g., S x , to represent matrices.",
"Lowercase symbols in bold, e.g., h x , represent vectors.",
"A regular uppercase symbol, e.g., S x , represents a lexical sequence.",
"A regular lowercase symbol, e.g., x i or y , represents a token.",
"This model is developed based on Assumption I. Its architecture is shown in Figure 2. The sentences generated based on this assumption contain a pronoun y and a set of its preceding nouns { x i , x j ... } , which includes the word the pronoun refers to.",
"An example is the sentence in Figure 2. As there is no explicit label for the co-referred word pairs under this assumption, our model ranks a set of nouns { x i , x j ... } that contains the noun the pronoun y refers to higher than a set that does not.",
"The co-reference score between words is thus not optimized directly during training, but is learned indirectly through the attention mechanism. We will describe in turn how the training data is generated from raw text, the model architecture, and the co-reference scoring function used for the final prediction on the PDP/WSC tasks.",
"The main challenge of the PDP/WSC tasks is that they have no labeled training data. Here we introduce a simple method to collect unsupervised training data by leveraging some linguistic patterns. Following Assumption I, we hypothesize that the pronoun refers to one of the preceding nouns, which is a common phenomenon in well-written stories and news. In this way, we generate ( S x , S y ) pairs from raw text as follows:",
"(1) Pick sentences that contain at least one pronoun and multiple nouns preceding it. (2) Split each sentence into two sub-sentences to form a positive pair ( S x , S y ), where S x is the first sub-sentence with identified nouns and entity names, and S y is the second sub-sentence with a pronoun. (3) Generate one or more negative pairs from ( S x , S y ) by replacing S y with an S y neg randomly sampled from other positive pairs.",
"We split the sentence with pronouns and nouns into two sub-sequences separated by the previous word of the pronoun. Therefore, the example sentence in the Figure 2 can be split into two sub-sentences as shown below:",
"S x : ... for more than an hour and; S y : overflew their Minneapolis destination by 150 miles before discovering the mistake and turning around.",
"As the sentences are collected from raw text, the co-reference words are not given. Our proposed UDSSM-I model will learn the co-reference scoring function through attention mechanism based on the generated sequence pairs. Next, we will introduce the details of this model.",
"This method takes the pair of sequences, ( S x , S y ), as inputs, and computes similarity between the sequences collected from the same sentence. As we hypothesize that one of the nouns in the first sequence and the pronoun in the second are co-referred, we only use the contextual representations of nouns and pronoun to represent the sequences. To obtain the contextual representation, we first use a bi-directional LSTM to process these sequences 3 :",
"H x = Bi-LSTM( S x ), H y = Bi-LSTM( S y ), (1) where S x ∈ R^{d×X} and S y ∈ R^{d×Y} are the word embeddings of the two sequences, d is the dimension of the word embeddings, and X, Y are the lengths of the two sequences. H x ∈ R^{l×X} and H y ∈ R^{l×Y} are the hidden states of the bi-directional LSTMs. Our model is task-specifically constructed, so we directly use the hidden state of the first pronoun in the second sequence as its representation:",
"where h y ∈ R^l is the second vector 4 from H y and represents the contextual information of the pronoun. Next, we obtain the representation of the first sequence. As there are multiple nouns in the first sequence and the pronoun usually refers to only one of them, we use the weighted sum of the LSTM hidden states of the nouns to represent the sequence as h x ∈ R^l, as follows:",
"3 We use two different LSTMs to process the sequences S x and S y here.",
"This is to make the negative sampling in Eqn.",
"(4) more efficient, so that we can directly use the other representations in the same batch as negative ones.",
"4 We assign the word just before the pronoun to the second sequence, so the pronoun always appears in the second position of the sequence.",
"where $i, j, \ldots$ are the positions of the nouns in the sequence $S^x$ and $[\cdot;\cdot]$ is the concatenation of two vectors.",
"$H^n \in \mathbb{R}^{l \times N}$ are all the hidden states of the nouns 5 in $H^x$ in the sequence.",
"$N$ is the number of nouns in the sequence.",
"$\alpha \in \mathbb{R}^N$ are the weights assigned to the different nouns and $h^x \in \mathbb{R}^l$ is the weighted sum of all the hidden states of the nouns.",
"$W^g \in \mathbb{R}^{l \times l}$ and $b^g \in \mathbb{R}^l$ are the parameters to learn; $e_N \in \mathbb{R}^N$ is a vector of all 1s, used to repeat the bias vector $N$ times into a matrix.",
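The attention step described above can be sketched numerically. This is a toy reconstruction, not the paper's code: the dimensions, states, and identity-like $W^g$ below are illustrative assumptions.

```python
import math

# Toy sketch of the noun attention: each noun hidden state is scored
# against the pronoun representation h_y via (W_g h + b_g) . h_y, the
# scores are softmax-normalized, and h_x is the weighted sum of states.

def matvec(W, x):
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def noun_attention(H_n, h_y, W_g, b_g):
    """H_n: list of noun hidden states; returns (weights, weighted sum h_x)."""
    scores = []
    for h in H_n:
        g = [v + b for v, b in zip(matvec(W_g, h), b_g)]  # W_g h + b_g
        scores.append(dot(g, h_y))
    alpha = softmax(scores)
    l = len(H_n[0])
    h_x = [sum(a * h[d] for a, h in zip(alpha, H_n)) for d in range(l)]
    return alpha, h_x

# two toy noun states; the first is aligned with the pronoun state
H_n = [[1.0, 0.0], [0.0, 1.0]]
alpha, h_x = noun_attention(H_n, [1.0, 0.0], [[1.0, 0.0], [0.0, 1.0]], [0.0, 0.0])
```

The noun whose transformed state best matches the pronoun representation receives the larger weight.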
"Then we will maximize the similarity of the contextual representations of ( h x , h y ).",
"Meanwhile, we also need some negative samples h y neg k for h x .",
"Then our loss function for this method is: $L = -\log \frac{\exp(h^x \cdot h^y)}{\exp(h^x \cdot h^y) + \sum_{k=1}^{K} \exp(h^x \cdot h^{y_{neg_k}})}$, (4) where $h^{y_{neg_k}} \in \mathbb{R}^l$ is a randomly sampled hidden state of a pronoun from sequences not in the same sentence as $S^y$.",
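The loss in Eqn. (4) is a softmax over the positive pair similarity against K negatives; it can be sketched numerically as below. The vectors and values here are illustrative, not from the paper.

```python
import math

# Minimal numeric sketch of the contrastive loss in Eqn. (4): the
# positive-pair similarity competes against K in-batch negatives.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def contrastive_loss(h_x, h_y, negatives):
    pos = math.exp(dot(h_x, h_y))
    denom = pos + sum(math.exp(dot(h_x, h_neg)) for h_neg in negatives)
    return -math.log(pos / denom)

h_x = [1.0, 0.0]
good = contrastive_loss(h_x, [1.0, 0.0], [[0.0, 1.0]])  # aligned positive
bad = contrastive_loss(h_x, [0.0, 1.0], [[1.0, 0.0]])   # misaligned positive
```

The loss is smaller when the positive pair is more similar than the negatives, which is exactly the pressure that makes co-referent states similar.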
"Overall, the model tries to make the co-reference states similar to each other.",
"The co-reference scoring function is defined as: $\text{Score}(x_i, y) = g(h^x_i, h^y) = (W^g h^x_i + b^g)^T h^y$, (5) where the candidate located at the $i$-th position is represented by its LSTM hidden state $h^x_i$ and the pronoun in the snippet is represented by $h^y$.",
"The output value of this function for each candidate is used for the final prediction.",
"Next, we will introduce the other unsupervised method.",
"This model is developed based on Assumption II.",
"Its architecture is shown in Figure 3. As the model is similar to the previous one, we will introduce the details in a similar way.",
"The second assumption is that the pronoun pairs in a single sentence are co-reference words if they are of the same gender and plurality; otherwise they are not.",
"Based on this assumption, we can directly construct the co-reference training pairs as follows. ( 5 We use the spaCy toolkit in Python for POS and NER, and we remove sequences that contain fewer than 2 nouns.)",
"Parse the raw sentences to identify pronouns.",
"Pick sentences that contain at least two pronouns.",
"The sub-sequence pair with pronouns of the same gender and plurality is labeled as a positive pair; otherwise it is labeled as negative.",
"Replace the corresponding pronoun pairs with a special token @Ponoun .",
"Take the following sentence as an example: He tried twice to call her but she did not answer the phone.",
"There are three pronouns detected in the sentence, and we assume that the words her and she are co-reference words, while pairs ( she , He ) and ( her , He ) are not.",
"Thus we can obtain three training examples from the given sentence.",
"However, in the PDP and WSC tasks, models are asked to compute the co-reference scores between pronoun and candidate nouns, instead of two pronouns.",
"Therefore, we replace the first pronoun in the sentence with a placeholder;",
"i.e., a negative training pair is generated by splitting the raw sentence into the following two sub-sequences: S x : @Ponoun tried twice to call her ; S y : but she did not answer the phone. (label: Negative). A positive training pair can be generated in the same way: S x : He tried twice to call @Ponoun ; S y : but she did not answer the phone. (label: Positive). Thus, we can directly train the encoder and co-reference scoring components on the generated training pairs.",
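The UDSSM-II pair construction above can be sketched end-to-end. This is an assumed reconstruction: the pronoun gender/plurality groupings below are illustrative, not the paper's exact lists.

```python
# Sketch of the UDSSM-II pair construction: two pronouns in one sentence
# form a positive pair if they share gender and plurality, otherwise a
# negative pair; the first pronoun of the pair is replaced with the
# special token @Ponoun, and the sentence is split just before the word
# preceding the second pronoun.

PRONOUN_GROUP = {
    "he": "masc-sg", "him": "masc-sg", "his": "masc-sg",
    "she": "fem-sg", "her": "fem-sg",
    "they": "plural", "them": "plural", "their": "plural",
}

def make_pairs(tokens):
    """Yield (S_x, S_y, label) for each pronoun pair in the sentence."""
    positions = [i for i, t in enumerate(tokens) if t.lower() in PRONOUN_GROUP]
    pairs = []
    for a in range(len(positions)):
        for b in range(a + 1, len(positions)):
            i, j = positions[a], positions[b]
            label = int(PRONOUN_GROUP[tokens[i].lower()]
                        == PRONOUN_GROUP[tokens[j].lower()])
            masked = tokens[:i] + ["@Ponoun"] + tokens[i + 1:]
            s_x, s_y = masked[:j - 1], masked[j - 1:]
            pairs.append((s_x, s_y, label))
    return pairs

sent = "He tried twice to call her but she did not answer the phone .".split()
examples = make_pairs(sent)
```

On the running example this yields exactly the three training examples discussed above, with ( her , she ) as the only positive pair.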
"The previous method, UDSSM-I, follows the task setting of PDP/WSC, and builds the model based on the similarity of the representations between nouns and the pronoun.",
"As there is no signal indicating the exact alignment between co-reference words, the model tries to learn it from co-occurrence information in a large-scale unlabelled corpus.",
"For the method of UDSSM-II, each representation pair ( h x , h y ) has a clear signal, r , indicating whether they are co-referred or not.",
"For simplicity, we do not split the sentence into two parts.",
"We first use LSTMs to process the sentence as follows: $\overrightarrow{H} = \overrightarrow{\text{LSTM}}([S^x; S^y])$, $\overleftarrow{H} = \overleftarrow{\text{LSTM}}([S^x; S^y])$, (6) where $[S^x; S^y]$ is the concatenation of the word embeddings of the two sequences collected under Assumption II.",
"$\overrightarrow{\text{LSTM}}$ and $\overleftarrow{\text{LSTM}}$ run in different directions, and $\overrightarrow{H}$, $\overleftarrow{H}$ are the hidden states of the corresponding LSTMs.",
"Suppose that the pronoun pair in the sentence are located at the i -th and j -th positions as shown in the bottom part of Figure",
"3(a).",
"We use the hidden states around the pronouns as their contextual representations: $f_1(S^x) = h^x = [\overrightarrow{h}_{i-1}; \overleftarrow{h}_{i+1}]$, $f_2(S^y) = h^y = [\overrightarrow{h}_{j-1}; \overleftarrow{h}_{j+1}]$, (7) where $[\cdot;\cdot]$ denotes the concatenation of the vectors inside it.",
"Then we further concatenate this representation pair: $h^c = [h^x; h^y]$, (8) where $h^c \in \mathbb{R}^{4l}$, and it is the input to a cross-entropy loss: $L = -r \log \frac{\exp(w_p \cdot h^c)}{\exp(w_p \cdot h^c) + \exp(w_n \cdot h^c)} - (1 - r) \log \frac{\exp(w_n \cdot h^c)}{\exp(w_p \cdot h^c) + \exp(w_n \cdot h^c)}$, where $r \in \{0, 1\}$ indicates whether the pronouns at the $i$-th and $j$-th positions are co-referent or not.",
"$w_p \in \mathbb{R}^{4l}$ and $w_n \in \mathbb{R}^{4l}$ are the parameters to learn.",
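The UDSSM-II objective, a two-way softmax over positive and negative scores with cross-entropy against the label r, can be sketched numerically. Shapes and values below are illustrative only.

```python
import math

# Numeric sketch of the UDSSM-II objective: a two-way softmax over the
# scores w_p . h_c and w_n . h_c, with cross-entropy against the label r.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def pair_loss(h_c, w_p, w_n, r):
    sp = math.exp(dot(w_p, h_c))
    sn = math.exp(dot(w_n, h_c))
    p_pos = sp / (sp + sn)
    return -(r * math.log(p_pos) + (1 - r) * math.log(1 - p_pos))

# h_c aligned with w_p: small loss when r = 1, large loss when r = 0
h_c = [1.0, 0.0]
pos_loss = pair_loss(h_c, [2.0, 0.0], [0.0, 2.0], 1)
neg_loss = pair_loss(h_c, [2.0, 0.0], [0.0, 2.0], 0)
```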
"Similar to the",
"Eqn. (5), for each candidate we use the co-reference scoring function $\text{Score}(x_i, y)$ for answer selection: $\text{Score}(x_i, y) = g(h^x_i, h^y) = w_p \cdot [\overrightarrow{h}_{i-1}; \overleftarrow{h}_{i+1}; \overrightarrow{h}_{j-1}; \overleftarrow{h}_{j+1}]$, (9) where $i$ is the position of the candidate in the sentence and $j$ is the position of the pronoun.",
"In this section, we introduce the datasets used to train and evaluate our models for commonsense reasoning, the hyper-parameters of our models, and an analysis of our results.",
"Training Corpus We make use of the raw text from Gutenberg 7 , a corpus offering over 57,000 free eBooks, and 1 Billion Word 8 , a corpus of news, to train our model.",
"We first discard sentences that contain fewer than 10 tokens or more than 50 tokens.",
"Then, for the model UDSSM-I, we collect all sentences in which a pronoun is preceded by at least two nouns.",
"For UDSSM-II, we collect all the sentences with at least 2 pronouns.",
"In total, we collect around 4 million training pairs from each corpus for each proposed method, and we hold out 5% as a validation set.",
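The corpus-selection rules above (token bounds, plus the pronoun/noun requirements of each model) can be sketched as simple filters. This is an illustrative reconstruction: the pronoun list is an assumption, and the POS tags would come from a tagger such as spaCy rather than being passed in directly.

```python
# Sketch of the training-corpus filters: sentences must have 10-50 tokens;
# UDSSM-I additionally requires a pronoun preceded by at least two nouns,
# and UDSSM-II requires at least two pronouns.

PRONOUNS = {"he", "she", "him", "her", "they", "them", "it"}  # assumed list

def keep_for_udssm1(tokens, pos_tags):
    if not (10 <= len(tokens) <= 50):
        return False
    nouns_seen = 0
    for tok, tag in zip(tokens, pos_tags):
        if tag in ("NOUN", "PROPN"):
            nouns_seen += 1
        elif tok.lower() in PRONOUNS and nouns_seen >= 2:
            return True
    return False

def keep_for_udssm2(tokens):
    if not (10 <= len(tokens) <= 50):
        return False
    return sum(t.lower() in PRONOUNS for t in tokens) >= 2
```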
"Evaluation Dataset We evaluate our model on the commonsense reasoning datasets Pronoun Disambiguation Problems (PDP) 9 and Winograd Schema Challenge (WSC) 10 , which include 60 and 285 questions respectively. ( 6 The best models reported in the works of Radford et al. (2019) and Trinh and Le (2018) are trained on a much larger corpus from Common Crawl.)",
"( 7 http://www.gutenberg.org ; 8 https://github.com/ciprian-chelba/1-billion-word-language-modeling-benchmark )",
"Both tasks are constructed for testing commonsense reasoning: all the questions are obvious for human beings to solve with commonsense knowledge, but hard for machines to solve with statistical techniques.",
"We use the same setting for both our models.",
"The hidden state dimension of a single-directional LSTM is set to be 300.",
"We use 300 dimensional GloVe embeddings 11 for initialization.",
"We use Adamax to optimize the model, set the learning rate to 0.002, and tune the dropout rate on all layers over [0, 0.1, 0.2] and the batch size over [30, 50, 100, 200].",
"For the model UDSSM-I, in one batch, we treat all sequence pairs not from the same sentence as negative cases.",
"It takes around 30 hours on a single K40 GPU to train our models, which is much faster than training a large LM (Jozefowicz et al., 2016), which takes weeks on multiple GPUs.",
"The experiment results are shown in Table 2. Most of the systems in the top part of Table 2 are models trained with external knowledge bases, such as the Cause-Effect (Liu et al., 2016), WordNet (Miller, 1995), and ConceptNet (Liu and Singh, 2004) knowledge bases.",
"The Unsupervised Semantic Similarity Method (USSM) (Liu et al., 2017) is based on the skip-gram model (Mikolov et al., 2013): word embeddings are trained such that the embeddings of words connected in the knowledge bases are optimized to be closer.",
"The Neural Knowledge Activated Method (NKAM) (Liu et al., 2017) trains a binary classification model based on whether word pairs appear in the knowledge base.",
"One limitation of these methods is that they rely heavily on the external knowledge bases.",
"Another limitation is that they just linearly aggregate the embeddings of the words in the context, which makes it hard to integrate word-order information.",
"Instead, our model with LSTM can better represent the contextual information.",
"Besides, our model doesn't need any external knowledge bases, and achieves a significant improvement on both datasets.",
"We further compare our models with unsupervised baselines: ELMo (Peters et al., 2018), which selects the candidate based on the cosine similarity between the hidden states of the noun and the pronoun.",
"Another unsupervised baseline is the Google Language Model for commonsense reasoning (Trinh and Le, 2018), which compares the perplexities of the new sentences obtained by replacing the pronoun with each candidate.",
"To make a fair comparison to Trinh and Le (2018)'s work, we also train our single model on the corpus of Gutenberg only.",
"We can see that both of our methods get significant improvement on the PDP dataset, and our UDSSM-II can achieve much better performance on the WSC dataset.",
"We also report an ensemble model (nine models with different hyper-parameters) trained on both the Gutenberg and 1 Billion Word corpora, and it also achieves better performance than the Google Language Model trained on the same corpora.",
"Finally, we also compare to the pre-trained Coreference Resolution Tool (Clark and Manning, 2016a,b) 12 , and we can see that it doesn't adapt to our commonsense reasoning tasks and can't tell the difference between each pair of sentences from WSC. ( 12 https://github.com/huggingface/neuralcoref )",
"In this subsection, we conduct further analysis of why our models work, the benefit of our models compared to a baseline, and the limitations of our proposed models.",
"We further analyze the sentence pairs collected for training to understand how our model works.",
"We find that some reasoning problems can somehow be converted to the paraphrase problem.",
"For example, in Table 3, we make use of Lucene Index 13 with BM25 to retrieve the similar sentences to the WSC sentences from our training dataset, and make a comparison.",
"We can see that the sentences in each pair roughly paraphrase each other.",
"For the first pair, the contextual representations of Paul and he in WSC could be similar to the contextual representations of he in our training sentence.",
"As these representations are used to compute the co-reference score, the final scores would be similar.",
"The pseudo label positive for our first sentence will make the positive probability of the golden co-references Paul and he in WSC higher.",
"And for the second pair in Table 3, the pseudo label of positive in our second sentence will make the positive probability of the golden co-references George and he in WSC 2 higher.",
"In this way, these kinds of co-reference patterns from training data can be directly mapped to solve the Winograd Schema Challenges.",
"( 13 http://lucene.apache.org/pylucene/ ) Here's another example from PDP demonstrating the benefit of our method: Always before, Larry had helped Dad with his work. But he could not help him now, for Dad said that .",
"Trinh and Le (2018) failed on this one, probably because language models are not good at resolving long-distance dependencies, and tend to predict that he refers to his in the near context rather than the correct answer Larry.",
"And our model can give the correct prediction.",
"We further analyze the predictions of our model.",
"We find that some specific commonsense knowledge is still hard to learn, as in the following pair: The trophy doesn't fit into the brown suitcase because it is too small.",
"The trophy doesn't fit into the brown suitcase because it is too large.",
"To solve this problem, the model should learn the knowledge to compare the size of the objects.",
"However, all of our models trained with different hyper-parameters select the same candidate as the co-referred word for it in both sentences.",
"To solve this problem, broader data needs to be collected to learn more commonsense knowledge.",
"In conclusion, to overcome the lack of human labeled data, we proposed two unsupervised deep structured semantic models (UDSSM) for commonsense reasoning.",
"We evaluated our models on the commonsense reasoning tasks of Pronoun Disambiguation Problems (PDP) and the Winograd Schema Challenge (Levesque et al., 2011), where the questions are quite easy for humans to answer, but quite challenging for machines.",
"Without using any hand-crafted knowledge base, our model achieved state-of-the-art performance on the two tasks.",
"In future work, we will use the Transformer, which has proved to be more powerful than LSTM, as the encoder of our unsupervised deep structured semantic models, and we will collect a larger corpus from Common Crawl to train our model."
] | [
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"other",
"method",
"other",
"objective",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"result",
"other",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"result",
"method"
] |
[
"Machine translation has an undesirable propensity to produce translationese artifacts, which can lead to higher BLEU scores while being liked less by human raters.",
"Motivated by this, we model translationese and original (i.e. natural) text as separate languages in a multilingual model, and pose the question: can we perform zero-shot translation between original source text and original target text?",
"There is no data with original source and original target, so we train a sentence-level classifier to distinguish translationese from original target text, and use this classifier to tag the training data for an NMT model.",
"Using this technique we bias the model to produce more natural outputs at test time, yielding gains in human evaluation scores on both adequacy and fluency.",
"Additionally, we demonstrate that it is possible to bias the model to produce translationese and game the BLEU score, increasing it while decreasing human-rated quality.",
"We analyze these outputs using metrics measuring the degree of translationese, and present an analysis of the volatility of heuristic-based train-data tagging.",
"Translationese is a term that refers to artifacts present in text that was translated into a given language that distinguish it from text originally written in that language (Gellerstam, 1986).",
"These artifacts include lexical and word order choices that are in-fluenced by the source language (Gellerstam, 1996) as well as the use of more explicit and simpler constructions (Baker et al., 1993).",
"These differences between translated and original text mean that the direction in which parallel data (bitext) was translated is potentially important for machine translation (MT) systems.",
"(* Work done while at Google Research.) Most",
"parallel data is either source-original (the source was translated into the target) or target-original (the target was translated into the source), though sometimes neither side is original because both were translated from a third language.",
"Figure 1 illustrates the four possible combinations of translated and original source and target data.",
"Recent work has examined the impact of translationese in MT evaluation, using the WMT evaluation campaign as the most prominent example.",
"From 2014 through 2018, WMT test sets were constructed such that 50% of the sentence pairs are source-original (upper right quadrant of Figure 1) and the rest are target-original (lower left quadrant).",
"Toral et al. (2018), Zhang and Toral (2019), and Graham et al. (2019) have examined the effect of this testing setup on MT evaluation, and have all argued that target-original test data should not be included in future evaluation campaigns because the translationese source is too easy to translate.",
"While target-original test data does have the downside of a translationese source side, recent work has also shown that human raters prefer MT output that is closer in distribution to original target text than translationese (Freitag et al., 2019).",
"This indicates that the target side of test data should also be original (upper left quadrant of Figure 1); however, it is unclear how to produce high-quality test data (let alone training data) that is simultaneously source-and target-original.",
"Because of this lack of original-to-original sentence pairs, we frame this as a zero-shot translation task, where translationese and original text are distinct languages or domains.",
"We adapt techniques from zero-shot translation with multilingual models (Johnson et al., 2016), where the training pairs are tagged with a reserved token corresponding to the domain of the target side: translationese or original text.",
"Tagging is helpful when the training set mixes data of different types by allowing the model to 1) see each pair's type in training to preserve distinct behaviors and avoid regressing to a mean/dominant prediction across data types, and 2) elicit different behavior in inference, i.e. providing a tag at test time yields predictions resembling a specific data type.",
"We then investigate what happens when the input is an original sentence in the source language and the model's output is also biased to be original, a scenario never observed in training.",
"Tagging in this fashion is not trivial, as most MT training sets do not annotate which pairs are source-original and which are target-original 1 , so in order to distinguish them we train binary classifiers to distinguish original and translated target text.",
"Finally, we perform several analyses of tagging these languages and demonstrate that tagged back-translation (Caswell et al., 2019) can be framed as a simplified version of our method, and thereby improved by targeted decoding.",
"Our contributions are as follows: 1. We propose two methods to train translationese classifiers using only monolingual text, coupled with synthetic text produced by machine translation.",
"2. Using only original→translationese and translationese→original training pairs, we apply techniques from zero-shot multilingual MT to enable original→original translation.",
"3. We demonstrate with human evaluations that this technique improves translation quality, both in terms of fluency and adequacy.",
"1 Europarl (Koehn, 2005) is a notable exception, but it is somewhat small and not in the news domain.",
"4. We show that biasing the model to instead produce translationese outputs inflates BLEU scores while harming quality as measured by human evaluations.",
"Motivated by prior work detailing the importance of distinguishing translationese from original text (Kurokawa et al., 2009; Lembersky et al., 2012; Toral et al., 2018; Zhang and Toral, 2019; Graham et al., 2019; Freitag et al., 2019; Edunov et al., 2019) as well as work in zero-shot translation (Johnson et al., 2016), we hypothesize that performance on the source-original translation task can be improved by distinguishing target-original and target-translationese examples in the training data and constructing an NMT model to perform zero-shot original→original translation.",
"Because most MT training sets do not annotate each sentence pair's original language, we train a binary classifier to predict whether the target side of a pair is original text in that language or translated from the source language.",
"This follows several prior works attempting to identify translations (Kurokawa et al., 2009; Koppel and Ordan, 2011; Lembersky et al., 2012).",
"To train the classifier, we need target-language text annotated by whether it is original or translated.",
"We use News Crawl data from WMT 2 as target-original data.",
"It consists of news articles crawled from the internet, so we assume that most of them are not translations.",
"Getting translated data is trickier; most human-translated pairs where the original language is annotated are only present in test sets, which are generally small.",
"To sidestep this, we choose to use machine translation as a proxy for human translationese, based on the assumption that they are similar.",
"This allows us to create classifier training data using only unannotated monolingual data.",
"We propose two ways of doing this: using forward translation (FT) or round-trip translation (RTT).",
"Both are illustrated in Figure 2. To generate FT data, we take source-language News Crawl data and translate it into the target language using a machine translation model trained on WMT training bitext.",
"We can then train a classifier to distinguish the generated text from monolingual target-language text.",
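The FT data-set construction can be sketched as follows. This is an illustrative sketch: `translate` stands in for the WMT-trained MT model and is a placeholder, and the label convention (1 for synthetic translationese, 0 for original text) is an assumption for the example.

```python
# Sketch of FT classifier data creation: machine-translated source-language
# sentences are labeled as translationese (1), and monolingual
# target-language sentences as original text (0).

def build_ft_dataset(source_mono, target_mono, translate):
    """translate: callable mapping a source sentence to the target language."""
    data = [(translate(s), 1) for s in source_mono]  # synthetic translationese
    data += [(t, 0) for t in target_mono]            # original target text
    return data

# placeholder "translator" for illustration only
synthetic = build_ft_dataset(["ein Satz", "noch einer"],
                             ["a real sentence"],
                             lambda s: "MT: " + s)
```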
"( 2 http://www.statmt.org/wmt18/translation-task.html ) [Figure 2: Illustration of data set creation for the FT and RTT translationese classifiers.] One potential problem with the FT data set is that the original and translated pairs may differ not only",
"in the respects we care about (i.e. translationese), but also in content.",
"Taking English French as an example language pair, one could imagine that certain topics are more commonly reported on in original English language news than in French, and vice versa, e.g. news about American or French politics, respectively.",
"The words and phrases representing those topics could then act as signals to the classifier to distinguish the original language.",
"To address this, we also experiment with RTT data.",
"For this approach we take target-language monolingual data and round-trip translate it with two machine translation models (target→source and then source→target), resulting in another target-language sentence that should contain the same content as the original sentence, alleviating the concern with FT data.",
"Here we hope that the noise introduced by round-trip translation will be similar enough to human translationese to be useful for our downstream task.",
"In both settings, we use the trained binary classifier to detect and tag training bitext pairs where the classifier predicted that the target side is original.",
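The tagging step can be sketched as below, in the style of Johnson et al. (2016): a reserved token is prepended to the source side whenever the classifier predicts the target side is original. The tag string and classifier interface here are illustrative, not the paper's exact implementation.

```python
# Sketch of classifier-driven tagging of training bitext: pairs whose
# target side is predicted to be original text get a reserved tag token
# prepended to the source sentence.

ORIGINAL_TAG = "<2natural>"  # assumed tag string

def tag_bitext(pairs, is_target_original):
    """pairs: list of (src, tgt); is_target_original: tgt -> bool."""
    tagged = []
    for src, tgt in pairs:
        if is_target_original(tgt):
            src = ORIGINAL_TAG + " " + src
        tagged.append((src, tgt))
    return tagged

# dummy classifier for illustration: pretend "!"-final targets are original
pairs = [("hello world", "bonjour le monde"),
         ("good day", "quelle belle journée !")]
tagged = tag_bitext(pairs, lambda tgt: tgt.endswith("!"))
```

At test time, the same tag can then be supplied to bias decoding toward original-style output.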
"We perform our experiments on WMT18 English German bitext and WMT15 English French bitext.",
"We use WMT News Crawl for monolingual data (2007-2017 for German and 2007-2014 for French).",
"We filter out sentences longer than 250 subwords (see Section 3.2 for the vocabulary used) and remove pairs whose length ratio is greater than 2. This results in about 5M pairs for English German.",
"We do not filter the English French bitext, resulting in 41M sentence pairs.",
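The bitext filtering rule above (a length cap and a length-ratio cap) can be sketched directly; subword (BPE) segmentation is assumed to have been applied already.

```python
# Sketch of the bitext filter: drop pairs longer than 250 subwords on
# either side or with a length ratio greater than 2.

def keep_pair(src_subwords, tgt_subwords, max_len=250, max_ratio=2.0):
    ls, lt = len(src_subwords), len(tgt_subwords)
    if ls == 0 or lt == 0:
        return False
    if ls > max_len or lt > max_len:
        return False
    return max(ls, lt) / min(ls, lt) <= max_ratio
```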
"For monolingual data, we deduplicate and filter sentences with more than 70 tokens or 500 characters.",
"For the experiments described later in Section 5.3, this monolingual data is back-translated with a target-to-source translation model; after doing so, we remove any sentence pairs where the back-translated source is longer than 75 tokens or 550 characters.",
"This results in 216.5M sentences for English German (of which we only use 24M at a time) and 39M for English French.",
"As a final step, we use an in-house language identification tool based on the publicly-available Compact Language Detector 2 3 to remove all pairs with the incorrect source or target language.",
"This was motivated by observing that some training pairs had the incorrect language on one side, including cases where both sides were the same; Khayrallah and Koehn (2018) found that this type of noise is especially harmful to neural models.",
"The classifiers were trained on the target language monolingual data in addition to either an equal amount of source language monolingual data machine-translated into the target language (for the FT classifiers) or the same target sentences round-trip translated through the source language with MT (for the RTT classifiers).",
"In both cases, the MT models were trained only with WMT bitext.",
"The models used to generate the synthetic data have BLEU (Papineni et al., 2002) performance as follows on newstest2014/full: German→English 31.8; English→German 28.5; French→English 39.2; English→French 40.6.",
"Here and elsewhere, we report BLEU scores with SacreBLEU (Post, 2018); see Section 3.3.",
"Both language pairs considered in this work are high-resource.",
"While translationese is a potential concern for all language pairs, in low-resource settings it is overshadowed by general quality concerns stemming from the lack of training data.",
"We leave for future work the application of these techniques to low-resource language pairs.",
"Our NMT models use the transformer-big architecture (Vaswani et al., 2017) implemented in lingvo (Shen et al., 2019) with a shared source-target byte-pair-encoding (BPE) vocabulary (Sennrich et al., 2016b) of 32k types.",
"To stabilize training, we use exponentially weighted moving average (EMA) decay (Buduma and Locascio, 2017).",
"( 3 https://github.com/CLD2Owners/cld2 ) [Table 1 header: Language, Classifier, Bitext, BT Type, % Orig.]",
"For the translationese classifier, we trained a three-layer CNN-based classifier optimized with Adagrad.",
"We picked checkpoints by F1 on the development set, which was newstest2015 for English German and a subset of newstest2013 containing 500 English-original and 500 French-original sentence pairs for English French.",
"We found that the choice of architecture (RNN/CNN) and hyperparameters did not make a substantial difference in classifier accuracy.",
"We report BLEU (Papineni et al., 2002) scores with SacreBLEU (Post, 2018) and include the identification string 4 to facilitate comparison with future work.",
"We also run human evaluations for the best performing systems (Section 4.3).",
"Before evaluating the usefulness of our translationese classifiers for the downstream task of machine translation, we can first evaluate how accurate they are at distinguishing original text from human translations.",
"We use WMT test sets for this evaluation, because they consist of source-original and target-original sentence pairs in equal number.",
"For French, the FT classifier scored 0.81 F1 and the RTT classifier scored 0.68 on newstest2014/full.",
"For German, the FT classifier achieved 0.85 F1 and the RTT classifier scored 0.65 on newstest2015.",
"We note that while the FT classifiers perform reasonably well, the RTT classifiers are less effective.",
"( 4 BLEU + case.mixed + lang.LANGUAGE PAIR + num-refs.1 + smooth.exp + test.SET + tok.intl + version.1.2.15 ) This result is in line with prior work by",
"Kurokawa et al. (2009), who trained an SVM classifier on French sentences to detect translations from English.",
"They used word n-gram features for their classifier and achieved 0.77 F1, but were worried about a potential content effect and so also trained a classifier where nouns and verbs were replaced with corresponding part-of-speech (POS) tags, achieving 0.69 F1.",
"Note that they tested on the Canadian Hansard corpus (containing Canadian parliamentary transcripts in English and French) while we tested on WMT test sets, so the numbers are not directly comparable, but it is interesting to see the similar trends when comparing content-aware and content-unaware versions of the same method.",
"We also point out that Kurokawa et al. (2009) both trained and tested with human-translated sentences, while we trained our classifiers with machine-translated sentences while still testing on human-translated data.",
"The portion of our data classified as target-original by each classifier is reported in Table 1. 4.2 NMT with Translationese-Classified Bitext: Table 2a shows the BLEU scores of three models, all trained on WMT 2014 English→French bitext.",
"They differ in how the data was partitioned: either it wasn't, or tags were applied to those sentence pairs with a target side that a classifier predicted to be original French.",
"[Table 3: Fluency side-by-side human evaluation for WMT English→French newstest2014/full (Table 2a). Src-Orig test set: Untagged decode (BLEU 43.9) preferred 26.6% vs. FT clf. Natural decode (BLEU 41.5) preferred 31.9%; FT clf. Transl. decode (BLEU 44.6) preferred 24.2% vs. FT clf. Natural decode (BLEU 41.5) preferred 30.7%.]",
"We evaluate only the source-original half of the test set because it corresponds to our goal of original→original translation.",
"Despite a BLEU drop, humans rate the natural decode on average as more fluent than both the bitext model output and the same model with the translationese decode.",
"We first note that the model trained on data tagged by the round-trip translation",
"(RTT) classifier performs slightly worse than the baseline.",
"However, the model trained with data tagged by the forward translation (FT) classifier is able to achieve an improvement of 0.5 BLEU on both halves of the test set when biased toward translationese on the source-original half and original text on the target-original half.",
"This, coupled with the observation that the BLEU score on the source-original half sharply drops when adding the tag, indicates that the two halves of the test set represent quite different tasks, and that the model has learned to associate the tag with some aspects specific to generating original text as opposed to translationese.",
"However, we were not able to replicate this positive result on the English German language pair (Table 2b).",
"Interestingly, in this scenario the relative ordering of the FT and RTT models is reversed, with the German RTT-trained model outperforming the FT-trained one.",
"This is also interesting because the German FT classifier achieved a higher F1 score than the French one, indicating that a classifier's performance alone is not a sufficient indicator of its effect on translation performance.",
"One possible explanation for the negative result is that the English→German bitext only contains 5M pairs, as opposed to the 41M for English→French, so splitting the data into two portions could make it difficult to learn both portions' output distributions properly.",
"In the previous subsection, we saw that BLEU for the source-original half of the test set went down when the model trained with FT classifications (FT clf.) was decoded as if the input were target-original (Table 2a).",
"Prior work has shown that BLEU has a low correlation with human judgments when the reference contains translationese but the system output is biased toward original/natural text (Freitag et al., 2019).",
"This is the very situation we find ourselves in now.",
"Consequently, we run a human evaluation to see if the output truly is more natural and thereby preferred by human raters, despite the loss in BLEU.",
"We run both a fluency and an adequacy evaluation for English→French to compare the quality of this system when decoding as if source-original vs. target-original.",
"We also compare the system with the Untagged baseline.",
"All evaluations are conducted with bilingual speakers whose native language is French, and each is rated by 3 different raters, with the average taken as the final score.",
"Our two evaluations are as follows: Adequacy : Raters were shown only the source sentence and the model output.",
"Each output was scored on a 6-point scale.",
"Fluency : Raters saw two target sentences (two models' outputs) without the source sentence, and were asked to select which was more fluent, or whether they were equally good.",
"Fluency human evaluation results are shown in Table 3. We measured inter-rater agreement using Fleiss' Kappa (Fleiss, 1971), which attains a maximum value of 1 when raters always agree.",
"This value was 0.24 for the comparison with the untagged baseline, and 0.16 for the comparison with the translationese decodes.",
"The agreement levels are fairly low, indicating a large amount of subjectivity for this task.",
"However, raters on average still indicated a preference for the FT clf. model's natural decodes.",
"This provides evidence that they are more fluent than both the translationese decodes from the same model and the baseline untagged model, despite the drop in BLEU compared to each.",
"Adequacy human ratings are summarised in Table 4.",
"Both decodes from the FT clf. model scored significantly better than the baseline.",
"This is especially true of the natural decodes, demonstrating that the model does not suffer a loss in adequacy by generating more fluent output, and actually sees a significant gain.",
"We hypothesize that splitting the data as we did here allowed the model to learn a sharper distribution for both portions, thereby increasing the quality of both decode types.",
"Table 4 (adequacy, excerpt): the Untagged baseline (BLEU 43.9) scores 4.51.",
"Some additional evidence for this is the fact that the FT clf. model's training loss was consistently lower than that of the baseline.",
"Translationese tends to be simpler, more standardised and more explicit (Baker et al., 1993) compared to original text and can retain typical characteristics of the source language (Toury, 2012).",
"Toral (2019) proposed metrics attempting to quantify the degree of translationese present in a translation.",
"Following their work, we quantify lexical simplicity with two metrics: lexical variety and lexical density.",
"We also calculate the length variety between the source sentence and the generated translations to measure interference from the source.",
"An output is simpler when it uses a lower number of unique tokens/words.",
"By generating output closer to original target text, our hope is to increase lexical variety.",
"Lexical variety is calculated as the type-token ratio (TTR): TTR = number of types / number of tokens (1). Scarpa (2006) found that translationese tends to be lexically simpler and to have a lower percentage of content words (adverbs, adjectives, nouns and verbs) than original written text.",
"Lexical density is calculated as follows: lexical density = number of content words / number of total words (2). Both MT and humans tend to avoid restructuring the source sentence and stick to sentence structures popular in the source language.",
"This results in a translation with similar length to that of the source sentence.",
"By measuring the length variety, we measure interference in the translation because its length is guided by the source sentence's structure.",
"We compute the normalized absolute length difference at the sentence level, length variety = ||x| - |y|| / |x| (3), and average the scores over the test set of source-target pairs (x, y).",
"Results for all three translationese measurements are shown in Table 5.",
"Table 5 (measuring the degree of translationese for WMT English→French newstest2014/full on the source-original half; columns: lexical variety / lexical density / length variety): Untagged 0.258 / 0.393 / 0.246; FT clf. Transl. 0.255 / 0.396 / 0.264; FT clf. Natural 0.260 / 0.397 / 0.245.",
"Higher lexical variety, lexical density, and length variety indicate less translationese output.",
"Lexical Variety : Using the tag to decode as natural text (i.e. more like original target text) increases lexical variety.",
"This is expected as original sentences tend to use a larger vocabulary.",
"Lexical Density : We also increase lexical density when decoding as natural text.",
"In other words, the model has a higher percentage of content words in its output, which is an indication that it is more like original target-language text.",
"Length Variety : Unlike the previous two metrics, decoding as natural text does not lead to a more natural (i.e. larger) average length variety.",
"One reason may be related to the fact that this is the only metric that also depends on the source sentence: since all of our training pairs feature translationese on either the source or target side, both the tagged and untagged training pairs will feature similar sentence structures, so the model never fully learns to produce different structures.",
"This further illustrates the problem of the lack of original original training data noted in the introduction.",
"Rather than tagging training data with a trained classifier, as explored in the previous sections, it might be possible to tag using much simpler heuristics, and achieve a similar effect.",
"We explore two options here.",
"Here, we partition the training pairs (x, y) according to a simple length ratio |x| / |y|.",
"We use a threshold tau_length empirically calculated from two large monolingual corpora M_x and M_y: tau_length = (1/|M_x| * sum_{x_i in M_x} |x_i|) / (1/|M_y| * sum_{y_i in M_y} |y_i|) (4). For English→French, we found tau_length = 0.8643, meaning that original French sentences tend to have more tokens than English ones.",
"We tag all pairs with a length ratio greater than this threshold (49.8% of the training bitext).",
"Based on the discussion in Section 5.1.3, we expect that |x| / |y| ≈ 1.0 indicates translationese, so in this case the tag should mean 'produce translationese' instead of 'produce original text'.",
"We tag examples with a target-side lexical density of greater than 0.5, which means that the target is more likely to be original than translationese.",
"Please refer to Section 5.1.2 for an explanation of this metric.",
"Table 6 shows the results for this experiment, compared to the untagged baseline and the classifier-tagged model from Table 2a.",
"This table specifically looks at the effect of controlling whether the output should feature more or less translationese on each subset of the test set.",
"We see that the lexical density tagging approach yields expected results, in that the tag can be used to effectively increase BLEU on the target-original portion of the test set.",
"The length-ratio tagging, however, has the opposite effect: producing shorter outputs (decode as if translationese) produces higher target-original BLEU and lower source-original BLEU.",
"We speculate that this data partition has accidentally picked up on some artifact of the data.",
"Two interesting observations from Table 6 are that 1) both heuristic tagging methods perform much more poorly than the classifier tagging method on both test set halves, and 2) all varieties of tagging produce large performance changes (up to -7.2 BLEU).",
"This second observation highlights that tagging can be powerful and dangerous when it does not correspond well with the desired feature.",
"We also investigated whether using a classifier to tag training data improved model performance in the presence of back-translated (BT) data.",
"Caswell et al. (2019) introduced tagged back-translation (TBT), where all back-translated pairs are tagged and no bitext pairs are.",
"They experimented with decoding the model with a tag (as-if-back-translated) but found it harmed BLEU score.",
"However, in our early experiments we discovered that doing this actually improved the model's performance on the target-original portion of the test set, while harming it on the source-original half.",
"Thus, we frame TBT as a heuristic method for identifying target-original pairs: the monolingual data used for the back-translations is assumed to be original, and the target side of the bitext is assumed to be translated.",
"We wish to know whether we can find a better tagging scheme for the combined BT+bitext data, based on a classifier or some other heuristic.",
"Results for English→French models trained with BT data are presented in Table 7a.",
"While combining the bitext classified by the FT classifier with all-tagged BT data yields a minor gain of 0.2 BLEU over the TBT baseline of Caswell et al. (2019), the other methods do not beat the baseline.",
"This indicates that assuming all of the target monolingual data to be original is not as harmful as the error introduced by the classifiers.",
"English→German results are presented in Table 7b.",
"Combining the bitext classified by the RTT classifier with all-tagged BT data matched the performance of the TBT baseline, but none of the models outperformed it.",
"This is expected, given the poor performance of the bitext-only models for this language pair.",
"In Table 8, we show example outputs for WMT English→French comparing the Untagged baseline with the FT clf. natural decodes.",
"In the first example, avec suffisamment d'art is an incorrect word-for-word translation, as the French word art cannot be used in that context.",
"Here the word habilement, which is close to skilfully in English, sounds more natural.",
"In the second example, libre d'impôt is the literal translation of tax-free, but French documents rarely use it; they prefer pas imposable, meaning not taxable.",
"The effects of translationese on MT training and evaluation have been investigated by many prior authors (Kurokawa et al., 2009; Lembersky et al., 2012; Toral et al., 2018; Zhang and Toral, 2019; Graham et al., 2019; Freitag et al., 2019; Edunov et al., 2019; Freitag et al., 2020).",
"Training classifiers to detect translationese has also been done (Kurokawa et al., 2009; Koppel and Ordan, 2011).",
"Similarly to this work, Kurokawa et al. (2009) used their classifier to preprocess MT training data; however, they completely removed target-original pairs.",
"In contrast, Lembersky et al. (2012) used both types of data (without explicitly distinguishing them with a classifier), and used entropy-based measures to cause their phrase-based system to favor phrase table entries with target phrases that are more similar to a corpus of translationese than original text.",
"In this work, we combine aspects from each of these: we train a classifier to partition the training data, and use both subsets to train a single model with a mechanism allowing control over the degree of translationese to produce in the output.",
"We also show with human evaluations that source-original test sentence pairs result in BLEU scores that do not correlate well with translation quality when evaluating models trained to produce more original output.",
"In addition to the methods in Caswell et al. (2019), tagging training data and using the tags to control output is a technique that has been growing in popularity.",
"Table 8, example source: Sorry she didn't phrase it artfully enough for you.",
"Tags on the source sentence have been used to indicate target language in multilingual models (Johnson et al., 2016), formality level in English→Japanese (Yamagishi et al., 2016), politeness in English→German (Sennrich et al., 2016a), gender from a gender-neutral language (Kuczmarski and Johnson, 2018), as well as to produce domain-targeted translation (Kobus et al., 2016).",
"Shu et al. (2019) use tags at training and inference time to increase the syntactic diversity of their output while maintaining translation quality; similarly, Agarwal and Carpuat (2019) and Marchisio et al. (2019) use tags to control the reading level (e.g. simplicity/complexity) of the output.",
"Overall, tagging can be seen as domain adaptation (Freitag and Al-Onaizan, 2016; Luong and Manning, 2015).",
"We have demonstrated that translationese and original text can be treated as separate target languages in a multilingual model, distinguished by a classifier trained using only monolingual and synthetic data.",
"The resulting model has improved performance in the ideal, zero-shot scenario of original original translation, as measured by human evaluation of adequacy and fluency.",
"However, this is associated with a drop in BLEU score, indicating that better automatic evaluation is needed."
] | [
"abstain",
"objective",
"method",
"method",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"objective",
"method",
"objective",
"abstain",
"result",
"other",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"other",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"method",
"other",
"method",
"abstain",
"other",
"other",
"other",
"other",
"other",
"objective",
"abstain",
"abstain"
] |
[
"Knowledge graph (KG) representation learning techniques that learn continuous embeddings of entities and relations in the KG have become popular in many AI applications.",
"With a large KG, the embeddings consume a large amount of storage and memory.",
"This is problematic and prohibits the deployment of these techniques in many real world settings.",
"Thus, we propose an approach that compresses the KG embedding layer by representing each entity in the KG as a vector of discrete codes and then composes the embeddings from these codes.",
"The approach can be trained end-to-end with simple modifications to any existing KG embedding technique.",
"We evaluate the approach on various standard KG embedding evaluations and show that it achieves 50-1000x compression of embeddings with a minor loss in performance.",
"The compressed embeddings also retain the ability to perform various reasoning tasks such as KG inference.",
"Knowledge graphs (KGs) are a popular way of storing world knowledge, lending support to a number of AI applications such as search (Singhal, 2012), question answering (Lopez et al., 2013; Berant et al., 2013) and dialog systems (He et al., 2017; Young et al., 2018).",
"Typical KGs are huge, consisting of millions of entities and relations.",
"With the growth in use of KGs, researchers have explored ways to learn better representations of KGs in order to improve generalization and robustness in downstream tasks.",
"In particular, there has been interest in learning embeddings of KGs in continuous vector spaces (Bordes et al., 2011, 2013; Socher et al., 2013).",
"KG embedding approaches represent entities as learnable continuous vectors while each relation is modeled as an operation in the same space such as translation, projection, etc. (Bordes et al., 2013; Wang et al., 2014; Lin et al., 2015; Ji et al., 2015).",
"These approaches give us a way to perform reasoning in KGs with simple numerical computation in continuous spaces.",
"Despite the simplicity and wide-applicability of KG embedding approaches, they have a few key issues.",
"A major issue is that the number of embedding parameters grows linearly with the number of entities.",
"This is challenging when we have millions or billions of entities in the KG, especially when there are a lot of sparse entities or relations in the KG.",
"There is a clear redundancy in the continuous parameterization of embeddings given that many entities are actually similar to each other.",
"This over-parameterization can lead to a drop in performance due to overfitting in downstream models.",
"The large memory requirement of continuous representations also prevents models that rely on them from being deployed on modest user-facing computing devices such as mobile phones.",
"To address this issue, we propose a coding scheme that replaces the traditional KG embedding layer by representing each entity in the KG with a K-way D-dimensional code (KD code) (van den Oord et al., 2017; Chen et al., 2018; Chen and Sun, 2019).",
"Each entity in the KG is represented as a sequence of D codes, where each code can take values in {1, ..., K}.",
"The codes for each entity are learnt in such a way that they capture the semantics and the relational structure of the KG, i.e., the codes that represent similar or related entities are typically also similar (for example, Barack Obama = 2-1-3-3 and Michelle Obama = 2-1-3-2, for D = 4 and K = 3).",
"The coding scheme is much more compact than traditional KG embedding schemes.",
"We learn the discrete codes for entities using an autoencoder-style model, which learns a discretization function that maps continuous entity representations to discrete codes and a reverse-discretization function that maps the discrete codes back to continuous entity representations.",
"The discretization and reverse-discretization functions are jointly learnt end-to-end.",
"The inherent discreteness of the representation learning problem poses several learning issues.",
"We tackle these issues by resorting to the straight-through estimator (Bengio et al., 2013) or the tempering softmax (Maddison et al., 2016; Jang et al., 2016) and using guidance from existing KG embeddings to smoothly guide learning of the discrete representations.",
"We evaluate our approach on various standard KG embedding evaluations and we find that we can massively reduce the size of the KG embedding layer while suffering only a minimal loss in performance (if at all).",
"We show that the proposed approach for learning discrete KG representations leads to a good performance in the task of link prediction (cloze entity prediction) as well as in the task of KG reasoning and inference.",
"A knowledge graph (KG) G ⊆ E × R × E can be formalized as a set of triplets (e_i, r, e_j) composed of head and tail entities e_i and e_j (e_i, e_j ∈ E, E being the set of entities) and a relation r ∈ R (R being the set of relations); n_e = |E|, n_r = |R|.",
"The goal of learning KG embeddings is to learn vector embeddings e ∈ R^{d_e} for each entity e ∈ E (and possibly also relation embeddings r ∈ R^{d_r}).",
"Typical KG embedding approaches are multilayer neural networks which consist of an embedding component and a scoring component.",
"The embedding component maps each entity to its corresponding embedding.",
"The scoring component learns a scoring function f : E × R × E → R, where f(e_i, r, e_j) defines the score of the triplet (e_i, r, e_j).",
"KG embeddings are learnt by defining a loss function L and solving the following optimization problem: min_Theta sum_{(e_i, r, e_j) in G} L(e_i, r, e_j) (1). Here Theta includes all embedding parameters and any other neural network parameters.",
"The loss function typically encourages the score of a positive triplet ( e i , r, e j ) to be higher than that of a (corrupted) negative triplet.",
"In Table 1, we summarize the scoring function for several existing KG embedding approaches as well as their corresponding entity (and relation) representation parameters.",
"In all the KG embedding models, the number of parameters grows super-linearly with the number of entities and relations in the KG, as well as with the size of their representations.",
"This number can be very large and learning KG embeddings can be a challenge for large, sparse KGs.",
"In this paper, we present a novel coding scheme that significantly reduces the number of embedding parameters.",
"We do so by leveraging recent advances in discrete representation learning.",
"We summarize them below.",
"Typical deep learning methods define an embedding function as F : V → R^d, where V denotes the vocabulary, such as words, sub-words, entities, relations, etc., and each symbol in the vocabulary is mapped to a continuous vector in R^d.",
"The embedding function can be trained separate from the task in a completely unsupervised manner or jointly with other neural net parameters to optimize the target loss function.",
"A common specification of the embedding function in NLP is a lookup table L ∈ R^{n × d} with n = |V|.",
"The total number of bits used to represent this table is O(nd) (32nd if each real number is represented by a 32-bit floating point number).",
"This is problematic for large n and/or d.",
"Thus, various approaches have been proposed to compress embedding layers in neural networks.",
"These include weight-tying (Press and Wolf, 2016; Inan et al., 2016; Li et al., 2018), matrix-factorization based approaches (Acharya et al., 2019), and approaches that rely on gumbel softmax (Baevski and Auli, 2018), vector quantization (Chen and Sun, 2019) and codebook learning (Shu and Nakayama, 2017).",
"In this work, we build on discrete representation learning approaches (van den Oord et al., 2017; Chen et al., 2018; Chen and Sun, 2019).",
"Discrete representation learning gives us a way to mitigate this issue by representing each symbol v in the vocabulary as a discrete vector z_v = [z_v^(1), ..., z_v^(D)].",
"Discrete representations have another clear benefit: they are interpretable and a natural fit for complex reasoning, planning and predictive learning tasks.",
"Learning discrete representations is challenging due to the inherent non-differentiability in the embedding layer.",
"Thus, a number of solutions such as the gumbel softmax trick (Maddison et al., 2016; Jang et al., 2016) and the straight-through estimator (Bengio et al., 2013) have been proposed to tackle this issue.",
"Making the discrete representation learning process differentiable enables end-to-end learning of discrete representations by optimizing task-specific objectives from language modelling and machine translation.",
"In this work, we use discrete representation learning to compress KG embeddings.",
"We describe it below.",
"In order to learn discrete KG representations, we define a quantization function Q : R^d → R^d which (during training) takes raw KG embeddings and produces their quantized representations.",
"Q is composed of two functions: (1) a discretization function D : R^{d_e} → Z^D that maps the continuous KG embedding into a K-way D-dimensional discrete code with cardinality |Z| = K (we call this a KD code), and (2) a reverse-discretization function R : Z^D → R^{d_e} that maps the KD code back to the continuous embedding.",
"During training, both D and R are learned.",
"Then, every entity in the KG is represented by a KD code via applying the discretization function D to save space (compression).",
"The continuous embeddings and the parameters of the discretization function are then no longer needed.",
"In the test/inference stage, the reverse-discretization function R is used to decode the KD codes into regular embedding vectors for every entity.",
"We use vector quantization (Chen et al., 2018; Chen and Sun, 2019) and codebook learning (Cai et al., 2010) to define the discretization and reverse-discretization functions D and R .",
"We describe them below.",
"The goal of the discretization function is to map continuous KG embedding vectors into KD codes.",
"We model the discretization function using nearest neighbor search (Cayton, 2008).",
"Given continuous KG embeddings {e_i | i = 1, ..., n_e} as query vectors, we define a set of K key vectors {k_k | k = 1, ..., K} where k_k ∈ R^{d_e}.",
"In order to learn D-dimensional discrete codes, we partition the query and key vectors into D partitions, where each partition corresponds to one of the D discrete codes: e^(j) ∈ R^{n_e × d_e/D} and k^(j) ∈ R^{K × d_e/D}, j = 1, ..., D.",
"Vector Quantization (VQ): Our first alternative for discretization is vector-quantization (Ballard, 1997), a classical quantization technique for data compression.",
"We assume that the j-th discrete code of the i-th entity, z_i^(j), can be computed by calculating distances between the corresponding query vector partition e_i^(j) and the corresponding key vector partitions {k_k^(j)}, and choosing the one with the minimum distance: z_i^(j) = argmin_k dist(e_i^(j), k_k^(j)) (2). We use the Euclidean distance function dist(a, b) = ||a - b||_2^2 in our experiments.",
"Note that the argmin operation is inherently nondifferentiable.",
"The resulting quantization function Q has no gradient towards the input query vectors.",
"Thus, we use the straight-through estimator (Bengio et al., 2013) to compute a pseudo-gradient.",
"This means that during the forward pass, we compute Q as defined here, but during the backward pass, we use the gradient of the query vectors.",
"Tempering Softmax (TS): Vector quantization is a popular method for learning discrete representations.",
"Yet another popular approach is continuous relaxation of (2) via the tempering softmax (Maddison et al., 2016; Jang et al., 2016).",
"We again use the dot product and softmax for computing the proximity between query and key vectors: z_i^(j) = argmax_k [ exp(<e_i^(j), k_k^(j)> / tau) / sum_{k'} exp(<e_i^(j), k_{k'}^(j)> / tau) ]. Here, tau is the temperature and <a, b> = a^T b denotes the dot product operation.",
"Note that this function still carries an inherent non-differentiability.",
"Hence, we relax the above and compute probability vectors z̃_i^(j) which represent the probability distribution of the j-th dimension of the discrete code for the i-th entity taking a particular value (say k).",
"Given the probability vectors z̃_i^(j), we can compute the discrete codes z_i^(j) simply by taking the argmax.",
"To compute discrete KD codes, we set a small value of tau.",
"As tau → 0, the softmax becomes spiky, concentrated on the true z_i^(j)-th dimension.",
"We again estimate pseudo-gradients by setting a very small tau in the forward pass (i.e. close to the discrete case (eq. 1)) and tau = 1 in the backward pass.",
"The goal of the reverse-discretization function is to map discrete KD codes into continuous KG embedding vectors.",
"We model the reverse-discretization process first by a simple linear model which maps the discrete codes to continuous vectors by looking up a learnt codebook.",
"Then, we present an alternative: a non-linear model for reverse-discretization based on recurrent neural networks.",
"Codebook Lookup (CL): We first define the reverse-discretization function in a simple manner where we substitute every discrete code with a continuous vector from a codebook.",
"Let C be a set of codebooks.",
"C consists of a number of codebooks: a separate codebook C^(j) for each position j = 1, ..., D in the KD code.",
"We model each codebook simply as a set of vectors: C^(j) = {c_k^(j) | k = 1, ..., K} where c_k^(j) ∈ R^{d_e/D}.",
"We compute the embedding vector for the j-th dimension of the i-th entity as e_i^(j) = c_{z_i^(j)}^(j), i.e., the codebook entry selected by the code z_i^(j). The final entity embedding vector e_i is obtained by concatenating the embedding vectors for each dimension: e_i = [e_i^(1), ..., e_i^(D)].",
"Non-linear Reconstruction (NL): While the codebook lookup approach is simple and efficient, due to its linear nature, the capacity of the generated KG embedding may be limited.",
"Thus, we also employ neural network based non-linear approaches for embedding reconstruction.",
"We propose a non-linear embedding reconstruction approach based on the Bi-LSTM network.",
"Given the KD code z i as a sequence of codes z (1) i , . . . , z ( D ) i , we map the KD code to a continuous embedding vector by feeding the code to a Bi-LSTM followed by mean pooling.",
"Let (h_i^(1), ..., h_i^(D)) = Bi-LSTM(z_i^(1), ..., z_i^(D)) be the hidden state representations for the various Bi-LSTM cells.",
"Finally, we reconstruct the entity embedding e_i by mean-pooling the code embedding vectors followed by a linear transformation: e_i = W_rev^T (sum_j h_i^(j)).",
"We also tried to map the KD code to a continuous embedding vector by feeding the code to variations of a character level CNN (Kim et al., 2016).",
"However, the Char CNN model always performed worse than the Bi-LSTM model in our experiments.",
"This was because our discretization function which discretizes contiguous partitions of the continuous representation better suits the Bi-LSTM reconstruction model.",
"In the future, we would like to consider more complex discretization functions with other complex non-linear reconstruction models.",
"Storage Efficiency: A key motivation of learning discrete representations is that we can significantly compress the embedding layer at test time.",
"The size of the embedding layer for typical KG representations is 32 n_e d_e bits (assuming a 32-bit representation); this can be very large.",
"In contrast, with discrete representation learning, we only need to store the code embeddings {z_i} and the parameters used in the reverse-discretization function, such as the codebooks C or the parameters of the embedding reconstruction Bi-LSTM {theta_LSTM, W_rev}.",
"The entity codes require n e D log 2 K bits.",
"The codebook lookup approach needs to also maintain codebooks which require 32 Kd e parameters and the non-linear reconstruction approach requires Dd (cid:48) 6 parameters (two set of parameter matrices each for the input, output and forget gates) for the Bi-LSTM and d e d (cid:48) parameters for storing W rev a total of (6 D + d e ) d (cid:48) parameters.",
"Here, d (cid:48) is the size of the code embedding vectors.",
"In both codebook lookup and non-linear reconstruction formulations, discrete representation learning neatly decouples the KG size (number of entities) and dimensionality of the continuous embeddings.",
"Thus, the discrete embedding layer can be compactly stored as typically D and log 2 K are smaller than 32 d e (considering only the dominating term n e ).",
"Test Time Inference of Embeddings: At test time, we retrieve continuous embeddings for an entity by looking up the codebook or running inference on the reconstruction model using its discrete representation.",
"For codebook lookup, the steps involved are",
"(a) looking up a simple index for each code, and",
"(b) concatenation.",
"Since only index lookups and concatenation are needed, the extra computation complexity and memory footprint are very small O ( D ) time and memory.",
"In the nonlinear reconstruction setting, we need to run inference on the Bi-LSTM model.",
"This requires O ( D ) matrix vector multiplications (to compute various LSTM gates) which takes O ( Dd e d (cid:48) ) time.",
"Finally, we have another linear transformation W rev this takes O ( d e d (cid:48) ) time.",
"We can further cache the embedding lookups and various intermediate results such as matrix vector products to improve performance.",
"We show in our results that the test time inference overhead is typically very small.",
"Learning: Similar to previous continuous KG representation learning methods, we learn discrete entity representations by minimizing the triplet loss function.",
"We extend equation 1 as: min { z e } ,, (cid:88) ( e i ,r,e j ) G L { z e } ,, ( e i , r, e j | , ) (3) Here, z e are code embeddings, are the parameters of the reverse-discretization function ( C or { LSTM , W rev } ) and denotes parameters of the KG embedding approaches (listed in Table 1).",
"The aforementioned loss function (eq 3) is differentiable w.r.t. the embedding parameters and parameters of entity representation learning methods.",
"However, the discrete codes introduce a non-differentiability.",
"Thus, we use straight-through (Bengio et al., 2013) or the tempering softmax (Maddison et al., 2016; Jang et al., 2016) to estimate pseudo-gradients as described before (section 3.1).",
"Guidance from KG embeddings: We find that even with sophisticated discrete representation learning methods, solving the above optimization problem can be challenging in practice.",
"Due to discreteness of the problem, this can lead to a suboptimal solution where discrete codes are not as good.",
"Therefore, we also use guidance from continuous KG embeddings to solve (3) when provided 2 .",
"The key idea is that in addition to optimizing (3), we can encourage the reconstructed embeddings from the learnt discrete codes to mimic continuous embeddings.",
"In order to provide this guidance from continuous embeddings, during the training, instead of using the reconstructed embedding vector generated from the discrete code, we use a weighted average of the reconstructed embeddings and continuous embeddings obtained using methods described in Table 1: (1 ) D R ( e ) + e .",
"Here (0 , 1) is a linear interpolant for selecting between reconstructed embeddings and pre-learnt continuous embeddings.",
"We initialize to 1 and gradually decrease as training proceeds.",
"This enables the method to gradually rely more and more on reconstruction from discrete embeddings.",
"We also add a regularization term ||D R ( e ) e || 22 during the training to encourage the reconstructed embeddings to match the pre-learnt continuous embeddings.",
"This procedure is similar to knowledge-distillation guidance (Hinton et al., 2015) in previous discrete representation learning works (Chen et al., 2018).",
"Here (0 , 1) is a linear interpolant for selecting between reconstructed embeddings and pre-learnt continuous embeddings.",
"We initialize to 1 and gradually decrease as training proceeds.",
"This enables the method to gradually rely more and more on reconstruction from discrete embeddings.",
"We also add a regularization term ||D R ( e ) e || 22 during the training to encourage the reconstructed embeddings to match the pre-learnt continuous em-2 We show in our experiments that this guidance, while helpful, is not always needed.",
"We compare the baseline continuous representations described earlier in Table 1 with four discrete representation learning techniques described in this paper:",
"VQ-CL: D = VQ and R = CL VQ-NL: D = VQ and R = NL TS-CL: D = TS and R = CL TS-NL: D = TS and R = NL",
"We evaluate our approach on four standard link prediction datasets:",
"FB15k (Bordes et al., 2013) is a subset of Freebase.",
"FB15k-237 (Toutanova et al., 2015) is a subset of the FB15k dataset created by removing inverse relations that cause test leakage.",
"WN18 (Bordes et al., 2013) is a subset of WordNet.",
"WN18RR (Dettmers et al., 2018) is a subset of the WN18 dataset created by removing inverse relations.",
"We summarize all the data statistics in Table 2. We also use the Countries dataset (Bouchard et al., 2015) for some in-depth analysis of inference abilities of discrete representations.",
"We implement discrete KG representation learning by extending OpenKE (Han et al., 2018), an open-source framework for learning KG embeddings implemented on PyTorch 3 .",
"We train and test all our models on a single 2080Ti system.",
"We set K = 32 and D = 10 in our experiments unless stated otherwise.",
"For the linear embedding transformation function in the non-linear reconstruction approach, we use a hidden layer of 100 hidden units.",
"We 3 https://github.com/thunlp/OpenKE set as = 1 t at the t th epoch.",
"We tune the regularization coefficient using grid search on the validation set.",
"Link Prediction: We learn discrete representations corresponding to various continuous KG representations (described in Table 1) and compare the obtained discrete representations with their continuous counterparts.",
"We use the same hyper-parameter settings as in the original KG embedding papers.",
"We generate n e candidate triples for each test triple by combining the test entity-relation pair with all possible entities E .",
"We use the filtered setting (Bor-des et al., 2013), i.e. all known true triples are removed from the candidate set except for the current test triple.",
"We use standard evaluation metrics previously used in the literature: mean reciprocal rank (MRR) and hits@10 (H@10).",
"Mean reciprocal rank is the average of the inverse of the mean rank assigned to the true triple over all candidate triples.",
"Hits@10 measures the percentage of times a true triple is ranked within the top 10 candidate triples.",
"In addition, in order to report the compression efficiency of the discrete representations, we also report the compression ratio which is computed as follows: CR = Storage(continuous) Storage(discrete) Here, Storage(continuous) is the storage used to store full continuous KG representations.",
"Stor-age(discrete) is the storage used in the discrete representation learning method (during the testing stage).",
"This includes discrete KG representations as well as parameters of the reverse-discretization function (i.e. codebook or Bi-LSTM parameters).",
"Tables 3, 4, 5 and 6 show our results on the link prediction task on the four datasets respectively.",
"In Table 3, we compare various continuous representations with the four discrete representation learning techniques described in this paper.",
"We find that the discrete representations sustain only minor losses in performance (and are sometimes actually better than their continuous counterparts) in terms of both evaluation metrics: MRR and H@10, while being able to obtain significant embedding compression (42x-585x).",
"Table 3 also compares the different discrete representation learning approaches.",
"We observe that TS-NL which uses tempering softmax and non-linear reconstruction performs the best in most of the settings.",
"This observation was also Continuous CR VQ-CL TS-CL CR VQ-NL TS-NL MRR H@10 (CL) MRR H@10 MRR H@10 (NL) MRR H@10 MRR H@10 TransE 0.463 0.749 46.3 0.462 0.748 0.467 0.749 42.6 0.463 0.746 0.477 0.755 DistMult 0.798 0.893 77.6 0.750 0.859 0.775 0.864 71.4 0.756 0.868 0.790 0.882 HolE 0.524 0.739 112.6 0.515 0.708 0.517 0.711 103.8 0.517 0.717 0.525 0.726 ComplEx 0.692 0.840 262.3 0.651 0.802 0.653 0.814 228.4 0.670 0.818 0.678 0.833 ConvE 0.657 0.831 77.6 0.618 0.774 0.620 0.798 71.4 0.626 0.793 0.644 0.820 RotatE 0.797 0.884 585.3 0.765 0.840 0.782 0.876 495.2 0.789 0.878 0.798 0.881 HypER 0.790 0.734 177.5 0.743 0.706 0.754 0.715 161.1 0.758 0.718 0.763 0.726 TuckER 0.795 0.741 177.5 0.773 0.714 0.782 0.729 161.1 0.787 0.723 0.783 0.726 Table 3: Results of several models and our proposed discrete counterparts evaluated on the FB15K dataset Continuous Discrete (TS-NL) MRR H@10 CR MRR H@10 TransE 0.495 0.943 103.3 0.499 0.940 DistMult 0.797 0.946 143.2 0.774 0.921 HolE 0.938 0.949 228.6 0.938 0.929 ComplEx 0.941 0.947 437.1 0.934 0.936 ConvE 0.943 0.956 143.2 0.933 0.936 RotatE 0.949 0.959 952.6 0.946 0.952 HypER 0.951 0.947 327.9 0.946 0.942 TuckER 0.953 0.949 327.9 0.924 0.920 Table 4: Results of several models and our proposed discrete counterpart (TS-NL) evaluated on the WN18 dataset Continuous Discrete (TS-NL) MRR H@10 CR MRR H@10 TransE 0.294 0.465 43.1 0.298 0.463 DistMult 0.241 0.419 71.8 0.241 0.422 HolE 0.318 0.430 104.0 0.316 0.428 ComplEx 0.247 0.428 228.5 0.238 0.411 ConvE 0.325 0.501 71.8 0.321 0.488 RotatE 0.338 0.533 495.2 0.336 0.528 HypER 0.341 0.252 161.3 0.332 0.286 TuckER 0.358 0.266 161.3 0.331 0.279 Table 5: Results of several models and our proposed discrete counterpart (TS-NL) evaluated on the FB15K-237 dataset.",
"made on the other three datasets.",
"Hence, in Tables 4, 5 and 6, we only compare TS-NL with the continuous representations.",
"We again observe that TS-NL compresses the KG embeddings (71x-952x) while suffering only a minor loss in performance.",
"Logical Inference with Discrete representations: KG embeddings give us a way to perform logical inference and reason about knowledge.",
"In this experiment, we explore if discrete representations retain the ability to perform inference and reasoning in KGs.",
"We evaluate our models on the countries dataset (Bouchard et al., 2015) which was designed to test the logical inference capabilities of KG embedding models.",
"We use the same evaluation protocol as in (Nickel et al., 2016) for our Continuous Discrete (TS-NL) MRR H@10 CR MRR H@10 TransE 0.226 0.501 105.2 0.230 0.498 DistMult 0.430 0.490 143.8 0.423 0.476 HolE 0.338 0.438 228.8 0.346 0.435 ComplEx 0.440 0.510 437.2 0.433 0.494 ConvE 0.430 0.520 143.8 0.431 0.500 RotatE 0.476 0.571 952.6 0.452 0.546 HypER 0.465 0.436 328.0 0.460 0.437 TuckER 0.470 0.443 328.0 0.452 0.442 Table 6: Results of several models and our proposed discrete counterpart (TS-NL) evaluated on the WN18RR dataset experiments.",
"The countries dataset contains 2 relations and 272 entities (244 countries, 5 regions and 23 subregions) and 3 tasks are posed, requiring subsequently longer and harder inference than the previous one: 1. Task S1 poses queries of the form locatedIn(c; ?), and the answer is one of the five regions.",
"2. Task S2 poses queries of the form neighbo-rOf(c1; c2) locatedIn(c2;",
"r) = locate-dIn(c1;",
"r) 3. Task S3 poses queries of the form neighbo-rOf(c1; c2) locatedIn(c2;",
"s) locatedIn(s;",
"r) = locatedIn(c1;",
"r): We use the AUC-PR metric, which was also used in previous works (Bouchard et al., 2015; Nickel et al., 2016).",
"Table 7 shows our results.",
"We find that TS-NL is a very good KG representation for KG inference.",
"Infact, we find that TS-NL outperforms many of their continuous counterparts.",
"Additional Inference Cost: A tradeoff in learning discrete KG representations is that the inference time increases as we need to decompress discrete representations into continuous embeddings for every entity before using them by looking up the S1 S2 S3 Ct.",
"codebook or running inference on the LSTM reconstruction model.",
"In practice, we found that this additional inference cost was very small.",
"For example, the additional inference cost of running TransE on the entire FB15K test set was 1 minute for codebook lookup and 2 .",
"5 minutes for non-linear reconstruction approach on our single 2080Ti system.",
"The additional inference cost for the other continuous KG representations were similarly low.",
"Varying K and D : There is an evident tradeoff between the extent of compression (which is dictated by the choice of K and D) and model performance.",
"In order to explore this tradeoff, we plot heatmaps of performance (MRR) and compression ratio (CR) on the FB15K test set as we vary K and D for TransE in Figure 1. Not surprisingly, the performance drops as the compression increases.",
"Plotting these heat maps would allow the end user to pick K and D depending on their tolerance to loss in performance.",
"Dependence on guidance from continuous embeddings: We evaluate the contribution of the guidance from continuous embeddings in learning discrete KG representations.",
"Figure 2 compares the test MRR for TS-NL as training proceeds on the FB-15K dataset when we do or do not have guidance from the continuous representation (TransE).",
"We observe that learning in the unguided model is much slower than the guided model.",
"However, the guided model achieves almost similar performance in the end.",
"Thus, we conclude that while guidance helps us achieve faster and more stable convergence, it is not necessary to learn discrete representations.",
"Quality of the Discrete representations: We also assess the quality of the learnt discrete entity representations directly as features for the link predic-5 10 25 D 2 8 32 128 K 0.451 0.456 0.462 0.465 0.468 0.480 0.473 0.477 0.483 0.477 0.480 0.486 0.455 0.460 0.465 0.470 0.475 0.480 0.485 MRR 5 10 25 D 2 8 32 128 K 440.2 217.7 100.2 148.3 72.4 39.7 82.4 42.6 29.8 51.8 26.6 14.7 50 100 150 200 250 300 350 400 MRR Figure 1: Heatmaps of performance (MRR) and CR for TS-NL on FB15K dataset as we vary K and D darker is better.",
"tion task.",
"In this case, we only retain the discrete entity representations learnt by TS-NL and learn a new LSTM based non-linear reverse-discretization on the validation set.",
"Then, we obtain the link-prediction performance on the test set as before (see Table 8 for transfer results on the FB15K dataset).",
"We observe that the performance of this transfer model is close to that of the original model which used a pre-trained reverse-discretization model (compare Table 8 with the shaded part of Table 3).",
"Note that, in the transfer setting, we can achieve much higher compression as we do not even need to store the reverse-discretization model.",
"Interpretability of discrete representations: The discrete codes provide us with additional interpretability which continuous representations can lack.",
"In Table 9, we show a sample of learned codes for the two datasets.",
"We observe that semantically similar entities are assigned to close-by codes.",
"Deep learning model compression has attracted many research efforts in the last few years (Han et al., 2015).",
"These efforts include network pruning (Reed, 1993; Castellano et al., 1997), weight sharing (Ullrich et al., 2017), quantization (Lin et al., 2016), low-precision computation (Hwang and Sung, 2014; Courbariaux et al., 2015) and knowledge distillation (Hinton et al., 2015) These techniques can also be used for embedding compression.",
"Press and Wolf (2016) and Inan et al. (2016) propose weight-tying approaches that learn input and output representations jointly.",
"Matrix factorization-based methods (Acharya et al., 2019; Shu and Nakayama, 2017; Li et al., 2018) have also been proposed which approximate an embedding matrix with smaller matrices or clusters.",
"Closest to our work are (Shu and Nakayama, 2017; Chen et al., 2018; Chen and Sun, 2019) who present similar approaches to learn discrete codings for word embeddings using multiple codebooks, i.e. product quantization (Jegou et al., 2010).",
"Similar techniques have used been used by van den Oord et al. (2017) who extend VAEs to learn discrete representations using vector quantization in the image domain.",
"This allows the VAE model to circumvent its well known issues of posterior collapse.",
"All these previous works have been applied to the image domain, and sometimes in language to learn discrete word embeddings.",
"In this work, we present the first results on compressing KG embeddings and also show how the compressed embeddings can be used to support various knowledge based applications such as KG inference.",
"The embedding layer contains majority of the parameters in any representation learning approach on knowledge graphs.",
"This is a barrier in successful deployment of models using knowledge graphs at scale on user-facing computing devices.",
"In this work, we proposed novel and general approaches for KG embedding compression.",
"Our approaches learn to represent entities in a KG as a vector of discrete codes in an end-to-end fashion.",
"At test time, the discrete KG representation can be cheaply and efficiently converted to a dense embedding and then used in any downstream application requiring the use of a knowledge graph.",
"We evaluated our proposed methods on different link prediction and KG inference tasks and show that the proposed methods for KG embedding compression can effectively compress the KG embedding table without suffering any significant loss in performance.",
"In this work, we only considered the problem of learning discrete entity representations.",
"In the future, we would like to jointly learn discrete representations of entities as well as relations.",
"MS would like to thank the anonymous reviewers, along with Karen Livescu, Kevin Gimpel and Shun-ing Jin for their valuable comments and suggestions on this work."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"result",
"objective",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"result",
"other",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"other",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"result",
"result",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"objective",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"other"
] |
[
"Natural language processing covers a wide variety of tasks predicting syntax, semantics, and information content, and usually each type of output is generated with specially designed architectures.",
"In this paper, we provide the simple insight that a great variety of tasks can be represented in a single unified format consisting of labeling spans and relations between spans, thus a single task-independent model can be used across different tasks.",
"We perform extensive experiments to test this insight on 10 disparate tasks spanning dependency parsing (syntax), semantic role labeling (semantics), relation extraction (informa-tion content), aspect based sentiment analysis (sentiment), and many others, achieving performance comparable to state-of-the-art specialized models.",
"We further demonstrate benefits of multi-task learning, and also show that the proposed method makes it easy to analyze differences and similarities in how the model handles different tasks.",
"Finally, we convert these datasets into a unified format to build a benchmark, which provides a holistic testbed for evaluating future models for generalized natural language analysis.",
"A large number of natural language processing (NLP) tasks exist to analyze various aspects of human language, including syntax (e.g., constituency and dependency parsing), semantics (e.g., semantic role labeling), information content (e.g., named entity recognition and relation extraction), or sentiment (e.g., sentiment analysis).",
"At first glance, these tasks are seemingly very different in both the structure of their output and the variety of information that they try to capture.",
"To handle these different characteristics, researchers usually use specially designed neural network architectures.",
"In this paper we ask the simple questions: are the Figure 1: An example from BRAT, consisting of POS, NER, and RE.",
"task-specific architectures really necessary?",
"Or with the appropriate representational methodology, can we devise a single model that can perform and achieve state-of-the-art performance on a large number of natural language analysis tasks ?",
"Interestingly, in the domain of efficient human annotation interfaces , it is already standard to use unified representations for a wide variety of NLP tasks.",
"Figure 1 shows one example of the BRAT (Stenetorp et al., 2012) annotation interface, which has been used for annotating data for tasks as broad as part-of-speech tagging, named entity recognition, relation extraction, and many others.",
"Notably, this interface has a single unified format that consists of spans (e.g., the span of an entity), labels on the spans (e.g., the variety of entity such as per-son or location), and labeled relations between the spans (e.g., born-in).",
"These labeled relations can form a tree or a graph structure, expressing the linguistic structure of sentences (e.g., dependency tree).",
"We detail this BRAT format and how it can be used to represent a wide number of natural language analysis tasks in Section 2. The simple hypothesis behind our paper is: if humans can perform natural language analysis in a single unified format, then perhaps machines can as well .",
"Fortunately, there already exist NLP models that perform span prediction and prediction of relations between pairs of spans, such as the end-to-end coreference model of Lee et al. (2017).",
"We extend this model with minor architectural mod-ifications (which are not our core contributions) and pre-trained contextualized representations (e.g., Information Extraction POS Parsing SRL Sentiment NER RE Coref.",
"BERT; Devlin et al. (2019) 1 ) then demonstrate the applicability and versatility of this single model on 10 tasks, including named entity recognition (NER), relation extraction (RE), coreference resolution (Coref.), open information extraction (Ope-nIE), part-of-speech tagging (POS), dependency parsing (Dep.), constituency parsing (Consti.), semantic role labeling (SRL), aspect based sentiment analysis (ABSA), and opinion role labeling (ORL).",
"While previous work has used similar formalisms to understand the representations learned by pre-trained embeddings (Tenney et al., 2019a,b), to the best of our knowledge this is the first work that uses such a unified model to actually perform analysis .",
"Moreover, we demonstrate that despite the model's simplicity, it can achieve comparable performance with special-purpose state-of-the-art models on the tasks above (Table 1).",
"We also demonstrate that this framework allows us to easily perform multi-task learning (MTL), leading to improvements when there are related tasks to be learned from or data is sparse.",
"Further analysis shows that dissimilar tasks exhibit divergent attention patterns, which explains why MTL is harmful on certain tasks.",
"We have released our code and the G eneral L anguage A nalysis D atasets (GLAD) benchmark with 8 datasets covering 10 tasks in the BRAT format 1 In contrast to work on pre-trained contextualized representations like ELMo (Peters et al., 2018) or BERT (Devlin et al., 2019) that learn unified features to represent the input in different tasks, we propose a unified representational methodology that represents the output of different tasks.",
"Analysis models using BERT still use special-purpose output predictors for specific tasks or task classes.",
"at https://github.com/neulab/cmu-multinlp , and provide a leaderboard to facilitate future work on generalized models for NLP.",
"In this section, we explain how the BRAT format can be used to represent a large number of tasks.",
"There are two fundamental types of annotations: span annotations and relation annotations.",
"Given a sentence x = [ w 1 , w 2 , ..., w n ] of n tokens, a span annotation ( s i , l i ) consists of a contiguous span of tokens s i = [ w b i , w b i +1 , ..., w e i ] and its label l i ( l i L ), where b i / e i are the start/end indices respectively, and L is a set of span labels.",
"A relation annotation ( s j , s k , r jk ) refers to a relation r jk ( r jk R ) between the head span s j and the tail span s k , where R is a set of relation types.",
"This span-relation representation can easily express many tasks by defining L and R accordingly, as summarized in Table 2a and Table 2b.",
"These tasks fall in two categories: span-oriented tasks , where the goal is to predict labeled spans (e.g., named entities in NER) and relation-oriented tasks , where the goal is to predict relations between two spans (e.g., relation between two entities in RE).",
"For example, constituency parsing (Collins, 1997) is a span-oriented task aiming to produce a syntactic parse tree for a sentence, where each node of the tree is an individual span associated with a constituent label.",
"Coreference resolution (Pradhan et al., 2012) is a relation-oriented task that links an expression to its mentions within or beyond a single sentence.",
"Dependency parsing (Kubler et al., Task Spans annotated with labels NER Barack Obama person was born in Hawaii location . Consti. And their suspicions NP of each other NP PP NP run deep ADVP VP . SPOS What WP kind NN of IN memory NN ? ABSA Great laptop that offers many great features positive ! Table 2a: Span-oriented tasks. Spans are annotated by underlines and their labels. Task Spans and relations annotated with labels RE The burst has been caused by pressure. cause-effect Coref. I voted for Tom because he is clever. coref. SRL We brought you the tale of two cities. ARG0 ARG2 ARG1 OpenIE The four lawyers climbed out from under a table. ARG0 ARG1 Dep. The entire division employs about 850 workers. det amod nsubj advmod nummod dobj ORL We therefore as MDC do not accept this result. holder target Table 2b: Relation-oriented tasks. Directed arcs indicate the relations between spans. 2009) is also a relation-oriented task that aims to relate a word (single-word span) to its syntactic parent word with the corresponding dependency type.",
"Detailed explanations of all tasks can be found in Appendix A. While the tasks above represent a remarkably broad swath of NLP, it is worth mentioning what we have not covered, to properly scope this work.",
"Notably, sentence-level tasks such as text classification and natural language inference are not covered, although they can also be formulated using this span-relation representation by treating the entire sentence as a span.",
"We chose to omit these tasks because they are already well-represented by previous work on generalized architectures (Lan and Xu, 2018) and multi-task learning (Devlin et al., 2019; Liu et al., 2019), and thus we mainly focus on tasks using phrase-like spans.",
"In addition, the span-relation representations described here are designed for natural language analysis , and cannot handle tasks that require generation of text, such as machine translation (Bojar et al., 2014), dialog response generation (Lowe et al., 2015), and summarization (Nallapati et al., 2016).",
"There are also a small number of analysis tasks such as semantic parsing to logical forms (Banarescu et al., 2013) where the outputs are not directly associated with spans in the input, and handling these tasks is beyond the scope of this work.",
"Now that it is clear that a very large number of analysis tasks can be formulated in a single format, we turn to devising a single model that can solve these tasks.",
"We base our model on a span-based model first designed for end-to-end coreference resolution (Lee et al., 2017), which is then adapted for other tasks (He et al., 2018; Luan et al., 2018, 2019; Dixit and Al-Onaizan, 2019; Zhang and Zhao, 2019).",
"At the core of the model is a module to represent each span as a fixed-length vector, which is used to predict labels for spans or span pairs.",
"We first briefly describe the span representation used and proven to be effective in previous works, then highlight some details we introduce to make this model generalize to a wide variety of tasks.",
"Span Representation Given a sentence x = [ w 1 , w 2 , ..., w n ] of n tokens, a span s i = [ w b i , w b i +1 , ..., w e i ] is represented by concatenating two components: a content representation z ci calculated as the weighted average across all token embeddings in the span, and a boundary representation z ui that concatenates the embeddings at the start and end positions of the span.",
"Specifically, c 1 , c 2 , ..., c n = TokenRepr ( w 1 , w 2 , ..., w n ) , (1) u 1 , u 2 , ..., u n = BiLSTM ( c 1 , c 2 , ..., c n ) , (2) z ci = SelfAttn ( c b i , c b i +1 , ..., c e i ) , (3) z ui = [ u b i ; u e i ] , z i = [ z ci ; z ui ] , (4) where TokenRepr could be non-contextualized, such as GloVe (Pennington et al., 2014), or contextualized, such as BERT (Devlin et al., 2019).",
"We refer to Lee et al. (2017) for further details.",
"Span and Relation Label Prediction Since we extract spans and relations in an end-to-end fashion, we introduce two additional labels NEG SPAN and NEG REL in L and R respectively.",
"NEG SPAN indicates invalid spans (e.g., spans that are not named entities in NER) and NEG REL indicates invalid span pairs without any relation between them (i.e., no relation exists between two arguments in SRL).",
"We first predict labels for all spans up to a length",
"of l words using a multilayer perceptron (MLP): softmax(MLP_span(z_i)) ∈ Δ^{|L|}, where Δ^{|L|} is the |L|-dimensional probability simplex.",
"Then we keep the top K = λ·n spans with the lowest NEG SPAN probability for relation prediction, for efficiency, where a smaller pruning threshold λ indicates more aggressive pruning.",
"Another MLP is applied to pairs of the remaining spans to produce their relation scores: o_jk = MLP_rel([z_j; z_k; z_j ⊙ z_k]) ∈ R^{|R|}, where j and k index two spans and ⊙ denotes the element-wise product.",
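The span pruning and relation scoring steps above can be sketched as follows; the single linear layers standing in for MLP_span and MLP_rel, and the label index NEG_SPAN = 0, are illustrative assumptions:

```python
import numpy as np

NEG_SPAN = 0  # assumed index of the NEG SPAN label in L

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def prune_spans(span_vecs, W_span, ratio, n_tokens):
    # label distribution per span (a single linear layer stands in for MLP_span);
    # keep the K = ratio * n spans with the lowest NEG SPAN probability
    probs = softmax(span_vecs @ W_span)
    k = max(1, int(ratio * n_tokens))
    return np.argsort(probs[:, NEG_SPAN])[:k]

def relation_scores(z_j, z_k, W_rel):
    # o_jk = MLP_rel([z_j ; z_k ; z_j * z_k]) with an element-wise product feature
    feat = np.concatenate([z_j, z_k, z_j * z_k])
    return feat @ W_rel
```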
"Application to Disparate Tasks For most of the tasks, we can simply maximize the probability of the ground truth relation for all pairs of the remaining spans .",
"However, some tasks might have different requirements, e.g., coreference resolution aims to cluster spans referring to the same concept and we do not care about which antecedent a span is linked to if there are multiple ones.",
"Thus, we provide two training loss functions: 1. Pairwise: maximize the probabilities of the ground truth relations for all pairs of the remaining spans independently: softmax(o_jk)_{r_jk}, where r_jk indexes the ground truth relation.",
"2. Head: maximize the probability of the ground truth head spans for a specific span s_j: Σ_{k ∈ head(s_j)} softmax([o_j1, o_j2, ..., o_jK])_k, where head(·) returns the indices of one or more head spans and o_jk is the corresponding scalar from the relation score vector, indicating how likely the two spans are related.",
"We use option 1 for all tasks except coreference resolution, which uses option 2. Note that the above loss functions differ only in how the relation scores are normalized; the other parts of the model remain the same across different tasks.",
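The two options differ only in the axis over which scores are normalized; a small sketch under the assumption that scores are already computed (a vector o_jk over relation labels for option 1, a vector o_j of per-candidate scalar scores for option 2):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def pairwise_loss(o_jk, r_jk):
    # option 1: normalize over relation labels for a single span pair and
    # maximize the probability of the gold relation r_jk
    return -np.log(softmax(o_jk)[r_jk])

def head_loss(o_j, head_indices):
    # option 2 (coreference): normalize over the K candidate spans for span j
    # and maximize the total probability mass on the gold head span(s)
    p = softmax(o_j)
    return -np.log(p[head_indices].sum())
```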
"At test time, we follow previous inference methods to generate valid outputs.",
"For coreference resolution, we link a span to the antecedent with highest score (Lee et al., 2017).",
"For constituency parsing, we use greedy top-down decoding to generate a valid parse tree (Stern et al., 2017).",
"For dependency parsing, each word is linked to exactly one parent with the highest relation probability.",
"For other tasks, we predict relations for all span pairs and use those not predicted as NEG REL to construct outputs.",
"Our core insight is that the above formulation is largely task-agnostic , meaning that a task can be modeled in this framework as long as it can be formulated as a span-relation prediction problem with properly defined span labels L and relation labels R .",
"As shown in Table 1, this unified Span-Relation (SpanRel) model makes it simple to scale to a large number of language analysis tasks, with breadth far beyond that of previous work.",
"Multi-task Learning The SpanRel model makes it easy to perform multi-task learning (MTL) by sharing all parameters except for the MLPs used for label prediction.",
"However, because different tasks capture different linguistic aspects, they are not equally beneficial to each other.",
"It is expected that jointly training on related tasks is helpful, while forcing the same model to solve unrelated tasks might even hurt the performance (Ruder, 2017).",
"Compared to manually choosing source tasks based on prior knowledge, which might be sub-optimal when the number of tasks is large, SpanRel offers a systematic way to examine relative benefits of source-target task pairs by either performing pairwise MTL or attention-based analysis, as we will show in Section 4.3.",
"We first describe our G eneral L anguage A nalysis D atasets (GLAD) benchmark and evaluation metrics, then conduct experiments to (1) verify that SpanRel can achieve comparable performance across all tasks (Section 4.2), and (2) demonstrate its benefits in multi-task learning (Section 4.3).",
"GLAD Benchmark and Evaluation Metrics As summarized in Table 3, we convert 8 widely used datasets with annotations of 10 tasks into the BRAT format and include them in the GLAD benchmark.",
"It covers diverse domains, providing a holistic testbed for natural language analysis evaluation.",
"The major evaluation metric is span-based F 1 (denoted as F 1 ), a standard metric for SRL.",
"[Footnote 2: the small version of Lee et al. (2017)'s method, with 100 antecedents and no speaker features.] Precision is the proportion of extracted spans (spans not predicted as NEG SPAN) that are consistent with",
"For OpenIE and ORL, we use span-based F1 instead of the syntactic-head-based F1 and binary coverage F1 used in the original papers, because those metrics are biased towards extracting long spans.",
"For SRL, we choose to compare with He et al. (2018) because they also extract predicates and arguments in an end-to-end way.",
"We follow Xu et al. (2019) in reporting the accuracy of the restaurant and laptop domains separately for ABSA.",
"the ground truth.",
"Recall is the proportion of ground truth spans that are correctly extracted.",
"Span F1 is also applicable to relations: an extracted relation (a relation not predicted as NEG REL) is correct iff both head and tail spans have correct boundaries and the predicted relation label is correct.",
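Span-based F1 as described (boundaries and label must both match the ground truth) can be computed in a few lines; the (start, end, label) triple encoding is an assumed convention:

```python
def span_f1(pred, gold):
    # pred, gold: sets of (start, end, label) triples; a span/relation counts as
    # correct only if it appears identically in the ground truth
    tp = len(pred & gold)
    p = tp / len(pred) if pred else 0.0
    r = tp / len(gold) if gold else 0.0
    return 2 * p * r / (p + r) if p + r else 0.0
```

The same function covers relations by encoding each as a (head span, tail span, relation label) triple.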
"To make fair comparisons with existing works, we also compute standard metrics for different tasks, as listed in Table 3. Implementation Details We experimented with four token representation methods (Equation 1), namely GloVe (Pennington et al., 2014), ELMo (Peters et al., 2018), BERT (Devlin et al., 2019), and SpanBERT (Joshi et al., 2019).",
"We use BERT base in our main results and report BERT large in Appendix B. A three-layer BiLSTM with 256 hidden units is used (Equation 2).",
"Both span and relation prediction MLPs have two layers with 128 hidden units.",
"Dropout (Srivastava et al., 2014) of 0.5 is applied to all layers.",
"For GloVe and ELMo, we use Adam (Kingma and Ba, 2015) with a learning rate of 1e-3 and early stopping with patience of 3. For BERT and SpanBERT, we follow standard fine-tuning with learning rate 5e-5, β1 = 0.9, β2 = 0.999, L2 weight decay of 0.01, warmup over the first 10% of steps, and the number of epochs tuned on the development set.",
"Task-specific hyperparameters, namely the maximal span length l and the pruning ratio λ, are tuned on the development set and listed in Appendix C. 4.2 Comparison with Task-specific SOTA We compare the SpanRel model with state-of-the-art task-specific models by training on data from a single task.",
"By doing so we attempt to answer the research question: can a single model with minimal task-specific engineering achieve performance competitive with or superior to models that have been specifically engineered?",
"We select competitive SOTA models mainly based on settings, e.g., single-task learning and end-to-end extraction of spans and relations.",
"To make fair comparisons, token embeddings (GloVe, ELMo, BERT) and other hyperparameters (e.g., the number of antecedents in Coref. and the maximal span length in SRL) in our method are set to match those used by SOTA models, to focus on differences brought about by the model architecture.",
"As shown in Table 4, the SpanRel model achieves performance comparable to task-specific SOTA methods (regardless of whether the token representation is contextualized or not).",
"This indicates that the span-relation format can generically represent a large number of natural language analysis tasks and it is possible to devise a single unified model that achieves strong performance on all of them.",
"It provides a strong and generic baseline for natural language analysis tasks and a way to examine the usefulness of task-specific designs.",
"To demonstrate the benefit of the SpanRel model in MTL, we perform single-task learning (STL) and MTL across all tasks using end-to-end settings.",
"Following Liu et al. (2019), we perform MTL+fine-tuning and show the results in separate columns of Table 5. Contextualized token representations yield significantly better results than GloVe on all tasks, indicating that pre-training on large corpora is almost universally helpful to NLP tasks.",
"Comparing the results of MTL+fine-tuning with STL, we found that performance with GloVe drops on 8 out of 15 tasks, most of which are tasks with relatively sparse data.",
"It is probably because the capacity of the GloVe-based model is too small to store all the patterns required by different tasks.",
"The results of contextualized representations are mixed, with some tasks being improved and others remaining the same or degrading.",
"We hypothesize that this is because different tasks capture different linguistic aspects, thus are not equally helpful to each other.",
"Reconciling these seemingly different tasks in the same model might be harmful to some tasks.",
"Footnote 3: Span-based F1 is used as the evaluation metric for SemEval-2010 Task 8 and SemEval-2014 Task 4, as opposed to the macro-F1 and accuracy reported in the original papers, because we aim at end-to-end extraction.",
"Notably, as the contextualized representations become stronger, the performance of MTL+FT becomes more favorable.",
"5 out of 15 tasks (NER, RE, OpenIE, SRL, ORL) observe statistically significant improvements (p-value < 0.05 with paired bootstrap re-sampling) with SpanBERT, a contextualized embedding pre-trained with span-based training objectives, while only one task (ABSA) degrades, indicating its superiority in reconciling spans from different tasks.",
"The GLAD benchmark provides a holistic testbed for evaluating natural language analysis capability.",
"Task Relatedness Analysis To further investigate how different tasks interact with each other, we choose five source tasks (i.e., tasks used to improve other tasks, e.g., POS, NER, Consti., Dep., and SRL) that have been widely used in MTL (Hashimoto et al., 2017; Strubell et al., 2018) and six target tasks (i.e., tasks to be improved, e.g., OpenIE, NER, RE, ABSA, ORL, and SRL) to perform pairwise multi-task learning.",
"We hypothesize that although language modeling pre-training is theoretically orthogonal to MTL (Swayamdipta et al., 2018), in practice their benefits tend to overlap.",
"To analyze these two factors separately, we start with a weak representation GloVe to study task relatedness, then move to BERT to demonstrate how much we can still improve with MTL given strong and contextualized representations.",
"As shown in Table 6 (GloVe), tasks are not equally useful to each other.",
"Notably, (1) for OpenIE and ORL, multi-task learning with SRL improves performance significantly, while other source tasks lead to smaller or no improvements.",
"(2) Dependency parsing and SRL are generic source tasks that are beneficial to most of the target tasks.",
"This unified SpanRel makes it easy to perform MTL and decide beneficial source tasks.",
"Next, we demonstrate that our framework also provides a platform for analysis of similarities and differences between different tasks.",
"Inspired by the intuition that the attention coefficients are somewhat indicative of a model's internal focus (Li et al., 2016; Vig, 2019; Clark et al., 2019), we hypothesize that the similarity or difference between attention mechanisms may be correlated with similarity between tasks, or even the success or failure of MTL.",
"To test this hypothesis, we extract the attention maps of two BERT-based SpanRel models (trained on a source task t' and a target task t separately) over sentences X_t from the target task, and compute the similarity, averaged over sentences x ∈ X_t, between the per-head attention maps A_k^{t'}(x) and A_k^t(x) of the two models, [Table 5 (excerpt), F1 as STL/MTL/MTL+FT for GloVe, ELMo, BERT-base, SpanBERT-base: IE NER (CoNLL03): 88.4/86.2/87.5, 91.9/91.6/91.6, 91.0/88.6/90.2, 91.3/90.4/91.2; NER (WLP): 77.6/71.5/76.5, 79.2/77.4/78.2, 78.1/78.2/78.5, 77.9/78.6/78.5; RE (SemEval10): 50.7/15.2/33.0, 61.8/30.6/42.9, 61.7/55.1/59.8, 62.1/54.6/61.8; RE (WLP): 64.9/38.5/53.9, 65.5/52.0/55.1, 64.7/65.9/66.5, 64.1/67.2/67.2; Coref (OntoNotes, Avg F1): 56.3/50.3/53.0, 62.2/62.9/63.3, 66.2/65.5/65.8, 70.0/68.9/69.7; OpenIE (OIE2016): 28.3/6.8/19.6, 35.2/30.0/32.9, 36.7/37.1/38.5, 36.5/37.3/38.6; SRL (OntoNotes): 78.0/77.9/78.6, 82.4/82.3/82.4, 83.3/82.9/83.4, 83.1/83.3/83.8.]",
"where A_k^t(x) is the attention map extracted from the k-th head by running the model trained on task t over sentence x.",
"We select OpenIE as the target task because it shows the largest performance variation when paired with different source tasks (34.0-38.8) in Table 6. We visualize the attention similarity of all heads in BERT (12 layers × 12 heads) between two mutually harmful tasks (OpenIE/POS on the left) and between two mutually helpful tasks (OpenIE/SRL on the right) in Figure 2a.",
"A common trend is that heads in higher layers exhibit more divergence, probably because they are closer to the prediction layer, thus easier to be affected by the end task.",
"Overall, it can be seen that Ope-nIE/POS has much more attention divergence than OpenIE/SRL.",
"A notable difference is that almost all heads in the last two layers of the OpenIE/POS models differ significantly, while some heads in the last two layers of the OpenIE/SRL models still behave similarly, providing evidence that failure of MTL can be attributed to the fact that dissimilar tasks require different attention patterns.",
"We further compute average attention similarities for all source tasks in Figure 2b, and we can see that there is a strong correlation (Pearson correlation of 0.97) between the attention similarity and the performance of MTL.",
"[Figure 2a: attention similarity between OpenIE/POS (left) and OpenIE/SRL (right) for all heads, 12 layers × 12 heads.]",
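The per-head attention comparison described here can be sketched as an average similarity over sentences; cosine similarity over flattened maps is an illustrative choice of similarity measure, not necessarily the paper's exact formulation:

```python
import numpy as np

def head_attention_similarity(maps_src, maps_tgt):
    # average, over sentences, of the cosine similarity between the flattened
    # attention maps of one head from the source- and target-task models
    sims = []
    for a, b in zip(maps_src, maps_tgt):
        a, b = a.ravel(), b.ravel()
        sims.append(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return float(np.mean(sims))
```

Identical maps give similarity 1.0; maps attending to disjoint positions give 0.0.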
"MTL under Different Settings We analyze how token representations and sizes of the target dataset affect the performance of MTL.",
"Comparing BERT and GloVe in Table 6, the improvements become smaller or vanish as the token representation becomes stronger, e.g., improvement on OpenIE with SRL reduces from 5.8 to 1.6.",
"This is expected because both large-scale pre-training and MTL aim to learn general representations and their benefits tend to overlap in practice.",
"Interestingly, some helpful source tasks become harmful when we shift from GloVe to BERT, such as OpenIE paired with POS.",
"We conjecture that the gains of MTL might have already been achieved by BERT, but the task-specific characteristics of POS hurt the performance of OpenIE.",
"We did not observe many tasks benefitting from MTL for the GloVe-based model in Table 5",
"because it is trained on all tasks (instead of two), which is beyond its limited model capacity.",
"The improvements of MTL shrink as the size of the SRL datasets increases, as shown in Figure 3, indicating that MTL is useful when the target data is sparse.",
"Time Complexity Analysis Time complexities of span and relation prediction are O(l·n) and O(K²) = O(λ²·n²) respectively, for a sentence of n tokens (Section 3).",
"The time complexity of BERT is O(L·n²), dominated by its L self-attention layers.",
"Since the pruning threshold λ is usually less than 1, the computational overhead introduced by the span-relation output layer is much less than that of BERT.",
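A small sketch of the operation counts behind these complexity claims, counting candidate spans of length up to max_len (O(l·n)) and relation pairs after keeping K = ratio·n spans (O(λ²·n²)); the function name and arguments are illustrative:

```python
def span_relation_costs(n, max_len, ratio):
    # spans scored: every start position, with lengths up to max_len -> O(l * n)
    spans = sum(min(max_len, n - start) for start in range(n))
    # spans kept after pruning, and relation pairs scored -> O((ratio * n)^2)
    k = int(ratio * n)
    return spans, k * k
```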
"In practice, we observe that the training/testing time is mainly spent by BERT.",
"For SRL, one of the most computation-intensive tasks with long spans and dense span/relation annotations, 85.5% of the time is spent by BERT.",
"For POS, a less heavy task, the time spent by BERT increases to 98.5%.",
"Another option for span prediction is to formulate it as a sequence labeling task, as in previous works (Lample et al., 2016; He et al., 2017), where time complexity is O ( n ) .",
"Although slower than token-based labeling models, span-based models offer the advantages of being able to model overlapping spans and use span-level information for label prediction (Lee et al., 2017).",
"General Architectures for NLP There has been a rising interest in developing general architectures for different NLP tasks, the most prominent examples being the sequence labeling framework (Collobert et al., 2011; Ma and Hovy, 2016) used for tagging tasks and the sequence-to-sequence framework (Sutskever et al., 2014) used for generation tasks.",
"Moreover, researchers typically pick related tasks, motivated by either linguistic insights or empirical results, and create a general framework to perform MTL, several of which are summarized in Table 1. For example, Swayamdipta et al. (2018) and Strubell et al. (2018) use constituency and dependency parsing to improve SRL.",
"Luan et al. (2018, 2019); Wadden et al. (2019) use a span-based model to jointly solve three information-extraction-related tasks (NER, RE, and Coref.).",
"Li et al. (2019) formulate both nested NER and flat NER as a machine reading comprehension task.",
"Compared to existing works, we aim to create an output representation that can solve nearly every natural language analysis task in one fell swoop, allowing us to cover a far broader range of tasks with a single model.",
"In addition, NLP has seen a recent burgeoning of contextualized representations pre-trained on large corpora (e.g., ELMo (Peters et al., 2018) and BERT (Devlin et al., 2019)).",
"These methods focus on learning generic input representations, but are agnostic to the output representation, requiring different predictors for different tasks.",
"In contrast, we present a methodology to formulate the output of different tasks in a unified format.",
"Thus our work is orthogonal to those on contextualized embeddings.",
"Indeed, in Section 4.3, we demonstrate that the SpanRel model can benefit from stronger contextualized representation models, and even provide a testbed for their use in natural language analysis.",
"Understanding Due to the rapid development of NLP models, large-scale benchmarks, such as SentEval (Conneau and Kiela, 2018), GLUE (Wang et al., 2019b), and SuperGLUE (Wang et al., 2019a), have been proposed to facilitate fast and holistic evaluation of models' understanding ability.",
"They mainly focus on sentence-level tasks, such as natural language inference, while our GLAD benchmark focuses on token/phrase-level analysis tasks with diverse coverage of different linguistic structures.",
"New tasks and datasets can be conveniently added to our benchmark as long as they are in the BRAT standoff format, which is one of the most commonly used data formats in the NLP community; e.g., it has been used in the BioNLP shared tasks (Kim et al., 2009) and the Universal Dependency project (McDonald et al., 2013).",
"We provide the simple insight that a large number of natural language analysis tasks can be represented in a single format consisting of spans and relations between spans.",
"As a result, these tasks can be solved in a single modeling framework that first extracts spans and predicts their labels, then predicts relations between spans.",
"We experimented with 10 tasks in this SpanRel model and showed that this generic task-independent model can achieve performance competitive with state-of-the-art methods tailored to each task.",
"We merge 8 datasets into our GLAD benchmark for evaluating future models for natural language analysis.",
"Future directions include (1) devising hierarchical span representations that can handle spans of different length and diverse content more effectively and efficiently; (2) robust multitask learning or meta-learning algorithms that can reconcile very different tasks.",
"This work was supported by gifts from Bosch Research.",
"We would like to thank Hiroaki Hayashi, Bohan Li, Pengcheng Yin, Hao Zhu, Paul Michel, and Antonios Anastasopoulos for their insightful comments and suggestions."
] | [
"abstain",
"method",
"result",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"other",
"abstain",
"objective",
"objective",
"abstain",
"other",
"abstain",
"other",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"objective",
"abstain",
"objective",
"abstain",
"method",
"abstain",
"method",
"abstain",
"result",
"method",
"abstain",
"other",
"other"
] |
[
"Dynamic topic modeling facilitates the iden-tification of topical trends over time in temporal collections of unstructured documents.",
"We introduce a novel unsupervised neural dynamic topic model named as Recurrent Neural Network-Replicated Softmax Model (RNN-RSM), where the discovered topics at each time influence the topic discovery in the subsequent time steps.",
"We account for the temporal ordering of documents by explicitly modeling a joint distribution of latent topical dependencies over time, using distributional estimators with temporal recurrent connections.",
"Applying RNN-RSM to 19 years of articles on NLP research, we demonstrate that compared to state-of-the art topic models, RNN-RSM shows better generalization, topic interpretation, evolution and trends.",
"We also introduce a metric (named as SPAN) to quantify the capability of dynamic topic model to capture word evolution in topics over time.",
"Topic Detection and Tracking (Allan et al., 1998) is an important area of natural language processing to find topically related ideas that evolve over time in a sequence of text collections and exhibit temporal relationships.",
"The temporal aspects of these collections can present valuable insight into the topical structure of the collections and can be quantified by modeling the dynamics of the underlying topics discovered over time.",
"Problem Statement : We aim to generate temporal topical trends or automatic overview time-lines of topics for a time sequence collection of documents.",
"This involves the following three tasks in dynamic topic analysis: (1) Topic Structure Detection (TSD): Identifying main topics in the document collection.",
"(2) Topic Evolution Detection (TED): Detecting the emergence of a new topic. [Figure 1: (Left) topical keyword trends over 1996-2014, e.g., language model, distributional semantics, word vectors, linear model, rule set, neural language model, word embedding, semantic representation, distributional models, neural network; (Right) a chain of RSMs with hidden topic layers h(0), ..., h(t-1), h(t), ..., h(T).]",
"(3) Temporal Topic Characterization (TTC): Identifying the characteristics for each of the main topics in order to track the words' usage ( keyword trends ) for a topic over time i.e. topical trend analysis for word evolution (Fig 1, Left).",
"Probabilistic static topic models, such as Latent Dirichlet Allocation (LDA) (Blei et al., 2003) and its variants (Wang and McCallum, 2006; Hall et al., 2008; Gollapalli and Li, 2015) have been investigated to examine the emergence of topics from historical documents.",
"Another variant known as Replicated Softmax (RSM) (Hinton and Salakhutdinov, 2009) has demonstrated better generalization in log-probability and retrieval, compared to LDA.",
"Prior works (Iwata et al., 2010; Pruteanu-Malinici et al., 2010; Saha and Sind-hwani, 2012; Schein et al., 2016) have investigated Bayesian modeling of topics in time-stamped documents.",
"Particularly, Blei and Lafferty (2006) developed a LDA based dynamic topic model (DTM) to capture the evolution of topics in a time sequence collection of documents; however they do not capture explicitly the topic popularity and usage of specific terms over time.",
"We propose a family of probabilistic time series models with distributional estimators to explicitly model the dynamics of the underlying topics, introducing temporal latent topic dependencies (Fig 1, Right).",
"To model temporal dependencies in high-dimensional sequences: [Figure 2: the RNN-RSM architecture unfolded over time, with RNN hidden states u(0), u(1), ..., u(T), time-dependent biases b_h(t) and b_v(t), RSM hidden topic layers h(1), ..., h(T), and weights W_uh, W_uv, W_uu, W_vu, W_vh.]",
"The Temporal RBM (Taylor et al., 2007; Sutskever and Hinton, 2007), Recurrent Temporal RBM (RTRBM) (Sutskever et al., 2009) and RNN-RBM (Boulanger-Lewandowski et al., 2012) show success in modeling the temporal dependencies in such symbolic sequences.",
"In addition, RNNs (Gupta et al., 2015a; Vu et al., 2016a,b; Gupta et al., 2016) have been recognized for sentence modeling in natural language tasks.",
"We aspire to build a neural dynamic topic model, called RNN-RSM, to model document collections over time and learn temporal topic correlations.",
"We consider RSM for TSD and introduce the explicit latent topical dependencies for TED and TTC tasks.",
"Fig 1 illustrates our motivation, where the temporal ordering of the document collections V̂(t) at each time step t is modeled by conditioning the latent topics h(t) on the sequence history of latent topics h(0), ..., h(t-1), accumulated with temporal lag.",
"Each RSM discovers latent topics, and the introduction of a bias term in each RSM via the time-feedback latent topic dependencies makes it possible to explicitly model topic evolution and specific topic-term usage over time.",
"The temporal connections and RSM biases allow the model to convey topical information and model relations among words, in order to deeply analyze the dynamics of the underlying topics.",
"We demonstrate the applicability of proposed RNN-RSM by analyzing 19 years of scientific articles from NLP research.",
"The contributions in this work are: (1) Introduce an unsupervised neural dynamic topic model based on a recurrent neural network and RSMs, named RNN-RSM, to explicitly model discovered latent topics (evolution) and word relations (topic characterization) over time.",
"(2) Demonstrate better generalization (log-probability and time stamp prediction), topic interpretation (coherence), evolution and characterization, compared to the state-of-the-art.",
"(3) It is the first work in dynamic topic modeling to use undirected stochastic graphical models and a deterministic recurrent neural network to model collections of different-sized documents over time within a generative, neural-network framework.",
"The code and data are available at https://github.com/pgcool/RNN-RSM .",
"RSM (Fig 2, Left) is a family of different-sized Restricted Boltzmann Machines (RBMs) (Gehler et al., 2006; Xing et al., 2005; Gupta et al., 2015b,c) that model word counts by sharing the same parameters, with a multinomial distribution over the observables; i.e., an RSM can be interpreted as a single multinomial unit (Fig 2, Middle) sampled as many times as the document size.",
"This facilitates dealing with documents of different lengths.",
"Footnote 1 (Notation): U = {U_n} for n = 1..N; upper-case bold (U) denotes a 2D matrix and lower-case bold (l) a vector; scalars are unbold.",
"The RSM biases b_v(t) and b_h(t) depend on the output of a deterministic RNN with hidden layer u(t-1) in the previous time step t-1.",
"Similar to RNN-RBM (Boulanger-Lewandowski et al., 2012), we constrain RNN hidden units ( u ( t ) ) to convey temporal information, while RSM hidden units ( h ( t ) ) to model conditional distributions.",
"Therefore, the parameters (b_v(t), b_h(t)) are time-dependent on the sequence history at time t (via a series of conditional RSMs), denoted by Θ(t) ≡ {V̂(τ), u(τ) | τ < t}, which captures temporal dependencies.",
"The RNN-RSM is defined by its joint probability distribution: P(V̂, H) = P({V̂(t), h(t)}, t = 1..T) = ∏_{t=1}^T P(V̂(t), h(t) | Θ(t)), where V̂ = [V̂(1), ..., V̂(T)] and H = [h(1), ..., h(T)].",
"Let each h(t) ∈ {0,1}^F be a binary stochastic hidden topic vector of size F, and V̂(t) = {V_n(t)}, n = 1..N(t), be a collection of N(t) documents at time step t.",
"Let V_n(t) be a K × D_n(t) observed binary matrix for the n-th document in the collection, where D_n(t) is the document size and K is the dictionary size over all time steps.",
"The conditional distributions (for each visible or hidden unit) in each RSM at a time step are given by softmax and logistic functions: P(v_{n,i}^{k,(t)} = 1 | h_n(t)) = exp(b_{v,i}^{k,(t)} + Σ_{j=1}^F h_{n,j}(t) W_{ij}^k) / Σ_{q=1}^K exp(b_{v,i}^{q,(t)} + Σ_{j=1}^F h_{n,j}(t) W_{ij}^q) and P(h_{n,j}(t) = 1 | V_n(t)) = σ(b_{h,j}(t) + Σ_{i=1}^{D_n(t)} Σ_{k=1}^K v_{n,i}^{k,(t)} W_{ij}^k), where these are the conditional distributions for the i-th visible unit v_{n,i} and the j-th hidden unit h_{n,j} of the n-th document at time t, and σ is the logistic function.",
"W_{ij}^k is a symmetric interaction term between visible unit i taking on value k and hidden unit j.",
"v_n^{k,(t)} is sampled D_n(t) times with identical weights connected to the binary hidden units, resulting in multinomial visibles, hence the name Replicated Softmax.",
"The conditionals across layers are factorized as: P(V_n(t) | h_n(t)) = ∏_{i=1}^{D_n(t)} P(v_{n,i}(t) | h_n(t)); P(h_n(t) | V_n(t)) = ∏_j P(h_{n,j}(t) | V_n(t)).",
"Since the biases of each RSM depend on the output of the RNN at previous time steps, the estimated gradient at each RSM can be propagated backward through time (BPTT).",
"The RSM biases and the RNN hidden state u(t) at each time step t are given by: b_v(t) = b_v + W_uv u(t-1) and b_h(t) = b_h + W_uh u(t-1) (Eq. 1); u(t) = tanh(b_u + W_uu u(t-1) + W_vu Σ_{n=1}^{N(t)} v̂_n(t)) (Eq. 2). Algorithm 1 (Training RNN-RSM with BPTT). Input: observed visibles V̂ = {V̂(0), V̂(1), ..., V̂(t), ..., V̂(T)}; RNN-RSM parameters θ = {W_uh, W_vh, W_uv, W_vu, W_uu, b_v, b_u, b_h, b_v(t), b_h(t), u(0)}. Step 1: propagate u(t) in the RNN portion of the graph using Eq. 2.",
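Eqs. 1-2 amount to two affine maps for the time-dependent biases plus a tanh state update; a minimal numpy sketch (the params dict layout, with W_uv of shape K×U, W_uh of shape F×U, W_uu of shape U×U, and W_vu of shape U×K, is an illustrative assumption):

```python
import numpy as np

def rnn_rsm_step(u_prev, v_sum, params):
    # Eq. 1: time-dependent RSM biases conditioned on the RNN state u(t-1)
    b_v_t = params["b_v"] + params["W_uv"] @ u_prev
    b_h_t = params["b_h"] + params["W_uh"] @ u_prev
    # Eq. 2: RNN state update from the summed word-count vectors of all
    # documents in the collection at time step t
    u_t = np.tanh(params["b_u"] + params["W_uu"] @ u_prev
                  + params["W_vu"] @ v_sum)
    return b_v_t, b_h_t, u_t
```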
"where W_uv, W_uh and W_vu are weights connecting the RNN and RSM portions (Figure 2).",
"b_u is the bias of u, and W_uu is the weight between RNN hidden units.",
"v̂_n(t) is a vector of v_n^k (which denotes the count of the k-th word in the n-th document).",
"Σ_{n=1}^{N(t)} v̂_n(t) refers to the sum of observed count vectors across documents at time step t, where each document is represented as v̂_n(t) = [{v_n^{k,(t)}}, k = 1..K] and v_n^{k,(t)} = Σ_{i=1}^{D_n(t)} v_{n,i}^{k,(t)} (Eq. 3), where v_{n,i}^{k,(t)} = 1 if visible unit i takes on the k-th value.",
"In each RSM, a separate RBM is created for each document in the collection at time step t with D ( t ) n softmax units, where D ( t ) n is the count of words in the n th document.",
"Consider a document of D_n(t) words; the energy of the state {V_n(t), h_n(t)} at time step t is given by: E(V_n(t), h_n(t)) = -Σ_{j=1}^F Σ_{k=1}^K h_{n,j}(t) W_j^k v_n^{k,(t)} - Σ_{k=1}^K v_n^{k,(t)} b_v^k - D_n(t) Σ_{j=1}^F b_{h,j} h_{n,j}(t). Observe that the bias terms on the hidden units are scaled up by the document length, which allows the hidden units to stabilize when dealing with different-sized documents.",
"The corresponding energy-probability relation in the energy-based model is: P(V_n(t)) = (1/Z_n(t)) Σ_{h_n(t)} exp(-E(V_n(t), h_n(t))) (Eq. 4), where Z_n(t) = Σ_{V_n(t)} Σ_{h_n(t)} exp(-E(V_n(t), h_n(t))) is the normalization constant.",
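The energy and the unnormalized probability of Eq. 4 can be checked numerically for tiny models by brute-force enumeration of the 2^F hidden configurations; a sketch with illustrative dimensions (v a length-K count vector, h a length-F binary vector, W of shape F×K):

```python
import numpy as np

def rsm_energy(v, h, W, b_v, b_h, doc_len):
    # E(V_n, h_n) = -h^T W v - v . b_v - D_n * (b_h . h)
    return -(h @ W @ v) - v @ b_v - doc_len * (b_h @ h)

def rsm_unnormalized_prob(v, W, b_v, b_h, doc_len):
    # numerator of Eq. 4: sum of exp(-E) over all 2^F hidden configurations
    F = len(b_h)
    total = 0.0
    for bits in range(2 ** F):
        h = np.array([(bits >> j) & 1 for j in range(F)], dtype=float)
        total += np.exp(-rsm_energy(v, h, W, b_v, b_h, doc_len))
    return total
```

Brute-force enumeration is only feasible for small F; it is meant as a correctness check, not a training procedure.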
"where H(·) is the entropy and Q is the approximating posterior.",
"Similar to Deep Belief Networks (Hinton et al., 2006), where adding an extra layer improves the lower bound on the log probability of the data, we introduce the extra layer via the RSM biases, which propagate the prior through the RNN connections.",
"The dependence analogy follows: E(V_n^{(t)}, h_n^{(t)}) ∝ 1/b_v^{(t)} and E(V_n^{(t)}, h_n^{(t)}) ∝ 1/b_h^{(t)}; ln P(V_n^{(t)}) ∝ 1/E(V_n^{(t)}, h_n^{(t)}); ln P(V̂_n^{(t)}) ∝ ln P({V̂_n}_{<t}). Observe that the prior is seen as the deterministic hidden representation of latent topics and is injected into each hidden state of the RSMs, which enables the likelihood of the data to model complex temporal densities, i.e. heteroscedasticity, in document collections (V̂) and temporal topics (H).",
"Gradient Approximations: The cost in RNN-RSM is C = Σ_{t=1}^{T} C_t = - Σ_{t=1}^{T} ln P(V̂^{(t)}). Due to the intractable Z, the gradients of the cost at time step t w.r.t. (with respect to) the RSM parameters are approximated by k-step Contrastive Divergence (CD) (Hinton, 2002).",
"The gradient of the negative log-likelihood of a document collection { V_n^{(t)} }_{n=1}^{N^{(t)}} w.r.t. the RSM parameter W_{vh} is (1/N^{(t)}) Σ_{n=1}^{N^{(t)}} ∂(-ln P(V_n^{(t)}))/∂W_{vh} = (1/N^{(t)}) Σ_{n=1}^{N^{(t)}} ∂F(V_n^{(t)})/∂W_{vh} - ∂(ln Z_n^{(t)})/∂W_{vh} = E_{P_data}[ ∂F(V_n^{(t)})/∂W_{vh} ] (the data-dependent expectation) - E_{P_model}[ ∂F(V_n^{(t)})/∂W_{vh} ] (the model's expectation) ≈ (1/N^{(t)}) Σ_{n=1}^{N^{(t)}} ( ∂F(V_n^{(t)})/∂W_{vh} - ∂F(Ṽ_n^{(t)})/∂W_{vh} ). The second term is estimated by negative samples Ṽ_n^{(t)} obtained from a k-step Gibbs chain starting at the samples V_n^{(t)}.",
"P_data(V̂^{(t)}, h^{(t)}) = P(h^{(t)} | V̂^{(t)}) P_data(V̂^{(t)}), and P_data(V̂^{(t)}) = (1/N^{(t)}) Σ_{n}^{N^{(t)}} δ(V̂^{(t)} - V_n^{(t)}) is the empirical distribution on the observables.",
"P_model(V_n^{(t)}, h_n^{(t)}) is defined in eq. 4.",
"The free energy F(V_n^{(t)}) is related to the normalized probability of V_n^{(t)} as P(V_n^{(t)}) = exp(-F(V_n^{(t)}))/Z_n^{(t)} and is given by F(V_n^{(t)}) = - Σ_{k=1}^{K} v_n^{k,(t)} b_v^k - Σ_{j=1}^{F} log( 1 + exp( D_n^{(t)} b_{h,j} + Σ_{k=1}^{K} v_n^{k,(t)} W_j^k ) ). The gradient approximations w.r.t. the RSM parameters are ∂C_t/∂b_v^{(t)} ≈ Σ_{n=1}^{N^{(t)}} ( Ṽ_n^{(t)} - v_n^{(t)} ), ∂C_t/∂b_h^{(t)} ≈ Σ_{n=1}^{N^{(t)}} ( σ(W_{vh} Ṽ_n^{(t)} + D_n^{(t)} b_h^{(t)}) - σ(W_{vh} v_n^{(t)} + D_n^{(t)} b_h^{(t)}) ), and ∂C/∂W_{vh} ≈ Σ_{t=1}^{T} Σ_{n=1}^{N^{(t)}} ( σ(W_{vh} Ṽ_n^{(t)} + D_n^{(t)} b_h^{(t)}) Ṽ_n^{(t)T} - σ(W_{vh} v_n^{(t)} + D_n^{(t)} b_h^{(t)}) v_n^{(t)T} ) (5). The estimated gradients w.r.t. the RSM biases are back-propagated via the hidden-to-bias parameters (eq 1) to compute gradients w.r.t. the RNN connections (W_{uh}, W_{uv}, W_{vu} and W_{uu}) and biases (b_h, b_v and b_u).",
"∂C/∂W_{uh} = Σ_{t=1}^{T} (∂C_t/∂b_h^{(t)}) u^{(t-1)T}, ∂C/∂W_{uv} = Σ_{t=1}^{T} (∂C_t/∂b_v^{(t)}) u^{(t-1)T}, ∂C/∂W_{vu} = Σ_{t=1}^{T} (∂C_t/∂u^{(t)}) ⊙ u^{(t)} ⊙ (1 - u^{(t)}) Σ_{n=1}^{N^{(t)}} v_n^{(t)T}, ∂C/∂b_h = Σ_{t=1}^{T} ∂C_t/∂b_h^{(t)} and ∂C/∂b_v = Σ_{t=1}^{T} ∂C_t/∂b_v^{(t)}, ∂C/∂b_u = Σ_{t=1}^{T} (∂C_t/∂u^{(t)}) ⊙ u^{(t)} ⊙ (1 - u^{(t)}), and ∂C/∂W_{uu} = Σ_{t=1}^{T} (∂C_t/∂u^{(t)}) ⊙ u^{(t)} ⊙ (1 - u^{(t)}) u^{(t-1)T} (6).",
"For the single-layer RNN-RSM, the BPTT recurrence relation for 0 ≤ t < T is given by:",
"∂C_t/∂u^{(t)} = W_{uu} (∂C_{t+1}/∂u^{(t+1)}) ⊙ u^{(t+1)} ⊙ (1 - u^{(t+1)}) + W_{uh} ∂C_{t+1}/∂b_h^{(t+1)} + W_{uv} ∂C_{t+1}/∂b_v^{(t+1)},",
"where u^{(0)} is a parameter and ∂C_T/∂u^{(T)} = 0.",
"See Training RNN-RSM with BPTT in Algo 1.",
"We use the processed dataset (Gollapalli and Li, 2015), consisting of EMNLP and ACL conference papers from the year 1996 through 2014 (Table 1).",
"We combine papers for each year from the two venues to prepare the document collections over time.",
"We use ExpandRank (Wan and Xiao, 2008) to extract top 100 keyphrases for each paper, including unigrams and bigrams.",
"We split the bigrams into unigrams to create a dictionary of all unigrams and bigrams.",
"The dictionary size ( K ) and word count are 3390 and 5.19 M, respectively.",
"We evaluate RNN-RSM against static (RSM, LDA) and dynamic (DTM) topic models for topic and keyword evolution in NLP research over time.",
"Nineteen individual RSM and LDA models are trained, one for each year, while DTM 2 and RNN-RSM are trained over the years with 19 time steps, where the paper collection for a year is the input at each time step.",
"RNN-RSM is initialized with RSM ( W vh , b v , b h ) trained for the year 2014.",
"We use perplexity to choose the number of topics (=30).",
"We compute the perplexity per time step as PPL(V̂, t) = exp( -(1/N^{(t)}) Σ_{n=1}^{N^{(t)}} (1/D_n^{(t)}) ln P(V̂_n^{(t)}) ). (2: https://radimrehurek.com/gensim/models/dtmmodel.html)",
"where t is the time step.",
"N ( t ) is the number of documents in a collection ( b V ( t ) ) at time t .",
"Better models have lower perplexity values, suggesting less uncertainty about the documents.",
"For the held-out documents, we take 10 documents from each time step, i.e. 190 documents in total, and compute perplexity for 30 topics.",
"Fig 3d shows the comparison of perplexity values for unobserved documents from DTM and RNN-RSM at each time step.",
"The SumPPL (Table 3) is the sum of PPL values for the held-out sets of each time step.",
"Document Time Stamp Prediction: To further assess the dynamic topics models, we split the document collections at each time step into 80-20% train-test, resulting in 1067 held-out documents.",
"We predict the time stamp (dating) of a document by finding the most likely (with the lowest perplexity) location over the time line.",
"See the mean absolute error (Err) in years for the held-out documents in Table 3. Note that we do not use the time stamps as observables during training.",
"Topic Detection: To extract topics from each RSM, we compute the posterior P(V̂^{(t)} | h_j = 1) by activating a hidden unit and deactivating the rest in a hidden layer.",
"We extract the top 20 terms for each of the 30 topics from 1996-2014, resulting in |Q|_max = 19 × 30 × 20 possible topic terms.",
"Topic Popularity: To determine topic popularity, we select three popular topics (Sentiment Analysis, Word Vector and Dependency Parsing) in NLP research and create a set 3 of key-terms (including unigrams and bigrams) for each topic.",
"We compute cosine similarity of the key-terms defined for each selected topic and topics discovered by the topic models over the years.",
"We consider the discovered topic that is the most similar to the key-terms in the target topic and plot the similarity values in Figures 3a, 3b and 3c.",
"Observe that RNN-RSM shows better topic evolution for the three emerging topics.",
"LDA and RSM show [...] (3: topic-terms to be released with code).",
"Topic Drift (Focus Change): To compute the topic focus change over the years, we first split the time period 1996-2014 into five parts: { 1996, 2000, 2005, 2010, 2014 } .",
"The cosine similarity scores are computed between the topic sets discovered in a particular year and the years preceding it in the above set, for example the similarity scores between the topic-terms in (1996, 2000), (1996, 2005), (1996, 2010) and (1996, 2014), respectively.",
"Figure 3i, 3j, 3k and 3l demonstrate that RNN-RSM shows higher convergence in topic focus over the years, compared to LDA and RSM.",
"In RNN-RSM, the topic similarity gradually increases over time, but not in DTM.",
"The higher similarities in the topic sets indicate that new/existing topics and words do not appear/disappear over time.",
"We compute the topic-term drift (TTD) to show the changing topics from the initial to the final year, as TTD = 1.0 - cosineSimilarity( Q^{(t)}, Q^{(t')} ),",
"where Q^{(t)} is the set of all topic-terms for time step t.",
"Table 3 shows that the TTD values (where t = 1996 and t' = 2014)",
"are 0.268 and 0.084",
"for RNN-RSM and DTM, respectively.",
"This suggests that a higher number of new topic-terms evolved in RNN-RSM, compared to DTM.",
"Qualitatively, Table 4 shows the topics observed with the highest and lowest cosine drifts in DTM and RNN-RSM.",
"In Figure 3g and 3h, we also illustrate the temporal evolution (drift) in the selected topics by computing cosine similarity on their adjacent topic vectors over time.",
"The topic vectors are selected similarly as in computing topic popularity.",
"We observe better temporal evolution (drift) in RNN-RSM than in DTM for the three emerging topics in NLP research.",
"For instance, for the selected topic Word Vector, the red line in DTM (Fig 3h) shows no drift (for x-axis 00-05, 05-10 and 10-14), suggesting that the topic-terms in the adjacent years are similar and do not evolve.",
"Beyond perplexities, we also compute topic coherence (Chang et al., 2009; Newman et al., 2009; Das et al., 2015) to determine the meaningful topics captured.",
"We use the coherence measure proposed by Aletras and Stevenson (2013) that retrieves co-occurrence counts for the set of topic words using Wikipedia as a reference corpus to identify context features (window=5) for each topic word.",
"Relatedness between topic words and context features is measured using normalized pointwise mutual information (NPMI), resulting in a single vector for every topic word.",
"The coherence ( COH ) score is computed as the arithmetic mean of the cosine similarities between all word pairs.",
"Higher scores imply more coherent topics.",
"We use Palmetto 4 library to estimate coherence.",
"Quantitative: We compute mean and median coherence scores for each time step using the corresponding topics, as shown in Fig 3e and 3f.",
"Table 3 shows mean-COH and median-COH scores, computed by mean and median of scores from Fig 3e and 3f, respectively.",
"Observe that RNN-RSM captures topics with higher coherence.",
"Qualitative: Table 5 shows topics (top-10 words) with the highest and lowest coherence scores.",
"We demonstrate the capability of RNN-RSM to capture word evolution (usage) in topics over time.",
"We define: keyword-trend and SPAN.",
"The keyword-trend is the appearance/disappearance of the keyword in topic-terms detected over time, while SPAN is the length of the longest sequence of the keyword appearance in its keyword trend.",
"(4: github.com/earthquakesan/palmetto-py) [Table 5: Topics with the highest and lowest coherence; COH: DTM (2001) 0.268, RNN-RSM (2001) 0.284, DTM (2012) 0.064, RNN-RSM (1997) 0.071.] Let Q̂_model = { Q^{(t)}_model }_{t=1}^{T} be a set of sets 5 of topic-terms discovered by the model (LDA, RSM, DTM and RNN-RSM) over different time steps.",
"Let Q^{(t)} ∈ Q̂_model be the topic-terms at time step t.",
"The keyword-trend for a keyword k is a time-ordered sequence of 0s and 1s: trend_k(Q̂) = [ find(k, Q^{(t)}) ]_{t=1}^{T}, where find(k, Q^{(t)}) = 1 if k ∈ Q^{(t)} and 0 otherwise (7). The SPAN (S_k) for the k-th keyword is S_k(Q̂) = length( longestOnesSeq( trend_k(Q̂) ) ). We compute the keyword-trend and SPAN for each term from a set of popular terms.",
"We define the average-SPAN over all the topic-terms appearing in the topics discovered over the years: avg-SPAN(Q̂) = (1/||Q̂||) Σ_{ {k | Q^{(t)} ∈ Q̂ ∧ k ∈ Q^{(t)}} } S_k(Q̂)/v_k = (1/||Q̂||) Σ_{ {k | Q^{(t)} ∈ Q̂ ∧ k ∈ Q^{(t)}} } S_k^{dict}(Q̂). (5: a set is denoted by bold and a set of sets by hatted bold.) [Figure 4: Keyword-trend by RNN-RSM, DTM, RSM and LDA over 1996-2014 for the terms Textual Entailment, Sentiment Analysis, LDA Model, Dependency Parsing, Latent Semantics, Relation Extraction, Word Embedding, Neural Language, Machine Translation, Language Model, Rule-set, Graphical Model and Seed Words.]",
"In Figure 4, the keyword-trends indicate emergence (appearance/disappearance) of the selected popular terms in topics discovered in ACL and EMNLP papers over time.",
"Observe that RNN-RSM captures longer SPANs for popular keywords and better word usage in NLP research.",
"For example, Word Embedding is one of the top keywords, which appeared locally (Figure 5) in recent years.",
"RNN-RSM detects it in the topics from 2010 to 2014, whereas DTM does not.",
"Similarly, for Neural Language .",
"However, Machine Translation and Language Model appear globally in the input document collections over time and are captured in the topics by both RNN-RSM and DTM.",
"We also show keywords ( Rule-set and Seed Words ) that disappeared in topics over time.",
"A higher SPAN suggests that the model is capable of capturing trending keywords.",
"Table 6 shows the corresponding comparison of SPANs for the 13 selected keywords. [Figure 5: Key-term frequency in the input over the years for relation extraction, word embedding, neural language, machine translation and language model.] Table 6 (SPAN S_k for selected terms, avg-SPAN and set size ||Q̂|| by LDA, RSM, DTM and RNN-RSM; each row gives the term, v_k, and then S_k/S_k^{dict} per model): Textual entailment (918): 0/.000, 1/.001, 0/.000, 11/.011; Sentiment analysis (1543): 6/.004, 3/.002, 5/.0032, 11/.007; LDA model (392): 1/.003, 1/.002, 0/.000, 8/.020; Dependency parsing (3409): 9/.003, 5/.001, 11/.0032, 18/.005; Latent semantic (974): 1/.001, 2/.002, 0/.000, 18/.018; Relation extraction (1734): 4/.002, 1/.001, 9/.0052, 12/.007; Word embedding (534): 1/.002, 1/.002, 0/.000, 5/.009; Neural language (121): 0/.000, 3/.025, 0/.000, 5/.041; Machine translation (11741): 11/.001, 7/.001, 19/.0016, 19/.002; Language model (11768): 13/.001, 3/.000, 19/.0016, 19/.002; Graphical model (680): 0/.000, 1/.001, 0/.000, 11/.016; Rule set (589): 1/.0017, 4/.0068, 0/.000, 2/.0034; Seed words (396): 1/.0025, 1/.0025, 0/.000, 4/.0101; avg-SPAN(Q̂): .002, .007, .003, .018; ||Q̂_model||: 926, 2274, 335, 731.",
"The SPAN S_k for each keyword is computed from Figure 4. Observe that ||Q̂||_DTM < ||Q̂||_RNN-RSM, which suggests that new topics and words emerged over time in RNN-RSM, while the higher SPAN values in RNN-RSM suggest better trends.",
"Figure 6 shows how the word usage, captured by DTM and RNN-RSM for the topic Word Vector , changes over 19 years in NLP research.",
"RNN-RSM captures the popular terms Word Embedding and Word Representation that emerged in it.",
"Architecture: RNN-RSM treats the stream of documents as high-dimensional sequences over time and models the complex conditional probability distribution, i.e. heteroscedasticity, in document collections and topics over time by a temporal stack of RSMs (undirected graphical models), conditioned on time-feedback connections using an RNN (Rumelhart et al., 1985).",
"It has two hidden layers: h (stochastic binary) captures topical information, while u (deterministic) conveys temporal information via BPTT and models the dependence of the topics at a time step t on all the previous steps < t.",
"In contrast, DTM is built upon LDA. [Figure 6: Word usage captured by DTM and RNN-RSM for the topic Word Vector; one panel shows near-constant top words across 1996, 2000 and 2005 (model, language model, words, language, probability), gaining only neural network by 2014, while the other shifts from neural network, language models, word representation, linear model and rule set (1996-2000) toward word embedding, word embeddings and word representation (2014).]",
"LDA is a directed model, where the Dirichlet distribution on words is not amenable to sequential modeling; therefore, its natural parameters (the topic and topic-proportion distributions) for each topic are chained, instead of the latent topics, which results in intractable inference in topic detection and chaining.",
"Topic Dynamics: The introduction of explicit connections between latent topics in RNN-RSM allows new topics, and words for the underlying topics, to appear or disappear over time through the dynamics of topic correlations.",
"As discussed, the distinction between h and u permits the latent topic h^{(t)} to capture new topics that may not be captured by h^{(t-1)}.",
"DTM assumes a fixed number of global topics and models their distribution over time.",
"However, there is no such assumption in RNN-RSM.",
"We fix the topic count in RNN-RSM at each time step, since W_{vh} is fixed over time and the RSM biases turn terms on/off in each topic.",
"However, this is fundamentally different for DTM.",
"E.g., let a unique label be assigned to each of the 30 topics at any time steps t and t'.",
"DTM follows the sets of topic labels { TopicLabels^{(t)} }_{k=1}^{30} = { TopicLabels^{(t')} }_{k=1}^{30}, due to eq (1) in Blei and Lafferty (2006) (discussed in Section 5), which limits DTM in capturing new (or local) topics or words that appear over time.",
"It corresponds to the keyword-trends (section 3.5).",
"Optimization: RNN-RSM relies on Gibbs sampling and BPTT for inference, while DTM employs complex variational methods, since applying Gibbs sampling is difficult due to the non-conjugacy of the Gaussian and multinomial distributions.",
"Thus, learning in RNN-RSM is easier.",
"For all models, approximations are solely used to compute the likelihood, either using variational approaches or contrastive divergence; perplexity was then computed based on the approximated likelihood.",
"More specifically, we use variational approximations to compute the likelihood for DTM (Blei and Lafferty, 2006).",
"For RSM and RNN-RSM, the respective likelihoods are approximated using the standard Contrastive Divergence (CD).",
"While there are substantial differences between variational approaches and CD, and thus in the manner in which the likelihood for the different models is estimated, both approximations work well for their respective families of models in terms of approximating the true likelihood.",
"Consequently, perplexities computed based on these approximated likelihoods are indeed comparable.",
"We have proposed a neural temporal topic model, named RNN-RSM, based on the probabilistic undirected graphical topic model RSM with time-feedback connections via a deterministic RNN, to capture temporal relationships in historical documents.",
"The model is the first of its kind that learns topic dynamics in collections of different-sized documents over time, within the generative and neural network framework.",
"The experimental results have demonstrated that RNN-RSM shows better generalization (perplexity and time stamp prediction), topic interpretation (coherence) and evolution (popularity and drift) in scientific articles over time.",
"We also introduced SPAN to illustrate topic characterization.",
"In future work, we foresee investigating the learning dynamics with a variable number of topics over time.",
"It would also be an interesting direction to investigate the effect of the skewness in the distribution of papers over all years.",
"Further, we see a potential application of the proposed model in learning the time-aware i.e. dynamic word embeddings (Aitchison, 2001; Basile et al., 2014; Bamler and Mandt, 2017; Rudolph and Blei, 2018; Yao et al., 2018) in order to capture language evolution over time, instead of document topics.",
"We thank Sujatha Das Gollapalli for providing us with the data sets used in the experiments.",
"We express appreciation for our colleagues Florian Buettner, Mark Buckley, Stefan Langer, Ulli Waltinger and Usama Yaseen, and anonymous reviewers for their in-depth review comments.",
"This research was supported by the Bundeswirtschaftsministerium ( bmwi.de ), grant 01MD15010A (Smart Data Web), at Siemens AG, CT Machine Intelligence, Munich, Germany."
] | [
"abstain",
"objective",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"method",
"objective",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"other",
"other",
"abstain",
"abstain",
"other",
"abstain",
"other",
"other",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"other",
"other",
"other"
] |
[
"Most research in reading comprehension has focused on answering questions based on individual documents or even single paragraphs.",
"We introduce a neural model which integrates and reasons relying on information spread within documents and across multiple documents.",
"We frame it as an inference problem on a graph.",
"Mentions of entities are nodes of this graph while edges encode relations between different mentions (e.g., withinand cross-document coreference).",
"Graph convolutional networks (GCNs) are applied to these graphs and trained to perform multi-step reasoning.",
"Our Entity-GCN method is scalable and compact, and it achieves state-of-the-art results on a multi-document question answering dataset, WIKIHOP (Welbl et al., 2018).",
"The long-standing goal of natural language understanding is the development of systems which can acquire knowledge from text collections.",
"Fresh interest in reading comprehension tasks was sparked by the availability of large-scale datasets, such as SQuAD (Rajpurkar et al., 2016) and CNN/Daily Mail (Hermann et al., 2015), enabling end-to-end training of neural models (Seo et al., 2016; Xiong et al., 2016; Shen et al., 2017).",
"These systems, given a text and a question, need to answer the query relying on the given document.",
"Recently, it has been observed that most questions in these datasets do not require reasoning across the document, but can be answered relying on information contained in a single sentence (Weissenborn et al., 2017).",
"The last generation of large-scale reading comprehension datasets, such as NarrativeQA (Kocisky et al., 2018), TriviaQA (Joshi et al., 2017), and RACE (Lai et al., 2017), have been created in such a way as to address this shortcoming and to ensure that systems [Figure 1: A sample from WIKIHOP where multi-step reasoning and information combination from different documents is necessary to infer the correct answer. Query: country Thorildsplan; candidates: {Denmark, Finland, Sweden, Italy, ...}; answer: Sweden. 'Thorildsplan is a small park in Kristineberg in Stockholm, named in 1925 after the writer [..]' 'Stockholm is the capital of Sweden and the most populous city in [..]']",
"relying only on local information cannot achieve competitive performance.",
"Even though these new datasets are challenging and require reasoning within documents, many question answering and search applications require aggregation of information across multiple documents.",
"The WIKIHOP dataset (Welbl et al., 2018) was explicitly created to facilitate the development of systems dealing with these scenarios.",
"Each example in WIKIHOP consists of a collection of documents, a query and a set of candidate answers (Figure 1).",
"Though there is no guarantee that a question cannot be answered by relying just on a single sentence, the authors ensure that it is answerable using a chain of reasoning crossing document boundaries.",
"Though an important practical problem, the multi-hop setting has so far received little attention.",
"The methods reported by Welbl et al. (2018) approach the task by merely concatenating all documents into a single long text and training a standard RNN-based reading comprehension model, namely, BiDAF (Seo et al., 2016) and FastQA (Weissenborn et al., 2017).",
"Document concatenation in this setting is also used in Weaver (Raison et al., 2018) and MHPGM (Bauer et al., 2018).",
"The only published paper which goes beyond concatenation is due to Dhingra et al. (2018), where they augment RNNs with jump-links corresponding to co-reference edges.",
"Though these edges provide a structural bias, the RNN states are still tasked with passing the information across the document and performing multihop reasoning.",
"Instead, we frame question answering as an inference problem on a graph representing the document collection.",
"Nodes in this graph correspond to named entities in a document whereas edges encode relations between them (e.g., cross-and within-document coreference links or simply co-occurrence in a document).",
"We assume that reasoning chains can be captured by propagating local contextual information along edges in this graph using a graph convolutional network (GCN) (Kipf and Welling, 2017).",
"The multi-document setting imposes scalability challenges.",
"In realistic scenarios, a system needs to learn to answer a query for a given collection (e.g., Wikipedia or a domain-specific set of documents).",
"In such scenarios one cannot afford to run expensive document encoders (e.g., RNN or transformer-like self-attention (Vaswani et al., 2017)), unless the computation can be preprocessed both at train and test time.",
"Even if (similarly to WIKIHOP creators) one considers a coarse-to-fine approach, where a set of potentially relevant documents is provided, re-encoding them in a query-specific way remains the bottleneck.",
"In contrast to other proposed methods (e.g., (Dhingra et al., 2018; Raison et al., 2018; Seo et al., 2016)), we avoid training expensive document encoders.",
"In our approach, only a small query encoder, the GCN layers and a simple feed-forward answer selection component are learned.",
"Instead of training RNN encoders, we use contextualized embeddings (ELMo) to obtain initial (local) representations of nodes.",
"This implies that only a lightweight computation has to be performed online, both at train and test time, whereas the rest is preprocessed.",
"Even in the somewhat contrived WIKIHOP setting, where fairly small sets of candidates are provided, the model is at least 5 times faster to train than BiDAF.",
"Interestingly, when we substitute ELMo with simple pre-trained word embeddings, Entity-GCN still performs on par with many techniques that use expensive question-aware recurrent document encoders.",
"(1: When compared to the 'small' and hence fast BiDAF model reported in Welbl et al. (2018), which is 25% less accurate than our Entity-GCN.",
"Larger RNN models are problematic also because of GPU memory constraints.)",
"Despite not using recurrent document encoders, the full Entity-GCN model achieves over 2% improvement over the best previously-published results.",
"As our model is efficient, we also report results of an ensemble which brings a further 3.6% improvement and is only 3% below the human performance reported by Welbl et al. (2018).",
"Our contributions can be summarized as follows: (i) we present a novel approach for multi-hop QA that relies on a (pre-trained) document encoder and information propagation across multiple documents using graph neural networks; (ii) we provide an efficient training technique which relies on a slower offline and a faster online computation and does not require expensive document processing; (iii) we empirically show that our algorithm is effective, presenting an improvement over previous results.",
"In this section we explain our method.",
"We first introduce the dataset we focus on, WIKIHOP by Welbl et al. (2018), as well as the task abstraction.",
"We then present the building blocks that make up our Entity-GCN model, namely, an entity graph used to relate mentions to entities within and across documents, a document encoder used to obtain representations of mentions in context, and a relational graph convolutional network that propagates information through the entity graph.",
"Data The WIKIHOP dataset comprises tuples ⟨q, S_q, C_q, a*⟩ where: q is a query/question, S_q is a set of supporting documents, C_q is a set of candidate answers (all of which are entities mentioned in S_q), and a* ∈ C_q is the entity that correctly answers the question.",
"WIKIHOP is assembled assuming that there exists a corpus and a knowledge base (KB) related to each other.",
"The KB contains triples ⟨s, r, o⟩ where s is a subject entity, o an object entity, and r a unidirectional relation between them.",
"Welbl et al. (2018) used WIKIPEDIA as corpus and WIKIDATA (Vrandecic, 2012) as KB.",
"The KB is only used for constructing WIKIHOP : Welbl et al. (2018) retrieved the supporting documents S q from the corpus looking at mentions of subject and object entities in the text.",
"Note that the set S q (not the KB) is provided to the QA system, and not all of the supporting documents are relevant for the query but some of them act as distractors.",
"Queries, on the other hand, are not expressed in natural language, but instead consist of tuples ⟨s, r, ?⟩,",
"where the object entity is unknown and has to be inferred by reading the support documents.",
"Therefore, answering a query corresponds to finding the entity a (cid:63) that is the object of a tuple in the KB with subject s and relation r among the provided set of candidate answers C q .",
"Task The goal is to learn a model that can identify the correct answer a (cid:63) from the set of supporting documents S q .",
"To that end, we exploit the available supervision to train a neural network that computes scores for candidates in C q .",
"We estimate the parameters of the architecture by maximizing the likelihood of observations.",
"For prediction, we then output the candidate that achieves the highest probability.",
"In the following, we present our model discussing the design decisions that enable multi-step reasoning and an efficient computation.",
"Entity graph In an offline step, we organize the content of each training instance in a graph connecting mentions of candidate answers within and across supporting documents.",
"For a given query q = ⟨s, r, ?⟩,",
"we identify mentions in S_q of the entities in C_q ∪ {s} and create one node per mention.",
"This process is based on the following heuristic: 1. we consider mention spans in S_q exactly matching an element of C_q ∪ {s}.",
"Admittedly, this is a rather simple strategy which may suffer from low recall.",
"2. we use predictions from a coreference resolution system to add mentions of elements in C_q ∪ {s} beyond exact matching (including both noun phrases and anaphoric pronouns).",
"In particular, we use the end-to-end coreference resolution by Lee et al. (2017).",
"3. we discard mentions which are ambiguously resolved to multiple coreference chains; this may sacrifice recall, but avoids propagating ambiguity.",
"To each node v_i, we associate a continuous annotation x_i ∈ R^D which represents an entity in the context where it was mentioned (details in Section 2.3).",
"We then proceed to connect these mentions",
"i) if they co-occur within the same document (we will refer to this as DOC-BASED edges),",
"ii) if the pair of named entity mentions is identical ( MATCH edgesthese may connect nodes across and within documents), or",
"iii) if they are in the same coreference chain, as predicted by the external coreference system ( COREF edges).",
"Note that MATCH edges when connecting mentions in the same document are mostly included in the set of edges predicted by the coreference system.",
"Having the two types of edges lets us distinguish between less reliable edges provided by the coreference system and more reliable (but also more sparse) edges given by the exact-match heuristic.",
"We treat these three types of connections as three different types of relations.",
"See Figure 2 for an illustration.",
"In addition to that, and to prevent having disconnected graphs, we add a fourth type of relation ( COMPLEMENT edge) between any two nodes that are not connected with any of the other relations.",
"We can think of these edges as those in the complement set of the entity graph with respect to a fully connected graph.",
"Multi-step reasoning Our model then approaches multi-step reasoning by transforming node representations (Section 2.3 for details) with a differentiable message passing algorithm that propagates information through the entity graph.",
"The algorithm is parameterized by a graph convolutional network (GCN) (Kipf and Welling, 2017), in particular, we employ relational-GCNs (Schlichtkrull et al., 2018), an extended version that accommodates edges of different types.",
"In Section 2.4 we describe the propagation rule.",
"Each step of the algorithm (also referred to as a hop ) updates all node representations in parallel.",
"In particular, a node is updated as a function of messages from its direct neighbours, and a message is possibly specific to a certain relation.",
"At the end of the first step, every node is aware of every other node it connects directly to.",
"Besides, the neighbourhood of a node may include mentions of the same entity as well as others (e.g., same-document relation), and these mentions may have occurred in different documents.",
"Taking this idea recursively, each further step of the algorithm allows a node to indirectly interact with nodes already known to their neighbours.",
"After L layers of R-GCN, information has been propagated through paths connecting up to L + 1 nodes.",
"We start with node representations { h (0) i } Ni =1 , and transform them by applying L layers of R-GCN obtaining { h ( L ) i } Ni =1 .",
"Together with a representation q of the query, we define a distribution over candidate answers and we train maximizing the likelihood of observations.",
"The probability of selecting a candidate c C q as an answer is then P ( c | q, C q , S q ) exp (cid:18) max i M c f o ([ q , h ( L ) i ]) (cid:19) , (1) where f o is a parameterized affine transformation, and M c is the set of node indices such that i M c only if node v i is a mention of c .",
"The max operator in Equation 1 is necessary to select the node with highest predicted probability since a candidate answer is realized in multiple locations via different nodes.",
"Keeping in mind we want an efficient model, we encode words in supporting documents and in the query using only a pre-trained model for contextualized word representations rather than training our own encoder.",
"Specifically, we use ELMo 2 (Pe-ters et al., 2018), a pre-trained bi-directional lan-2 The use of ELMo is an implementation choice, and, in principle, any other contextual pre-trained model could be used (Radford et al., 2018; Devlin et al., 2018).",
"guage model that relies on character-based input representation.",
"ELMo representations, differently from other pre-trained word-based models (e.g., word2vec (Mikolov et al., 2013) or GloVe (Pen-nington et al., 2014)), are contextualized since each token representation depends on the entire text excerpt (i.e., the whole sentence).",
"We choose not to fine tune nor propagate gradients through the ELMo architecture, as it would have defied the goal of not having specialized RNN encoders.",
"In the experiments, we will also ablate the use of ELMo showing how our model behaves using non-contextualized word representations (we use GloVe).",
"Documents pre-processing ELMo encodings are used to produce a set of representations { x i } Ni =1 , where x i RD denotes the i th candidate mention in context.",
"Note that these representations do not depend on the query yet and no trainable model was used to process the documents so far, that is, we use ELMo as a fixed pre-trained encoder.",
"Therefore, we can pre-compute representation of mentions once and store them for later use.",
"Query-dependent mention encodings ELMo encodings are used to produce a query representation q RK as well.",
"Here, q is a concatenation of the final outputs from a bidirectional RNN layer trained to re-encode ELMo representations of words in the query.",
"The vector q is used to compute a query-dependent representation of mentions { x i } Ni =1 as well as to compute a probability distribution over candidates (as in Equation 1).",
"Query-dependent mention encodings x i = f x ( q , x i ) are generated by a trainable function f x which is parameterized by a feed-forward neural network.",
"Our model uses a gated version of the original R-GCN propagation rule.",
"At the first layer, all hidden node representation are initialized with the query-aware encodings h (0) i = x i .",
"Then, at each layer 0 (cid:96) L , the update message u ( (cid:96) ) i to the i th node is a sum of a transformation f s of the current node representation h ( (cid:96) ) i and transformations of its neighbours: u ( (cid:96) ) i = f s ( h ( (cid:96) ) i ) + 1 |N i | (cid:88) j N i (cid:88) r R ij f r ( h ( (cid:96) ) j ) , (2) where N i is the set of indices of nodes neighbouring the i th node, R ij is the set of edge annotations between i and j , and f r is a parametrized function specific to an edge type r R .",
"Recall the available relations from Section 2.2, namely, R = { DOC-BASED , MATCH , COREF , COMPLEMENT } .",
"A gating mechanism regulates how much of the update message propagates to the next step.",
"This provides the model a way to prevent completely overwriting past information.",
"Indeed, if all necessary information to answer a question is present at a layer which is not the last, then the model should learn to stop using neighbouring information for the next steps.",
"Gate levels are computed as a ( (cid:96) ) i = (cid:16) f a (cid:16) [ u ( (cid:96) ) i , h ( (cid:96) ) i ] (cid:17)(cid:17) , (3) where ( ) is the sigmoid function and f a a parametrized transformation.",
"Ultimately, the updated representation is a gated combination of the previous representation and a non-linear transformation of the update message: h ( (cid:96) +1) i = ( u ( (cid:96) ) i ) (cid:12) a ( (cid:96) ) i + h ( (cid:96) ) i (cid:12) (1 a ( (cid:96) ) i ) , (4) where ( ) is any nonlinear function (we used tanh ) and (cid:12) stands for element-wise multiplication.",
"All transformations f are affine and they are not layer-dependent (since we would like to use as few parameters as possible to decrease model complexity promoting efficiency and scalability).",
"In this section, we compare our method against recent work as well as preforming an ablation study using the WIKIHOP dataset (Welbl et al., 2018).",
"See Appendix A in the supplementary material for a description of the hyper-parameters of our model and training details.",
"WIKIHOP We use WIKIHOP for training, val-idation/development and test.",
"The test set is not publicly available and therefore we measure performance on the validation set in almost all experiments.",
"WIKIHOP has 43,738/ 5,129/ 2,451 query-documents samples in the training, validation and test sets respectively for a total of 51,318 samples.",
"Authors constructed the dataset as described in Section 2.1 selecting samples with a graph traversal up to a maximum chain length of 3 documents (see Table 1 for additional dataset statistics).",
"WIKIHOP comes in two versions, a Min Max Avg.",
"standard (unmasked) one and a masked one.",
"The masked version was created by the authors to test whether methods are able to learn lexical abstraction.",
"In this version, all candidates and all mentions of them in the support documents are replaced by random but consistent placeholder tokens.",
"Thus, in the masked version, mentions are always referred to via unambiguous surface forms.",
"We do not use coreference systems in the masked version as they rely crucially on lexical realization of mentions and cannot operate on masked tokens.",
"In this experiment, we compare our Enitity-GCN against recent prior work on the same task.",
"We present test and development results (when present) for both versions of the dataset in Table 2. From Welbl et al. (2018), we list an oracle based on human performance as well as two standard reading comprehension models, namely BiDAF (Seo et al., 2016) and FastQA (Weissenborn et al., 2017).",
"We also compare against Coref-GRU (Dhingra et al., 2018), MHPGM (Bauer et al., 2018), and Weaver (Rai-son et al., 2018).",
"Additionally, we include results of MHQA-GRN (Song et al., 2018), from a recent arXiv preprint describing concurrent work.",
"They jointly train graph neural networks and recurrent encoders.",
"We report single runs of our two best single models and an ensemble one on the unmasked test set (recall that the test set is not publicly available and the task organizers only report unmasked results) as well as both versions of the validation set.",
"Entity-GCN (best single model without coreference edges) outperforms all previous work by over 2% points.",
"We additionally re-ran BiDAF baseline to compare training time: when using a single Titan X GPU, BiDAF and Entity-GCN process 12.5 and 57.8 document sets per second, respectively.",
"Note that Welbl et al. (2018) had to use BiDAF with very small state dimensionalities Model Unmasked Masked Test Dev Test Dev Human (Welbl et al., 2018) 74.1 FastQA (Welbl et al., 2018) 25.7 35.8 BiDAF (Welbl et al., 2018) 42.9 54.5 Coref-GRU (Dhingra et al., 2018) 59.3 56.0 MHPGM (Bauer et al., 2018) 58.2 Weaver / Jenga (Raison et al., 2018) 65.3 64.1 MHQA-GRN (Song et al., 2018) 65.4 62.8 Entity-GCN without coreference (single model) 67.6 64.8 70.5 Entity-GCN with coreference (single model) 66.4 65.3 Entity-GCN* (ensemble 5 models) 71.2 68.5 71.6 Table 2: Accuracy of different models on WIKIHOP closed test set and public validation set.",
"(20), and smaller batch size due to the scalabil-ity issues (both memory and computation costs).",
"We compare applying the same reductions.",
"3 Eventually, we also report an ensemble of 5 independently trained models.",
"All models are trained on the same dataset splits with different weight initializations.",
"The ensemble prediction is obtained as arg max c 5 (cid:81) i =1 P i ( c | q, C q , S q ) from each model.",
"To help determine the sources of improvements, we perform an ablation study using the publicly available validation set (see Table 3).",
"We perform two groups of ablation, one on the embedding layer, to study the effect of ELMo, and one on the edges, to study how different relations affect the overall model performance.",
"Embedding ablation We argue that ELMo is crucial, since we do not rely on any other context encoder.",
"However, it is interesting to explore how our R-GCN performs without it.",
"Therefore, in this experiment, we replace the deep contextualized embeddings of both the query and the nodes with GloVe (Pennington et al., 2014) vectors (insensi-tive to context).",
"Since we do not have any component in our model that processes the documents, we expect a drop in performance.",
"In other words, in this ablation our model tries to answer questions 3 Besides, we could not run any other method we compare with combined with ELMo without reducing the dimensionality further or having to implement a distributed version.",
"without reading the context at all .",
"For example, in Figure 1, our model would be aware that Stock-holm and Sweden appear in the same document but any context words, including the ones encoding relations (e.g., is the capital of) will be hidden.",
"Besides, in the masked case all mentions become unknown' tokens with GloVe and therefore the predictions are equivalent to a random guess.",
"Once the strong pre-trained encoder is out of the way, we also ablate the use of our R-GCN component, thus completely depriving the model from inductive biases that aim at multi-hop reasoning.",
"The first important observation is that replacing ELMo by GloVe (GloVe with R-GCN in Table 3) still yields a competitive system that ranks far above baselines from (Welbl et al., 2018) and even above the Coref-GRU of Dhingra et al. (2018), in terms of accuracy on (unmasked) validation set.",
"The second important observation is that if we then remove R-GCN (GloVe w/o R-GCN in Table 3), we lose 8.0 points.",
"That is, the R-GCN component pushes the model to perform above Coref-GRU still without accessing context, but rather by updating mention representations based on their relation to other ones.",
"These results highlight the impact of our R-GCN component.",
"Graph edges ablation In this experiment we investigate the effect of the different relations available in the entity graph and processed by the R-GCN module.",
"We start off by testing our stronger encoder (i.e., ELMo) in absence of edges connecting mentions in the supporting documents (i.e., us-Model unmasked masked full (ensemble) 68.5 71.6 full (single) 65.1 0.11 70.4 0.12 GloVe with R-GCN 59.2 11.1 GloVe w/o R-GCN 51.2 11.6 No R-GCN 62.4 63.2 No relation types 62.7 63.9 No DOC-BASED 62.9 65.8 No MATCH 64.3 67.4 No COREF 64.8 No COMPLEMENT 64.1 70.3 Induced edges 61.5 56.4 Table 3: Ablation study on WIKIHOP validation set.",
"ing only self-loops No R-GCN in Table 3).",
"The results suggest that WIKIPHOP genuinely requires multihop inference, as our best model is 6.1% and 8.4% more accurate than this local model, in unmasked and masked settings, respectively.",
"4 However, it also shows that ELMo representations capture predictive context features, without being explicitly trained for the task.",
"It confirms that our goal of getting away with training expensive document encoders is a realistic one.",
"We then inspect our model's effectiveness in making use of the structure encoded in the graph.",
"We start naively by fully-connecting all nodes within and across documents without distinguishing edges by type (No relation types in Table 3).",
"We observe only marginal improvements with respect to ELMo alone (No R-GCN in Table 3) in both the unmasked and masked setting suggesting that a GCN operating over a naive entity graph would not add much to this task and a more informative graph construction and/or a more sophisticated parameterization is indeed needed.",
"Next, we ablate each type of relations independently, that is, we either remove connections of mentions that co-occur in the same document ( DOC-BASED ), connections between mentions matching exactly ( MATCH ), or edges predicted by the coreference system ( COREF ).",
"The 4 Recall that all models in the ensemble use the same local representations, ELMo.",
"first thing to note is that the model makes better use of DOC-BASED connections than MATCH or COREF connections.",
"This is mostly because",
"i) the majority of the connections are indeed between mentions in the same document, and",
"ii) without connecting mentions within the same document we remove important information since the model is unaware they appear closely in the document.",
"Secondly, we notice that coreference links and complement edges seem to play a more marginal role.",
"Though it may be surprising for coreference edges, recall that the MATCH heuristic already captures the easiest coreference cases, and for the rest the out-of-domain coreference system may not be reliable.",
"Still, modelling all these different relations together gives our Entity-GCN a clear advantage.",
"This is our best system evaluating on the development.",
"Since Entity-GCN seems to gain little advantage using the coreference system, we report test results both with and without using it.",
"Surprisingly, with coreference, we observe performance degradation on the test set.",
"It is likely that the test documents are harder for the coreference system.",
"5 We do perform one last ablation, namely, we replace our heuristic for assigning edges and their labels by a model component that predicts them.",
"The last row of Table 3 (Induced edges) shows model performance when edges are not predetermined but predicted.",
"For this experiment, we use a bilinear function f e ( x i , x j ) = (cid:0) x (cid:62) i W e x j (cid:1) that predicts the importance of a single edge connecting two nodes i, j using the query-dependent representation of mentions (see Section 2.3).",
"The performance drops below No R-GCN' suggesting that it cannot learn these dependencies on its own.",
"Most results are stronger for the masked settings even though we do not apply the coreference resolution system in this setting due to masking.",
"It is not surprising as coreferred mentions are labeled with the same identifier in the masked version, even if their original surface forms did not match (Welbl et al. (2018) used WIKIPEDIA links for masking).",
"Indeed, in the masked version, an entity is always referred to via the same unique surface form (e.g., MASK1 ) within and across documents.",
"In the unmasked setting, on the other hand, mentions to an entity may differ (e.g., US vs United States) and they might not be retrieved by the coreference system we are employing, mak-5 Since the test set is hidden from us, we cannot analyze this difference further.",
"ing the task harder for all models.",
"Therefore, as we rely mostly on exact matching when constructing our graph for the masked case, we are more effective in recovering coreference links on the masked rather than unmasked version.",
"6 4 Error Analysis In this section we provide an error analysis for our best single model predictions.",
"First of all, we look at which type of questions our model performs well or poorly.",
"There are more than 150 query types in the validation set but we filtered the three with the best and with the worst accuracy that have at least 50 supporting documents and at least 5 candidates.",
"We show results in Table 4. We observe that questions regarding places (birth and death) are considered harder for Entity-GCN.",
"We then inspect samples where our model fails while assigning highest likelihood and noticed two principal sources of failure",
"i) a mismatch between what is written in WIKIPEDIA and what is annotated in WIKIDATA , and",
"ii) a different degree of granularity (e.g., born in London vs UK could be considered both correct by a human but not when measuring accuracy).",
"See Table 6 in the supplement material for some reported samples.",
"Secondly, we study how the model performance degrades when the input graph is large.",
"In particular, we observe a negative Pearson's correlation (-0.687) between accuracy and the number of candidate answers.",
"However, the performance does not decrease steeply.",
"The distribution of the number of candidates in the dataset peaks at 5 and has an average of approximately 20.",
"Therefore, the model 6 Though other systems do not explicitly link matching mentions, they similarly benefit from masking (e.g., masks essentially single out spans that contain candidate answers).",
"does not see many samples where there are a large number of candidate entities during training.",
"Differently, we notice that as the number of nodes in the graph increases, the model performance drops but more gently (negative but closer to zero Pearson's correlation).",
"This is important as document sets can be large in practical applications.",
"See Figure 3 in the supplemental material for plots.",
"In previous work, BiDAF (Seo et al., 2016), FastQA (Weissenborn et al., 2017), Coref-GRU (Dhingra et al., 2018), MHPGM (Bauer et al., 2018), and Weaver / Jenga (Raison et al., 2018) have been applied to multi-document question answering.",
"The first two mainly focus on single document QA and Welbl et al. (2018) adapted both of them to work with WIKIHOP .",
"They process each instance of the dataset by concatenating all d S q in a random order adding document separator tokens.",
"They trained using the first answer mention in the concatenated document and evaluating exact match at test time.",
"Coref-GRU, similarly to us, encodes relations between entity mentions in the document.",
"Instead of using graph neural network layers, as we do, they augment RNNs with jump links corresponding to pairs of corefereed mentions.",
"MHPGM uses a multi-attention mechanism in combination with external commonsense relations to perform multiple hops of reasoning.",
"Weaver is a deep co-encoding model that uses several alternating bi-LSTMs to process the concatenated documents and the query.",
"Graph neural networks have been shown successful on a number of NLP tasks (Marcheggiani and Titov, 2017; Bastings et al., 2017; Zhang et al., 2018a), including those involving document level modeling (Peng et al., 2017).",
"They have also been applied in the context of asking questions about knowledge contained in a knowledge base (Zhang et al., 2018b).",
"In Schlichtkrull et al. (2018), GCNs are used to capture reasoning chains in a knowledge base.",
"Our work and unpublished concurrent work by Song et al. (2018) are the first to study graph neural networks in the context of multi-document QA.",
"Besides differences in the architecture, Song et al. (2018) propose to train a combination of a graph recurrent network and an RNN encoder.",
"We do not train any RNN document encoders in this work.",
"We designed a graph neural network that operates over a compact graph representation of a set of documents where nodes are mentions to entities and edges signal relations such as within and cross-document coreference.",
"The model learns to answer questions by gathering evidence from different documents via a differentiable message passing algorithm that updates node representations based on their neighbourhood.",
"Our model outperforms published results where ablations show substantial evidence in favour of multistep reasoning.",
"Moreover, we make the model fast by using pre-trained (contextual) embeddings.",
"We would like to thank Johannes Welbl for helping to test our system on WIKIHOP .",
"This project is supported by SAP Innovation Center Network, ERC Starting Grant BroadSem (678254) and the Dutch Organization for Scientific Research (NWO) VIDI 639.022.518.",
"Wilker Aziz is supported by the Dutch Organisation for Scientific Research (NWO) VICI Grant nr. 277-89-002."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"result",
"other",
"method",
"method",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"result",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"other",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"other",
"abstain",
"result",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"other",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"method",
"objective",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"method",
"method",
"abstain",
"result",
"method",
"other",
"other",
"other"
] |
[
"Machine Translation Quality Estimation (QE) aims to build predictive models to assess the quality of machine-generated translations in the absence of reference translations.",
"While state-of-the-art QE models have been shown to achieve good results, they over-rely on features that do not have a causal impact on the quality of a translation.",
"In particular, there appears to be a partial input bias, i.e., a tendency to assign high-quality scores to translations that are fluent and grammatically correct, even though they do not preserve the meaning of the source.",
"We analyse the partial input bias in further detail and evaluate four approaches to use auxiliary tasks for bias mitigation.",
"Two approaches use additional data to inform and support the main task, while the other two are adversarial, actively discouraging the model from learning the bias.",
"We compare the methods with respect to their ability to reduce the partial input bias while maintaining the overall performance.",
"We find that training a multitask architecture with an auxiliary binary classification task that utilises additional augmented data best achieves the desired effects and generalises well to different languages and quality metrics.",
"Despite the great advances of Machine Translation (MT) models over the past years, the adequacy and fluency of the translations cannot be guaranteed.",
"Without access to a gold-standard reference translation, it can be difficult to validate the reliability of the MT model's predictions.",
"To address this issue, the field of MT Quality Estimation (QE) emerged, aiming to develop models that can approximate the quality of machine-generated translations in a scalable way.",
"However, recent research suggests that state-of-the-art QE approaches tend to over-rely on features that do not have a causal impact on the quality of a translation.",
"In particular, there appears to be a partial input bias, i.e. a tendency to assign high quality scores to translations that are fluent and grammatical, even though they do not resemble the actual meaning of the source (Sun et al., 2020).",
"Building upon these findings, the objective of our work is to characterise and, most importantly, mitigate the partial input bias of QE models.",
"We focus on the use of auxiliary training tasks to specifically target the observed biases while avoiding strong modifications of the original model as well as the expensive collection and manual labelling of additional training data.",
"Our efforts concentrate on testing and improving MonoTransQuest , the best-performing architecture in the shared task on sentence-level QE hosted as part of the Fifth Conference on Machine Translation (WMT 2020) (Specia et al., 2020).",
"We work with the recently published multilingual QE dataset MLQE-PE (Fomicheva et al., 2020), allowing us to test the generalisability of our approaches across different languages and quality scores.",
"Our main contributions are as follows: Bias analysis.",
"We expand on previous research which suggested the partial input bias in QE and find that the model as well as the annotators tend to over-rate the quality of fluent but inadequate translations.",
"Bias mitigation.",
"To the best of our knowledge, we are the first to explore the mitigation of biases with auxiliary tasks in the field of QE.",
"We group our approaches into four categories: Multitask training with mixed languages, multitask training with additional augmented data, training with adversarial tasks and training with debiased focal loss.",
"New architectures.",
"We implement and compare several multitask architectures and find that iteratively training the tasks with two optimisers is better suited for our objective than backpropagating a weighted sum of the losses.",
"Further, we reformulate focal loss for regres-1475 sion tasks, a technique that is traditionally based upon the cross-entropy loss.",
"Results.",
"Utilising the multitask architecture, we successfully reduce the partial input bias while maintaining the same performance as the benchmark model and examine the best model's robustness.",
"In the subsequent sections, we first present related work, followed by the analysis of the partial input bias.",
"Building upon the findings, we explain the four bias mitigation approaches in Section 4 and discuss the results in Section 5.",
"QE is an area of research concerned with the development of models for the prediction of the quality of machine-generated translations when gold standard translations are not available.",
"QE is normally addressed as a supervised machine learning task, which may take as input general information from the source and translated texts, as well as from the MT system.",
"The quality is typically assessed at sentence level, but wordand document-level QE are also possible (Specia et al., 2018, pp. 2).",
"Sentence-level QE has evolved from the first feature-heavy prediction models (Blatz et al., 2004) to neural architectures such as RNNs and Transformers (Vaswani et al., 2017), which accelerated the developments in the field by reducing the work of manual feature engineering and improving contextual representations (Kim et al., 2017; Wang et al., 2018; Fan et al., 2019).",
"A prominent state-of-the-art QE architecture is MonoTransQuest, proposed by Ranasinghe et al. (2020).",
"It builds upon XLM-R, a popular pre-trained cross-lingual language model with a good ability to generalise to low-resource languages (Conneau et al., 2020).",
"MonoTransQuest achieved the best results for sentence-level direct assessment score prediction in the WMT 2020 shared task on QE (Specia et al., 2020).",
"Sun et al. (2020) showed that QE models like MonoTransQuest have a tendency to over-rely on spurious correlations, which is partially due to skewed label distributions and statistical artifacts in QE datasets.",
"In particular, they show the existence of a partial input bias, i.e. the tendency to predict the quality of a translation based on just the target sentence (Poliak et al., 2018).",
"While the fluency and grammatical correctness of the output is a fac-tor influencing the quality, the original meaning should be preserved, which is only possible if the model takes both source and target into consideration.",
"Following their work, in an attempt to reduce statistical artifacts, MLQE-PE (Fomicheva et al., 2020) a new QE dataset diversifying the topics and languages covered was created, which forms the basis of this work and will be described in more detail in Section 3.1.",
"We define auxiliary tasks in a broad sense, using the term to refer to settings where a main task is trained alongside one or more helper tasks used to improve the main task's performance and generalisability.",
"Most commonly, the tasks are trained in a multitask-setting, where some layers are shared across the tasks and some layers are task-specific.",
"The auxiliary tasks can either be related to the main task or adversarial (Ruder, 2017).",
"In addition, we consider the concept of debiased focal loss , where the main and auxiliary task are trained in separate models which are connected via the loss function (Karimi Mahabadi et al., 2020).",
"Related Tasks: In settings where the data is limited, noisy or high-dimensional, using additional tasks is a way of introducing an inductive bias that prevents the model from overfitting to noise (Caruana, 1997).",
"In addition, the model might be able to use new features that were learned through an auxiliary task for the main task as well (Ruder, 2019).",
"MT models, for example, have been shown to benefit from auxiliary tasks such as named entity recognition, part-of-speech tagging and dependency parsing (Niehues and Cho, 2017; Kiperwasser and Ballesteros, 2018).",
"Adversarial Tasks: Adversarial tasks can be used to actively discourage the model from overfitting to domain-specific, spurious cues.",
"The technique was introduced by Ganin and Lempitsky (2015) and used for domain adaptation.",
"More recently, it has been successfully used to reduce partial input biases in different fields of NLP, such as natural language inference (NLI) (Belinkov et al., 2019; Stacey et al., 2020) and visual question answering (VQA) (Ramakrishnan et al., 2018).",
"The core idea is to train the auxiliary task using just the partial input.",
"During backpropagation, the gradient is reversed.",
"Consequently, the shared layers are updated such that the adversary's loss is maximised; 1476 the undesired behaviour is penalised.",
"The methodology chapter illustrates the architectural design in more detail.",
"Debiased Focal Loss : Another approach that has recently been used to mitigate known biases, particularly partial input biases, is debiased focal loss.",
"The notion of focal loss was first introduced by Lin et al. (2017) as a means to improve classification results on imbalanced classes by weighing down the impact of samples that the model had already learned to classify well.",
"In the field of NLI, Karimi Mahabadi et al. (2020) have shown that it is possible to adapt the notion of focal loss to mitigate partial input biases.",
"They train the main model alongside a bias model that learns to predict the label based on the hypothesis only.",
"In this scenario, the bias model's predictions are used to weight the main model's cross-entropy loss.",
"Intuitively, samples that are classified well by the bias model are weighted down so that the main model primarily learns from less biased inputs.",
"The bias model is updated separately and discarded after training.",
"We work with the MLQE-PE dataset (Fomicheva et al., 2020) which was specifically designed for the training of MT QE models.",
"Published in 2020, it formed the basis for the WMT 2020 and 2021 shared tasks on Quality Estimation (Specia et al., 2020).",
"1 It consists of 6 high-, midand low-resource language pairs which originate from Wikipedia articles: English-German and English-Chinese, Romanian-English and Estonian-English as well as Nepalese-English and Sinhala-English.",
"A seventh dataset, Russian-English, was curated from Reddit posts and WikiQuotes.",
"The translations were generated using Transformer-based Neural MT models.",
"For each language, 9000 sentence pairs (7000 train, 1000 dev, 1000 test) were annotated on two different scales: Human-targeted Translation Edit Rate (HTER) : Each sentence-pair was edited by two independent translators.",
"HTER score is the averaged edit rate comparing the machine-generated translations and the post-edited versions.",
"The score ranges between 0 (perfect translation) to 1 (everything was edited).",
"Direct Assessment Scores (DA) : Each sentence pair was judged on a scale from 0-100 by at least 3 evaluators.",
"The reported DA score is the mean of the individual judgements.",
"Different than the HTER scores, the DA scale provides a measure of the severity of the errors, where inadequate (i.e. non-meaning preserving) translations should not receive a score higher than 70, even if only one word is incorrect.",
"We use the XLM-R based architecture MonoTransQuest as our baseline model, which fine-tunes XLM-R for sentence-level QE (Ranasinghe et al., 2020).",
"While there are alternative candidates with a good performance on QE tasks, MonoTransQuest was chosen for several reasons: State-of-the-art performance, availability and replicability (all hyperparameters and the source code are open-sourced), as well as the generic design of the architecture which is transferable to related NLP domains.",
"We train separate MonoTransQuest models for each combination of language pair and quality score using the originally proposed architecture and fine-tuned hyperparameters specified in the TransQuest GitHub repository.",
"2 All experiments were conducted on a 16GB Nvidia Tesla P100 GPU and averaged across five trainings on the seeds 555, 666, 777, 888 and 999.",
"Our results are shown in Table 3 in the Appendix.",
"In QE, the best practice is to use Pearson's r to measure performance (Specia et al., 2018, pp. 58).",
"Most notably, the Pearson correlation between the predictions and the labels is lowest for the high-resource languages English-German and English-Chinese.",
"This has also been observed in the QE shared task findings (Specia et al., 2020).",
"A possible explanation is the high average quality of the generated translations, making the labelling significantly harder and the annotations less consistent, i.e. more noisy.",
"and target and testing how the performance changes when the prediction is based on only one of the two.",
"If the performance does not significantly decrease, the model has likely learned to base its predictions mostly on one part of the input.",
"Figure 1 shows the results from this experiment.",
"A clear target sentence bias can be observed for the English-German and English-Chinese language pairs.",
"One reason could be the good quality of the translations that MT systems generate for high-resource languages: The occurrence of adequacy errors is lower, so that the target sentence may suffice for a decent prediction.",
"In contrast, the mid-resource Romanian-English model, which shows the best overall performance, appears to be most dependent on both inputs.",
"Figure 1 shows a clear performance deterioration when the model is tested on just the source or target sentence.",
"One particularity of the RO-EN dataset is the high abundance of fluent, but clearly inadequate translations and hallucinations which require both the source and translation to be detected (Specia et al., 2020).",
"The Russian-English dataset is an exception where the source sentence is a good predictor for the translation quality, most likely due to the distinct nature of Reddit data and WikiQuotes (both user-generated).",
"This source sentence bias could best be mitigated by curating a new dataset which is why we chose not to focus our efforts on the Russian-English dataset.",
"To further examine the nature of the partial input bias, an in-depth analysis of the strongly affected English-German translations was conducted.",
"In particular, the aim was to better understand how MonoTransQuest, but also the annotators, judge the quality of fluent but inadequate translations.",
"To achieve this, one of the authors, a German native speaker, manually annotated translations in the test set that are grammatically correct but do not preserve the meaning of the source.",
"3 In total, 145 out of 1000 translations were marked as fluent but inadequate.",
"A key takeaway from the labelling process was that it is not only the models that have a partial input bias human annotators clearly seem to over-rely on the target fluency, too.",
"Even if the instructions clearly specify that a DA score below 70 should be assigned to inadequate translations, 4 annotators tended to give higher scores if the sentence was fluent and appeared logical.",
"Figure 2 shows that more than half of the fluent but inadequate translations were given a score higher than 70, with an average rating of 81.",
"5 A likely reason is that adequacy-related mistakes are easy to miss when considering several quality factors, i.e. spelling, grammar and content, at the same time.",
"Based on the bias analysis, our goal is to find an effective and feasible way to reduce the impact of spurious correlations and overly dominant features.",
"As outlined in the previous section, the two high-resource datasets (EN-DE and EN-ZH) clearly show the strongest partial input bias.",
"They will therefore be at the centre of the bias mitigation efforts.",
"All four methods presented hereinafter share the core idea of using auxiliary tasks to achieve this aim: The main task QE is combined with helper tasks designed to reduce known 3 The annotated dataset is available via https:// github.com/agesb/TransQuest 4 The DA annotation guidelines used in the MLQE-PE data dictate that a score in 7090 indicates a translation that closely preserves the semantics of the source sentence.",
"biases.",
"At test time, the auxiliary tasks can be discarded.",
"Hereinafter, we introduce four approaches and the corresponding model architectures.",
"The first two methods are tailored to combat the biased behaviour by supporting the model with additional data.",
"In contrast, the two alternative, restrictive approaches actively penalise the model for learning unwanted behaviour.",
"We define three criteria to ensure comparability between the approaches: A good solution should 1) mitigate the observed biases, 2) retain the prediction quality of the benchmark model, and 3) avoid computational overhead and interference with the original model's design.",
"We experiment with two different supporting tasks, each combining the main task and the auxiliary task in a multitask setup.",
"The first approach is to train with different language pairs, aiming to transfer information between the language domains.",
"Instead of mixing the languages arbitrarily, we build upon the bias analysis and examine if using a less biased language (RO-EN) to train the auxiliary task can help to reduce biases in the main task (EN-DE or EN-ZH).",
"The bias analysis clearly showed that the models trained on the RO-EN dataset performed poorly when using just the source or target as input, indicating that the predictive power of the individual sentences is low.",
"Thus, the incentive for the multitask model to over-rely on the target should be reduced.",
"In this scenario, both tasks are regression problems and optimise the MSE loss.",
"The second approach is to collect additional translations originating from the same topic and language domain and use it as the input for the auxiliary task.",
"We choose WikiMatrix (Schwenk et al., 2021), a large parallel sentence corpus based on Wikipedia articles, as data source for the experiments.",
"Without further preprocessing, the vast majority of these sentence pairs would qualify as good translations.",
"While labelling on a continuous scale would require manual annotations, augmenting the data to achieve \"bad\" translations is more feasible.",
"Hence, we augment 50% of the data to obtain bad translations.",
"We experiment with two augmentation strategies: First, we shuffle the sentences to create mismatched sentence pairs.",
"Second, we augment the sentence to mimic fluent but inadequate translations as seen in the original MLQE-PE dataset and discussed in Section 3.3.",
"To do so, we implement a contextual augmentation pipeline that uses a language model (XLM-R) to replace 30% of the nouns, adjectives, verbs and adverbs such that the meaning of the sentence is changed while the grammatical correctness is preserved in the majority of cases.",
"6 In both cases, the main task optimises the MSE loss, and the auxiliary task is a binary classification problem using the binary cross-entropy loss.",
"We experiment with two setups that directly penalise the biased behaviour.",
"First, we combine the main task with an adversarial task in a multitask architecture.",
"Intuitively, the adversary is incentivised to predict the quality scores based on the target sentence only.",
"The shared layers, on the other hand, are penalised for learning a mapping between target sentence and scores.",
"The risk of working with an adversarial task setup is that it optimises towards eliminating all cues associated with the adversary.",
"In QE, however, the target sentence provides relevant information, such as grammar and spelling.",
"As a result, the overall model performance might suffer.",
"As an alternative to training with adversarial tasks and a multitask architecture in general, we repurpose the concept of debiased focal loss for regression.",
"While model architecture and training method are different, the underlying idea to use the partial input based predictions to influence the learning remains the same.",
"The subsequent section explains the multitask architecture used for the first three approaches as well as the re-formulated debiased focal loss technique in more detail.",
"To realise the first three approaches, we propose the architecture MultiTransQuest , expanding on the MonoTransQuest baseline.",
"The pre-trained language model XLM-R remains at the core and is entirely shared between tasks.",
"The two key changes affect the final layers and the optimisation strategy: Firstly, we exchange the original prediction head to support multiple tasks.",
"As illustrated in Figure 3, the final layers and loss functions are separate per task, thus allowing the mixing of regression and classification tasks.",
"The figure exemplarily shows the adversarial setup, where the gradients are reversed during back-propagation, i.e. weighted with",
"-1. For the two supportive tasks, we use the same setup but remove the weighted gradient layers and adjust the input and loss function for the auxiliary tasks accordingly.",
"We experiment with different numbers of shared and separate layers.",
"Secondly, we adapt the training procedure to support multiple tasks.",
"The data loader is designed so that it alternates between the tasks per training step, with each batch containing only samples for one task which are then passed through the shared layers and the corresponding task-specific layers.",
"We compare two optimisation strategies: Training the tasks in turns, where backpropagation is performed per batch and task.",
"Each task works with a separate AdamW optimizer to avoid averaging gradients across tasks.",
"Performing one forward pass for every task and combining the calculated losses as a weighted sum which is backpropagated through all layers using a single optimizer.",
"In contrast to the previously discussed multitask approaches, debiased focal loss enables a complete separation of the main model and bias model, thus requiring no changes to the core MonoTransQuest architecture.",
"To the best of our knowledge, (de-biased) focal loss has only been applied to classification tasks so far as it explicitly modifies the cross-entropy loss function.",
"Since our QE task is formulated as a regression problem, we attempt to find an equivalent strategy to weigh down biased examples when working with MSE loss.",
"In our scenario, the bias model is trained on partial inputs, receiving the translated sentence only.",
"The better Figure 3: Multitask architecture with gradient reversal.",
"the bias model's prediction, the lower the MSE and the more biased the sample.",
"In line with the original debiased focal loss idea, we can therefore use the bias model's loss as an indication for the bias per sample.",
"As the MSE loss can vary greatly during training, we decide against training both models in an end-to-end approach.",
"First, the trained bias model is used to predict the respective quality scores for the training set, using only the target.",
"Next, the absolute error for each of the training samples is calculated.",
"We use the error to approximate the partial input bias: The lower the error, the easier it is for the bias model to predict the sample's quality score correctly.",
"To control the scale of the weights, we normalise the error value between 0 and",
"1. The resulting weights w are used to scale the MSE loss of the main model f M before backpropagation.",
"We use the hyperparameter to exponentially scale the loss (Eq. 1).",
"We further experiment with a sigmoid-shaped function scaled between 0 and 1 (Eq. 2).",
"In the following, we present and discuss the results of the experiments conducted.",
"Based on the analysis in Section 3.3, the experiments concentrate on the two most biased datasets English-German and English-Chinese, each in combination with the DA and HTER scores.",
"For each of the four sections, we assess different hyperparameter configurations on the EN-DE validation set.",
"A configuration is considered to be good if the bias is reduced and the overall performance is at least maintained.",
"The most promising variant is then evaluated on the EN-DE and EN-ZH test set, to see if the method generalises across language domains.",
"Finally, we compare the four methods against one another and provide further analyses on the robustness of the best-performing model.",
"7 5.1 Hyperparameters and Design Choices Within each of the four approaches, we experiment with different hyperparameter configurations and design choices.",
"While each setup requires individual fine-tuning, observed trends, backed by Table 4, 5, 6, 7 and 8 in the Appendix, include: For the multitask architecture, training the tasks in turns with separate optimisers results in a good balance between bias reduction and maintaining performance.",
"Backpropagating 7 For reproducibility of the experiments, the source code incl.",
"configurations is published under https://github.",
"com/agesb/TransQuest .",
"All hyperparameters not explicitly mentioned in the paper were kept constant.",
"the weighted loss is also possible, but requires more task-specific fine-tuning.",
"For supportive auxiliary tasks, more separate layers, i.e. a larger degree of freedom, and a larger batch size improve the performance, for adversarial tasks the opposite is the case.",
"When augmenting additional WikiMatrix data, shuffling the sentence pairs achieves better results than mimicking fluent but inadequate translations with contextual augmentation.",
"The effect of the debiased focal loss technique is limited.",
"A sigmoid-shaped weight distribution does not improve the results.",
"Table 1 summarises the results obtained for each of the four methods.",
"With respect to the choice of architecture, MultiTransQuest, used for methods 1-3, reduces the partial input bias more effectively than MonoTransQuest trained with focal loss.",
"A key advantage of the multitask architecture is that the model is able to learn a balance between the tasks.",
"In contrast, the degree of freedom is significantly limited for the focal loss architecture, where the main hyperparameter is how to scale the weights.",
"We believe that this limitation is what makes the model even more sensitive to the inseparability of the bias and helpful features.",
"Contrasting the multitask-training with related or adversarial tasks, we find that the two supportive methods maintain a solid performance across all four constellations, while also reducing the bias.",
"Compared to this, the adversarial approach gen-1481 eralises less well, despite its successful application in NLI and VQA.",
"We hypothesise that this discrepancy is rooted in the nature of the partial inputs: In VQA as well as NLI, the task can only be solved when considering both question and image or premise and hypothesis, respectively.",
"In contrast, the translation provides information that is valuable for the QE model regardless of the source, such as the fluency of the generated sentence.",
"Hence, it is difficult to isolate the bias from valuable information, an assumption that both adversarial training and the focal loss technique rely on.",
"Without an unbiased reference dataset (which is hard to acquire due to the subjective nature of the annotation process) the line between desired information and bias is difficult to quantify.",
"The lower the correlation between the existence of the bias and the performance of the adversarial task, the noisier the feedback that is propagated into the shared layers.",
"The best trade-off between overall performance and bias reduction is achieved with MultiTransQuest when combining the main task with a binary classification task trained on shuffled WikiMatrix data.",
"The binary classification task is simple to learn, yet impossible to solve without paying equal attention to source and translation.",
"For better illustration of the model behaviour and improvements, Figure 6 in the Appendix directly compares the performance and bias reduction achieved by the best model to the benchmark.",
"In addition, Figures 7 and 8 show the distribution of DA and HTER predictions generated by the debiased model.",
"Since the reduction of the performance on the target sentence is only considering the reduction of the partial input bias, we additionally aim to test the model's ability to generalise better on datasets that barely exhibit the partial input bias.",
"As a feasible alternative to collecting an unbiased reference dataset in the same language domain, we assess the models' robustness in a zero-shot setting on less biased RO-EN data.",
"As elaborated on in Section 3.3, the RO-EN dataset provokes the partial input bias significantly less than the other language pairs.",
"Consequently, a model with reduced partial input bias should perform better when tested on the dataset, indicating improved robustness.",
"We train the MonoTransQuest benchmark and debiased MultiTransQuest architecture on the EN-DE and EN-ZH datasets and use these models to predict the respective scores on the RO-EN dataset.",
"Since this is an out-of-domain setting, we do not expect the models to reach a performance that can compete with the models trained on Romanian-English data.",
"However, the debiased MultiTransQuest models should outperform MonoTransQuest in this zero-shot scenario, which is indeed the case as can be seen from Table",
"2. EN-DE model EN-ZH model DA HTER DA HTER MonoTQ 0.3756 0.3466 0.494 0.3650 MultiTQ 0.5601 0.3543 0.5226 0.4334 Table 2: Zero-shot prediction quality on the RO-EN dataset (Measured with Person's r ).",
"Building upon the previously discussed results, we propose ideas for future work.",
"Considering the experimental design, the multitask architecture provides additional degrees of freedom that were not explored extensively, yet.",
"For example, one could vary the amount of training per task or learn the training schedule as a parameter which adapts dynamically during the training process (Kiperwasser and Ballesteros, 2018; Zaremoodi et al., 2018).",
"In addition, the number of auxiliary tasks could be increased to two or more, mixing different task types.",
"To further evaluate the generalisability of the proposed methods, experiments with additional datasets, low-resource language pairs as well as alternative QE architectures and language models could be conducted, too.",
"Going beyond the field of Machine Translation Quality Estimation, it would be interesting to see the methods applied in adjacent areas of NLP.",
"For example, this could entail closely related settings, such as quality estimation for machine-generated text summaries, as well as the fields of NLI and VQA, both of which face partial input biases.",
"Other observable biases could also be considered as candidates for the use of targeted bias reduction techniques, provided that it is possible to design a counterbalancing auxiliary task or isolate the bias well enough to deploy adversarial approaches.",
"We think that if the latter scenario applies, the adapted debiased focal loss technique for regression could be worth further exploration, too.",
"This paper expands on recent research which suggests that QE models are susceptible to learning spurious correlations.",
"Based on additional analysis, and inspired by related work in the fields of NLI and VQA, we propose a range of auxiliary tasks that inform the main Quality Estimation task during training and are discarded at test time.",
"First, we train the main Quality Estimation task together with additional, less biased data in a multitask setting.",
"Then, we explore adversarial training and debiased focal loss to directly target the partial input bias.",
"We find that the former approaches yield more stable results than the latter and conjecture that this is due to the difficulty of isolating partial input bias effects from useful predictive information encoded in the translation.",
"We show that our proposed multitask architecture MultiTransQuest, especially when trained with additional shuffled WikiMatrix data, generalises well across the two most biased language pairs and the two different quality scores.",
"Our method retains the overall prediction quality, reduces the observed biases significantly and increases the models' robustness in a zero-shot setting.",
"Marina Fomicheva and Lucia Specia were supported by funding from the Bergamot project (EU H2020 Grant No. 825303)."
] | [
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"result",
"abstain",
"objective",
"result",
"objective",
"objective",
"objective",
"objective",
"objective",
"abstain",
"objective",
"method",
"objective",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"objective",
"objective",
"result",
"objective",
"result",
"other"
] |
[
"Rap generation, which aims to produce lyrics and corresponding singing beats, needs to model both rhymes and rhythms.",
"Previous works for rap generation focused on rhyming lyrics but ignored rhythmic beats, which are important for rap performance.",
"In this paper, we develop DeepRapper , a Transformer-based rap generation system that can model both rhymes and rhythms.",
"Since there is no available rap dataset with rhythmic beats, we develop a data mining pipeline to collect a large-scale rap dataset, which includes a large number of rap songs with aligned lyrics and rhythmic beats.",
"Second, we design a Transformer-based autoregressive language model which carefully models rhymes and rhythms.",
"Specifically, we generate lyrics in the reverse order with rhyme representation and constraint for rhyme enhancement and insert a beat symbol into lyrics for rhythm/beat modeling.",
"To our knowledge, DeepRapper is the first system to generate rap with both rhymes and rhythms.",
"Both objective and subjective evaluations demonstrate that DeepRapper generates creative and high-quality raps with rhymes and rhythms.",
"Rap is a musical form originating from America in 1970s, and has quickly developed as one of the mainstream music genres in the world (Keyes, 2004).",
"With the rapid development of artificial intelligence, automatic rap lyrics generation has drawn attention from academia (Potash et al., 2015; Malmi et al., 2016; Liang et al., 2018; Nikolov et al., 2020).",
"Generally speaking, rap lyrics need to be semantically meaningful and fashionable to convey interesting stories or express feelings.",
"Different from natural language or other artistic genres ( e.g. , Corresponding author: Xu Tan, [email protected] lyrics or poetry), rap has distinctive characteristics: 1) it usually contains complex rhyme patterns among several consecutive sentences, which are the key to form a good flow; 2) it needs to align with the singing beat since rap lyrics are usually rapped according to some rhythmic accompaniments.",
"Therefore, how to generate rap lyrics with good rhymes and rhythms is a troublesome problem.",
"Previous works (Potash et al., 2015; Malmi et al., 2016; Liang et al., 2018; Nikolov et al., 2020) for rap generation mainly focused on lyric generation and some of them developed strategies for rhyme modeling.",
"Potash et al. (2015) directly added a < endLine > token at the end of verse lines and expected to learn rhyme patterns implicitly.",
"Nikolov et al. (2020) applied a two-step strategy, which first generates rap lyrics and then adds rhyme tokens to the end of generated lyrics.",
"However, these methods cannot guarantee the rhyme patterns for every lyric line and only care the rhyme on the last token.",
"Although many works have studied rhyming modeling in other artistic genres ( e.g. , poetry) (Li et al., 2020; Van de Cruys, 2020; Liu et al., 2020), they are not suitable for rap generation due to the complex rhyme structure in rap.",
"For example, poetry needs to rhyme with only the last word in each sentence, while rap rhymes with multiple consecutive tokens at the end of each sentence.",
"No previous works have studied rhythm modeling ( i.e. , beats in rap), to our knowledge.",
"One of the main reasons is the lack of rap datasets with beat-lyric alignment.",
"Consequently, the generation of lyrics without rhythmic beats cannot be regarded as a full rap generation.",
"In this paper, we develop DeepRapper, a Transformer (Vaswani et al., 2017) based rap generation system which can model both rhymes and rhythms.",
"To build the system, since there is no available rap datasets with aligned rhythmic beats, we design a data mining pipeline and collect a large-scale rap dataset for rhythm modeling.",
"Specifically, we first crawl many rap songs, each song with both rap lyrics and audios, from the Web.",
"For each crawled rap song, we perform a series of data preprocessing steps to extract rhythmic beats as well as beat-lyric alignment.",
"To better model rhyme, we generate the words in a rap sentence from right to left in an autoregressive manner.",
"Doing so we can easily identify the last few words of a sentence (now become the first words of the reverse sentence) to rhyme with.",
"Additionally, we incorporate several rhyme-related representations into our language model to further improve the rhyming quality, and encourage N-gram rhymes in the generated lyrics through a rhyme constraint during inference.",
"We use a special token [BEAT] to represent a rhythmic beat and insert it into the lyrics right before the corresponding word.",
"In this way, we can model the beat in the lyric sequence both in training and generation.",
"Inspired by the success of pre-trained language models (Devlin et al., 2019; Radford et al., 2018; Yang et al., 2019; Song et al., 2019; Liu et al., 2019), we incorporate pre-training into our system.",
"To obtain large-scale data for pre-training, we also use our data mining pipeline to collect two additional datasets: 1) non-rap songs with aligned beats, which can be larger than the rap dataset since non-rap songs are more general; and 2) pure lyrics, which can be even larger than non-rap songs.",
"In the pre-training stage, we pre-train our DeepRapper model based on the above two datasets.",
"Then we fine-tune our pre-trained model on the rap songs with aligned beats.",
"The fine-tuned model is used for final rap generation.",
"Both objective and subjective evaluations verify the advantages of DeepRapper in generating rap lyrics with rhymes and rhythms.",
"Our main contributions can be summarized as follows: To model rhythms in rap generation, we develop a data mining pipeline to create rap datasets with aligned rhythmic beats.",
"To better model rhymes, we design an autoregressive language model to generate rap lyrics from right to left with rhyme constraint.",
"To our knowledge, DeepRapper is the first to explicitly model N-gram rhymes.",
"We carefully insert beat tokens inside the lyrics to model rhythmic beats.",
"Since DeepRapper generates rap lyrics with both rhyme and rhythm modeling, in this section, we briefly introduce the related background: lyric generation, rhyme modeling and rhythm modeling.",
"Lyric Generation Broadly speaking, lyric generation covers rap lyric generation (Potash et al., 2015; Nikolov et al., 2020; Liang et al., 2018), song lyric generation (Watanabe et al., 2018; Lu et al., 2019; Chen and Lerch, 2020; Sheng et al., 2020), general poetry generation (Zhang and Lapata, 2014; Lau et al., 2018; Li et al., 2020; Liu et al., 2020), etc.",
"Different from previous works that leverage language model to generate lyrics similar to natural language, in this paper, we introduce a novel language model for rap generation, with well-designed rhyme and rhythm modeling to fit the characteristics of rap lyrics.",
"Additionally, inspired by the successes of pre-trained language models (Devlin et al., 2019; Yang et al., 2019; Liu et al., 2019; Radford et al., 2019; Song et al., 2019) in NLP applications, we also incorporate pre-training into our model to further improve the quality of rap generation.",
"Rhyme Modeling Rhyme modeling plays an important role in rap generation, which requires the last few tokens in consecutive sentences to have the same rhyme pattern.",
"Existing rap generation systems either directly add a special token <endLine> at the end of each lyric line to encourage the model to learn rhyme structure (Potash et al., 2015), or introduce a two-step strategy for rhyme modeling that first generates rap lyrics and then adds rhyme tokens after the generated lyrics (Nikolov et al., 2020).",
"However, these works only focused on unigram rhymes, while rap favors N-gram rhymes.",
"Although a lot of works have explored rhyme modeling in other genres, most of them cannot be directly used for rap generation.",
"For example, poetry generation (Lau et al., 2018; Zhipeng et al., 2019; Liao et al., 2019; Li et al., 2020) usually uses pre-defined formats to control rhyme patterns, since a poem usually has a fixed number of words and only considers the rhyme of the last word.",
"However, rap lyrics have diverse rhyme structures across multiple consecutive sentences and, most importantly, across multiple consecutive words.",
"Figure 1: An overview of our data mining pipeline.",
"Therefore, we introduce N-gram rhyme modeling in DeepRapper to handle the distinctive rhyme patterns in rap.",
"Besides, we also train our language model in reverse order (i.e., right to left), similar to previous work (Van de Cruys, 2020), to better model rhymes, since they always occur at the end of sentences.",
"Rhythm Modeling Rhythm modeling is usually used in music generation (Zhu et al., 2018; Huang and Yang, 2020; Ren et al., 2020) which generates the duration of notes along with the note pitch to form rhythmic beats in melody and accompaniment generation.",
"Different from music generation, rap cares more about rhythmic beats than note pitches (i.e., melody).",
"Thus, the generated rap lyrics need to align with the corresponding rhythmic beats in order to be rapped; otherwise the result cannot be regarded as complete rap.",
"However, to the best of our knowledge, none of the previous works have studied rhythm modeling in rap generation.",
"In this paper, we introduce a novel beat modeling strategy in DeepRapper for rhythm generation.",
"Previous works (Potash et al., 2015; Liang et al., 2018; Nikolov et al., 2020) for rap generation usually used rap datasets with only lyrics, without considering the rhythmic beat information.",
"To model rhythm in rap generation, the rap dataset should contain lyrics with aligned rhythmic beats.",
"However, beat alignments are quite difficult to obtain, since annotating them requires musicians with professional knowledge to identify the stressed syllables in rap songs.",
"To handle this problem, we design a data mining pipeline to automatically extract beat-lyric alignments.",
"In this section, we introduce the details of the data mining pipeline and our mined dataset based on this pipeline.",
"Figure 1 overviews our data mining pipeline, which consists of 5 steps: data crawling, vocal and accompaniment separation, vocal and lyric alignment, beat detection, and lyric and beat alignment.",
"Data Crawling To mine a large-scale rap dataset, we first crawl a large number of rap songs, with both lyrics and singing audio, from the Web.",
"To ensure that the lyrics and audio can be aligned at the sentence level, which is beneficial for our later word-level beat alignment, we also crawl the start and end time of each lyric sentence in the audio.",
"Vocal and Accompaniment Separation For each rap song, we utilize Spleeter (Hennequin et al., 2020) (https://github.com/deezer/spleeter), a public music separation tool, to separate the vocal (containing the rap singing) and the accompaniment (containing the rhythmic beats) from the crawled audio.",
"Vocal and Lyric Alignment We split the separated vocals at the sentence level according to the crawled start and end time of each lyric sentence, and thus obtain vocal-lyric alignments at the sentence level.",
"We then convert the lyrics into phonemes via Phonemizer (https://github.com/bootphon/phonemizer) and utilize the Montreal Forced Aligner (https://github.com/MontrealCorpusTools/Montreal-Forced-Aligner) to obtain vocal-lyric alignments at the phoneme level.",
"Based on these phoneme-level vocal-lyric alignments, we obtain the corresponding timestamp of each word in the singing audio.",
"Beat Detection To obtain the alignments between lyrics and beats, we need to know the timestamp of each beat.",
"Therefore, we use a beat tracking tool, Librosa (McFee et al., 2020) (https://github.com/librosa/librosa), to detect the timestamp of each beat from the accompaniment separated in the second step.",
"Lyric and Beat Alignment After we obtain the timestamp of each word and each beat, we can align them together according to their timestamps.",
"However, since a rapper may not sing a word exactly following the beat, directly using the timestamp to exactly match the word and beat is inappropriate.",
"Therefore, we propose an approximate method to align them.",
"Denote the word sequence of a lyric sentence as W = {w_1, w_2, ..., w_|W|}, and its beat sequence as B = {b_1, b_2, ..., b_|B|}, where w_i and b_j represent the i-th word and the j-th beat respectively.",
"We use T w i and T b j to represent the timestamps of w i and b j respectively.",
"For each beat b_j, we first filter out a word set W' = {w : |T_{b_j} − T_w| ≤ r/2, w ∈ W}, where r represents the average duration of each word in the song (i.e., the total duration divided by the number of words).",
"Next, word w_i is aligned with beat b_j if it satisfies the following condition: w_i = argmin_{w ∈ W'} |T_{b_j} − T_w|.",
"Using the above data mining pipeline, we obtain a rap lyric dataset with aligned beats (named D-RAP, where D stands for dataset), which satisfies the requirements for building a rap generation system with both rhyme and rhythm modeling.",
"We split the D-RAP dataset into the training and validation set with a ratio of 4:1.",
"Since rap is only one of many music genres and the number of rap songs is usually small compared with more general songs, we also mine two additional datasets with the same pipeline to pre-train our DeepRapper model: 1) non-rap songs with aligned beats (named D-SONG); and 2) pure lyrics without aligned beats (named D-LYRIC).",
"We summarize the statistics of the three datasets in Table 1 and show a rap song with aligned beats from D-RAP in Figure 2.",
"In this section, we introduce the architecture of our rap generation model, and the details of its rhyme modeling and rhythm modeling.",
"Figure 3 illustrates the detailed architecture of our rap generation model.",
"We use Transformer (Vaswani et al., 2017) to build an autoregressive language model (Radford et al., 2018, 2019) for rap generation, and introduce several new designs: 1) to better model rhymes, our model generates a sentence from right to left, since rhyming words are always at the end of the sentence; 2) as aforementioned, rhythms are critical for rap performance, so we insert a special token [BEAT] for explicit beat modeling; 3) unlike the original Transformer, which has only word and positional embeddings, we add multiple additional embeddings to better model rhymes and rhythms.",
"Figure 2: An example of a rap song with aligned beats in our mined D-RAP dataset.",
"Next, we introduce our rhyme modeling in subsection 4.2 and rhythm modeling in subsection 4.3.",
"Rhymes are key to forming a good rap flow.",
"In DeepRapper, we model rhymes with three components: 1) reverse-order language model; 2) rhyme representation; and 3) rhyme constraint.",
"Rhyming words usually occur at the end of each lyric sentence.",
"With a standard autoregressive language model that generates tokens from left to right, we would need to identify whether the current generation step is near the end of a sentence, in order to decide whether to generate rhyming words consistent with those in previous sentences.",
"Therefore, to better model rhymes, we use a reverse-order language model to generate sentences from right to left, as shown in Figure 3.",
"In this way, we can easily identify the last few words of a sentence (now the first few words of the reversed sentence) to control their rhymes.",
"Note that we only reverse the words inside a sentence, and still generate the different sentences in their original order.",
"Figure 4 compares sentences in left-to-right and right-to-left order, from which we can see that the rhyming words of each sentence share the same relative positions (offset from the first token) in the reverse order, making them easy to model and control.",
"A rhyming word has two important features: 1) its vowel, which is used for rhyming, and 2) its relative position in a sentence, which decides the correspondence between rhyming words in consecutive sentences (e.g., in the reverse-order setting, the first/second word of the current sentence should rhyme with the first/second word of the previous sentence).",
"Pinyin is the standard phoneme representation for Chinese.",
"We build a vowel dictionary F(·) to identify the vowel of each word.",
"As shown in Figure 3, we add an additional vowel embedding F and an intra-sentence relative positional embedding R to enhance rhyme representation for each token.",
"Besides, we introduce a sentence embedding S to differentiate between different sentences.",
"In addition to the reverse-order language model and rhyme representations, we also introduce a rhyme constraint to improve the quality of rhyme generation during inference.",
"As shown in Figure 4, sentences in rap lyrics not only rhyme with the last token, but also with multiple consecutive tokens at the end.",
"We call this phenomenon N-gram rhyme, meaning that the current sentence and the previous sentence keep the same rhyme for the last N consecutive tokens.",
"To our knowledge, no previous work has investigated N-gram rhymes (N > 1), although they are important for rap quality.",
"Our proposed rhyme constraint enables the model to adjust the probability of the next predicted token to further encourage N-gram rhyme generation.",
"The constraint is introduced as follows.",
"To generate the i-th word w_i in the standard inference procedure, we usually choose the predicted token with the maximum probability, i.e., w_i = argmax_w p(w | w_{<i}; θ), where w_{<i} denotes the words before position i in the reversed sentence and θ denotes the model parameters.",
"When the words before position i of the current and previous sentences have the same rhyme pattern, we use an adjusted probability distribution p'(w | w_{<i}; θ) to encourage the i-th generated word to rhyme with the i-th word of the previous sentence, so as to form N-gram rhymes.",
"The adjusted probability distribution p'(w | w_{<i}; θ) is: p'(w | w_{<i}; θ) = α · p(w | w_{<i}; θ) + (1 − α) · π(w) (Equation 2), where π(w) is a vowel check function and α is a hyper-parameter that balances the two terms.",
"Here, π(w) is 1 if the predicted w has the same vowel as the i-th token in the previous sentence, and 0 otherwise.",
"In other words, when predicting the i-th token (i ≤ N), we encourage the model to pay more attention to words that have the same vowel as the i-th token in the previous sentence.",
"In this way, the model tends to generate N -gram rhymes with large N .",
"Generating lyrics with aligned beats is necessary since rap lyrics need to be rapped with rhythmic beats.",
"Therefore, we model and generate rhythmic beats along with the lyrics using a special symbol: we regard a beat as a special token [BEAT] and insert it into the lyric sequence for model training.",
"As shown in Figure 3, we insert [BEAT] right before each of its aligned words.",
"Rap songs usually have different beat frequencies, i.e., different ratios between the total number of words and the total number of beats in a song.",
"To explicitly model and generate rap with different beat frequencies, we use three tokens [S], [M], and [F] to represent slow, medium, and fast beat frequencies, and add the corresponding token at the start of each rap song for training and inference.",
"In our D-RAP dataset, the distribution of beat frequency is displayed in Figure 5.",
"According to this distribution, we assign [S], [M], and [F] to songs with beat frequency less than 3, equal to 3, and greater than 3, respectively.",
"Our DeepRapper model is built on the autoregressive Transformer decoder (Vaswani et al., 2017; Radford et al., 2018, 2019), where the hidden size, the number of attention heads, and the number of Transformer layers are set to 768, 12, and 12, respectively.",
"The dimension of all the different kinds of embeddings in DeepRapper is set to 768.",
"Since there is no existing pre-trained language model in reverse order, we do not use any pre-trained language model for initialization.",
"Instead, we first pre-train our model on D-LYRIC and D-SONG for 2 million steps, and then fine-tune it on D-RAP for 3K steps, as D-RAP is smaller than our pre-training corpora.",
"We convert each song into a sequence of 1024 tokens by truncating longer sequences or padding shorter ones.",
"Our model is trained with a batch size of 8 songs on 4 NVIDIA TITAN V GPUs.",
"We use the Adam optimizer with a learning rate of 0.00015, β1 = 0.9, β2 = 0.999, and ε = 10^−6.",
"We set the maximum value of N for N-gram rhymes to 3 and the hyper-parameter in Equation 2 to 0.95.",
"Samples are generated conditioned on a given reference sentence.",
"Objective Evaluation We evaluate the generated raps in terms of the quality of language, rhyme and rhythm.",
"We choose five metrics to evaluate our model: 1) Perplexity (PPL), a standard metric for the quality of a language model; 2) Rhyme Accuracy (RA), the ratio of sentences with correctly predicted rhymes; 3) Rhyme Density (RD), the longest rhyme of a song averaged over all songs, introduced by Malmi et al. (2016) to measure rhyming fluency; 4) Combo-N, the maximum number of consecutive sentences with the same N-gram rhyme in a rap song, averaged over all songs, where we study N = 1, 2, 3; 5) Beat Accuracy (BA), the accuracy of our model in beat prediction under the teacher-forcing mode.",
"Subjective Evaluation Similar to previous works (Zhang and Lapata, 2014; Nikolov et al., 2020) in artistic creation, we also use human evaluation to accurately evaluate the quality of the generated raps.",
"We invite 10 participants with professional knowledge in music as human annotators to evaluate 100 sampled raps.",
"Each annotator is required to score from 1 (Poor) to 5 (Perfect) on the following perspectives: 1) the clearness of the theme of the rap lyrics; 2) the fluency of the rap lyrics; 3) the quality of the rhyme; 4) the diversity of the rhyme.",
"The averaged score of all annotators on all sampled raps is used as the evaluation score for each perspective.",
"Results Table 2 shows the objective and subjective results of DeepRapper compared with two baselines: 1) Baseline, a standard autoregressive language model with the same configuration as DeepRapper but without our proposed rhyme and rhythm modeling; and 2) Baseline + PT, the baseline with pre-training.",
"We have several observations from Table 2: 1) DeepRapper achieves better perplexity, rhyme accuracy and rhyme density than the two baselines, which demonstrates the advantages of our method in generating high-quality rap lyrics with accurate and diverse rhymes.",
"2) DeepRapper achieves better scores in all subjective metrics, demonstrating that DeepRapper can generate high-quality and rhyming raps that accord with human taste.",
"3) Pre-training improves the performance of baseline in both objective and subjective metrics, which indicates the importance of pre-training.",
"However, its performance is still worse than DeepRapper.",
"Ablation Studies To further validate the necessity of each component in DeepRapper, we conduct a series of ablation studies, removing rhyme modeling, rhythm modeling, and pre-training, respectively.",
"Table 3: Ablation studies on each component in DeepRapper, where '−' means removing the corresponding component; Rhyme, Rhythm, and PT represent rhyme modeling, rhythm modeling, and pre-training; RO, VE, IPE, and SE mean reverse order, vowel embedding, intra-sentence positional embedding, and sentence embedding.",
"The results are reported in Table 3.",
"We have several observations: 1) Removing rhyme modeling hurts rhyme quality considerably, causing a dramatic drop in rhyme accuracy and rhyme density; 2) Removing each specific design in rhyme modeling (i.e., RO: reverse-order language model, VE: vowel embedding, IPE: intra-sentence positional embedding, SE: sentence embedding) causes worse rhyme accuracy and rhyme density.",
"Specifically, while removing RO leads to a better PPL, since the left-to-right order is easier to model than the right-to-left order according to the analysis in Wu et al. (2018), it causes a large drop in rhyme accuracy.",
"3) DeepRapper without rhythm modeling obviously cannot produce any beat information; 4) Removing pre-training hurts perplexity and rhyme accuracy considerably, but yields a higher rhyme density.",
"The reason is that without pre-training, DeepRapper tends to copy previous rhyme tokens due to the lack of generalization (larger PPL).",
"To verify this, we count the repetition rate of rhyming words and find that the rate for DeepRapper is 23.8%, while without pre-training it is 42.5%.",
"The above results verify the effectiveness of each component in DeepRapper.",
"N-gram Rhyme To highlight the advantage of DeepRapper in modeling N-gram rhymes, we use Combo-N to measure the ability of each design in DeepRapper to model them.",
"The results are reported in Table 4.",
"We can find that: 1) the model without rhyme modeling can hardly generate good rhymes, regardless of the value of N; and 2) removing the rhyme constraint also weakens the capacity for generating N-gram rhymes.",
"These results further demonstrate the importance of our rhyme modeling and rhyme constraint in generating multiple consecutive rhymes.",
"Beat Frequency To better measure beat quality, we randomly generate about 5,000 samples with DeepRapper and with DeepRapper plus beat frequency control.",
"We propose the First Order Distribution (FOD) and the Second Order Distribution (SOD), and measure the distance (via the Wasserstein Distance (Vallender, 1974)) between these distributions for the generated samples and for our D-RAP dataset.",
"We define the interval of the current [BEAT] as the number of words between the current [BEAT] and the next [BEAT].",
"Therefore, the FOD is defined as the distribution of the interval of the current [BEAT].",
"Similarly, the SOD is defined as the distribution of the difference between the interval of the current [BEAT] and that of the next [BEAT].",
"The resulting distances are normalized into [0, 1] and reported in Table 5.",
"It can be seen that DeepRapper with beat frequency control achieves better performance, which indicates the importance of beat frequency control in beat modeling.",
"Case Analyses on Generated Raps We list a sample case from our generated raps in Figure 6 to demonstrate the good quality of the raps generated by DeepRapper.",
"The sample is generated by feeding the first sentence of the example in Figure 2 to DeepRapper.",
"As we can see, the generated sample exhibits good theme, fluency and rhyme.",
"The sample is a rap with a number of 1-gram, 2-gram, 3-gram, and even 4-gram rhymes.",
"The generated lyrics depict fond memories of childhood and beautiful visions for the future.",
"Figure 6: A rap generated by DeepRapper.",
"We also provide a group of samples generated with beat frequency control.",
"To save space, we put them, along with translations of all the samples, in the Appendix.",
"More samples are provided in https://deeprapper.github.io.",
"In this paper, we develop DeepRapper , a novel Transformer-based rap generation system, which leverages rhyme modeling, rhythm modeling and pre-training for rap generation.",
"Considering there is no available rap dataset with aligned rhythmic beats for rhythm modeling, we propose a data mining pipeline to mine a rap dataset with beat-lyric alignments.",
"We leverage right-to-left generation, rhyme representations, and a rhyme constraint to better model rhymes and encourage N-gram rhymes, and we explicitly model beat information by inserting a beat token before the corresponding word in the lyric sequence.",
"To our knowledge, DeepRapper is the first system to generate rap with both rhymes and rhythms.",
"Both objective and subjective evaluations demonstrate that DeepRapper generates high-quality raps with good rhymes and rhythms.",
"Thanks to the design of DeepRapper, we can further build another rap singing system to sing out the raps according to the rhymes and rhythms, which we leave as future work.",
"We also leave Multilingual DeepRapper as future work.",
"We would like to acknowledge the anonymous reviewers for their insightful comments.",
"Research on this paper was supported by Hong Kong Research Grants Council under grant 16204920.",
"The proposed framework can be considered a novel language model for rap generation in automatic artistic creation.",
"Specifically, the proposed framework has been configured with novel rhyme modeling as rhyme is quite important in music genres.",
"Therefore, our proposed framework is also beneficial for generating other music genres.",
"On the other hand, although we collect large-scale lyric data for pre-training, it still cannot fully utilize the potential of pre-training.",
"In the future, we expect to employ more large-scale data in the open domain plus the music domain for pre-training to improve the capacity of the language model.",
"In addition, our training datasets may have biases, which may bring some potential risks of model bias.",
"Hence, we encourage future works to study how to apply other techniques in mitigating similar problems in our framework."
] | [
"abstain",
"abstain",
"objective",
"objective",
"method",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"method",
"method",
"objective",
"result",
"method",
"abstain",
"method",
"result",
"method",
"method",
"abstain",
"abstain",
"objective",
"method",
"objective",
"abstain",
"method",
"other",
"objective",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"method",
"other",
"other",
"other",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"other",
"objective",
"objective",
"method",
"objective",
"abstain",
"method",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method"
] |
[
"The written forms of Semitic languages are both highly ambiguous and morphologically rich: a word can have multiple interpretations and is one of many inflected forms of the same concept or lemma.",
"This is further exacerbated for dialectal content, which is more prone to noise and lacks a standard orthography.",
"The morphological features can be lexicalized, like lemmas and diacritized forms, or non-lexicalized, like gender, number, and part-of-speech tags, among others.",
"Joint modeling of the lexicalized and non-lexicalized features can identify more intricate morphological patterns, which provide better context modeling, and further disambiguate ambiguous lexical choices.",
"However, the different modeling granularity can make joint modeling more difficult.",
"Our approach models the different features jointly, whether lexicalized (on the character-level), or non-lexicalized (on the word-level).",
"We use Arabic as a test case, and achieve state-of-the-art results for Modern Standard Arabic with 20% relative error reduction, and Egyptian Arabic with 11% relative error reduction.",
"Morphological modeling in Semitic languages is challenging.",
"Their optional short vowels (diacritics) increase the overall ambiguity of surface forms, and their morphological richness results in large target spaces, which increase model sparsity.",
"The different morphological features can be modeled through combined feature tags, using a single (but very large) target space, or through having separate models for each of the features.",
"The combined features approach models the relationships between the different features explicitly, but the large target spaces for morphologically rich languages further increase sparsity.",
"On the other hand, separate feature modeling guarantees smaller target spaces for the individual features, but the hard separation between the features prevents modeling any inter-feature dependencies.",
"The set of morphological features includes lexicalized and non-lexicalized features, which further exacerbates joint modeling.",
"Non-lexicalized features, like gender and number, among others, have limited target spaces and are usually modeled as tagging tasks.",
"Lexicalized features, like lemmas and diacritized forms (for Semitic languages), are open-ended, with large target vocabularies.",
"Moreover, non-lexicalized features are modeled on the word level, whereas lexicalized features are optimally modeled on the character level.",
"This difference in the modeling granularity can be challenging for joint models.",
"In this paper we present a model for handling lexicalized and non-lexicalized features jointly.",
"We use a sequence-to-sequence architecture, with different parameter sharing strategies at the encoder and decoder sides for the different features.",
"The non-lexicalized features are handled with a tagger, which shares several parameters with the encoder, and uses a multitask-learning architecture to model the different non-lexicalized features jointly.",
"The lexicalized features, on the other hand, are handled with a specific decoder for each feature, sharing the same encoder.",
"Our architecture models the non-lexicalized features on the word level, with a context representation that spans the entire sentence.",
"The lexicalized features are modeled on the character level, with a fixed character context window.",
"The character level modeling is also suitable for surface form normalization, which is important for noisy texts common in dialectal content.",
"We use Modern Standard Arabic (MSA) and Egyptian Arabic (EGY ) as test cases.",
"Our joint model achieves 20% relative error reduction (1.9% absolute improvement) for MSA, and 11% relative error reduction (2.5% absolute improvement) for EGY , compared to a baseline that models the different morphological features separately.",
"The rest of the paper is structured as follows.",
"We present a brief background and a survey of related work in Section 2.",
"We introduce the approach and various models in Section 3, and discuss the experimental setup and results in Section 4.",
"We conclude and provide some directions for future work in Section 5.",
"In this section we present a brief linguistic overview of the challenges facing morphological modeling in Semitic and morphologically rich languages.",
"We then discuss related contributions in literature, and how our model compares to them.",
"Morphologically rich languages (MRLs) tend to have more fully inflected words than other languages, realized through many morphemes that represent several morphological features.",
"The target space for the combined morphological features therefore tends to be large, which increases sparsity.",
"MRLs also can be highly ambiguous, with different interpretations of the same surface forms.",
"Ambiguity is further exacerbated for Semitic languages, like Arabic and Hebrew, in which the short vowels (diacritics) can be kept or dropped.",
"The high degree of ambiguity in Arabic results in having about 12 analyses per word on average (Pasha et al., 2014).",
"Both morphological richness and ambiguity can be modeled with morphological analyzers , or morphological dictionaries, which are used to encode all potential word inflections in the language.",
"Morphological analyzers should ideally return all the possible analyses of a surface word (to model am-biguity), and cover all the inflected forms of a word lemma (to model morphological richness), covering all related features.",
"The best analysis can then be chosen through morphological disambiguation : predicting the different morphological feature values and using them to rank the relevant analyses from the analyzer.",
"The morphological features that we model for Arabic include: Lexicalized features: lemmas (lex) and diacritized forms (diac).",
"Non-lexicalized features: aspect (asp), case (cas), gender (gen), person (per), part-of-speech (POS), number (num), mood (mod), state (stt), voice (vox).",
"(For more information on Arabic natural language processing, see Habash (2010).)",
"Clitics: enclitics, like pronominal enclitics, negative particle enclitics; proclitics, like article proclitic, preposition proclitics, conjunction proclitics, question proclitics.",
"Table 1 shows an example highlighting the different morphological features.",
"The example presents a subset of the possible analyses for the word lmthm .",
"Disambiguation using the non-lexicalized features only is not conclusive enough, as we see in the last two analyses, where only the lemma and diacritization can disambiguate the right analysis.",
"Dialectal Arabic (DA) includes several dialects of Arabic, like EGY , that vary by the geographical location in the Arab world.",
"DA is also Semitic and an MRL, but it is mainly spoken, and lacks a standard orthography (Habash et al., 2012a).",
"The lack of a standard orthography further increases sparsity and ambiguity, hence requiring explicit normalization.",
"Habash et al. (2012a, 2018) proposed CODA, a Conventional Orthography for Dialectal Arabic, which aims to provide a conventionalized orthography across the various Arabic dialects.",
"We use CODA as the reference for the normalization task.",
"Arabic morphological tagging and disambiguation have been studied extensively in literature, with contributions for MSA (Khalifa et al., 2016; Abdelali et al., 2016; Habash and Rambow, 2005; Diab et al., 2004), and DA (Habash et al., 2013; Al-Sabbagh and Girju, 2012; Duh and Kirchhoff, 2005).",
"There are also several recent contributions that showed significant accuracy improvement using deep learning models (Zalmout et al., 2018; Inoue et al., 2017; Zalmout and Habash, 2017; Heigold et al., 2016), in addition to other deep learning contributions that showed limited success for Arabic (Shen et al., 2016).",
"Most of these contributions model the different morphological features separately, or focus on a limited feature subset.",
"We elaborate on the contributions with some joint modeling aspects later in the section.",
"Diacritization and lemmatization are very useful for tasks like information retrieval, machine translation, and text-to-speech, among others.",
"(Arabic transliteration is presented in the Habash-Soudi-Buckwalter scheme (Habash et al., 2007).)",
"[Table 1 header: POS, Prc3, Prc2, Prc1, Prc0, Per, Asp, Vox, Mod, Gen, Num, Stt, Cas, Enc0.]",
"Diacritization has generally been an active area of research (Darwish et al., 2017; Zitouni et al., 2006; Nelken and Shieber, 2005).",
"More recent contributions use deep learning models in different configurations: Belinkov and Glass (2015) model diacritization as a classification task, using Long Short-Term Memory (LSTM) cells.",
"Abandah et al. (2015) use LSTMs to model diacritization as a sequence transcription task, similar to Mubarak et al. (2019), who model diacritization as a sequence-to-sequence task.",
"Early contributions for lemmatization used finite state machines (Schmid et al., 2004; Minnen et al., 2001), which had a limited capacity for modeling unseen words or lemmas.",
"There were also several contributions that utilize a joint tagging and lemmatization approach, using CRFs and Maximum Entropy models (Müller et al., 2015; Chrupala et al., 2008).",
"Other contributions approached lemmatization as a lemma selection task (Ezeiza et al., 1998), where the goal is to select the correct lemma from a set of lemmas provided by a morphological analyzer.",
"Many of the lemmatization models for Arabic use a similar approach (Pasha et al., 2014; Roth et al., 2008).",
"More recently, sequence-to-sequence models with attention (Bahdanau et al., 2014) have been shown useful in several NLP tasks, with several lemmatization contributions (Malaviya et al., 2019; Bergmanis and Goldwater, 2018; Pütz et al., 2018).",
"Other contributions use additional morphosyntactic features as part of the modeling architecture (Kanerva et al., 2019; Kondratyuk et al., 2018), somewhat similar to our approach.",
"There are also several contributions for the joint modeling of the different morphological features in Arabic.",
"However, most of these contributions use separate models for each of the features, and usually use a ranking step to select the best overall morphological analysis from an external morphological analyzer (Roth et al., 2008; Habash and Rambow, 2007).",
"MADAMIRA (Pasha et al., 2014) is a popular system for Arabic morphological tagging and disambiguation.",
"It uses SVMs for the different non-lexicalized features, and n-gram language models for the lemmas and diacritized forms.",
"Zalmout and Habash (2017) presented a neural extension of this model, with LSTM taggers for the individual features, and neural language models for the lexicalized features.",
"Inoue et al. (2017) used multi-task learning for fine-grained POS tagging, modeling the different morphological features jointly, but they do not model lemmas or diacritized forms.",
"Zalmout and Habash (2019) also used multitask learning for the different non-lexicalized morphological features, and neural language models for lemmas and diacritized forms.",
"This model currently provides state-of-the-art results for Arabic.",
"In the models that rely on morphological analyzers (Zalmout and Habash, 2019, 2017; Pasha et al., 2014), surface form normalization is a byproduct of selecting the correct analysis, rather than being explicitly modeled.",
"Non-lexicalized features are usually modeled on the word level, whereas lexicalized features are better handled through character level models.",
"Moreover, the context representation for morphological tagging of the non-lexicalized features usually spans the entire sentence, using LSTMs for example.",
"The optimal context representation for the lexicalized features, on the other hand, is through a fixed number of characters before and after the target word (Bergmanis and Goldwater, 2018).",
"This difference in modeling granularity, in terms of context representation or word/character level modeling, can be very challenging for joint modeling.",
"We use a modified sequence-to-sequence architecture, where some components of the encoder are shared between a tagger, for the non-lexicalized features, and the encoder-decoder architecture, for the lexicalized features.",
"We also use separate decoders for the different lexicalized features, which share the same encoder and are trained jointly using a shared loss function.",
"The remainder of this section discusses the architecture in more detail.",
"The tagging architecture is similar to the architecture presented by Zalmout and Habash (2019).",
"We use two Bi-LSTM layers on the word level to model the context for each direction of the target word.",
"The context in the tagging network spans the entire input sentence.",
"For each sentence { w_1, w_2, ..., w_L } of length L, every word w_j is represented by a vector v_j, comprised of the concatenation v_j = [w_j; s_j; a_j], where w_j is the word embedding vector, s_j is a vector representation of the characters within the word, and a_j is a vector representing all the candidate morphological tags (from an analyzer) for all the non-lexicalized morphological features.",
"To obtain the vector s j , we use an LSTM-based model, applied to the character sequence in each word separately.",
"We use the last state vector as the embedding representation of the word's characters.",
"To get the a_j vector, for each morphological feature f we use a morphological analyzer to obtain all possible feature values of the word to be analyzed.",
"We then embed each value separately (with separate embedding tensors for each feature, learnt within the model), then sum the resulting vectors to get a_j^f (since these tags are alternatives and do not constitute a sequence) (Zalmout and Habash, 2019).",
"We concatenate the individual a_j^f vectors for each morphological feature f of each word to get a single representation a_j for all the features: a_j^f = \\sum_{n=1}^{N_f} a_{j,n}^f, and a_j = [a_j^{pos}; ...; a_j^{num}; ...; a_j^{vox}], where N_f is the number of possible candidate values for each feature f (from the analyzer).",
"The a j vector does not constitute a hard constraint and can be discarded if a morphological analyzer is not used.",
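As an illustrative sketch (not the paper's implementation), the candidate-tag summation and the concatenation v_j = [w_j; s_j; a_j] can be written as follows; the dimensions, feature subset, and random initialization are all hypothetical:

```python
import random

rng = random.Random(0)

# Hypothetical dimensions; the paper does not give them at this point.
D_WORD, D_CHAR, D_TAG = 8, 4, 3
FEATURES = ["pos", "num", "vox"]  # illustrative subset of the 14 features

tag_tables = {f: {} for f in FEATURES}  # per-feature embedding tables (learnt)

def embed_tag(feature, value):
    """Look up (lazily create) the embedding of one candidate tag value."""
    table = tag_tables[feature]
    if value not in table:
        table[value] = [rng.gauss(0, 1) for _ in range(D_TAG)]
    return table[value]

def candidate_vector(feature, candidates):
    """a_j^f: element-wise sum over all candidate values from the analyzer
    (summed, not sequenced, since the candidates are alternatives)."""
    vecs = [embed_tag(feature, c) for c in candidates]
    return [sum(col) for col in zip(*vecs)]

def input_vector(w_j, s_j, analyses):
    """v_j = [w_j ; s_j ; a_j], with a_j concatenating the per-feature sums."""
    a_j = [x for f in FEATURES for x in candidate_vector(f, analyses[f])]
    return w_j + s_j + a_j

w_j = [0.0] * D_WORD   # stands in for the pretrained word embedding
s_j = [0.0] * D_CHAR   # stands in for the character-LSTM final state
v_j = input_vector(w_j, s_j, {"pos": ["noun", "verb"], "num": ["s"], "vox": ["a"]})
```

Here `v_j` has length 8 + 4 + 3 features x 3 dimensions = 21; dropping the analyzer simply removes the `a_j` part, consistent with the soft-constraint note above.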
"Several previous contributions for Arabic showed that pretraining the word embeddings is very useful (Erdmann et al., 2018; Watson et al., 2018; Zalmout and Habash, 2017), including the baselines used in this paper.",
"We therefore pre-train the word embeddings with FastText (Bojanowski et al., 2017), using a large external dataset.",
"The pre-trained embeddings are fixed during the model training.",
"The character and tag embeddings are learnt within the model.",
"We use a multitask learning setup to train the different morphological features jointly, through sharing the parameters of the hidden layers in the Bi-LSTM network.",
"The input is also shared, through the v j vector.",
"The output of the network is then fed to a separate non-linearity function, output layer, and softmax, for a probability distribution of each of the features separately.",
"Figure 1 shows the overall tagging architecture.",
"We share the character and word embeddings from the tagger network in the encoder.",
"The input context is modeled through a sliding window of a fixed number of characters around the target word, as in the Lematus model (Bergmanis and Goldwater, 2018).",
"We also use additional special symbols for the whitespace and target word boundaries.",
"In addition to the character embeddings, we also condition on the word level embedding of the word containing the characters.",
"We concatenate the word embedding vector with the input character embeddings.",
"Each character embedding c_i is replaced by the concatenation [c_i; w_j], where w_j is the d_w-dimensional word embedding of the word j in which character i appears.",
"Given the characters of an input sentence c and its lemmatized equivalent y, the goal is to model P(y_k | c_i, w_j).",
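A minimal sketch of the Lematus-style fixed character window, assuming illustrative boundary symbols (`<s>` for whitespace, `<w>`/`</w>` for the target word), which are not specified in the text above:

```python
def char_window(sentence, target_idx, n=10):
    """Build the character-level encoder input for one target word: up to n
    context symbols on each side, with special symbols for whitespace (<s>)
    and the target-word boundaries (<w>, </w>)."""
    words = sentence.split()

    def chars(ws):
        # Flatten words to characters, inserting <s> between words.
        out = []
        for k, w in enumerate(ws):
            if k > 0:
                out.append("<s>")
            out.extend(w)
        return out

    left = chars(words[:target_idx])
    if left:
        left.append("<s>")        # whitespace before the target word
    left = left[-n:]
    right = chars(words[target_idx + 1:])
    if right:
        right.insert(0, "<s>")    # whitespace after the target word
    right = right[:n]
    return left + ["<w>"] + list(words[target_idx]) + ["</w>"] + right

# Buckwalter-style transliteration, purely for illustration.
seq = char_window("ktb Alwld Aldrs", 1, n=4)
```

Each symbol in `seq` would then be embedded and concatenated with the word embedding of its containing word, as described above.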
"We use separate decoders for lemmatization and diacritization, with two LSTM layers for each.",
"Both decoders share the same input and parameters of the encoder Bi-LSTM network.",
"For each decoder, we condition on the decoder output of the previous step, along with Luong attention (Luong et al., 2015) over the encoder outputs h i , and the predicted tags from the tagger.",
"We use the last encoder output as the initial states for the decoder layers.",
"We use scheduled sampling (Bengio et al., 2015) during training, and feed the d c -dimensional character embeddings at every time step.",
"However, we found empirically that using a constant sampling probability instead of a schedule provides better results.",
"We also use dropout on the non-recurrent connections of both the encoder and decoder layers during training.",
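The constant-probability variant of scheduled sampling can be sketched as follows; this toy helper only illustrates the gold-versus-predicted choice at each time step, not a full autoregressive decoder:

```python
import random

def decoder_inputs(gold_chars, model_chars, p_sample=0.4, seed=0):
    """At each decoder time step, feed the model's previous prediction with
    probability p_sample, otherwise the gold character (teacher forcing).
    A constant p_sample replaces the usual annealing schedule."""
    rnd = random.Random(seed)
    return [m if rnd.random() < p_sample else g
            for g, m in zip(gold_chars, model_chars)]
```

With `p_sample=0.0` this reduces to pure teacher forcing; with `p_sample=1.0` the decoder always conditions on its own predictions.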
"The decoder outputs are fed to a softmax layer that reshapes the vectors to dimension d voc , then argmax to yield an output sequence y one character at a time.",
"Conditioning on the Predicted Tags In addition to the attention distribution and the previous time step, we also condition on the predicted tags from the tagger during decoding.",
"The goal is to provide an additional contextual signal to the decoders, and to disambiguate the possible lexical choices.",
"We use the output of the argmax (over the softmax distribution) for each feature, and concatenate the different tags as in the a_j vector: t_j = [t_j^{asp}; ...; t_j^{pos}; ...; t_j^{vox}].",
"Preventing Backpropagation to Tagger The decoder produces the lexicalized features at the character level, whereas the predicted tags are on the word level.",
"The different granularities might create some biases, and we found that backpropagating gradients from the decoder to the tagger network leads to instability at the tagger.",
"Therefore, we prevent the decoder from backpropagating gradients to the tagger during training.",
"This is consistent with the model of Kondratyuk et al. (2018).",
"We use the term normalization in the sense of enriched normalization introduced by El Kholy and Habash (2012) for MSA; and in the sense of spelling conventionalization (into CODA) for DA as described by Eskander et al. (2013).",
"[Figure: Bi-LSTM encoder shared by the lemmatization (Lex) and diacritization (Diac) decoders, with attention.]",
"Both are non-trivial tasks comparable to true-casing or spelling correction for other languages.",
"The normalization task is particularly important for dialectal content, which lacks a standardized orthography.",
"The training data that we use already has the diacritized annotations in CODA-normalized form for EGY .",
"So the output sequence of the diacritization task should be both the diacritized and CODA normalized version of the input sequence.",
"This normalization is learnt explicitly in our character level sequence-to-sequence model.",
"For MSA there is no need for CODA normalization, so the normalized output includes any error correction that might happen in the training dataset.",
"Normalization is assessed as part of the overall diacritization accuracy.",
"We use a small held out tuning set of about 5% of the training data to save the best model during training.",
"We did not use the development set here to be consistent with other contributions in literature, where the development set is primarily used to evaluate high level design decisions only.",
"We train the model for a fixed number of epochs and select the model that performs best on the tuning set.",
"This method provided the most stable results, compared to early stopping or other methods.",
"The loss function is based on minimizing cross entropy H for each feature f .",
"The overall loss is the average of the individual losses for the different features, whether lexicalized or non-lexicalized: H(y, \\hat{y}) = \\frac{1}{|F|} \\sum_{f \\in F} H(y_f, \\hat{y}_f), where F is the set of features that we model.",
"y represents the true feature value, and \\hat{y} is the predicted value.",
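The averaged loss can be sketched with toy softmax distributions standing in for the model outputs (feature names and probabilities are illustrative):

```python
import math

def cross_entropy(dist, gold):
    """H for one feature of one word: negative log probability that the
    softmax assigned to the gold value."""
    return -math.log(dist[gold])

def multitask_loss(per_feature_dists, gold_values):
    """Overall loss: average of the per-feature cross entropies, over both
    lexicalized and non-lexicalized features."""
    feats = list(gold_values)
    return sum(cross_entropy(per_feature_dists[f], gold_values[f])
               for f in feats) / len(feats)

dists = {"pos": {"noun": 0.7, "verb": 0.3},
         "num": {"s": 0.9, "p": 0.1}}
gold = {"pos": "noun", "num": "s"}
loss = multitask_loss(dists, gold)
```

Averaging (rather than summing) keeps the loss scale independent of the number of features being modeled.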
"We experimented with having different optimizers for the lexicalized and non-lexicalized features.",
"We also experimented with a weighted average for the different features, where the weights are learnt as part of the end-to-end system.",
"None of these modifications provided any improvement.",
"We use the Adam optimizer (Kingma and Ba, 2014) with a learning rate of 0.0005, and run the various models for 50 epochs.",
"Morphological disambiguation involves predicting the right combination of morphological features for each word in context.",
"We can either present the predicted features from the model directly, or use a morphological analyzer to guarantee more consistent feature values.",
"If a morphological analyzer is used, the disambiguation system selects the optimal analysis for the word from the set of analyses returned by the analyzer.",
"We use the predicted tags to rank the analyses, and select the analysis with the highest number of matched feature values.",
"The different features can be assigned different weights during ranking.",
"Refer to other contributions that use a similar approach for more details (Zalmout and Habash, 2019, 2017; Pasha et al., 2014).",
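A hedged sketch of the ranking step, with illustrative feature names and a default weight of 1.0 per feature (the tuned weights used in the cited systems are not given here):

```python
def rank_analyses(analyses, predicted, weights=None):
    """Select, from the analyzer's candidate analyses, the one whose feature
    values best match the tagger's predictions.  Each matched feature adds
    its weight to the analysis score; ties keep the first candidate."""
    weights = weights or {}

    def score(analysis):
        return sum(weights.get(f, 1.0)
                   for f, v in predicted.items()
                   if analysis.get(f) == v)

    return max(analyses, key=score)

# Hypothetical candidates from an analyzer, and predicted tags from the model.
analyses = [{"pos": "noun", "num": "s", "lex": "kitAb"},
            {"pos": "verb", "num": "s", "lex": "katab"}]
predicted = {"pos": "noun", "num": "s"}
best = rank_analyses(analyses, predicted)
```

Because the winning candidate comes from the analyzer, its feature values are guaranteed to be mutually consistent, which is the consistency benefit discussed later.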
"We use the Penn Arabic Treebank (PATB, parts 1, 2, and 3) (Maamouri et al., 2004) for MSA, and the ARZ dataset (Maamouri et al., 2012), parts 1-5, from the Linguistic Data Consortium (LDC) for EGY .",
"We use the same datasets as used in MADAMIRA (Pasha et al., 2014), which involves synchronizing the datasets with morphological analyzers, using the process described by Habash and Rambow (2005).",
"We follow the data splits recommended by Diab et al. (2013) for TRAIN , DEVTEST , and BLINDTEST .",
"Both datasets include gold annotations for the diacritized forms, lemmas, and the remaining 14 features.",
"The diacritized forms are normalized following the CODA guidelines for EGY .",
"We use Alif/Ya and Hamza normalization, which is commonly used for morphological modeling in Arabic (Zalmout et al., 2018; Pasha et al., 2014; Habash et al., 2013).",
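As an illustration of Alif/Ya and Hamza normalization, the sketch below maps the hamzated Alif forms to bare Alif and Alif Maqsura to Ya; the exact character set follows common Arabic NLP practice and is an assumption, not a specification from the text above:

```python
# Hypothetical mapping, following common practice in Arabic NLP.
NORM = {
    "\u0623": "\u0627",  # Alif with Hamza above -> bare Alif
    "\u0625": "\u0627",  # Alif with Hamza below -> bare Alif
    "\u0622": "\u0627",  # Alif with Madda       -> bare Alif
    "\u0649": "\u064A",  # Alif Maqsura          -> Ya
}

def normalize(text):
    """Character-level Alif/Ya normalization, applied to the training data
    and the embedding pretraining corpora alike."""
    return "".join(NORM.get(c, c) for c in text)
```

Applying the same normalization to both the annotated data and the pretraining corpora keeps the embedding vocabulary aligned with the model's input.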
"Table 2 shows the data sizes.",
"The TUNE dataset is used during the model training process, for early stopping or to keep the best performing model.",
"TUNE is extracted randomly from the original TRAIN split (almost 5% of TRAIN ), so the other splits are consistent with the splits used in literature.",
"The DEVTEST dataset is used during the system development to assess design choices.",
"The BLINDTEST dataset is used to evaluate the system after finalizing the architecture design, and to report the overall performance.",
"We use the same morphological analyzers that were used in MADAMIRA (Pasha et al., 2014), and the other baselines, for both MSA and EGY .",
"For MSA we use SAMA (Graff et al., 2009), and the combination of SAMA, CALIMA (Habash et al., 2012b), and ADAM (Salloum and Habash, 2014) for EGY .",
"We use the LDC's Gigaword corpus (Parker et al., 2011) to pretrain the MSA word embeddings, and the BOLT Arabic Forum Discussions corpus (Tracey et al., 2018) for EGY , as used in the reported baselines.",
"We preprocessed both datasets with Alif/Ya and Hamza normalization, as we did for the training dataset.",
"Tagger We use a similar setup as used by Zalmout and Habash (2019).",
"We use two Bi-LSTM hidden layers of size 800, and dropout probability of 0.4, with peephole connections.",
"The LSTM character embedding architecture uses two LSTM layers of size 100, and an embedding size of 50.",
"(We use the LDC datasets because their annotations cover many of the tasks that are relevant to morphological disambiguation, and they are often used for benchmarking purposes.",
"Other available datasets are usually limited to a particular task, like diacritization or POS tagging (Darwish et al., 2017, 2018; Abandah et al., 2015).",
"Evaluating our model using these datasets is also not straightforward, since they often use different tagsets or representations (especially for diacritization), for which automatic conversion would require extensive post-processing.)",
"We use FastText (Bojanowski et al., 2017) to pretrain the word embeddings, with embedding dimension of 250, and an embedding window of size two.",
"Encoder-Decoder We use two LSTM layers of size 400 for both the encoder and decoder (bidirectional for the encoder), a dropout value of 0.4, and a fixed sampling probability of 0.4 (Bengio et al., 2015).",
"We use the same word and character embeddings as the tagger.",
"We use beam decoding with beam size of 5, and a context window of 10 characters before and after the target word.",
"POS accuracy (POS): The accuracy of the POS tags, using a tagset of 36 tags (Habash et al., 2013).",
"Non-lexicalized morphological features accuracy (TAGS ): The accuracy of the combined 14 morphological features we model, excluding lemmas and diacritized forms.",
"Diacritization accuracy (DIAC ): The accuracy of the diacritized forms, for MSA only.",
"CODA-based normalization accuracy (CODA): The accuracy of the CODA-normalized, and diacritized, EGY forms.",
"MSA does not need CODA normalization.",
"Lemmatization accuracy (LEMMA ): Lemma accuracy.",
"The lemmas are also fully diacritized in the LDC datasets, so this metric reflects the fully diacritized lemmas.",
"Full analysis accuracy (FULL ): Accuracy over the full analysis; the strictest metric.",
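The FULL metric can be sketched as an exact-match accuracy over complete analyses (the dictionaries below are illustrative, not from the datasets):

```python
def full_accuracy(pred_analyses, gold_analyses):
    """FULL: fraction of words whose full predicted analysis (all features,
    lexicalized and non-lexicalized) exactly matches the gold analysis."""
    matches = sum(p == g for p, g in zip(pred_analyses, gold_analyses))
    return matches / len(gold_analyses)

preds = [{"pos": "noun", "diac": "lahum"}, {"pos": "verb", "diac": "lamma"}]
golds = [{"pos": "noun", "diac": "lahum"}, {"pos": "noun", "diac": "lamma"}]
acc = full_accuracy(preds, golds)
```

A single wrong feature value fails the whole word, which is why FULL trails the per-feature metrics.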
"Baselines The first baseline is MADAMIRA (Pasha et al., 2014), which is one of the most commonly used morphological disambiguation models for Arabic.",
"We also use the model suggested by Zalmout and Habash (2017), which is based on a similar architecture, but uses LSTM taggers instead of the SVM models in MADAMIRA, and LSTM-based language models instead of the n-gram models.",
"The last baseline uses a multitask learning architecture to model the different non-lexicalized features jointly, but neural language models for the lexicalized features (Zalmout and Habash, 2019).",
"We use the same feature weights during the disambiguation process as this baseline.",
"Table 3 presents the results for the baselines, and the joint modeling architecture.",
"The results show a significant accuracy improvement for the joint modeling approach, compared to all baselines.",
"Diacritization The diacritization task seems to have benefited the most from the joint modeling architecture, with about 16% relative error reduction for MSA.",
"This is probably due to the relatively large target space for diacritized forms when using the language modeling approach in the baseline, compared to lemmatization, for example, which has a smaller overall type count.",
"The character level sequence-to-sequence architecture is more suitable to this task, with a small character target space.",
"Normalization In the baseline model normalization is a byproduct of selecting the right analysis, rather than a modeling goal.",
"However, character level models provide for an explicit and direct normalization capability, as the model learns to map the erroneous sequence to the normalized target sequence.",
"Our model results in 12% relative error reduction for EGY .",
"Overall Feature Consistency An analysis is consistent if all the feature values are linguistically acceptable to co-occur with each other.",
"For example, case is undefined for verbs, so if a verb analysis had a defined case value, this analysis is inconsistent.",
"The same applies to consistency between the tags and the corresponding lemma (or diacritized form).",
"The TAGS metric, which represents the accuracy of the combined non-lexicalized features, also shows noticeable improvement for MSA.",
"The fact that TAGS improved, along with FULL , while the POS accuracy remained somewhat similar, indicates that the model is now producing more consistent morphological predictions.",
"This improved consistency is probably the result of enhanced diacritization and lemmatization models, which provide a better signal to the overall analysis ranking.",
"The improvement in TAGS for EGY , on the other hand, is limited.",
"This indicates that the model was probably already producing more consistent non-lexicalized morphological features, and the improvement in the FULL metric is due to improved diacritization and lemmatization only.",
"The Role of Morphological Analyzers Morphological analyzers are also used to guarantee consistency in the predicted features.",
"[Table 3 header: Model, FULL, TAGS, DIAC, LEX, POS.]",
"The baselines and our best performing model all use morphological analyzers, to get the candidate tags at the input, and to produce the best analysis through the ranking process.",
"We train our model without using the analyzer (without the t vector, and without ranking) to evaluate its role in the morphological disambiguation task.",
"The results are lower, both for MSA and EGY .",
"However, the result for MSA is very close to the (Zalmout and Habash, 2017) baseline, which uses separate feature models (with the analyzer).",
"This indicates that our model can match the accuracy of a strong baseline, without relying on expensive external resources.",
"This does not apply to EGY , probably due to the lower training data size and noisier content.",
"Even with a better model, morphological analyzers still provide additional consistency between the different features.",
"BLINDTEST Results The results for the BLINDTEST dataset were consistent with those for DEVTEST .",
"The accuracy for EGY using the strongest baseline is 78.1, based on the multitask learning architecture for the tags.",
"The accuracy of the best system, using the joint modeling architecture along with the morphological analyzer, is 80.3.",
"We also observed the same behavior for MSA, with somewhat similar values to DEVTEST .",
"The strongest baseline had an accuracy of 90.8, whereas the best model had an accuracy of 92.6.",
"The Role of Morphological Analyzers The goal is to assess the role of morphological analyzers in the consistency (following the consistency definition mentioned earlier) of the predicted features.",
"We took a sample of 1000 words from the MSA DEVTEST , and ran it through the joint model that does not use a morphological analyzer, and checked the errors in the predictions.",
"There were 110 errors (11% of the sample), for an accuracy of 89%, which is close to the reported accuracy over the entire dataset.",
"About 62% of the errors had consistent feature predictions, but the predicted analysis did not match the gold.",
"And around 13% of the errors are due to gold errors.",
"Around 25% of the errors (2.8% of sample) had inconsistent predictions.",
"This roughly matches the accuracy gap between the joint model with and without the morphological analyzer, which is also around 2%.",
"This indicates that the accuracy boost that the morphological analyzer provides is to a large extent due to the consistency it conveys.",
"We also observed that 37% of the inconsistent predictions (1% of the sample) had a correct lemma, but the lemma was inconsistent with the analysis.",
"The remaining 63% (1.7% of sample), had an invalid lemma.",
"Joint Modeling vs Separate Modeling We also investigated the distribution of errors over the different features for the joint model against the baseline of separate feature models, both using the morphological analyzer.",
"We annotated the errors in a 1000-word sample from DEVTEST , for both MSA and EGY , with the main erroneous feature.",
"For example, if the predicted analysis is a verb inflection of a gold noun, the main erroneous feature would be the POS tag, even if other features ended up being wrong as a result.",
"For MSA, the error distribution for the baseline is: case 27%, diacritization 22%, POS 18%, lemmatization 13%, gold errors 11%, and smaller percentages for state, voice, person, and enclitics.",
"Whereas the distribution for the joint model is: case 26%, POS 21%, lemmatization 18%, gold errors 14%, diacritization 13%, and small percentages for state, voice, and person.",
"In both models, case dominates the error distribution, since identifying the case ending in MSA is particularly challenging.",
"The main difference between the models in terms of error distribution is the diacritization, where we observe a significant boost when we use the joint model.",
"The apparent increase in the error percentages of the other error types in the joint model is due to the drop in the overall error count, while many error types have a lower drop rate.",
"For EGY , a notable error pattern is when the prediction matches the MSA-equivalent analysis of the dialectal word, like having an MSA-like diacritization, or having a case ending (DA, like EGY , does not have case endings).",
"This happens due to code-switching with MSA in the dialectal content, which is also reflected at the analyzer.",
"This error type is not an error per se, but we do include it in the analysis.",
"The error distribution for the separate features baseline is: gold errors 23%, MSA-equivalents 21%, POS 17%, lemmatization 14%, diacritization 12%, and smaller percentages for several other error types.",
"Whereas the distribution for the joint model is: gold errors 27%, MSA-equivalents 21%, lemmatization 18%, POS 14%, diacritization 7%, and smaller frequencies for the other errors.",
"Gold errors are frequent, but this is consistent with other contributions that use the same dataset (Zalmout et al., 2018).",
"As with MSA, the percentage increase of the other error types is due to lower drop rates.",
"We presented a joint modeling approach for the lexicalized and non-lexicalized features in morphologically rich and Semitic languages.",
"Our model achieves a significant improvement over several baselines for Arabic, and matches the baseline for MSA without having to use an expensive morphological analyzer.",
"The results highlight the benefits of joint modeling, where diacritization seems to have benefited the most.",
"We observe, however, that further research is needed to enhance the overall consistency of the predicted features, without relying on external morphological analyzers.",
"The first author was supported by the New York University Abu Dhabi Global PhD Student Fellowship program.",
"The support and resources from the High Performance Computing Center at New York University Abu Dhabi are gratefully acknowledged."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"abstain",
"method",
"result",
"method",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"other",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"abstain",
"result",
"other",
"other"
] |
[
"Cross-lingual language tasks typically require a substantial amount of annotated data or parallel translation data.",
"We explore whether language representations that capture relationships among languages can be learned and subsequently leveraged in cross-lingual tasks without the use of parallel data.",
"We generate dense embeddings for 29 languages using a denoising autoencoder, and evaluate the embeddings using the World Atlas of Language Structures (WALS) and two extrinsic tasks in a zero-shot setting: cross-lingual dependency parsing and cross-lingual natural language inference 1 .",
"Recent efforts to leverage multilingual datasets in language modeling (Conneau and Lample, 2019; Devlin et al., 2019) and machine translation (Johnson et al., 2017; Lu et al., 2018) highlight the potential of multilingual models that can perform well across various languages, including ones for which training sets are scarce.",
"Most of the current multilingual research focuses on learning invariant representations or removing language-specific features after training (Libovický et al., 2020; Bjerva and Augenstein, 2021).",
"Despite recent advances, there are still limitations.",
"Previous work has shown that similar languages can benefit from sharing parameters, but less similar languages do not help (Zoph et al., 2016; Pires et al., 2019).",
"However, in spite of some interest in typology (Ponti et al., 2019), identifying similar languages is nontrivial, especially for less-studied ones.",
"In addition, as Zhao et al. (2019) suggest, learning invariant representations can actually harm model performance.",
"Our learned language embeddings and code are available at https://github.com/DianDYu/language_embeddings. Therefore, in order to leverage language-agnostic and language-specific information effectively, we propose to generate language representations and examine the interactions among different language representations.",
"One way to represent language identity within a multilingual model is the use of language codes, or dense vectors representing language embeddings.",
"If languages are represented with vectors that capture cross-lingual similarities and differences across different dimensions, this information can guide a multilingual model regarding what and how much of the information in the model should be shared among specific languages.",
"Much of the previous research focused on generating language embeddings using prior knowledge such as word order (Ammar et al., 2016; Littell et al., 2017), using a parallel corpus (Bjerva et al., 2019b; Östling and Tiedemann, 2017), and using language codes as an indicator to distinguish input and output words in a shared vocabulary into different languages (Johnson et al., 2017; Conneau and Lample, 2019).",
"In contrast, our work focuses on generating and using language embeddings more effectively as soft-sharing (de Lhoneux et al., 2018) of parameters among various languages in a single model.",
"Furthermore, we are motivated by a more difficult setting where the properties of each language are not known in advance, and no parallel data is available.",
"We investigate whether we can generate language embeddings to represent typological information derived solely from corpora in each language without the use of any parallel text, translation models, or prior knowledge.",
"Inspired by the findings that structural similarity, especially word ordering, is crucial in large pretrained multilingual language models (K et al., 2020), we propose an unsupervised method leveraging denoising autoencoders (Vincent et al., 2008) to generate language embeddings.",
"We show that our approach captures typological information by comparing the information in our language embeddings to language-specific information in the World Atlas of Language Structures (WALS, Dryer and Haspelmath, 2013).",
"In addition, to address the question of whether the learned language embeddings can help in downstream language tasks, we plug-in the language embeddings to cross-lingual dependency parsing and natural language inference (XNLI, Conneau et al., 2018) in a zero-shot learning setting, obtaining performance improvements.",
"Previous related research approached language representations by using prior knowledge, dense language embeddings with multilingual parallel data, or no prior knowledge about languages but having language embeddings primarily as a signal to identify different languages.",
"An intuitive method to represent language information is through explicit information such as known word order patterns (Ammar et al., 2016; Littell et al., 2017), part-of-speech tag sequences (Wang and Eisner, 2017), and syntactic dependencies (Östling, 2015).",
"Littell et al. (2017) propose sparse vectors using pre-defined language features such as known typological and geographical information for a given language.",
"However, linguistic features may not be available for less studied languages.",
"Our proposed approach assumes no prior knowledge about each language, deriving typological information from plain text alone.",
"Once a vector for a target language is created, it contains many typological features of the target language, and can be used for transfer learning in downstream tasks.",
"Other previous work has also explored dense continuous representations of languages.",
"One method is to append a language token to the beginning of a source sentence and train the language embeddings with a many-to-one neural machine translation model (Malaviya et al., 2017; Tan et al., 2019).",
"Another method is to concatenate language embedding vectors to a character level language model (Östling and Tiedemann, 2017; Bjerva and Augenstein, 2018; Bjerva et al., 2019a).",
"These two methods require parallel translation data such as the Bible and TED Talks.",
"Rabinovich et al. (2017) derive typological information in the form of phylogenetic trees from translation of different languages into English and French using the European Parliament speech corpus (Koehn, 2005), based on the assumption that unique language properties are present in translations (Baker et al., 1993; Toury, 1995).",
"Bjerva et al. (2019b) abstract the translated sentences from other languages to English with part-of-speech tags, function words, dependency relation tags, and constituent tags, and train the embedding vectors by concatenating a language representation with a symbol representation.",
"In comparison, we generate our language embeddings using no parallel corpora or linguistic annotation, which is suitable for a wider variety of languages, including in situations where no parallel data or prior knowledge is available.",
"The approach that is closest to ours is XLM (Conneau and Lample, 2019), which adds language embeddings to each byte pair embedding using Wikipedia data in various languages with a masked language modeling objective.",
"However, similar to Johnson et al. (2017), the trained language embeddings only serve as an indicator to the encoder and decoder to identify input and output words in the vocabulary as belonging to different languages.",
"In fact, in a follow-up paper, XLM-R (Conneau et al., 2020), language embeddings are removed from the model for better code-switching, which suggests that the learned language embeddings may not be optimal for cross-lingual tasks.",
"In this paper, following the finding that structural similarity is critical in multilingual language models (K et al., 2020), we generate language embeddings from a denoising autoencoder objective and demonstrate that they can be effectively used in cross-lingual zero-shot learning.",
"We first present the data used to generate language embeddings, then introduce our approach inspired by denoising autoencoders (Vincent et al., 2008).",
"To train our multilingual model, we use the CommonCrawl dataset from the CoNLL 2017 shared task (Ginter et al., 2017) to obtain monolingual plain text in various languages.",
"To represent words across different languages in a shared space, we use the unsupervised pretrained aligned word embeddings from MUSE (Lample et al., 2018).",
"We choose the 29 languages from the CoNLL 2017 monolingual text dataset for which MUSE pretrained embeddings are available.",
"A subset of 200K sentences is selected randomly for each language.",
"The languages we use are: English, French, Romanian, Arabic, German, Russian, Bulgarian, Greek, Slovak, Catalan, Hebrew, Slovene, Croatian, Hungarian, Spanish, Czech, Indonesian, Swedish, Danish, Italian, Turkish, Dutch, Norwegian Bokmål, Ukrainian, Estonian, Polish, Vietnamese, Finnish, and Portuguese, which cover ten language genera.",
"We experiment with two types of word representations in training language embeddings.",
"The most straightforward way is to use the pretrained MUSE embedding for each specific language (we refer to this setting as Spe. ).",
"We also experimented with mapping word embeddings from different languages into one language (English in our experiments, because it is used as the pivot language in MUSE embeddings; Eng.) for three reasons.",
"First, because MUSE is trained mainly with an orthogonal rotation matrix, the distances among words within each language are preserved, so language identities can potentially be revealed.",
"As a result, the learned language embeddings would reflect the features incorporated by the unsupervised word-mapping method rather than the intrinsic language features.",
"Second, we hypothesize that mapping to a single language space requires the model to encode more information in language embeddings as their language identities instead of relying on their revealed ones.",
"Finally, using shared word embeddings can reduce the vocabulary size for memory concerns by effectively reducing both the lookup table size and the output softmax dimension size.",
"For Eng. word embedding mapping, we align words from different languages to English embeddings using cross-domain similarity local scaling (CSLS, Lample et al., 2018).",
"The vocabulary of our model is restricted to the words in the English MUSE embeddings, and all unknown words are replaced with a special unknown token.",
"Although imperfect mapping from each language to English tokens may introduce noise (see scores in Appendix D) and result in a coarse approximation of the original sentences, crucial syntactic and semantic information should still be present (MUSE: https://github.com/facebookresearch/MUSE).",
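As an illustration of the CSLS retrieval criterion mentioned above (Lample et al., 2018), the following sketch scores cross-lingual nearest neighbors. This is a minimal NumPy reimplementation, not the MUSE codebase; the matrices `src` and `tgt` stand in for already-trained monolingual word embeddings.

```python
import numpy as np

def csls_scores(src, tgt, k=10):
    """CSLS similarity between source and target embedding matrices.

    CSLS(x, y) = 2*cos(x, y) - r_tgt(x) - r_src(y), where r_*(v) is the
    mean cosine similarity of v to its k nearest neighbors in the other
    space; this penalizes 'hub' words that are close to everything.
    """
    # Row-normalize so dot products are cosine similarities.
    src = src / np.linalg.norm(src, axis=1, keepdims=True)
    tgt = tgt / np.linalg.norm(tgt, axis=1, keepdims=True)
    cos = src @ tgt.T  # rows: source words, cols: target words
    # Mean similarity to the k nearest cross-lingual neighbors.
    r_src = np.mean(np.sort(cos, axis=1)[:, -k:], axis=1, keepdims=True)
    r_tgt = np.mean(np.sort(cos, axis=0)[-k:, :], axis=0, keepdims=True)
    return 2 * cos - r_src - r_tgt

def translate(src, tgt, k=10):
    """Pick, for each source word, the index of the CSLS-nearest target word."""
    return np.argmax(csls_scores(src, tgt, k=k), axis=1)
```

With identical source and target spaces, each word maps to itself, which makes the sketch easy to sanity-check.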
"In our experiments, a language code is appended to each token according to the original language of the sentence.",
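A minimal sketch of the preprocessing just described (appending a language code to every token), combined with the word-shuffling perturbation used by the denoising objective in Section 3.2. The function names are our own, not from the paper's code:

```python
import random

def add_language_code(tokens, lang):
    """Append a language code to every token, e.g. 'Hund' -> 'Hund_de'."""
    return [f"{tok}_{lang}" for tok in tokens]

def make_denoising_pair(sentence, lang, seed=None):
    """Build one (noisy input, target) training pair.

    The target is the language-coded sentence in its original order; the
    noisy input is the same tokens randomly shuffled. A seq2seq model
    trained to map noisy -> target must learn each language's word order.
    """
    target = add_language_code(sentence.split(), lang)
    noisy = target[:]
    random.Random(seed).shuffle(noisy)
    return noisy, target

# Example from the paper's Eng. condition for a German sentence:
noisy, target = make_denoising_pair("he has the red dog not seen", "de", seed=0)
```

Here `noisy` is some permutation of `target`; the model's only cue for how to restore the order of a given sentence is the `_de` language code on every token.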
"For instance, the German sentence \"Er hat den roten Hund nicht gesehen\" would be represented in our Spe. condition as \"Er_de hat_de den_de roten_de Hund_de nicht_de gesehen_de\" and in the Eng. condition as \"he_de has_de the_de red_de dog_de not_de seen_de\". Intuitively, the idea is to have the words themselves be the same across languages (either through the aligned MUSE embeddings or by direct mapping to English words), and let the additional language code provide to the model the information that would explain the structural differences observed across languages in the training data. 3.2 Denoising autoencoder Given a multilingual plain text corpus with sentences in each language (and no parallel text), we first perturb each sentence to create a noisy version in which its words are randomly shuffled. The training objective is to recover the original sentences, which requires the model to learn how to order words in each language. We hypothesize that, compared to language modeling, this encourages the language embeddings to learn more structural information instead of relying on topics or word co-occurrence to generate meaningful training sentences. We implement our multilingual denoising autoencoder with an LSTM-based (Hochreiter and Schmidhuber, 1997) sequence-to-sequence model (Sutskever et al., 2014). The input strings are perturbed sentences and the output strings are the original sentences. See Appendix A.1 for implementation details. After preprocessing the data, we concatenate a language embedding vector, initialized from a normal distribution, as a language identity feature (the language code mentioned in Section 3.1) to each of the pretrained word embeddings. Since certain languages are more similar to, or more different from, each other, the model will learn how to reorder a sequence of words depending on the specific language. 
For example, reordering an Italian sentence should be more similar to reordering a Spanish sentence than it is to reordering a German sentence. Because the decoder captures the actual word order of the sentences in each target language, whereas the language codes in the encoder are meant to capture only language identity and no word order information, we use the extracted language embeddings from the decoder in our experiments. Each word is represented with a pretrained 300-dimensional vector, and each language embedding is represented with a 50-dimensional vector. The input token is thus a 350-dimensional vector resulting from the concatenation. 4 Experiments To examine the quality of the typological information captured by the language embeddings, we perform intrinsic and extrinsic evaluations. Our intrinsic evaluation consists of predicting linguistic typology and language features from the World Atlas of Language Structures (WALS, Dryer and Haspelmath, 2013). Our extrinsic evaluations are based on cross-lingual dependency parsing and cross-lingual natural language inference (XNLI, Conneau et al., 2018) in a zero-shot learning setting, where a trained model makes predictions on a language not seen during training, but for which a language embedding has been learned from plain monolingual text. In contrast with previous research, which applies learned typology to cluster similar languages and trains machine translation tasks in clusters (Tan et al., 2019), we explore whether we can apply the learned embeddings directly in downstream tasks. We compare three different sets of embeddings based on our approach with three sets of embeddings from previous work: Spe. lang_emb represents language embeddings from our proposed denoising autoencoder trained with language-specific MUSE embeddings, using CommonCrawl text. Eng. lang_emb represents language embeddings trained with English MUSE embeddings after mapping words from different languages to English, using CommonCrawl text. 
Wiki lang_emb represents language embeddings trained with English MUSE embeddings using Wikipedia. We use the same data selection and preprocessing process as detailed in Section 3.1. We use these embeddings to show the impact of training data. (To confirm our assumption about the embeddings for the language codes in the encoder and the decoder, we also performed experiments using the encoder language embeddings; as expected, the results obtained with embeddings from the encoder were inferior in every case tested. We also experimented with different dimensions for the language embedding and did not observe performance differences.) In addition, we use these embeddings to compare with XLM embeddings trained with Wikipedia. Malaviya represents language embeddings from Malaviya et al. (2017), trained with a many-to-one machine translation model using Bible parallel data. It has 26 languages in common with our 29 languages, all except English, Hebrew, and Norwegian. We use these embeddings to represent previous methods of learning language representations from parallel data. XLM mono represents language embeddings trained with the XLM model using the same monolingual data as Wiki lang_emb on 29 languages. XLM parallel represents language embeddings trained with XLM using monolingual and parallel data from the 15 XNLI languages. We extract the embeddings from the publicly available model. 4.1 Linguistic typology prediction We first inspect the language embeddings qualitatively through principal component analysis (PCA) visualization. We also use spectral clustering to recover the language genus (language family subgroup) information from the embeddings. To compare the quality of the clusterings quantitatively, we calculate the adjusted Rand index (Hubert and Arabie, 1985) between the generated clusters and the actual language genera. 4.2 WALS feature prediction We evaluate the language embeddings on predicting language features in WALS. 
Each WALS feature describes a characteristic of languages, such as the order of subject, object, and verb. We consider the features for which information is available for more than 50% of the languages we use and cast each feature prediction as a multi-class classification task. We then classify the features into the following categories (see details in Appendix B). Lexicon : usage of specific words, e.g. whether the language has separate words for hand and arm,",
"etc.; Syntax : mostly related to the relative orders between various types of constituents, including the order of subject, object, and verb, adpositions and noun phrases, and also features related to syntactic constructions. We do not evaluate the embeddings from Malaviya et al. (2017) on parsing and XNLI because they do not include English embeddings, which are necessary for a direct comparison.",
"In XNLI, in particular, there is only training data for English.",
"Partially Morphological (Part. Morph.) : features that mainly concern syntax or semantics but either usually relate to morphology (such as inflectional morphemes) or have morphological information coded in the values of the features, e.g. gender systems, order of negative morphemes and verbs; Non-learnable : features that mainly concern morphology, phonology, or phonotactics, and are not learnable from reordering plain text.",
"The categories make it easier to evaluate what the language embeddings capture.",
"We train linear classifiers to predict WALS results.",
"For each feature, we hold out one language and train a classifier on the language embeddings of the rest of the languages to predict the corresponding feature values on the held-out language embedding, in a leave-one-out cross-validation scheme.",
"We then average the accuracy of the features within each category to report the results.",
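The leave-one-out evaluation loop can be sketched as follows. For simplicity this uses a 1-nearest-neighbor classifier in embedding space rather than the linear classifiers trained in the paper, and the toy embeddings and feature values in the test are hypothetical:

```python
import numpy as np

def leave_one_out_accuracy(embeddings, feature_values):
    """Leave-one-out prediction of one WALS feature from language embeddings.

    For each held-out language, predict its feature value from the remaining
    languages (here via the cosine-nearest neighbor in embedding space; the
    paper trains a linear classifier instead) and report overall accuracy.
    """
    emb = np.asarray(embeddings, dtype=float)
    n, correct = len(feature_values), 0
    for held_out in range(n):
        train = [i for i in range(n) if i != held_out]
        # Cosine similarity from the held-out language to every training language.
        sims = [
            emb[held_out] @ emb[i]
            / (np.linalg.norm(emb[held_out]) * np.linalg.norm(emb[i]))
            for i in train
        ]
        predicted = feature_values[train[int(np.argmax(sims))]]
        correct += predicted == feature_values[held_out]
    return correct / n
```

Averaging this accuracy over the features within a WALS category gives the per-category scores reported in the paper's tables.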
"In addition to comparing different language embeddings, we also compare to two baselines: a Random baseline, and a Majority baseline (which predicts the most common value for each feature).",
"We repeat this procedure 100 times while randomly permuting the order of the input vectors to the classifiers, to eliminate possible effects of initial states, and report the average scores and their significance.",
"In contrast to a recent shared task, where models are given some features of a language (e.g. language family and various WALS features), optionally with pre-computed language embeddings, and must predict other features (Bjerva et al., 2020), we investigate whether trained language embeddings alone can be used to predict WALS features.",
"In addition, unlike Bjerva et al. (2020), we show that our language embeddings outperform a frequency baseline, among other baselines (see Section 5.2).",
"Since our language embeddings are trained using a word ordering task, we hypothesize that they capture syntactic information.",
"To verify that meaningful syntactic information is captured in the language embeddings, we use a dependency parsing task where sentences for each target language are parsed with a model trained with treebanks from other languages, but no training data for the target language.",
"This can be seen as a form of cross-lingual parsing or zero-shot parsing, where multiple source languages are used to train a model for a new target language.",
"Without annotated training data for parsing a target language, the model is expected to leverage treebanks from other languages through language embeddings.",
"We use 16 languages from Universal Dependencies v2.6 (Zeman et al., 2020), representing five distinct language genera (Table 2).",
"We modified Yu Zhang's implementation of the biaffine dependency parser (Dozat and Manning, 2017).",
"Specifically, we freeze word embeddings, concatenate a 50-dimensional embedding (either the corresponding Eng. language embedding or a random embedding) to the embedding of each token, and do not use part-of-speech information (since we assume no annotated data is available for the target language).",
"The goal of this evaluation is not to obtain state-of-the-art attachment scores, but to determine whether a model that uses our language embeddings produces higher attachment scores than a model that instead uses random embeddings of the same size.",
"While our embeddings should capture syntactic typology, random embeddings would simply indicate to the model the language for each sentence with no information about how languages are related.",
"Natural language inference (NLI) is a language understanding task where the goal is to predict textual entailment between a premise and a hypothesis as a three-way classification: neutral , contradiction , and entailment .",
"The XNLI dataset (Conneau et al., 2018) translates English NLI validation and test data into 14 other languages.",
"We evaluate on the ten XNLI languages for which we trained language embeddings.",
"State-of-the-art models on XNLI are Transformers (Vaswani et al., 2017) pretrained on large corpora (Hu et al., 2020).",
"To evaluate whether our learned language embeddings (from an LSTM model) can be plugged off-the-shelf into other architectures such as the Transformer, we compare with two strong Transformer-based baselines, XLM (Conneau and Lample, 2019; L = 12, H = 1024, 250M params) [footnote 6: https://github.com/yzhangcs/parser; footnote 7: random embeddings are used to eliminate the effect of different dimensionality]",
"In our preliminary experiments, we found that adding a random embedding performs better than not adding any embedding.",
"and XLM-R (Conneau et al., 2020; XLM-R Base: L = 12, H = 768, 270M params; XLM-R Large: L = 24, H = 1024, 550M params).",
"XLM adds language embeddings together with each word embedding and position embedding as the input embedding when training with masked language modeling (MLM, with monolingual data) and/or translation language modeling (TLM, with translation parallel data).",
"In comparison, XLM-R removes language embeddings and is pretrained with MLM on much more data.",
"We train our model on the English MultiNLI (Williams et al., 2018) dataset, and directly evaluate the trained model on the other languages without language-specific fine-tuning, in a zero-shot cross-lingual setting.",
"To select the best checkpoint for test set evaluation, we follow Conneau et al. (2020) by evaluating on the development set of all languages.",
"In addition, we also experiment with a fully zero-shot transfer setting where we select the best checkpoint by evaluating on the English development set.",
"We run the selected checkpoint on the test set of each language and report the accuracy scores.",
"We use the publicly available XLM model pretrained on the 15 XNLI languages with MLM and TLM objectives, and XLM-R pretrained on 100 languages.",
"In order to add our learned language embeddings to the XLM and XLM-R models, we normalize our embeddings to have the same variance as the XLM language embeddings, and we learn a simple linear projection layer to map our 50-dimensional embeddings (which are frozen during training) to the hidden dimension of the corresponding model.",
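The variance matching and projection just described can be sketched as follows. The shapes and random matrices are placeholders for the real trained embeddings (29 languages x 50 dimensions on our side; a hypothetical 1024-dimensional XLM hidden size with 15 XNLI languages):

```python
import numpy as np

rng = np.random.default_rng(0)

def match_variance(lang_emb, reference_emb):
    """Rescale our language embeddings so their elementwise variance matches
    the variance of the pretrained model's own language embeddings."""
    scale = np.sqrt(reference_emb.var() / lang_emb.var())
    return lang_emb * scale

# Placeholder data standing in for trained embeddings (assumed shapes).
lang_emb = rng.normal(0.0, 3.0, size=(29, 50))        # our learned embeddings
xlm_lang_emb = rng.normal(0.0, 0.02, size=(15, 1024)) # XLM's language embeddings

normalized = match_variance(lang_emb, xlm_lang_emb)

# A trainable linear projection maps 50 -> 1024 so the (frozen) normalized
# embeddings can be added to the model's word and position embeddings.
W = rng.normal(0.0, 0.01, size=(50, 1024))  # learned during fine-tuning
projected = normalized @ W                  # shape: (29, 1024)
```

Matching the variance keeps the injected embeddings on the same scale as the model's other input embeddings, so they perturb the input distribution as little as possible at the start of fine-tuning.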
"We report all results averaged over three random seeds.",
"See Appendix A.2 for implementation details.",
"We show results of our proposed language embeddings in comparison to the baselines and language vectors generated from previous work on linguistic typology, WALS, cross-lingual parsing, and XNLI.",
"We report results with Eng. language embeddings.",
"Detailed comparison to other language embeddings on each task can be found in Appendix C. 5.1 Linguistic Typology",
"We first examine the PCA projection of the learned language embeddings.",
"Due to space limitations, we only show the projection of the language embeddings using words mapped to English embeddings; using language-specific embeddings produces similar results.",
"We can clearly see the clustering of Slavic languages on the lower left, Romance on the right, and Germanic on the upper left.",
"Our dataset also contains two Finnic languages, which appear right above the Slavic languages, and two Semitic languages, which appear on the lower right.",
"The other languages, Hungarian, Vietnamese, Indonesian, Turkish, and Greek, are from language groups underrepresented in our dataset, and appear either mixed with the Germanic languages (in the case of Hungarian, Turkish, and Greek) or far in the lower right corner (Vietnamese, Indonesian).",
"Romanian, a Romance language, appears miscategorized by our language embeddings.",
"While it is close to the cluster of Romance languages, it appears closer to the singleton languages in the dataset and to the two Semitic languages.",
"In addition to actual language relationships represented by color, we also present the result of spectral clustering with four categories through different shapes.",
"Results illustrate that our language embeddings can capture similarities and dissimilarities among language families.",
"In comparison, language embeddings generated by Malaviya et al. (2017) do not capture clearly visible language relationships (see Appendix C.3).",
"Quantitatively, clusters from our learned language embeddings ( Eng. ) achieve a much higher Rand score (0.58) compared to previous language embeddings, as shown in Table 1 (last column).",
"This indicates that our clusters closely align with true language families.",
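The adjusted Rand index used for this comparison (Hubert and Arabie, 1985) can be computed directly from the two clusterings; a self-contained sketch:

```python
from collections import Counter
from math import comb

def adjusted_rand_index(labels_true, labels_pred):
    """Adjusted Rand index between two clusterings (Hubert and Arabie, 1985),
    as used to compare spectral clusters of language embeddings with the
    actual language genera. 1.0 means identical clusterings (up to relabeling);
    0.0 is the chance-level expectation."""
    n = len(labels_true)
    pairs = Counter(zip(labels_true, labels_pred))  # contingency table cells
    a = Counter(labels_true)   # cluster sizes in the true clustering
    b = Counter(labels_pred)   # cluster sizes in the predicted clustering
    index = sum(comb(c, 2) for c in pairs.values())
    sum_a = sum(comb(c, 2) for c in a.values())
    sum_b = sum(comb(c, 2) for c in b.values())
    expected = sum_a * sum_b / comb(n, 2)
    max_index = (sum_a + sum_b) / 2
    return (index - expected) / (max_index - expected)
```

Because the index is invariant to relabeling, any permutation of cluster IDs that preserves the grouping scores 1.0, which is exactly the behavior needed when spectral cluster IDs are arbitrary.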
"Table 1 shows the prediction accuracy for WALS features, averaged within each category.",
"Unlike the language representations generated by Bjerva et al. (2019b), which do not outperform the majority baseline without finetuning, our derived language embeddings perform significantly better than the baselines and previous methods in lexicon, syntax, and partially morphological categories.",
"Note that even though the training objective of the denoising autoencoder is to recover a language-specific word order, the model does not use linguistic features such as grammatical relation labels or subject-verb-object order information.",
"Instead, it derives typological information from text alone through the word reordering task.",
"The language embeddings generated with words mapped to English embeddings ( Wiki and Eng. ) generally produce more accurate predictions, with the models trained from Wikipedia producing slightly better results likely due to cleaner training data.",
"We show WALS results comparison on 29 languages and comparison to XLM parallel in Appendix C.1.",
"Results from different settings show that we do not need clean data (e.g. Wiki) to generate language embeddings.",
"The cross-lingual dependency parsing results in Table 2 indicate that our language embeddings are in fact effective in allowing a parsing model to leverage information from different languages to parse a new language.",
"Substantial accuracy improvements were observed for 13 of the 16 languages used in the experiment, while accuracy degradation was observed for two languages.",
"Notably, there were large improvements for each of the four Romance languages used (ranging from 7.32 to 10.62 absolute points), and a steep drop in accuracy for Hebrew (-8.21).",
"Although a sizeable improvement was observed for the only other language from the same genus in our experiment (Arabic, with a 4.07 improvement), accuracy for the two Semitic languages was far lower than the accuracy for the other genera.",
"This is likely due to the over-representation of Indo-European languages in our dataset, and the lower quality of the MUSE word alignments for these languages (Appendix D).",
"While our accuracy results are well below current results obtained with supervised methods (i.e. using training data for each target language), the average accuracy improvement of 3.4 over the baseline, which uses the exact same parsing setup but without language embeddings, shows that our language embeddings encode actionable syntactic information, corroborating our results using WALS.",
"The XNLI results in Table 3 indicate that our language embeddings, which capture relationships between each test language and the training language (English), are also effective in tasks involving higher-level semantic information.",
"We observe consistent performance gains over very strong baselines in all settings and models for each language.",
"Specifically, in the fully zero-shot setting, where we select the best model based on the English development data, adding our learned language embeddings yields an average improvement of 1.1 absolute points for XLM.",
"The same trend holds for XLM-R results, not shown due to space limits.",
"On the other hand, if we select the best model on the averaged development sets following Conneau et al. (2020), we observe average performance gains of 0.9, 0.5, and 0.6 absolute points for XLM, XLM-R Base, and XLM-R Large, respectively.",
"We conjecture that the smaller improvement on the XLM-R models compared to XLM is due to the fact that XLM-R was pretrained without language embeddings.",
"When we add our language embeddings to the original word and positional embeddings, the distribution of the overall input embedding (e.g., its variance) changes.",
"Hence, the language embeddings can initially be treated as noise, making the additional information hard to learn and incorporate.",
"However, the improvement is consistent over all strong baselines, suggesting that our language embeddings, which are not optimized towards any specific task, can be leveraged off-the-shelf in large pretrained models and achieve better zero-shot transfer ability in downstream tasks.",
"Our results in each of the intrinsic and extrinsic evaluation settings demonstrate that our denoising autoencoder objective, which has been shown to be effective in various language model pre-training tasks (Lewis et al., 2020; Raffel et al., 2020), is effective for learning language embeddings that capture typological information and can be used to",
"improve cross-lingual inference.",
"Even though reconstructing the original sentence from a randomly ordered string is the direct training objective, our evaluation of the resulting embeddings is not based simply on word order.",
"The grammar of a language is of course an important factor in determining the order of words in a sentence in that language, although it is not the only factor.",
"The syntax area features in our WALS evaluation, which are largely related to relative orders of constituents and syntactic constructions and therefore clearly relevant to our training objective, confirm that part of what our embeddings capture is in fact related to word ordering.",
"However, our results on the lexicon and morphology areas indicate that language-specific information capture in our embeddings goes beyond ordering information.",
"Although it may seem that the model only has access to information about word ordering during training, text in the various languages also provides information about word usage, co-occurrence, and to some extent even inflection through the word embeddings.",
"As a result, language embeddings trained with our approach capture interpretable and useful typological information beyond word order.",
"Because language embeddings are the only signal to the model indicating what each of the languages that are mixed within the training data reads like, we conjecture that our denoising autoencoder objective encourages the embeddings to encode language-specific information necessary to distinguish each language from the others.",
"Language embeddings have the potential to contribute to our understanding of language and linguistic typology, and to improve the performance of downstream multilingual NLP applications.",
"Our proposed method to generate dense vectors to capture language features is relatively simple, based on the idea of denoising autoencoders.",
"The model does not require any labeled or parallel data, which makes it promising for cross-lingual learning in situations where no task-specific training datasets are available.",
"We showed that the trained language embeddings represent typological information, and can also benefit the downstream tasks in a zero-shot learning setting.",
"This is an encouraging result that indicates that task-specific annotated data for various languages can be leveraged more effectively for improved task performance in situations where language-specific resources may be scarce.",
"At the same time, our results indicate that the effectiveness of our approach is sensitive to the set of languages used, highlighting the importance of using a more balanced variety of languages than is current practice, our work included.",
"We will pursue an investigation of the impact of language selection in multilingual and cross-lingual models as future work, to our understanding of these methods and their broader applicability.",
"We thank the anonymous reviewers for their constructive suggestions.",
"This work was supported by the National Science Foundation under Grant No. 1840191.",
"Any opinions, findings, and conclusions or recommendations expressed are those of the authors and do not necessarily reflect the views of the NSF.",
"Our motivation to learn language embeddings without parallel data is to understand how language relationships and typology can be generated without any human annotation.",
"We also explore how our learned language embeddings can be applied to downstream tasks.",
"We hope that our proposed method can inspire future research on generating and utilizing typology in cross-lingual settings because we may not have a large amount of translation data for each language, which has been widely used in past research on data-driven modeling of linguistic typology.",
"Since our proposed method can be easily adapted to different architectures and pre-trained models with minimal cost (in terms of both data annotation cost and computation cost), it can reduce resources needed when applying language embeddings for zero-shot cross-lingual downstream tasks.",
"We run all our experiments on two TITAN RTX GPUs and two RTX 2080Ti GPUs.",
"We compare our language embeddings to baselines in the standard settings in literature."
] | [
"abstain",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"other",
"abstain",
"objective",
"objective",
"abstain",
"result",
"result",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"other",
"abstain",
"other",
"method",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"objective",
"objective",
"abstain",
"result",
"abstain",
"result",
"abstain",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method"
] |
[
"Document-level contextual information has shown benefits to text-based machine translation, but whether and how context helps end-to-end (E2E) speech translation (ST) is still under-studied.",
"We fill this gap through extensive experiments using a simple concatenation-based context-aware ST model, paired with adaptive feature selection on speech encodings for computational efficiency.",
"We investigate several decoding approaches, and introduce in-model ensemble decoding which jointly performs documentand sentence-level translation using the same model.",
"Our results on the MuST-C benchmark with Transformer demonstrate the effectiveness of context to E2E ST. Compared to sentence-level ST, context-aware ST obtains better translation quality (+0.18-2.61 BLEU), improves pronoun and homophone translation, shows better robustness to (artificial) audio segmentation errors, and reduces latency and flicker to deliver higher quality for simultaneous translation.",
"1 1 Introduction Document-level context often offers extra informative clues that could improve the understanding of individual sentences.",
"Such clues have been proven effective for textual machine translation (MT), particularly in handling translation errors specific to discourse phenomena, such as inaccurate corefer-ence of pronouns (Guillou, 2016) and mistranslation of ambiguous words (Rios et al., 2017).",
"Besides, ensuring consistency in translation is virtually impossible without document-level context as well (Voita et al., 2019).",
"Analogous to MT, speech translation (ST) also suffers from these translation issues, and super-sentential context could in fact be more valuable to ST because 1) homophones 1 Source code is available at https://github.com/ bzhangGo/zero .",
"Figure 1 : Overview of the concatenation-based context-aware ST. y n denotes the n -th target sentence in a document; x n denotes the speech encodings extracted from the n -th audio segment.",
"We use dashed gray box to indicate the concatenation operation.",
"< s > : sentence separator symbol.",
"and acoustic noise bring additional ambiguity to ST, and 2) a common use case in ST is simultaneous translation, where the system has to output translations of sentence fragments, and may have to predict future input to account for word order differences between the source and target language (Grissom II et al., 2014).",
"Both for ambiguity from the acoustic signal, and operating on small sentence fragments, we hypothesize that access to extra context 2 will be beneficial.",
"Although recent studies on ST have achieved promising results with end-to-end (E2E) models (Anastasopoulos and Chiang, 2018; Di Gangi et al., 2019; Zhang et al., 2020a; Wang et al., 2020; Dong et al., 2020), nevertheless, they mainly focus on sentence-level translation.",
"One practical challenge when scaling up sentence-level E2E ST to the document-level is the encoding of very long audio segments, which can easily hit the computational bottleneck, especially with Transformers (Vaswani et al., 2017).",
"So far, the research question of whether and how contextual information benefits E2E ST has received little attention.",
"In this paper, we answer this question through extensive experiments by exploring a concatenation-2 By default, we use context to denote both sourceand target-side information from previous sentences.",
"based context-aware ST model.",
"Figure 1 illustrates our model, where neighboring source (tar-get) sequences are chained together into one sequence for joint translation.",
"This paradigm only requires data-level manipulation, thus allowing us to reuse any existing sentence-level E2E ST models.",
"Despite its simplicity, this approach successfully leverages contextual information to improve textual MT (Tiedemann and Scherrer, 2017; Bawden et al., 2018; Lopes et al., 2020), and here we adapt it to ST. As for the computational bottleneck, we shorten the speech encoding sequence via adaptive feature selection (Zhang et al., 2020b,a, AFS), which only retains a small subset of encodings ( 16%) for each audio segment.",
"We investigate several decoding methods, including chunk-based decoding and sliding-window based decoding.",
"We also study an extension of the latter with the constraint of target prefix, where the prefix denotes the translation of previous context speeches.",
"We find that using these methods sometimes results in misaligned translations , particularly when using the constraint.",
"This issue manifests itself in mismatching sentence boundaries and producing overand/or under-translation, which greatly hurts sentence-based evaluation metrics.",
"To avoid such misalignments, we introduce in-model ensemble decoding (IMED) to regularize the document-level translation with its sentence-level counterpart.",
"Note that we use the same context-aware ST model here for both types of translation that's why we call it in-model ensemble.",
"We adopt Transformer (Vaswani et al., 2017) for experiments with the MuST-C dataset (Di Gangi et al., 2019).",
"We study the impact of context on translation in different settings.",
"Our results demonstrate the effectiveness of contextual modeling.",
"Our main findings are summarized below: Incorporating context improves overall translation quality (+0.18-2.61 BLEU) and benefits pronoun translation across different language pairs, resonating with previous findings in textual MT (Miculicich et al., 2018; Huo et al., 2020).",
"In addition, context also improves the translation of homophones.",
"ST models with contexts suffer less from (ar-tificial) audio segmentation errors.",
"Contextual modeling improves translation quality and reduces latency and flicker for simultaneous translation under re-translation strategy (Arivazhagan et al., 2020a).",
"Our work is inspired by pioneer studies on context-aware textual MT. Context beyond the current sentence carries information whose importance for translation cohesion and coherence has long been posited (Hardmeier et al., 2012; Xiong and Zhang, 2013).",
"With the rapid development of neural MT and also available document-level textual datasets, research in this direction gained great popularity.",
"Recent efforts often focus on either advanced contextual neural architecture development (Tiede-mann and Scherrer, 2017; Kuang et al., 2018; Miculicich et al., 2018; Zhang et al., 2018, 2020c; Kang et al., 2020; Chen et al., 2020; Ma et al., 2020a; Zheng et al., 2020) and/or improved analysis and evaluation targeted at specific discourse phenomena (Bawden et al., 2018; Laubli et al., 2018; Guillou et al., 2018; Voita et al., 2019; Kim et al., 2019; Cai and Xiong, 2020).",
"We follow this research line, and adapt the concatenation-based contextual model (Tiedemann and Scherrer, 2017; Bawden et al., 2018; Lopes et al., 2020) to ST. Our main interest lies in exploring the impact of context on ST. Developing dedicated contextual models for ST is beyond the scope of this study, which we leave to future work.",
"Context-aware ST extends the sentence-level ST towards streaming ST which allows models to access unlimited previous audio inputs.",
"Instead of improving contextual modeling, many studies on streaming ST aim at developing better sentence/-word segmentation policies to avoid segmentation errors that greatly hurt translation (Matusov et al., 2007; Rangarajan Sridhar et al., 2013; Iranzo-Sanchez et al., 2020; Zhang and Zhang, 2020; Arivazhagan et al., 2020b).",
"Very recently, Ma et al. (2020b) proposed a memory augmented Transformer encoder for streaming ST, where the previous audio features are summarized into a growing continuous memory to improve the model's context awareness.",
"Despite its success, this method ignores the target-side context, which turns out to have sig-nificant positive impact on ST in our experiments.",
"Our study still relies on oracle sentence segmentation of the audio.",
"The most related work to ours is (Gaido et al., 2020), which also investigated con-textualized translation and showed that context-aware ST is less sensitive to audio segmentation errors.",
"While they exclusively focus on the robustness to segmentation errors, our study investigates the benefits of context-aware E2E ST more broadly.",
"We extend the sentence-level ST with document-level context, by modeling up to C previous source/target segments/sentences for translation.",
"Formally, given a pre-segmented audio (source document) A = (cid:0) a 1 , . . . , a N (cid:1) as well as its paired target document Y = (cid:0) y 1 , . . . , y N (cid:1) , the model is trained to maximize the following likelihood: log p ( Y | A ) = N (cid:88) n =1 log p (cid:0) y n | x n , C n y , C n x (cid:1) , (1) where x n = AFS ( a n ) , i.e. the speech encodings extracted via AFS (Zhang et al., 2020a).",
"a n and y n denote the n -th audio segment and target sentence, respectively.",
"N is the number of seg-ments/sentences in the document.",
"C n x and C n y stand for the source and target context, respectively, i.e. { x n i } Ci =1 and { y n i } Ci =1 .",
"Adaptive Feature Selection Audio segment is often converted into frame-based features for neural modeling.",
"Different from text, each segment might contain hundreds or even thousands of such features, making contextual modeling computationally difficult.",
"Zhang et al. (2020a) found that most speech encodings emitted by a Transformer-based audio encoder carry little information for translation, and their deletion even improves translation quality.",
"We follow Zhang et al. (2020a) and perform AFS to only extract those informative encodings ( 16%) optimized via sentence-level speech recognition with L 0 DROP (Zhang et al., 2020b).",
"This greatly shortens the speech encoding sequence, thus enabling broader context exploration.",
"the previous context ( C n x / C n y ) (Tiedemann and Scherrer, 2017; Bawden et al., 2018) as shown in Figure",
"1. After obtaining the AFS-based encodings ( x n ) for each audio segment, we concatenate those encodings of neighboring segments to form the source input.",
"The same is applied to the target-side sentences, except for a separator symbol < s > inserted in-between sentences to distinguish sentence boundaries.",
"3 Such modeling enables us to use arbitrary encoder-decoder models for context-aware ST, such as the Transformer (Vaswani et al., 2017) used in this paper.",
"Despite no dedicated hierarchical modeling (Miculicich et al., 2018), this paradigm still allows for intraand inter-sentence attention during encoding and decoding, which explicitly utilizes context for translation and has been proven successful (Lopes et al., 2020).",
"Concatenation-based contextual modeling allows for different inference strategies with possible trade-offs between simplicity/efficiency and accuracy.",
"We investigate the following inference strategies (see Figure 2): Chunk-based Decoding (CBD) CBD splits all audio segments in one document into nonoverlapping chunks, with each chunk concatenating C + 1 segments, as shown in Figure 2a.",
"CBD directly translates each chunk, and then recovers sentence-level translation via the separator symbol < s > .",
"CBD is the most efficient inference strategy, only encoding/decoding each sentence once, but it might suffer from misaligned translation , 3 Note that we did not add similar boundary information to audio segments, because AFS implicitly captures these signals through independent segment encoding.",
"producing more or fewer sentences than the input segments.",
"We simply drop the extra generated sentences and replace the missing ones with < unk > when computing sentence-based evaluation metrics.",
"Also, CBD introduces an independence assumption between chunks.",
"Sliding Window-based Decoding (SWBD) SWBD avoids such inter-chunk independence by sequentially translating each audio segment ( x n ), together with its corresponding previous source context ( C n x ).",
"We distinguish two variants of SWBD.",
"The first variant, SWBD, translates the concatenated segments and regards the last generated sentence as the translation of the current segment while discarding all other generations (Figure 2b).",
"Note that this might introduce inconsistencies between the output produced at a time step, and the one used as target context in future time steps.",
"By contrast, the second variant, SWBD-Cons, leverages the previously generated (up to C ) sentences as a decoding constraint, based on which the model only needs to generate one sentence (Figure 2c).",
"In-Model Ensemble Decoding (IMED) We observe that SWBD still suffers from misaligned translation , where the translation of the current segment might contain information from previous segments.",
"We introduce IMED to alleviate this issue as shown in Figure 2d.",
"IMED extends SWBD-Cons by interpolating the document-level prediction ( p d ) with the sentence-level prediction ( p s ) as follows: p s ( y nt | y n<t , x n ) + (1 ) p d ( y nt |C ) , (2) where C = {C n x , C n y , x n , y n<t } , is a hyperparameter, y nt denotes the t -th target word in sentence y n , and both predictions are based on the same model .",
"Intuitively, the sentence-level translation acts as a regularizer, avoiding the overor under-translation.",
"Note IMED with = 0 corresponds to SWBD-Cons.",
"We use the MuST-C dataset (Di Gangi et al., 2019) for experiments, which was collected from English TED talks and covers translations from English to 8 different languages, including German (De), Spanish (Es), French (Fr), Italian (It), Dutch (Nl), Portuguese (Pt), Romanian (Ro) and Russian (Ru).",
"MuST-C offers a standard training, development and test set split for each language pair, with each dataset consisting of English audio, English transcriptions and their translations.",
"Each training set contains transcribed speeches of 452 hours with 252K utterances on average.",
"We report results on tst-COMMON, whose size ranges from 2502 (Es) to 2641 (De) utterances.",
"We perform our major study on MuST-C En-De.",
"To construct acoustic features, for each audio segment, we extract 40-channel log-Mel filterbanks using overlapping windows of 25 ms and step size of 10 ms. We enrich these features with their first and second-order derivatives, followed by mean subtraction and variance normalization.",
"Following Zhang et al. (2020a), we perform nonoverlapping feature stacking to combine the features of three consecutive frames.",
"All the texts are tokenized and truecased (Koehn et al., 2007), with out-of-vocabulary words handled by BPE segmentation (Sennrich et al., 2016), using 16K merging operations.",
"Model Settings and Evaluation Our context-aware ST follows Transformer base (Vaswani et al., 2017): 6 layers, 8 attention heads, and hidden/feed-forward size 512/2048.",
"We use Adam ( 1 = 0 . 9 , 2 = 0 . 98 ) (Kingma and Ba, 2015) for parameter updates with label smoothing of 0.1.",
"We use the same learning rate schedule as Vaswani et al. (2017) and set the warmup step to 4K.",
"We apply dropout to attention weights and residual connections with a rate of 0.2 and 0.5, respectively.",
"By default, we set C = 2 and = 0 .",
"5 .",
"Following (Zhang et al., 2020a), we apply AFS( (cid:15) = 0 .",
"1 , = 2 / 3 ) to both temporal and feature dimensions for feature selection, which prunes out 84% speech encodings.",
"We initialize our context-aware ST with the sentence-level Baseline, i.e. ST+AFS, and then finetune the model for 20K steps based on the concatenation method with a batch size of around 40K subwords.",
"4 We adopt beam search for decoding, with a beam size of 4 and length penalty of 0.6.",
"We average the last 5 checkpoints for evaluation.",
"We measure general translation quality with tokenized case-sensitive BLEU (Papineni et al., 2002) and also report the detokenized one via sacreBLEU (Post, 2018) 5 for cross-paper comparison.",
"We calculate BLEU based on sentences unless oth-4 Our experiments show that such initialization eases the learning of long inputs and improves the convergence of context-aware ST. 5 signature: BLEU+c.mixed+#.1+s.exp+tok.13a+v.1.3.6 ID Model BLEU APT 1 Baseline (ST+AFS) 22.38 (27.40) 60.77 2 Ours + CBD 22.72 (27.95) 62.31 3 Ours + SWBD 22.70 (28.02) 62.83 4 Ours + SWBD-Cons 22.11 (27.98) 60.94 5 Ours + IMED 22.86 ( 28.03 ) 62.56 6 1 + 20K-step finetuning 22.02 (27.00) 61.58 7 5 + = 1 .",
"Table 1 : Case-sensitive tokenized BLEU and APT for different models and settings on MuST-C En-De test set.",
"Numbers in bracket denote document -based BLEU.",
"lp : the length penalty for beam search decoding.",
"w/o C n y : models that are trained without target-side context.",
"Best results are highlighted in bold.",
"Note C = 2 , = 0 .",
"5 and lp = 0.6 by default.",
"erwise specified.",
"We use APT (Miculicich Werlen and Popescu-Belis, 2017), the accuracy of pronoun translation, as an approximate proxy for document-level evaluation.",
"Word alignment required by APT is automatically extracted via fast align (Dyer et al., 2013) with the strategy grow-diag-final-and .",
"Does context improve translation?",
"Yes, but the decoding method matters for context-aware ST. Table 1 summarizes the results.",
"Our model with IMED outperforms Baseline by +0.48 BLEU (sig-nificant at p < 0.05) 6 and +1.79 APT (1 5), clearly showing the benefits from contextual modeling.",
"Although SWBD-Cons yields worse sentence-based BLEU (-0.27, 1 4), it still beats Baseline in document-based BLEU (+0.58) and pronoun translation (+0.17 APT).",
"The reason behind this inferior BLEU partially lies in misaligned translation (see Table 8 in Appendix for example).",
"We observe that SWBD-Cons sometimes segments its output in a way that is misaligned to the reference segmentation.",
"This also hurts CBD, where CBD produces mismatched sentences for around 1.8% cases.",
"This is only a problem if we rely on the sentence-level alignment for BLEU, but not when we measure document-based BLEU (in brackets), where translations in one document are concatenated into a sequence for BLEU calculation.",
"Overall, SWBD 6 We perform significance test using bootstrap-hypothesis-difference-significance.pl in moses (Koehn et al., 2007).",
"and IMED are more stable and perform the best, and SWBD surpasses Baseline by 2.06 APT (1 3).",
"We will proceed with using IMED and SWBD for more reliable results with APT and later analysis.",
"Since we finetune our model based on the pre-trained Baseline, directly comparing with Baseline might be unfair.",
"To offset its influence, we continue to train Baseline for the same 20K steps, following the settings in Section 5.1.",
"Results show that this extra training (1 6) slightly deteriorates BLEU (-0.36) and only explains part of the improvement in APT (+0.81).",
"Therefore, the gain brought by SWBD and IMED does not come from longer training.",
"However, we do observe that initializing from the sentence-level Baseline benefits context-aware ST, compared to directly training context-aware ST from the AFS model (13 3, 14 4).",
"Apart from faster convergence and higher quality, another benefit of this finetuning is that the trained context-aware ST still carries the ability to translate individual sentences.",
"Table 1 shows that using context-aware ST for sentence-level translation (1 7) yields similar BLEU to Baseline (+0.04) but surprisingly much better pronoun translation (+1.19), although it still underperforms SWBD and IMED.",
"The fact that we can perform sentence-level ST using the same context-aware ST model indicates that it can be useful for ensembling, as confirmed by the effectiveness of IMED.",
"Upon closer inspection, we find that context-aware ST prefers to produce longer translations than Baseline.",
"To control for the effects of output length on BLEU differences, we experiment with larger length penalty ( lp : 0.6 1.0) to beam search.",
"Results in Table 1 show that biasing the decoding greatly improves sentence-level ST (1 8), achieving performance on par with context-aware ST (when lp is 0.6) in terms of BLEU with similar translation lengths but still falling short of pronoun translation (-0.94 APT, 8 3).",
"In addition, we observe that context-aware ST also benefits from decoding with larger length penalty, beating all sentence-level ST models (3 9, 5 10).",
"Particularly, SWBD with lp of 1.0 delivers the best BLEU of 22.97 and APT of 63.51 (3 9).",
"Note we adopt lp of 0.6 for the following experiments.",
"Does target-side context matter for context-aware ST?",
"Yes, it matters a lot.",
"By default, we utilize both sourceand target-side context for contextual modeling.",
"Removing the target-side part (also at training), as shown in Table 1 (11, 12), sub-Model BLEU APT SWBD 22.70 62.83 SWBD + Random C n x 22.31 61.16 IMED 22.86 62.56 IMED + Random C n x 21.83 59.95 IMED + Random C n y 21.99 60.01 IMED + Random C n y & C n y 21.76 59.67 Table 2 : Case-sensitive tokenized BLEU and APT for context-aware ST with random source/target context on MuST-C En-De test set.",
"We report average performance over three runs with different random seeds.",
"C = 2 , = 0 .",
"5 .",
"Incorrect context hurts our model.",
"stantially weakens translation quality, even leading to worse performance than Baseline.",
"Apart from offering direct target-side translation clues, we argue that the target-side context also enforces the context-aware ST to utilize the source-side context for translation, thus benefiting its training.",
"This observation echoes with several previous studies on textual translation (Bawden et al., 2018; Huo et al., 2020; Lopes et al., 2020).",
"Does the model learn to utilize context?",
"Yes.",
"We answer this question by studying the impact of incorrect context on our model.",
"We replace the correct source context with some random audio segments from the same document, and randomly select the target context from previous translations during decoding.",
"Intuitively, the performance of our model should be intact if it ignores the context.",
"Note that we trained our model with correct contexts but test it with random contexts here.",
"Results in Table 2 show that the randomized context, either sourceor target-side, hurts the performance of our model in both BLEU and APT, similar to the findings in (Voita et al., 2018), and the translation of pronouns suffers more ( > -1.6 APT).",
"Compared to SWBD, the incorrect context has more negative impact on IMED, resulting in worse performance than Baseline (Table 1), although IMED also uses sentence-level translation.",
"We ascribe this to the target prefix constraint in IMED which makes translation errors at early decoding much easier to propagate.",
"We observe that the incorrect target context acts similarly to its source counterpart under IMED, albeit its selection scope is much smaller (only limited to the translated segments), and combining both contexts leads to a slight but consistent performance degradation.",
"These results demonstrate that our model indeed learns to use contextual information for translation.",
"Figure 3 : Case-sensitive tokenized BLEU (top) and APT (bottom) as a function of context size C on MuST-C En-De test set.",
"Figure 4 : Case-sensitive tokenized BLEU (left y-axis) and APT (right y-axis) on MuST-C En-De test set when varying for IMED.",
"Solid and dashed curves are for BLEU and APT, respectively.",
"C = 2 .",
"How much context sentences should we use?",
"Although adding extra context provides more information, it makes learning harder: neural models often struggle with long sequences.",
"Figure 3 shows the impact of context size on translation.",
"We find that our models do not benefit from context size beyond 2 previous segments.",
"Figure 3 also shows that the overall trend of the impact of C on BLEU and APT is similar for different decoding methods.",
"Increasing C to 1 delivers the best APT, while context-aware ST achieves its best BLEU at C = 2 .",
"We use C = 2 for the following experiments.",
"Impact of on IMED.",
"IMED heavily relies on the hyperparameter (Eq. 2) to control its preference between sentence-level and document-level decoding.",
"Figure 4 shows its impact on translation Model ACC hp Baseline (ST+AFS) 48.93 Ours + SWBD 49.90 Ours + IMED 49.66 Ours + IMED = 1 .",
"Table 3 : Translation accuracy of homophones (ACC hp ) MuST-C En-De test set.",
"C = 2 , = 0 .",
"5 .",
"quality, which clearly reveals a trade-off.",
"The performance of IMED (BLEU and APT) reaches its peak at = 0 .",
"4 , and decreases when becomes either smaller or larger.",
"The optimal value of for IMED might vary greatly across different language pairs.",
"It also shows some difference across evaluation sets (see Figure 7 in Appendix).",
"In the following experiments, we will apply equal weighting ( = 0 . 5 ), a common choice for model ensembles and not substantially worse than the optimum on this dataset.",
"Impact of context on homophone translation.",
"Homophones (words that sound the same but hold different meanings, such as I vs. eye and would vs. wood ) and other acoustically similar words increase the learning difficulty of ST models compared to textual MT. To allow for automatic quantitative evaluation, we extract words from the MuST-C test set transcriptions which share the same phonemes with Montreal Forced Aligner (McAuliffe et al., 2017).",
"We collect all homophones and evaluate their translation accuracy (ACC hp ) in the same way as APT.",
"Table 3 shows that context-aware ST outperforms Baseline by > 0.73 ACC hp , where SWBD performs slightly better than IMED.",
"After removing the document-level decoding, IMED ( = 1 . 0 ) performance drops greatly, even underperforming Baseline.",
"While we see some improvements to homophone translations, they are in the same relative range as general improvements from context.",
"Anecdotal examples from manual inspection (see Table 7 in Appendix) indicate that context may at times help disambiguate acoustically similar forms, but that (near-)homophones still remain a salient source of translation errors.",
"Context improves the robustness of ST models to audio segmentation errors.",
"In MuST-C, the audio is already well-segmented, with each segment corresponding to a short transcript.",
"Nevertheless, natural audio, streaming speech in particular, has no such segment boundaries, and how to partition audio itself is an active research area (Rangarajan Sridhar et al., 2013; Zhang and Zhang, 2020).",
"Table 4: Document-level case-sensitive tokenized BLEU for different models on the MuST-C En-De test set with erroneous audio segmentation, with C = 2 and interpolation weight 0.5. Random: Baseline (ST+AFS) 20.40, Ours + SWBD 21.83, Ours + IMED 22.03; Gold: Baseline (ST+AFS) 27.40, Ours + SWBD 28.02, Ours + IMED 28.03.",
"We report average BLEU over three runs; each run uses a different random seed to simulate segmentation errors.",
"Random/Gold: document-based BLEU when the random/gold segments are used.",
"Since ST models are often trained with gold segments, they inevitably suffer from segmentation errors at inference when the gold ones are unavailable.",
"The bottleneck mainly comes from the incompleteness of each segment, which, we argue, contextual information could alleviate.",
"We simulate segmentation errors by randomly re-segmenting the audio in MuST-C En-De test set based on the given segment number.",
"Specifically, given an audio file with N gold segments, we randomly re-segment it into N disjoint pieces, where each piece usually has different boundaries from its gold counterpart.",
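The re-segmentation simulation can be sketched as follows (a minimal sketch over frame indices; the frame representation, seeding, and function name are assumptions):

```python
import random

def random_resegment(total_frames, n_segments, rng=None):
    """Split an audio of `total_frames` frames into `n_segments` disjoint,
    contiguous pieces with random boundaries, keeping the segment count
    fixed as in the simulation described above."""
    rng = rng or random.Random(0)
    # choose n_segments - 1 distinct interior cut points
    cuts = sorted(rng.sample(range(1, total_frames), n_segments - 1))
    bounds = [0] + cuts + [total_frames]
    return [(bounds[i], bounds[i + 1]) for i in range(n_segments)]

segs = random_resegment(1000, 4)
```

The pieces cover the audio exactly once, so only the boundaries (not the amount of audio) differ from the gold segmentation.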
"We evaluate the different ST models with document-based BLEU.",
"Table 4 summarizes the results.",
"Segmentation noise degrades translation quality for all ST models to a large degree (by more than 6 BLEU).",
"Compared to sentence-level ST, context-aware ST is less sensitive to those errors.",
"In particular, our model with IMED yields a document-based BLEU of 22.03, substantially outperforming Baseline (by 1.63 BLEU).",
"Our results also confirm the findings of Gaido et al. (2020).",
"Context benefits simultaneous translation.",
"Simultaneous translation requires that we start decoding before receiving the whole audio input to minimize latency; operating on such short units increases ambiguity, and the model may be forced to predict future input to account for word order differences, which we hypothesize is easier with access to super-sentential context.",
"Note that we intentionally keep the same segment number, N, in the simulated noisy segmentation, because this offers a fair setup for analyzing the impact of segmentation errors on the final translation, compared against the gold segmentation.",
"This avoids the potential influence of a mismatched segment number.",
"We leave the study of the model's robustness to genuine segmentation noise to future work.",
"Table 6: Simultaneous translation results (BLEU, DAL and NE) for different models on the MuST-C En-De test set, with C = 2 and interpolation weight 0.5.",
"We focus on segment-level E2E simultaneous translation, and adopt the re-translation method (Niehues et al., 2016; Arivazhagan et al., 2020b,a), where we translate the source input segment from scratch after every 1 second.",
"For training, we finetune each model for an extra 20K steps with a 1:1 mix of full-segment and prefix pairs, following Arivazhagan et al. (2020a).",
"We construct the prefix pairs by uniformly selecting an audio prefix length and then proportionally deciding the target prefix length based on the sentence length.",
"Note that the context inputs in our model are still full segments/sentences.",
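The prefix-pair construction can be sketched as follows (a hedged sketch of the procedure described above; the function name and the frame/token representations are illustrative):

```python
import random

def make_prefix_pair(audio_frames, target_tokens, rng=None):
    """Build one training prefix pair: uniformly pick an audio prefix
    length, then take a proportional prefix of the target sentence."""
    rng = rng or random.Random(0)
    n_frames = len(audio_frames)
    prefix_frames = rng.randint(1, n_frames)
    # proportional target prefix, keeping at least one token
    n_tgt = max(1, round(len(target_tokens) * prefix_frames / n_frames))
    return audio_frames[:prefix_frames], target_tokens[:n_tgt]

audio = list(range(100))            # stand-in for 100 audio frames
target = "das ist ein Test".split()
a_pre, t_pre = make_prefix_pair(audio, target)
```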
"We adopt tokenized BLEU, differentiable average lagging (DAL), and normalized erasure (NE) to evaluate the translation quality, latency and stability, respectively, following Arivazhagan et al. (2020a).",
"Note that DAL and NE are measured at the word level.",
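A word-level sketch of normalized erasure (NE), assuming the common re-translation definition (total number of retracted words across successive hypotheses, divided by the final translation length; the function name and example are illustrative):

```python
def normalized_erasure(hypotheses):
    """NE over a sequence of word-level re-translation hypotheses:
    words of a previous hypothesis that are not kept as a prefix of the
    next one count as erased."""
    def common_prefix_len(a, b):
        n = 0
        for x, y in zip(a, b):
            if x != y:
                break
            n += 1
        return n

    erased = 0
    for prev, cur in zip(hypotheses, hypotheses[1:]):
        # words of the previous hypothesis that had to be retracted
        erased += len(prev) - common_prefix_len(prev, cur)
    return erased / max(1, len(hypotheses[-1]))

hyps = [
    ["the"],
    ["the", "cat"],
    ["a", "cat", "sat"],      # retracts "the cat" -> 2 erased words
    ["a", "cat", "sat", "down"],
]
ne = normalized_erasure(hyps)  # 2 erased words / 4 final words = 0.5
```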
"Results in Table 6 show that context-aware ST improves translation quality (by at least 0.84 BLEU) and reduces translation latency (by at least 0.06 DAL) regardless of the decoding method.",
"It also enhances translation stability when the target prefix constraint is applied (by at least 0.08 NE, for SWBD-Cons and IMED).",
"SWBD performs worse in NE because it allows changes to the translation of the context, which increases instability.",
"Overall, context provides extra information to the translation model, benefiting simultaneous translation.",
"Figure 5: DAL (left y-axis) and NE (right y-axis) as a function of the interpolation weight for IMED on the MuST-C En-De test set in the simultaneous translation setting. Solid and dashed curves are for DAL and NE, respectively. C = 2; a weight of 0.0 corresponds to document-level decoding and 1.0 to sentence-level decoding.",
"Figure 5 further illustrates how context impacts simultaneous translation.",
"As the weight approaches 1.0, i.e. pure sentence-level decoding, IMED produces higher DAL and NE, i.e. higher latency and more erasure.",
"We ascribe the reduction in latency and erasure in our model to the inclusion of contextual information.",
"Table 5 summarizes the results for all 8 translation pairs covered by MuST-C.",
"Overall, our model obtains improvements over most metrics and language pairs, despite their different language characteristics.",
"Out of 8 languages, our model performs relatively worse on Es and It with smaller BLEU gains and even negative results in ACC hp .",
"By contrast, our model yields the largest improvement on Ro.",
"In particular, our model with IMED achieves a detokenized BLEU of 23.6 on En-Ro, surpassing the best previously reported result of 22.2 (Zhao et al., 2020).",
"Our experiments confirm the effectiveness of context-aware modeling for end-to-end speech translation.",
"With concatenation-based contextual modeling and an appropriate decoding method, we observe a positive impact of context on translation.",
"Context-aware ST improves general translation quality in BLEU, and also helps pronoun and homophone translation.",
"ST models become less sensitive to (artificial) audio segmentation errors with context.",
"In addition, context also improves simultaneous translation by reducing latency and erasure.",
"We observe overall positive results over different languages and evaluation metrics on the MuST-C corpus.",
"In the future, we will investigate more dedicated neural architectures to handle long-form speech input.",
"While we relied on a dataset with sentence segmentation in this work, we are interested in removing the reliance on segmentation at inference time to implement the full-fledged streaming translation scenario.",
"We thank the reviewers for their insightful comments.",
"This project has received funding from the European Union's Horizon 2020 Research and Innovation Programme under Grant Agreements 825460 (ELITR).",
"Rico Sennrich acknowledges support of the Swiss National Science Foundation (MUTAMUR; no. 176727)."
] | [
"abstain",
"method",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"method",
"result",
"abstain",
"method",
"method",
"method",
"method",
"objective",
"abstain",
"objective",
"objective",
"objective",
"abstain",
"other",
"other",
"abstain",
"other",
"other",
"other",
"abstain",
"abstain",
"other",
"objective",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"method",
"result",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"method",
"other",
"other",
"other"
] |
[
"Evaluating Natural Language Generation (NLG) systems is a challenging task.",
"Firstly, the metric should ensure that the generated hypothesis reflects the reference's semantics.",
"Secondly, it should consider the grammatical quality of the generated sentence.",
"Thirdly, it should be robust enough to handle various surface forms of the generated sentence.",
"Thus, an effective evaluation metric has to be multifaceted.",
"In this paper, we propose an automatic evaluation metric incorporating several core aspects of natural language understanding (language competence, syntactic and semantic variation).",
"Our proposed metric, RoMe, is trained on language features such as semantic similarity combined with tree edit distance and grammatical acceptability, using a self-supervised neural network to assess the overall quality of the generated sentence.",
"Moreover, we perform an extensive robustness analysis of the state-of-the-art methods and RoMe.",
"Empirical results suggest that RoMe has a stronger correlation to human judgment over state-of-the-art metrics in evaluating system-generated sentences across several NLG tasks.",
"Automatic generation of fluent and coherent natural language is a key step for human-computer interaction.",
"Evaluating generative systems such as text summarization, dialogue systems, and machine translation is challenging since the assessment involves several criteria such as content determination, lexicalization, and surface realization (Liu et al., 2016; Dale and Mellish, 1998).",
"For assessing system-generated outputs, human judgment is considered to be the best approach.",
"Obtaining human evaluation ratings, on the other hand, is both expensive and time-consuming.",
"As a result, developing automated metrics for assessing the quality of machine-generated text has become an active area of research in NLP.",
"The quality estimation task primarily entails determining the similarity between the reference and hypothesis as well as assessing the hypothesis for grammatical correctness and naturalness.",
"Widely used evaluation metrics such as BLEU (Papineni et al., 2002), METEOR (Banerjee and Lavie, 2005), and ROUGE (Lin, 2004), which compute word overlaps, were primarily designed for evaluating machine translation and text summarization systems.",
"Word-overlap based metrics, on the other hand, are incapable of capturing the hypotheses' naturalness and fluency.",
"Furthermore, they do not consider the syntactic difference between reference and hypothesis.",
"In a different line of research, word mover distance (WMD) (Kusner et al., 2015), BERTScore (Zhang et al., 2020a) and MoverScore (Zhao et al., 2019) compute word embedding based similarity for evaluating system-generated texts.",
"Although these metrics employ the contextualized representation of words, they do not take the grammatical acceptability of the hypothesis and the syntactical similarity to the reference into account.",
"To address these shortcomings, we propose RoMe, an automatic and robust metric for evaluating NLG systems.",
"RoMe employs a neural classifier that uses the generated sentence's grammatical, syntactic, and semantic qualities as features to estimate the quality of the sentence.",
"Firstly , it calculates the earth mover's distance (EMD) (Rubner et al., 1998) to determine how much the hypothesis differs from the reference.",
"During the computation of EMD, we incorporate hard word alignment and soft-penalization constants to handle various surface forms of words in a sentence, such as repeated words and the passive form of a sentence.",
"Secondly , using a semantically enhanced tree edit distance, the difference in syntactic structures between the reference and hypothesis sentences is quantified.",
"Thirdly, the metric incorporates a binary classifier to evaluate the grammatical acceptability of the generated hypotheses.",
"Finally , the scores obtained from the preceding steps are combined to form a representation vector, which is subsequently fed into a self-supervised network.",
"The network produces a final score, referred to as RoMe's output which represents the overall quality of the hypothesis statement.",
"We investigate the effectiveness of our proposed metric by conducting experiments on datasets from various NLG domains: a knowledge-graph-based language generation dataset (KELM (Agarwal et al., 2021)), dialogue datasets (Eric et al., 2017; Chaudhuri et al., 2021), the WebNLG 2017 challenge dataset (Shimorina et al., 2018), and structured-data-to-language generation datasets (BAGEL (Mairesse et al., 2010) and SFHOTEL (Wen et al., 2015)).",
"The capability of existing metrics to handle various forms of text has lately become a matter of debate in the NLP community (Ribeiro et al., 2020; Novikova et al., 2017; Liu et al., 2016).",
"Hence, we conduct an extensive robustness analysis to assess RoMe's performance in handling diverse forms of system-generated sentences.",
"To verify our claim, we design the analysis based on the text perturbation methods used in CHECKLIST (Ribeiro et al., 2020) and adversarial text transformation techniques from TextFooler (Jin et al., 2020) and TextAttack (Morris et al., 2020).",
"Empirical assessment on benchmark datasets and the robustness analysis results exhibit that RoMe can handle various surface forms and generate an evaluation score, which highly correlates with human judgment.",
"RoMe is designed to function at the sentence level and can be used to evaluate English sentences in the current version of the implementation.",
"In the future versions, we plan to extend RoMe by including more languages.",
"We release the code and annotation tool publicly.",
"The Earth Mover's Distance (EMD) estimates the amount of work required to transform a probability distribution into another (Rubner et al., 1998).",
"Inspired by the EMD, in NLP the transportation problem is adopted to measure the amount of work required to match the system generated hypothesis sentence with the reference sentence (Kusner et al., 2015; Zhao et al., 2019).",
"Let us define the reference as R = {r_1, r_2, ..., r_p} and the hypothesis as H = {h_1, h_2, ..., h_q}, where r_i and h_j denote the i-th and j-th word of the reference and hypothesis, respectively.",
"Figure 1: Illustrating an abstraction of the EMD.",
"The code and annotation tool are available at https://github.com/rashad101/RoMe.",
"The weights of the words r_i and h_j are denoted m_i and n_j, respectively.",
"Then, the total weight distributions of R and H are m_Σ = Σ_{i=1}^{p} m_i and n_Σ = Σ_{j=1}^{q} n_j, respectively.",
"Here, a word's sentence-level, normalized TF-IDF score is taken as its weight.",
"Formally, EMD can be defined as: EMD(H, R) = min_{f_ij ∈ F(H,R)} Σ_{i=1}^{p} Σ_{j=1}^{q} d_ij f_ij / min(m_Σ, n_Σ) (1), where d_ij is the distance between the words r_i and h_j in the space and F(H, R) is the set of possible flows between the two distributions that the system optimizes.",
"In Equation 1, EMD ( H , R ) denotes the amount of work required to match the hypothesis with the reference.",
"The optimization is done under four constraints: f_ij ≥ 0 for i = 1, ..., p and j = 1, ..., q; Σ_{j=1}^{q} f_ij ≤ m_i for i = 1, ..., p; Σ_{i=1}^{p} f_ij ≤ n_j for j = 1, ..., q; and Σ_{i=1}^{p} Σ_{j=1}^{q} f_ij = min(m_Σ, n_Σ) (2).",
"The first constraint indicates that each flow must be non-negative.",
"The second constraint limits the total weight flowing from r_i to at most m_i.",
"Similarly, the third constraint restricts the total weight flowing from h_j to at most n_j.",
"The final constraint requires the total flow to equal the minimum of the two total weight distributions.",
"Figure 1 depicts the EMD for a given hypothesis-reference pair.",
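For intuition, the transportation problem above can be solved by brute force in the special case of uniform word weights and equally long sentences, where the optimal flow reduces to a one-to-one assignment (a didactic simplification under stated assumptions, not a general EMD solver):

```python
from itertools import permutations

def emd_uniform(dist):
    """Brute-force EMD for uniform word weights and p == q words.
    The optimal flow is then a permutation of hypothesis words, so we
    minimize the total distance over all assignments.
    dist[i][j] is the distance between reference word i and hypothesis
    word j."""
    p = len(dist)
    best = min(
        sum(dist[i][perm[i]] for i in range(p))
        for perm in permutations(range(p))
    )
    # each word carries weight 1, so min(m_sigma, n_sigma) == p
    return best / p

# toy distance matrix: reference word 0 matches hypothesis word 1,
# and vice versa, so the cross assignment is optimal
d = [[0.9, 0.1],
     [0.2, 0.8]]
```

Here the cross assignment costs (0.1 + 0.2) / 2 = 0.15, versus 0.85 for the identity assignment.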
"In computational linguistics, dependency and constituency trees are used to represent syntactic dependencies between words in a sentence.",
"Unlike the constituency tree, a dependency tree can represent non-adjacent and non-projective dependencies in a sentence, which frequently appear in spoken language and noisy text.",
"That leads us to prefer dependency trees over constituency trees for evaluating NLG output.",
"Formally, a dependency tree consists of a set of nodes {w_0, w_1, ..., w_k} and a set of dependency links G = {g_0, g_1, ..., g_k}, where w_0 is an imaginary root node and g_i is an index into the node set representing the governor of w_i.",
"Every node has exactly one governor except for w_0, which has no governor (Hall and Novák, 2010).",
"Syntactic similarity between a pair of dependency trees can be estimated using several methods, such as graph centralities and Euclidean distances (Oya, 2020).",
"In our work, we exploit the Tree Edit Distance (TED) algorithm (Zhang and Shasha, 1989) to estimate syntactic similarity between reference and hypothesis.",
"TED is typically computed on ordered labeled trees and can thus be used to compare dependency trees.",
"The edit operations performed during the comparison of parsed dependency trees include Change , Delete , and Insert .",
"Let T_H and T_R be the parsed dependency trees of the hypothesis and reference, respectively.",
"The operations required to transform one tree into another are visualized in Figure 2.",
"In TED, an exact match between the nodes of the compared trees is performed to decide if any edit operation is required.",
"In this work, the syntactic difference between hypothesis and reference is determined by the output of TED, which specifies the total number of edit operations.",
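A minimal recursive sketch of ordered-tree edit distance with a pluggable node-equality test (a didactic simplification, not the efficient Zhang and Shasha (1989) algorithm or the paper's implementation; the toy embeddings, names, and threshold are invented for illustration — the semantic equality variant previews the TED-SE idea of treating similar nodes as equal):

```python
from functools import lru_cache

def tree_size(t):
    label, children = t
    return 1 + sum(tree_size(c) for c in children)

def ted(t1, t2, equal=lambda a, b: a == b):
    """Edit distance between rooted ordered trees given as
    (label, (child, ...)) tuples, with unit-cost insert/delete/rename
    and a pluggable node-equality test."""
    @lru_cache(maxsize=None)
    def d(f1, f2):  # f1, f2 are forests (tuples of trees)
        if not f1 and not f2:
            return 0
        if not f1:
            return sum(tree_size(t) for t in f2)
        if not f2:
            return sum(tree_size(t) for t in f1)
        v, c1 = f1[-1]
        w, c2 = f2[-1]
        return min(
            d(f1[:-1] + c1, f2) + 1,          # delete v (promote children)
            d(f1, f2[:-1] + c2) + 1,          # insert w
            d(f1[:-1], f2[:-1]) + d(c1, c2)   # match the rightmost roots,
            + (0 if equal(v, w) else 1),      # renaming if labels differ
        )
    return d((t1,), (t2,))

# toy 2-d "embeddings"; real systems would use pretrained vectors
TOY_EMB = {"fish": (1.0, 0.1), "salmon": (0.9, 0.2),
           "cat": (0.0, 1.0), "eats": (0.5, 0.5)}

def semantic_equal(a, b, threshold=0.9):
    """Nodes are equal if their embedding cosine similarity exceeds
    a threshold."""
    xa, ya = TOY_EMB[a]; xb, yb = TOY_EMB[b]
    dot = xa * xb + ya * yb
    norm = ((xa**2 + ya**2) ** 0.5) * ((xb**2 + yb**2) ** 0.5)
    return dot / norm > threshold

t_ref = ("eats", (("cat", ()), ("fish", ())))
t_hyp = ("eats", (("cat", ()), ("salmon", ())))
```

Under exact matching the trees differ by one rename (fish vs. salmon); under the semantic equality test the near-synonyms are treated as equal and the distance drops to zero.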
"In RoMe, a neural network determines the final evaluation score given a reference-hypothesis pair.",
"The network is trained to predict the evaluation score based on three features: semantic similarity computed by EMD, enhanced TED, and the grammatical acceptability score.",
"We explain these features in the following subsections.",
"During the computation of EMD, we employ hard word alignment and soft-penalization techniques to tackle repetitive words and passive forms of a sentence.",
"We compute a distance matrix and a flow matrix as described below and finally obtain EMD utilizing Equation 1.",
"Hard Word Alignment.",
"We first align the word pairs between reference and hypothesis based on their semantic similarities.",
"The alignment is performed by computing all paired cosine similarities while taking word position information into account, as in (Echizen-ya et al., 2019).",
"In contrast to Echizen-ya et al. (2019), we use contextualized pre-trained word embeddings from the language model ALBERT (Lan et al., 2020).",
"ALBERT uses sentence-order prediction loss, focusing on modeling inter-sentence coherence, which improves multi-sentence encoding tasks.",
"The word alignment score is computed as follows: A(r_i, h_j) = cos(r_i, h_j) − |q(i + 1) − p(j + 1)| / (pq) (3), where cos(r_i, h_j) is the cosine similarity between the contextualized word embeddings of r_i and h_j.",
"The first term on the right-hand side of the equation computes this cosine similarity, and the second term captures relative position information, as proposed in Echizen-ya et al. (2019).",
"Figure 3 depicts a matrix of word alignment scores generated on an example pair of sentences.",
"This alignment strategy fails to handle repeated words, where a word from the hypothesis may be aligned to several words in the reference (see Figure 4).",
"To tackle such cases, we restrict the word alignment by imposing a hard constraint.",
"Under the hard constraint, we prevent words in the hypothesis from being aligned to multiple words in the reference, as illustrated by the dotted arrows in Figure 4.",
"We denote the resulting set of hard-aligned word pairs as A hc .",
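The alignment score of Equation 3 together with a greedy version of the hard one-to-one constraint can be sketched as follows (toy embeddings; the greedy best-score-first tie-breaking is an assumption for illustration, not necessarily the paper's exact procedure):

```python
def align_score(r_vec, h_vec, i, j, p, q):
    """Eq. 3 alignment score: cosine similarity of the word embeddings
    minus a relative word-position penalty."""
    dot = sum(a * b for a, b in zip(r_vec, h_vec))
    norm = (sum(a * a for a in r_vec) ** 0.5) * (sum(b * b for b in h_vec) ** 0.5)
    return dot / norm - abs(q * (i + 1) - p * (j + 1)) / (p * q)

def hard_align(ref_vecs, hyp_vecs):
    """Hard constraint: each hypothesis word aligns to at most one
    reference word; pairs are picked greedily by decreasing score."""
    p, q = len(ref_vecs), len(hyp_vecs)
    scores = [(align_score(ref_vecs[i], hyp_vecs[j], i, j, p, q), i, j)
              for i in range(p) for j in range(q)]
    used_r, used_h, pairs = set(), set(), []
    for s, i, j in sorted(scores, reverse=True):
        if i not in used_r and j not in used_h:
            used_r.add(i); used_h.add(j)
            pairs.append((i, j))
    return sorted(pairs)

ref = [(1.0, 0.0), (0.0, 1.0)]   # toy embeddings for two reference words
hyp = [(0.9, 0.1), (0.1, 0.9)]   # hypothesis words in the same order
pairs = hard_align(ref, hyp)
```

Each hypothesis word ends up in at most one pair, which is exactly what prevents a repeated word from soaking up several reference words.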
"Transport Distance.",
"A distance matrix D is required to compute the final EMD score.",
"For each aligned pair (r_i, h_j) ∈ A_hc whose cosine similarity cos(r_i, h_j) exceeds a confidence threshold, the distance between r_i and h_j is computed as follows: d_ij = 1.0 − cos(r_i, h_j) · e^{γ · |q(i+1) − p(j+1)| / (pq)} (4), where d_ij ∈ D, the confidence threshold is found via hyper-parameter search, and γ ∈ [−1, 0) is a soft-penalization constant.",
"For all non-hard-aligned pairs, and for aligned pairs whose similarity is below the threshold, the distance d_ij receives the maximum value of 1.0.",
"Intuitively, a lower value of d_ij implies that the word needs to travel a shorter distance in the transportation problem of EMD.",
"In Equation 4, the factor e^{γ · |q(i+1) − p(j+1)| / (pq)} works as a penalty: a higher position difference, multiplied by the negative constant γ, results in a lower value of this factor.",
"Soft-penalization.",
"Existing metrics often impose hard penalties on words whose order differs from the reference sentence (Zhao et al., 2019; Echizen-ya et al., 2019).",
"For instance, sentences phrased in the passive form obtain very low scores under those metrics.",
"Addressing this issue, we introduce the soft-penalization constant γ = −|j − i| / max(p, q) in Equation 4 to better handle the passive form of a sentence.",
"Let us consider a reference, \" Shakespeare has written Macbeth \", and the passive form of the sentence as hypothesis, \" The Macbeth is written by Shakespeare \".",
"The word Shakespeare appears at the beginning of the reference and at the end of the hypothesis, so the position difference is large.",
"In such a scenario, γ imposes a lower penalty, as it divides the position difference by the length max(p, q).",
"Finally, following the optimization constraints of Equation 2, we obtain the transportation flow F(H, R).",
"For the optimized flow f_ij ∈ F(H, R), the final equation of EMD is: EMD(H, R) = min_{f_ij ∈ F(H,R)} Σ_{i=1}^{p} Σ_{j=1}^{q} d_ij f_ij / min(m_Σ, n_Σ) (5).",
"The semantic similarity between hypothesis and reference is denoted as F_sem = 1.0 − EMD, where the normalized value of EMD is used to calculate F_sem.",
"3.2 Semantically Enhanced TED.",
"To estimate the difference between the syntactic structures of reference and hypothesis, we extend the TED algorithm (Zhang and Shasha, 1989).",
"The original TED algorithm performs edit operations based on an exact match between two nodes in the dependency trees of hypothesis and reference.",
"In this work, we modify the TED algorithm to use word embedding-based cosine similarity for establishing the equivalence of two nodes.",
"Two nodes are considered equal if the cosine similarity of their embedding representations exceeds a threshold.",
"This allows the semantically enhanced TED to process synonyms and restricts it from unnecessary edits of similar nodes.",
"We call the resulting algorithm TED-SE.",
"The normalized value of TED-SE is denoted as F_ted.",
"We compute TED-SE over the lemmatized reference and hypothesis, since lemmatized text exhibits improved performance in such use cases (Kutuzov and Kuzmenko, 2019).",
"The lemmatizer and dependency parser from Stanza (Qi et al., 2020) are utilised to obtain the tree representations of the text.",
"Further details are provided in Appendix A.1.",
"3.3 Grammatical Acceptability Classification.",
"Linguistic competence assumes that native speakers can judge the grammatical acceptability of a sentence.",
"However, system-generated sentences are not always grammatically correct or acceptable.",
"Therefore, we train a binary classifier on the Corpus of Linguistic Acceptability (CoLA) (Warstadt et al., 2019), predicting the probability that the hypothesis is grammatically acceptable.",
"CoLA is a collection of sentences from the linguistics literature with binary expert acceptability labels, containing over 10k examples (Warstadt et al., 2019).",
"The classifier is based on BERT-large (Devlin et al., 2019) and trained to optimize a binary cross-entropy loss.",
"A text sequence is fed as input, and the classifier outputs the class membership probabilities (grammatically acceptable, grammatically unacceptable).",
"The model achieves an accuracy of 80.6% on the out-of-domain CoLA test set (Warstadt et al., 2019, p. 8).",
"We denote the score from the classifier as the feature F_g, which is used to train a neural network (see 3.4).",
"3.4 Final Scorer Network.",
"In the final step, a feed-forward neural network takes the previously computed features as input and learns a function f(F_sem; F_ted; F_g), yielding a final output score in the [0, 1] interval.",
"The output score is regarded as the overall quality of the hypothesis.",
"Following a self-supervised paradigm, the network is trained on artificially generated training samples from the KELM dataset (Agarwal et al., 2021).",
"KELM contains knowledge-grounded natural sentences.",
"We randomly choose 2,500 sentence pairs from the KELM dataset and generate 2,500 more negative samples by randomly augmenting the sentences using TextAttack (Morris et al., 2020) and TextFooler (Jin et al., 2020).",
"Following a similar approach, we additionally generate 1,000 test sentence pairs from the KELM dataset.",
"Overall, we then have 5,000 training and 1,000 test examples.",
"The network is a simple, two-layered feed-forward network optimized with stochastic gradient descent using a learning rate of 1e-4.",
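A forward-pass sketch of such a two-layer scorer network (the weights, layer sizes, and activations here are illustrative stand-ins; in RoMe they are learned with SGD):

```python
import math

def scorer_forward(features, w1, b1, w2, b2):
    """Forward pass of a two-layer feed-forward scorer: the feature
    vector [F_sem, F_ted, F_g] goes through a ReLU hidden layer and a
    sigmoid output, yielding a score in (0, 1)."""
    hidden = [max(0.0, sum(w * x for w, x in zip(row, features)) + b)
              for row, b in zip(w1, b1)]
    z = sum(w * h for w, h in zip(w2, hidden)) + b2
    return 1.0 / (1.0 + math.exp(-z))

# illustrative (untrained) parameters: 3 inputs -> 2 hidden -> 1 output
w1 = [[0.5, -0.3, 0.8], [0.2, 0.4, -0.1]]
b1 = [0.1, 0.0]
w2 = [0.7, -0.2]
b2 = 0.05
score = scorer_forward([0.9, 0.2, 0.8], w1, b1, w2, b2)
```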
"To assess RoMe's overall performance, first, we benchmark on two language generation datasets, BAGEL (Mairesse et al., 2010) and SFHOTEL (Wen et al., 2015), containing 404 and 796 data points, respectively.",
"Each data point contains a meaning representation (MR) and a system generated output.",
"Human evaluation scores of these datasets are obtained from (Novikova et al., 2017).",
"Furthermore, we evaluate dialogue systems' outputs on the Stanford in-car dialogue dataset (Eric et al., 2017), containing 2,510 data points, and on the soccer dialogue dataset (Chaudhuri et al., 2019), with 2,990 data points.",
"In CoLA, 70.5% of the examples are manually labeled acceptable.",
"Each data point of these datasets includes a user query, a reference response, and a system response as a hypothesis.",
"Three different system outputs are evaluated for each dialogue dataset.",
"We use the human annotated data provided by (Chaudhuri et al., 2021).",
"Moreover, we evaluate the metrics on the system generated outputs from the WebNLG 2017 challenge (Shimorina et al., 2018).",
"Finally, to conduct robustness analysis, we randomly sample data points from KELM (Agarwal et al., 2021) and perturb them with adversarial text transformation techniques.",
"Three annotators participated in the data annotation process (two with a Computer Science background and one without), annotating the perturbed data.",
"We provided the annotators with an annotation tool which displays the reference sentence and the system output for each data point.",
"The annotators were asked to choose a value from a range of [1,3], for each of the categories: Fluency , Semantic Correctness , and Grammatical correctness .",
"In this case, the values stand for 1: poor , 2: average , and 3: good .",
"The overall inter-annotator agreement score is 0.78.",
"The annotation tool and its interface are discussed in detail in Appendix A.2.",
"We use threshold values of 0.60 and 0.65 in Section 3.1.",
"The best values are found via a hyper-parameter search over the range [0, 1.0] with an interval of 0.1.",
"RoMe obtained the best result by utilizing ALBERT-large (Lan et al., 2020) model with 18M parameters and 24 layers.",
"Furthermore, we use the English word embedding of dimension 300 to obtain results from Fasttext (Bojanowski et al., 2017) throughout the paper.",
"As the grammatical acceptability classifier, we train a BERT-base model with 110M parameters and 12 layers.",
"The hidden layer size is 768 with a hidden layer dropout of 0.1.",
"A layer norm epsilon of 1e-12 was used for layer normalization.",
"GELU (Hendrycks and Gimpel, 2016) was used as the activation function.",
"We use a single GPU with 12GBs of memory for all the evaluations.",
"We select both the word-overlap and embedding-based metrics as strong baselines.",
"For the experiments and robustness analysis we choose BLEU (Papineni et al., 2002), METEOR (Banerjee and Lavie, 2005), BERTScore (Zhang et al., 2020a) and MoverScore (Zhao et al., 2019).",
"Table 1: Spearman correlation (ρ) scores computed from the metric scores with respect to the human evaluation scores (Info/Nat/Qual) on BAGEL and SFHOTEL. Baselines: BLEU-1 0.225/0.141/0.113 and 0.107/0.175/0.069; BLEU-2 0.211/0.152/0.115 and 0.097/0.174/0.071; METEOR 0.251/0.127/0.116 and 0.163/0.193/0.118; BERTScore 0.267/0.210/0.178 and 0.163/0.193/0.118; SMD+W2V 0.024/0.074/0.078 and 0.022/0.025/0.011; SMD+ELMO+PMEANS 0.251/0.171/0.147 and 0.130/0.176/0.096; SMD+BERT+MNLI+PMEANS 0.280/0.149/0.120 and 0.205/0.239/0.147; WMD-1+ELMO+PMEANS 0.261/0.163/0.148 and 0.147/0.215/0.136; WMD-1+BERT+PMEANS 0.298/0.212/0.163 and 0.203/0.261/0.182; WMD-1+BERT+MNLI+PMEANS 0.285/0.195/0.158 and 0.207/0.270/0.183. RoMe: Fasttext 0.112/0.163/0.132 and 0.172/0.190/0.231; BERT 0.160/0.251/0.202 and 0.212/0.283/0.300; ALBERT-base 0.162/0.259/0.222 and 0.231/0.295/0.315; ALBERT-large 0.170/0.274/0.241 and 0.244/0.320/0.327.",
"We evaluate the metrics on the sentence level to make a fair comparison.",
"Table 1 shows the performance of different metrics on data to language generation datasets (BAGEL and SFHOTEL).",
"In both BAGEL and SFHOTEL, a meaning representation (MR), for instance inform(name='hotel drisco', price_range='pricey'), is given as the reference sentence, and the system output in this case is: the hotel drisco is a pricey hotel.",
"Although RoMe outperformed the baseline metrics in evaluating informativeness, naturalness and quality, the correlation scores with respect to human judgment remain low.",
"This is because the MR, which is not a natural sentence, serves as the reference statement in this scenario.",
"For all the experiments, we take the normalized human judgement scores.",
"We firstly evaluate our model using Fasttext (Bojanowski et al., 2017) word embedding.",
"We notice a significant improvement in results when we replace the Fasttext embedding with contextualized word embedding obtained from BERT (Devlin et al., 2019).",
"Furthermore, we experiment with multiple language models and finally, we reach to our best performing model with ALBERT-large (Lan et al., 2020).",
"In all the experiments, we report the results of RoMe, using ALBERT-large (Lan et al., 2020).",
"In Table 1, WMD and SDM refer to word mover distance and sentence mover distance, respectively, used in MoverScore.",
"We report the results of WDM and SMD from (Zhao et al., 2019).",
"Table 4 demonstrates the evaluation results on dialogue datasets.",
"We evaluated the system-generated dialogues from three dialogue system models: Mem2Seq (Madotto et al., 2018), GLMP (Wu et al., 2019), and DialoGPT (Zhang et al., 2020b).",
"In case of in-car dataset, all the non-word-overlap metric achieved a better correlation score than the word-overlap based metrics.",
"This is because generated responses in dialogue systems are assessed based on the overall semantic meaning and correctness of the information.",
"Overall, RoMe achieves stronger correlation scores on both in-car and soccer dialogue datasets in evaluating several dialogue system outputs.",
"Metrics BLEU METEOR BERTScore MoverScore RoMe Systems r r r r r ADAPT 0.38 0.39 0.27 0.57 0.58 0.41 0.61 0.72 0.50 0.68 0.73 0.49 0.72 0.70 0.51 Baseline 0.35 0.42 0.26 0.49 0.49 0.33 0.49 0.50 0.35 0.59 0.61 0.43 0.53 0.53 0.37 melbourne 0.32 0.31 0.21 0.35 0.35 0.24 0.33 0.33 0.26 0.40 0.39 0.28 0.44 0.50 0.35 Pkuwriter 0.37 0.38 0.28 0.47 0.47 0.31 0.48 0.53 0.38 0.57 0.56 0.39 0.58 0.56 0.39 tilburg-nmt 0.25 0.20 0.13 0.26 0.26 0.18 0.38 0.39 0.30 0.49 0.50 0.36 0.64 0.68 0.50 tilburg-pipe 0.38 0.41 0.30 0.52 0.43 0.30 0.53 0.48 0.33 0.62 0.50 0.35 0.38 0.42 0.27 tilburg-smt 0.25 0.20 0.13 0.21 0.19 0.13 0.33 0.30 0.25 0.40 0.38 0.27 0.50 0.51 0.36 upf-forge 0.14 0.13 0.08 0.13 0.11 0.08 0.26 0.25 0.19 0.27 0.27 0.18 0.42 0.42 0.30 vietnam 0.73 0.80 0.62 0.87 0.90 0.72 0.81 0.76 0.70 0.90 0.78 0.73 0.84 0.89 0.83 Table 6: Metrics correlation with human judgment on system outputs from the WebNLG 2017 challenge.",
"competition and report the correlation scores in Table",
"6. Although RoMe achieves the best correlation in most of the cases, we notice a comparable and in some cases better results achieved by the MoverScore (Zhao et al., 2019).",
"A correlation graph is plotted in Figure 5 to investigate the metrics' performance correlations further.",
"The graph is constructed from RoMe and baseline metrics' scores on the BAGEL dataset.",
"As observed from the correlation graph, we can infer that our proposed metric, RoMe correlates highly with the MoverScore.",
"However, since RoMe handles both the syntactic and semantic properties of the text it achieved better results in all the datasets across different NLG tasks.",
"We conduct an ablation study to investigate the impact of the RoMe's components on its overall performance.",
"Table 5 exhibits the incremental improvement in Spearman's correlation coefficient, that each of the components brings to the metric.",
"We randomly choose 100 system-generated dialogue utterances from the dialogue datasets, since they frequently contain sentences in passive form and repetitive words.",
"The correlation of standard EMD with the human judgement is denoted as \"RoMe score with EMD std \".",
"Inclusion of semantic word alignment (EMD align ) and soft-penalization (EMD soft ) further improved the correlation score.",
"The classifier was not used until this point in the ablation since there was just one score.",
"Moreover, the correlation score improved significantly when the semantically enhanced TED and grammatical acceptability were introduced as features in addition to the EMD score to a neural classifier.",
"We hypothesize that the inclusion of language features related to grammar and syntactic similarity helped the neural network achieve better performance.",
"RoMe is developed in a modular fashion, so it may be used to generate scores for semantic similarity, syntactic similarity, and grammatical acceptability separately.",
"Table 2 shows the component-wise score and the final score of RoMe on three example data points.",
"In the first example, RoMe demonstrates its ability of capturing similar sentences 5651 Metrics BLEU METEOR BERTScore MoverScore RoMe Perturbation methods f s g f s g f s g f s g f s g Entity replacement 0.06 0.04 0.06 0.09 0.09 0.08 0.11 0.07 0.09 0.16 0.13 0.11 0.16 0.19 0.14 Adjective replacement 0.07 0.06 0.07 0.09 0.13 0.11 0.11 0.11 0.13 0.13 0.17 0.16 0.18 0.23 0.18 Random word replacement 0.05 0.06 0.03 0.06 0.06 0.05 0.11 0.10 0.08 0.11 0.13 0.09 0.15 0.15 0.23 Text transformation 0.03 0.01 0.03 0.08 0.09 0.07 0.13 0.15 0.15 0.15 0.18 0.19 0.18 0.19 0.21 Passive form 0.02 0.01 0.04 0.08 0.10 0.08 0.19 0.24 0.21 0.23 0.24 0.22 0.25 0.28 0.28 Table 7: Metrics Spearman correlation score against human judgment on perturbed texts.",
"by obtaining high score.",
"The scores from several components in the second example demonstrate RoMe's ability to handle passive form.",
"The final example in Table 2 demonstrates that RoMe penalizes sentence with repetitive word.",
"Table 3 shows the performance of the three baselines and RoMe in handling erroneous cases.",
"Although the first example contains a completely different hypothesis and the second case with repetitive hypothesis both BERTScore and MoverScore exhibit high score.",
"On the contrary, BLEU score is unable to handle such scenarios.",
"However, by obtaining low scores, RoMe demonstrates its ability to understand such cases better.",
"In this section, we design five test cases to stress the models' capabilities.",
"For the analysis purpose, we randomly sample data points from KELM (Agarwal et al., 2021) (cases 1, 2, and 4) and BAGEL (Mairesse et al., 2010) (cases 3 and 5).",
"The annotators annotate the sampled data points on the following criteria: fluency , semantic correctness , grammatical correctness .",
"Case 1: Entity replacement.",
"We perform invariance test (INV) from (Ribeiro et al., 2020) to check the metrics' NER capability in assessing the text quality.",
"In this approach, we replace the entities present in the text partially or fully with other entities in the dataset.",
"For instance, \" The population of Germany \" gets transformed to \" The population of England \".",
"Case 2: Adjective replacement.",
"Similar to the entity replacement, in this case we choose 100 data points from KELM that contain adjective in them.",
"Then we replace the adjectives with a synonym and an antonym word to generate two sentences from a single data point.",
"For instance, the adjective different is replaced with unlike and same .",
"At the end of this process, we obtain 200 data points.",
"Case 3: Random word replacement.",
"The words in different positions in the text are replaced by a generic token AAA following the adversarial text attack method from (Morris et al., 2020), in this case.",
"For instance, the sentence, \" x is a cheap restaurant near y \" is transformed into \" x is a cheap restaurant AAA AAA \".",
"We select the greedy search method with the constraints on stop-words modi-fication from the TextAttack tool.",
"This approach generates repetitive words when two consecutive words are replaced.",
"Case 4: Text transformation.",
"We leverage TextFooler (Jin et al., 2020) to replace two words in the texts by similar words, keeping the semantic meaning and grammar preserved.",
"Case 5: Passive forms.",
"In this case, we randomly choose 200 data points from the KELM (Agarwal et al., 2021) dataset where the system generated responses are in passive form.",
"From the results of robustness analysis in Table 7, it is evident that almost all the metrics obtain very low correlation scores with respect to human judgment.",
"Word-overlap based metrics such as BLEU and METEOR mostly suffer from it.",
"Although RoMe achieves higher correlation scores in most of the cases, there are still scope for improvement in handling the fluency of the text better.",
"Text perturbation techniques used to design the test cases often generate disfluent texts.",
"In some cases, the texts' entities or subjects get replaced by words from out of the domain.",
"From our observation, we hypothesize that handling keywords such as entities may lead to a better correlation score.",
"A potentially good evaluation metric is one that correlates highly with human judgment.",
"Among the unsupervised approaches, BLEU (Papineni et al., 2002), METEOR (Banerjee and Lavie, 2005) and ROUGE (Lin, 2004) are the most popular evaluation metrics traditionally used for evaluating NLG 5652 systems.",
"Although these metrics perform well in evaluating machine translation (MT) and summarization tasks, (Liu et al., 2016) shows that none of the word overlap based metrics is close to human level performance in dialogue system evaluation scenarios.",
"In a different line of work, word embedding based metrics are introduced for evaluating NLG systems (Mikolov et al., 2013; Matsuo et al., 2017).",
"Several unsupervised automated metrics were proposed that leverage EMD; one of them is word mover's distance (WMD) (Kusner et al., 2015).",
"Later, (Matsuo et al., 2017) proposed an evaluation metric, incorporating WMD and word-embedding, where they used word alignment between the reference and hypothesis to handle the word-order problem.",
"Recently, (Echizen-ya et al., 2019) introduced an EMD-based metric WE_WPI that utilizes the word-position information to tackle the differences in surface syntax in reference and hypothesis.",
"Several supervised metrics were also proposed for evaluating NLG.",
"ADEM (Lowe et al., 2017) uses a RNN-based network to predict the human evaluation scores.",
"With the recent development of language model-based pre-trained models (Zhang et al., 2020a) proposed BERTScore, which uses a pre-trained BERT model for evaluating various NLG tasks such as machine translation and image captions.",
"Recently, (Zhao et al., 2019) proposed MoverScore, which utilizes contextualized embedding to compute the mover's score on word and sentence level.",
"A notable difference between MoverScore and BERTScore is that the latter relies on hard alignment compared to soft alignments in the former.",
"Unlike the previous methods, RoMe focuses on handling the sentence's word repetition and passive form when computing the EMD score.",
"Furthermore, RoMe trains a classifier by considering the sentence's semantic, syntactic, and grammatical acceptability features to generate the final evaluation score.",
"We have presented RoMe, an automatic and robust evaluation metric for evaluating a variety of NLG tasks.",
"The key contributions of RoMe include 1) EMD-based semantic similarity , where hard word alignment and soft-penalization techniques are employed into the EMD for tackling repetitive words and passive form of the sentence, 2) semantically enhanced TED that computes the syntactic similarity based on the node-similarity of the parsed dependency trees, 3) grammatical acceptability classifier , which evaluates the text's grammatical quality, and 4) robustness analysis , which assesses the metric's capability of handling various form of the text.",
"Both quantitative and qualitative analyses exhibit that RoMe highly correlates with human judgment.",
"We intend to extend RoMe by including more languages in the future.",
"We acknowledge the support of the following projects: SPEAKER (BMWi FKZ 01MK20011A), JOSEPH (Fraunhofer Zukunftsstiftung), OpenGPT-X (BMWK FKZ 68GX21007A), the excellence clusters ML2R (BmBF FKZ 01 15 18038 A/B/C), ScaDS.AI (IS18026A-F) and TAILOR (EU GA 952215)."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"result",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"abstain",
"objective",
"other"
] |
[
"Multilingual representations embed words from many languages into a single semantic space such that words with similar meanings are close to each other regardless of the language.",
"These embeddings have been widely used in various settings, such as cross-lingual transfer, where a natural language processing (NLP) model trained on one language is deployed to another language.",
"While the cross-lingual transfer techniques are powerful, they carry gender bias from the source to target languages.",
"In this paper, we study gender bias in multilingual embeddings and how it affects transfer learning for NLP applications.",
"We create a multilingual dataset for bias analysis and propose several ways for quantifying bias in multilingual representations from both the intrinsic and extrinsic perspectives.",
"Experimental results show that the magnitude of bias in the multilingual representations changes differently when we align the embeddings to different target spaces and that the alignment direction can also have an influence on the bias in transfer learning.",
"We further provide recommendations for using the multilingual word representations for downstream tasks.",
"Natural Language Processing (NLP) plays a vital role in applications used in our daily lives.",
"Despite the great performance inspired by the advanced machine learning techniques and large available datasets, there are potential societal biases embedded in these NLP tasks where the systems learn inappropriate correlations between the final predictions and sensitive attributes such as gender and race.",
"For example, Zhao et al. (2018a) and Rudinger et al. (2018) demonstrate that coreference resolution systems perform unequally on Most of the work was done while the first author was an intern at Microsoft Research.",
"different gender groups.",
"Other studies show that such bias is exhibited in various components of the NLP systems, such as the training dataset (Zhao et al., 2018a; Rudinger et al., 2018), the embeddings (Bolukbasi et al., 2016; Caliskan et al., 2017; Zhou et al., 2019; Manzini et al., 2019) as well as the pre-trained models (Zhao et al., 2019; Kurita et al., 2019).",
"Recent advances in NLP require large amounts of training data.",
"Such data may be available for resource-rich languages such as English, but they are typically absent for many other languages.",
"Multilingual word embeddings align the embeddings from various languages to the same shared embedding space which enables transfer learning by training the model in one language and adopting it for another one (Ammar et al., 2016; Ahmad et al., 2019b; Meng et al., 2019; Chen et al., 2019).",
"Previous work has proposed different methods to create multilingual word embeddings.",
"One common way is to first train the monolingual word embeddings separately and then align them to the same space (Conneau et al., 2017; Joulin et al., 2018).",
"While multiple efforts have focused on improving the models' performance on low-resource languages, less attention is given to understanding the bias in cross-lingual transfer learning settings.",
"In this work, we aim to understand the bias in multilingual word embeddings.",
"In contrast to existing literature that mostly focuses on English, we conduct analyses in multilingual settings.",
"We argue that the bias in multilingual word embeddings can be very different from that in English.",
"One reason is that each language has its own properties.",
"For example, in English, most nouns do not have grammatical gender, while in Spanish, all nouns do.",
"Second, when we do the alignment to get the multilingual word embeddings, the choice of target space may cause bias.",
"Third, when we do transfer learning based on multilingual word embeddings, the alignment methods, as well as the transfer procedure can potentially influence the bias in downstream tasks.",
"Our experiments confirm that bias exists in the multilingual embeddings and such bias also impacts the cross-lingual transfer learning tasks.",
"We observe that the transfer model based on the multilingual word embeddings shows discrimination against genders.",
"To discern such bias, we perform analysis from both the corpus and the embedding perspectives, showing that both contribute to the bias in transfer learning.",
"Our contributions are summarized as follows: We build datasets for studying the gender bias in multilingual NLP systems.",
"1 We analyze gender bias in multilingual word embeddings from both intrinsic and extrinsic perspectives.",
"Experimental results show that the pre-trained monolingual word embeddings, the alignment method as well as the transfer learning can have an impact on the gender bias.",
"We show that simple mitigation methods can help to reduce the bias in multilingual word embeddings and discuss directions for future work to further study the problem.",
"We provide several recommendations for bias mitigation in cross-lingual transfer learning.",
"Gender Bias in Word Representations Word embeddings are widely used in different NLP applications.",
"They represent words using low dimensional vectors.",
"Bolukbasi et al. (2016) find that, in the embedding space, occupation words such as professor and nurse show discrepancy concerning the genders.",
"Similarly, Caliskan et al. (2017) also reveal the gender stereotypes in the English word embeddings based on the Word Embedding Association Test (WEAT).",
"However, both works only consider English and cannot be directly adapted to other languages such as Spanish.",
"McCurdy and Serbetci (2017) reveal that bias exists in languages with grammatical gender while Zhou et al. (2019) and Lauscher and Glavas (2019) show that there is bias in bilingual word embeddings.",
"However, none of them consider the cross-lingual transfer learning which is an important application of the multilingual word embeddings.",
"To mitigate the bias in word embeddings, various approaches 1 Code and data will be available at https://aka.ms/ MultilingualBias .",
"have been proposed (Bolukbasi et al., 2016; Zhao et al., 2018b).",
"In contrast to these methods in English embedding space, we propose to mitigate the bias from the multilingual perspectives.",
"Comparing to Zhou et al. (2019), we show that a different choice of alignment target can help to reduce the bias in multilingual embeddings from both intrinsic and extrinsic perspectives.",
"Multilingual Word Embeddings and Cross-lingual Transfer Learning Multilingual word embeddings represent words from different languages using the same embedding space which enables cross-lingual transfer learning (Ruder et al., 2019).",
"The model is trained on a labeled data rich language and adopted to another language where no or a small portion of labeled data is available (Duong et al., 2015; Guo et al., 2016).",
"To get the multilingual word embeddings, Mikolov et al. (2013) learn a linear mapping between the source and target language.",
"However, Xing et al. (2015) argue that there are some inconsistencies in directly learning the linear mapping.",
"To solve those limitations, they constrain the embeddings to be normalized and enforce an orthogonal transformation.",
"While those methods achieve reasonable results on benchmark datasets, they all suffer from the hubness problem which is solved by adding cross-domain similarity constraints (Conneau et al., 2017; Joulin et al., 2018).",
"Our work is based on the multilingual word embeddings achieved by Joulin et al. (2018).",
"Besides the commonly used multilingual word embeddings obtained by aligning all the embeddings to the English space, we also analyze the embeddings aligned to different target spaces.",
"Bias in Other Applications Besides the bias in word embeddings, such issues have also been demonstrated in other applications, including named entity recognition (Mehrabi et al., 2019), sentiment analysis (Kiritchenko and Mohammad, 2018), and natural language inferences (Rudinger et al., 2017).",
"However, those analyses are limited to English corpus and lack the insight of multilingual situations.",
"In this section, we analyze the gender bias in multilingual word embeddings.",
"Due to the limitations of the available resources in other languages, we analyze the bias in English, Spanish, German and French.",
"However, our systematic evaluation approach can be easily extended to other languages.",
"We first define an evaluation metric for quantifying gender bias in multilingual word embeddings.",
"Note that in this work, we focus on analyzing gender bias from the perspective of occupations.",
"We then show that when we change the target alignment space, the bias in multilingual word embeddings also changes.",
"Such observations provide us a way to mitigate the bias in multilingual word embeddings by choosing an appropriate target alignment space.",
"We begin with describing inBias , our proposed evaluation metric for quantifying intrinsic bias in multilingual word embeddings from word-level perspective.",
"We then introduce the dataset we collected for quantifying bias in different languages.",
"Bias Definition Given a set of masculine and feminine words, we define inBias as: inBias = 1 NN (cid:88) i =1 | dis ( OM i , SM ) dis ( OF i , SF ) | , (1) where dis ( OG i , S ) = 1 | S | (cid:88) s S (1 cos ( OG i , s )) .",
"Here ( OM i , OF i ) stands for the masculine and feminine format of the i -th occupation word, such as (doctor, doctora).",
"SM and SF are a set of gender seed words that contain male and female gender information in the definitions such as he or she.",
"Intuitively, given a pair of masculine and feminine words describing an occupation, such as the words doctor (Spanish, masculine doctor) and doctora (Spanish, feminine doctor), the only difference lies in the gender information.",
"As a result, they should have similar correlations to the corresponding gender seed words such as el (Spanish, he) and ella (Spanish, she).",
"If there is a gap between the distance of occupations and corresponding gender, (i.e., the distance between doctor and el against the distance between doctora and ella), it means such occupation shows discrimination against gender.",
"Note that such metric can also be generalized to other languages without grammatical gender, such as English, by just using the same format of the occupation words.",
"It is also worth noting that our metric is general and can be used to define other types of bias with slight modifications.",
"For example, it can be used to detect age or race bias by providing corresponding seed words (e.g., young old or names correlated with different races).",
"In this paper we focus on gender bias as the focus of study.",
"We provide detailed descriptions of those words in the dataset collection subsection.",
"Unlike previous work (Bolukbasi et al., 2016) which requires calculating a gender direction by doing dimensionality reduction, we do not require such a step and hence we can keep all the information in the embeddings.",
"The goal of inBias is aligned to that of WEAT (Caliskan et al., 2017).",
"It calculates the difference of targets (occupations in our case) corresponding to different attributes (gender).",
"We use paired occupations in each language, reducing the influence of grammatical gender.",
"Compared to Zhou et al. (2019), we do not need to separately generate the two gender directions, as in our definition, the difference of the distance already contains such information.",
"In addition, we no longer need to collect the gender neutral word list.",
"In multilingual settings, due to different gender assignments to each word (e.g., spoon is masculine is DE but feminine in ES), it is expensive to collect such resources which can be alleviated by the inBias metric.",
"Multilingual Intrinsic Bias Dataset To conduct the intrinsic bias analysis, we create the MIBs dataset by manually collecting pairs of occupation words and gender seed words in four languages: English (EN), Spanish (ES), German (DE) and French (FR).",
"We choose these four languages as they come from different language families (EN and DE belong to the Germanic language family while ES and FR belong to the Italic language family) and exhibit different gender properties (e.g., in ES, FR and DE, there is grammatical gender).",
"2 We refer to languages with grammatical gender as GENDER-RICH languages; and otherwise, as GENDER-LESS languages.",
"Among these three gender-rich languages, ES and FR only have feminine and masculine genders while in DE, there is also a neutral gender.",
"We obtain the feminine and masculine words in EN from Zhao et al. (2018b) and extend them by manually adding other common occupations.",
"The English gender seed words are from Bolukbasi et al. 2 We also do analyses with Turkish where there is no grammatical gender and no gendered pronoun.",
"Details are in Sec. 3.2.4.",
"(2016).",
"For all the other languages, we get the corresponding masculine and feminine terms by using online translation systems, such as Google Translate.",
"We refer to the words that have both masculine and feminine formats in EN (e.g., waiter and waitress) as strong gendered words while others like doctor or teacher as weak gendered words.",
"In total, there are 257 pairs of occupations and 10 pairs of gender seed words for each language.",
"In the gender-rich languages, if the occupation only has one lexical format, (e.g., prosecutor in ES only has the format fiscal), we add it to both the feminine and the masculine lists.",
"As mentioned in Sec. 1, multilingual word embeddings can be generated by first training word embeddings for different languages individually and then aligning those embeddings to the same space.",
"During the alignment, one language is chosen as target and the embeddings from other languages are projected onto this target space.",
"We conduct comprehensive analyses on the MIBs dataset to understand: 1) how gender bias exhibits in embeddings of different languages; 2) how the alignment target affects the gender bias in the embedding space; and 3) how the quality of multilingual embeddings is affected by choice of the target language.",
"For the monolingual embeddings of individual languages and the multilingual embeddings that used English as the target language (*-en), 3 we use 3 We refer to the aligned multilingual word embeddings using the format src-tgt.",
"For example, es-en means we align the ES embeddings to the EN space.",
"An embedding not following such format refers to a monolingual embedding.",
"the publicly available fastText embeddings trained on 294 languages in Wikipedia (Bojanowski et al., 2017; Joulin et al., 2018).",
"For all other embeddings aligned to a target space other than EN, we adopt the RCSLS alignment model (Joulin et al., 2018) based on the same hyperparameter setting (details are in Appendix).",
"We examine the bias using four languages mentioned previously based on all the word pairs in the MIBs .",
"Table 1 reports the inBias score on this dataset.",
"The diagonal values here stand for the bias in each language before alignment.",
"Bias commonly exists across all the four languages.",
"Such results are also supported by WEAT in Zhou et al. (2019), demonstrating the validity of our metric.",
"What is more, comparing those four languages, we find DE and FR have stronger biases comparing to EN and ES.",
"to different languages?",
"Commonly used multilingual word embeddings align all languages to the English space.",
"However, our analysis shows that the bias in the multilingual word embeddings can change if we choose a different target space.",
"All the results are shown in Table 1. Specifically, when we align the embeddings to the gender-rich languages, the bias score will be lower compared to that in the original embedding space.",
"In the other situation, when aligning the embeddings to the gender-less language space (i.e., EN in our case), the bias increases.",
"For example, in original EN, the bias score is 0 .",
"0830 and when we align EN to ES, the bias decreases to 0 .",
"0639 with 23% reduction in the bias score.",
"However, the bias in ES embeddings increases to 0 .",
"0889 when aligned to EN while only 0 .",
"0634 when aligned to DE.",
"4 In Fig. 1, we show the examples of word shifting along the gender direction when aligning ES to different languages.",
"The gender direction is calculated by the difference of male gendered seeds and female gendered seeds.",
"We observe the feminine occupations are further away from female seed words than masculine ones, causing the resultant bias.",
"In comparison to using EN as target space, when aligning ES to DE, the distance between masculine and feminine occupations with corresponding gender seed words become more symmetric, therefore reducing the inBias score.",
"What words changed most after the alignment?",
"We are interested in understanding how the gender bias of words changes after we do the alignment.",
"To do this, we look at the top-15 most and least changed words.",
"We find that in each language, the strongest bias comes from the strong gendered words; while the least bias happens among weak gendered words.",
"When we align EN embeddings 4 We show the bias for all the 257 pairs of words in EN.",
"In the appendix, we also show the bias for strong gendered words and weak gendered words separately.",
"to gender-rich languages, bias in the strong gendered words will change most significantly; and the weak gendered words will change least significantly.",
"When we align gender-rich languages to EN, we observe a similar trend.",
"Among all the alignment cases, gender seed words used in Eq.",
"(1) do not change significantly.",
"To evaluate the quality of word embeddings after the alignment, we test them on the bilingual lexicon induction (BLI) task (Conneau et al., 2017) goal of which is to induce the translation of source words by looking at their nearest neighbors.",
"We evaluate the embeddings on the MUSE dataset with the CSLS metric (Conneau et al., 2017).",
"We conduct experiments among all the pair-wise alignments of the four languages.",
"The results are shown in Table 2. Each row depicts the source language, while each column depicts the target language.",
"When aligning languages to different target spaces, we do not observe a significant performance difference in comparison to aligning to EN in most cases.",
"This confirms the possibility to use such embeddings in downstream tasks.",
"However, due to the limitations of available resources, we only report results on these four languages; the findings may differ for other languages.",
"In this paper, we mainly focus on four European languages from different language families, partly because of the limitations of currently available resources.",
"We do a simplified analysis on Turkish (TR) which belongs to the Turkic language family.",
"In TR, there is no grammatical gender for both nouns and pronouns, i.e., it uses the same pronoun o to refer to he, she or it.",
"The original bias in TR is 0.0719, and when we align it to EN, the bias remains almost the same at 0.0712.",
"When aligning EN to TR, we can reduce the intrinsic bias in EN from 0.0830 to 0.0592, a 28.7% reduction.",
"However, the BLI task shows that the performance of such aligned embeddings drops significantly: only 53.07% when aligned to TR, but around 80% when aligned to the other four languages.",
"Moreover, as mentioned in Ahmad et al. (2019a), some other languages such as Chinese and Japanese cannot align well to English.",
"Such situations require more investigation and form a direction for future work.",
"Researchers have proposed different approaches to mitigate the bias in EN word embeddings (Bolukbasi et al., 2016; Zhao et al., 2018b).",
"Although these approaches cannot entirely remove the bias (Gonen and Goldberg, 2019), they significantly reduce the bias in English embeddings.",
"We refer to such embeddings as ENDEB.",
"We analyze how the bias changes after we align the embeddings to such ENDEB space.",
"The ENDEB embeddings are obtained by adopting the method in Bolukbasi et al. (2016) on the original fastText monolingual word embeddings.",
"Table 3 and 4 show the bias score and BLI performance when we do the alignment between ENDEB and other languages.",
"Similar to Zhou et al. (2019), we find that when we align other embeddings to the ENDEB space, we can reduce the bias in those embeddings.",
"Moreover, we show that we can further reduce the bias in ENDEB embeddings by aligning them to a gender-rich language such as ES, while keeping the functionality of the embeddings; this is consistent with our previous observation in Table 1. Besides, comparing alignment to gender-rich languages with alignment to ENDEB, the former reduces the bias more.",
"In addition to the intrinsic bias in multilingual word embeddings, we also analyze bias in downstream tasks, specifically in cross-lingual transfer learning.",
"One of the main challenges here is the absence of appropriate datasets.",
"To motivate further research in this direction, we build a new dataset called MLBs .",
"Experiments demonstrate that bias in multilingual word embeddings can also have an effect on models transferred to different languages.",
"We further show how mitigation methods can help to reduce the bias in the transfer learning setting.",
"De-Arteaga et al. (2019) built an English BiosBias dataset to evaluate the bias in predicting a person's occupation from a short biography written in the third person.",
"To evaluate the bias in cross-lingual transfer settings, we build the Multilingual BiosBias ( MLBs ) Dataset which contains bios in different languages.",
"Dataset Collection Procedure We collect a list of common occupations for each language and follow the data collection procedure used for the English dataset (De-Arteaga et al., 2019).",
"To identify bio paragraphs, we use the pattern NAME is an OCCUPATION-TITLE where name is recognized in each language by using the corresponding Named Entity Recognition model from spaCy.",
"To control for the same time period for datasets across languages, we process the same set of Common Crawl dumps ranging from the year 2014 to 2018.",
"For the occupations, we use both the feminine and masculine versions of the word in the gender-rich languages.",
"For EN, we use the existing BiosBias dataset.",
"The number of occupations in each language is shown in Table 5.",
"As the bios are written in third person, similar to De-Arteaga et al. (2019), we extract the binary genders based on the gendered pronouns in each language, such as he and she.",
"We follow the method in Zhao et al. (2018a) to measure the extrinsic bias: using the performance gap between different gender groups as a metric to evaluate the bias in the MLBs dataset.",
"We split the dataset based on the gender attribute.",
"A gender-agnostic model should have similar performance in each group.",
"To be specific, we use the performance gap between the male and female groups for each occupation, aggregated across all occupations (|Diff| in Table 6), to measure the bias.",
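A minimal sketch of this |Diff| computation; the occupations and per-group accuracy numbers below are hypothetical, not the paper's actual results.

```python
def bias_diff(perf_male, perf_female):
    """|Diff|: absolute performance gap between the male and female
    groups per occupation, averaged over all shared occupations."""
    occupations = perf_male.keys() & perf_female.keys()
    gaps = [abs(perf_male[o] - perf_female[o]) for o in occupations]
    return sum(gaps) / len(gaps)

# Hypothetical per-occupation accuracies for the two gender groups:
male = {"nurse": 0.60, "surgeon": 0.90}
female = {"nurse": 0.80, "surgeon": 0.70}
print(round(bias_diff(male, female), 4))  # 0.2
```

A gender-agnostic model would have |Diff| close to zero.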
"However, as described in Swinger et al. (2019), people's names are potentially indicative of their genders.",
"To eliminate the influence of names as well as the gender pronouns on the model predictions, we use a scrubbed version of the MLBs dataset by removing the names and some gender indicators (e.g., gendered pronouns and prefixes such as Mr. or Ms.).",
"To make predictions of the occupations, we adopt the model used in De-Arteaga et al. (2019), taking the fastText embeddings as input and encoding the bio text with bi-directional GRU units followed by an attention mechanism.",
"The predictions are generated by a softmax layer.",
"We train such models using standard cross-entropy loss and keep the embeddings frozen during the training.",
"In this section, we analyze the bias in the multilingual word embeddings from the extrinsic perspective.",
"We show that bias exists in cross-lingual transfer learning and the bias in multilingual word embeddings contributes to such bias.",
"The gender distribution of the MLBs dataset is shown in Fig. 2. Among the languages, the EN corpus is the most gender-neutral one, with a ratio between male and female instances of around 1.2 : 1.",
"For all the other languages, male instances far outnumber female ones.",
"In ES, the ratio between male and female is 2.7 : 1, in DE it is 3.53 : 1, and in FR it is 2.5 : 1; all are biased towards the male gender.",
"Bias in Monolingual BiosBias We first evaluate the bias in the MLBs monolingual dataset by predicting the occupations of the bios in each language.",
"From Table 6 we observe that: 1) Bias commonly exists across all languages (|Diff| > 0) when using different aligned embeddings, meaning that the model works differently for the male and female groups.",
"2) When training the model using different aligned embeddings, it does not affect the overall average performance significantly (Avg. column in the table).",
"3) The alignment direction influences the bias.",
"When training the model on embeddings aligned to different target spaces, we find that aligning the embeddings to ENDEB or to a gender-rich language reduces the bias in the downstream task.",
"(The results for DE and FR are in the appendix.)",
"Table 7 (results of transfer learning on the scrubbed MLBs; columns: Trans., Src., Tgt., Avg., Female, Male, |Diff|): EN-ES: en / es-en: 41.68, 42.29, 41.42, 2.83; en-es / es: 34.15, 33.97, 34.22, 3.49. ES-EN: es / en-es: 57.33, 59.61, 54.75, 8.33; es-en / en: 57.05, 59.32, 54.47, 10.13.",
"Table 9 (bias mitigation results of transfer learning when the embeddings are aligned to the ENDEB space, on the gender-balanced scrubbed MLBs; same columns): EN-ES: endeb / es-endeb: 37.44, 39.90, 36.40, 5.93. ES-EN: es-endeb / endeb: 52.51, 54.45, 50.03, 9.06.",
"This is aligned with our previous observation in Section 3. Bias in Transfer Learning Multilingual word embeddings are widely used in cross-lingual transfer learning (Ruder et al., 2019).",
"In this section, we conduct experiments to understand how the bias in multilingual word embeddings impacts the bias in transfer learning.",
"To do this, we train our model in one language (i.e., source language) and transfer it to another language based on the aligned embeddings obtained in Section 3.2.",
"For the transfer learning, we train the model on the training corpus of the source language and randomly choose 20% of the dataset from the target language and use them to fine-tune the model.",
"Here, we do not aim at achieving state-of-the-art transfer learning performance, but rather focus on the bias analysis.",
"Table 7 shows that the bias is present when we do the transfer learning regardless of the direction of transfer learning.",
"Bias from Multilingual Word Embeddings The transfer learning bias in Table 7 is a combined consequence of both corpus bias and the multilingual word embedding bias.",
"To better understand the influence of the bias in multilingual word embeddings on the transfer learning, we make the training corpus gender balanced for each occupation by upsampling to approximately make the model free of the corpus bias.",
"We then test the bias for different languages with differently aligned embeddings.",
"The results are shown in Table 8.",
"When we adopt the embeddings aligned to gender-rich languages, we could reduce the bias in the transfer learning, whereas adopting the embeddings aligned to EN results in an increased bias.",
"Bias after Mitigation Inspired by the method in Zhao et al. (2018a), we mitigate the bias in the downstream tasks by adopting the bias-mitigated word embeddings.",
"To get the less biased multilingual word embeddings, we align other embeddings to the ENDEB space previously obtained in Section 3. Table 9 demonstrates that by adopting such less biased embeddings, we can reduce the bias in transfer learning.",
"Compared to Table 8, aligning the embeddings to a gender-rich language achieves better bias mitigation while maintaining the overall performance.",
"Contextualized embeddings such as ELMo (Peters et al., 2018), BERT (Devlin et al., 2018) and XL-Net (Yang et al., 2019) have shown significant performance improvement in various NLP applications.",
"Multilingual BERT (M-BERT) has shown great ability in transfer learning.",
"As M-BERT provides a single language model trained on multiple languages, there is no longer a need for an alignment procedure.",
"In this section, we analyze the bias in monolingual MLBs dataset as well as in transfer learning by replacing the fastText embeddings with M-BERT embeddings.",
"Similar to previous experiments, we train the model on the English dataset and transfer to other languages.",
"Tables 10 and 11 summarize our results: compared to the fastText results in Table 6, M-BERT improves the performance on the monolingual MLBs dataset as well as on the transfer learning tasks.",
"When it comes to the bias, using M-BERT yields similar or lower bias on the monolingual datasets, but sometimes shows higher bias than the multilingual word embeddings on transfer learning tasks such as EN-ES (Table 7).",
"Recently bias in embeddings has attracted much attention.",
"However, most of the work only focuses on English corpora and little is known about the bias in multilingual embeddings.",
"In this work, we build different metrics and datasets to analyze gender bias in the multilingual embeddings from both the intrinsic and extrinsic perspectives.",
"We show that gender bias commonly exists across different languages and the alignment target for generating multilingual word embeddings also affects such bias.",
"In practice, we can choose the embeddings aligned to a gender-rich language to reduce the bias.",
"However, due to the limitations of available resources, this study is limited to European languages.",
"We hope this study can work as a foundation to motivate future research about the analysis and mitigation of bias in multilingual embeddings.",
"We encourage researchers to look at languages with different grammatical gender (such as Czech and Slovak) and propose new methods to reduce the bias in multilingual embeddings as well as in cross-lingual transfer learning.",
"This work was supported in part by NSF Grant IIS-1927554.",
"We would like to thank Maria De-Arteaga and Andi Peng for the helpful discussion, and thank all the reviewers for their feedback."
] | [
"abstain",
"abstain",
"abstain",
"method",
"objective",
"result",
"method",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"objective",
"objective",
"method",
"objective",
"result",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"other",
"other",
"method",
"method",
"objective",
"objective",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"result",
"abstain",
"result",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"method",
"method",
"objective",
"objective",
"other",
"other"
] |
[
"Processing open-domain Chinese texts has been a critical bottleneck in computational linguistics for decades, partially because text segmentation and word discovery often entangle with each other in this challenging scenario.",
"No existing methods yet can achieve effective text segmentation and word discovery simultaneously in open domain.",
"This study fills in this gap by proposing a novel method called TopWORDS-Seg based on Bayesian inference, which enjoys robust performance and transparent interpretation when no training corpus and domain vocabulary are available.",
"Advantages of TopWORDS-Seg are demonstrated by a series of experimental studies.",
"Due to absence of word boundaries in Chinese, Chinese natural language processing (CNLP) faces a few unique challenges, including text segmentation and word discovery.",
"When processing open-domain Chinese corpus containing many unregistered words and named entities, these challenges become more critical as they often entangle with each other: we usually cannot segment Chinese texts correctly without knowing the underlying vocabulary; on the other hand, it is often difficult to precisely discover unregistered words and named entities from open-domain corpus without guidance on text segmentation.",
"Most methods for CNLP in the literature assume that the underlying vocabulary is known and focus on improving performance of text segmentation in closed test.",
"The first category of methods along this research line are simple methods based on Word Matching (Chen and Liu, 1992; Geutner, 1996; Chen, 2003; Shu et al., 2017), which segment a Chinese sentence by matching sub-strings in the sentence to a pre-given vocabulary in a forward or reverse order.",
"The second category of methods utilize manually segmented corpus or large-scale pretraining corpus to train statistical models such as Maximum Entropy (Berger et al., 1996; McCallum et al.; Low et al., 2005), HMM (Sproat et al., 1994; Zhang et al., 2003) and CRF (Lafferty et al., 2001; Xue, 2003; Peng et al., 2004; Luo et al., 2019), or deep learning models including CNN (Wang and Xu), LSTM (Chen et al., 2015), Bi-LSTM (Ma et al., 2018) and BERT (Yang, 2019), or hybrid models like Bi-LSTM-CRF (Huang et al., 2015) and LSTM-CNNs-CRF (Ma and Hovy, 2016), to achieve text segmentation directly or indirectly.",
"Methods of this category have led to popular toolkits for processing Chinese texts, including Jieba (Sun, 2012), StanfordNLP (Manning et al., 2014), THULAC (Sun et al., 2016), PKUSEG (Luo et al., 2019), and LTP (Che et al., 2021).",
"A popular strategy adopted by some of these toolkits is to segment the target texts into sequences of basic words first, and capture unregistered words and named entities, which are often word compounds consisting of basic words, later via chunking and syntactic analysis.",
"Although such a strategy can equip these toolkits with some ability on word discovery, it is apparently sub-optimal, because we may mis-segment basic words at the first place without realizing the existence of potential technical words, making it impossible to discover technical word compounds correctly in post analysis such as chunking and syntactic analysis.",
"On the other hand, unsupervised methods are also developed to achieve text segmentation when no pre-given vocabulary and manually segmented training corpus are available.",
"Some methods of this research line segment texts based on local statistics of the target texts, including Description Length Gain (Kit and Wilks, 1999), Mutual Information (Chang and Lin, 2003), Accessor Variety (Feng et al., 2004), Evaluation-Selection-Adjustment Process (Wang et al., 2011), and Normalized Variation of Branching Entropy (Magistry and Sagot, 2012).",
"The others, however, rely on generative statistical models whose parameters can be estimated from the target texts only, including Hierarchical Dirichlet Process (Goldwater et al., 2009), Nested Pitman-Yor Process (Mochihashi et al., 2009), Bayesian HMM (Chen et al., 2014), TopWORDS (Deng et al., 2016) and GTS (Yuan et al., 2020).",
"In general, methods based on word matching and unsupervised learning cannot produce high-quality text segmentation (Zhao and Kit, 2011), although some unsupervised methods are successful on word discovery (Deng et al., 2016).",
"Methods based on supervised learning can achieve excellent performance in closed test (Emerson, 2005), but often suffer from dramatic performance degradation when applied to open-domain Chinese corpus containing many unregistered words and named entities (Liu and Zhang, 2012; Wang et al., 2019).",
"Methods based on deep learning are usually more robust under the pre-training and fine-tuning framework, but still suffer from unstable performance and often fail to correctly segment technical words, which play a key role in deciphering the meaning of domain-specific texts, when applied to open-domain texts (Zhao et al., 2018; Fu et al., 2020).",
"There are also some efforts in the literature to integrate supervised and unsupervised methods for improved performance (Zhao and Kit, 2007, 2008, 2011; Wang et al., 2019; Yang et al., 2019).",
"But, these methods either heavily depend on manually labelled corpus for model training, or suffer from unbalanced emphasis on text segmentation and word discovery, resulting in limited improvement for CNLP in open domain.",
"These facts make processing open-domain Chinese texts a critical bottleneck in computational linguistics even for today.",
"Many factors contribute to the stagnation in the development of efficient tools for processing open-domain Chinese texts.",
"From the methodology point of view, we do not have a proper learning framework yet to connect the text segmentation problem to the word discovery problem and deal with them at the same time effectively.",
"From the practical point of view, the lack of proper evaluation criterion in open domain places a critical barrier for fair comparison of different methods and discourages researchers from looking for potential solutions.",
"This study tries to provide solutions to these critical issues.",
"First, we propose a novel Bayesian framework to integrate TopWORDS, an effective word discoverer (Deng et al., 2016), and PKUSEG, a strong text segmenter, leading to a more efficient text segmenter called TopWORDS-Seg, which can achieve effective text segmentation and word discovery simultaneously in open domain.",
"Next, we design a cocktail strategy for method evaluation and comparison by measuring the overall performance of a target method on both text segmentation in benchmark corpus and technical word discovery and segmentation in open-domain corpus.",
"Experimental studies demonstrate that the proposed TopWORDS-Seg outperforms existing methods with a significant margin for CNLP in open domain.",
"Proposed by Deng et al. (2016), TopWORDS is a general approach for offline natural language processing based on unsupervised statistical learning.",
"Assuming that sentences are generated by randomly sampling and concatenating words from an underlying word dictionary (i.e., unigram language model), TopWORDS starts with an over-complete initial word dictionary D containing all plausible word candidates in the target texts, and gradually simplifies the model by removing non-significant word candidates from D based on statistical model selection principles, with the unknown word usage frequencies estimated by EM algorithm (Dempster et al., 1977).",
"TopWORDS is closely related to methods widely used in neural machine translation for constructing sub-word dictionaries, and can be viewed as an advanced version of WordPiece (Schuster and Nakajima, 2012), Byte Pair Encoding (Sennrich et al., 2016) and Unigram Language Model (Kudo, 2018).",
"In practice, TopWORDS is particularly effective on discovering words, technical terms and phrases from open-domain Chinese texts, but tends to segment texts with coarser granularity at phrase instead of word level.",
"In this section, we upgrade TopWORDS from a weak text segmenter with strong ability on word discovery to a more powerful tool enjoying balanced ability on both dimensions via Bayesian inference.",
"Following the setting in Deng et al. (2016), let T = {T_1, ..., T_n} be a collection of unsegmented Chinese text sequences to process, A = {a_1, a_2, ..., a_M} be the set of Chinese characters involved in T, and D_T be the underlying vocabulary behind T, which is unknown to the investigator.",
"We aim to discover D_T from T, and predict the invisible word boundary profile B_j = (b_{j1}, ..., b_{jL_j}) for each piece of unsegmented Chinese text T_j = a_{j1} a_{j2} ... a_{jL_j} e, where b_{jl} = 1 if there is a word boundary behind the l-th position of T_j and 0 otherwise, and e is a special end mark indicating the end of a text sequence.",
"To learn D_T, we start with an over-complete initial word dictionary D = {w_1, w_2, ..., w_N, e} covering all plausible word candidates in T (i.e., all sub-strings in T whose length is at most τ_L and whose frequency is at least τ_F) and the end mark e.",
"For simplicity, we always assume that D_T ⊆ D and that all characters in A are covered by D.",
"Under the unigram language model, we have the following likelihood function for a piece of unsegmented text T_j given B_j and D: P(T_j | D, θ, B_j) = ∏_{w ∈ D} θ_w^{n_w(B_j)}, (1) where θ = {θ_w}_{w ∈ D}, with θ_w being the usage frequency of word w in T, and n_w(B_j) counting the number of occurrences of word w in the segmented version of T_j based on B_j.",
"Let B = {B_1, ..., B_n} be the word boundary profiles of the n text sequences in T.",
"We have P(T | D, θ, B) = ∏_{j=1}^{n} P(T_j | D, θ, B_j) = ∏_{w ∈ D} θ_w^{n_w(B)}, (2) where n_w(B) = ∑_{j=1}^{n} n_w(B_j).",
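The unigram likelihood of Eqs. (1)-(2) can be sketched as follows; the toy text, boundary profile, and word frequencies θ_w are made up for illustration.

```python
import math

def segment(text, boundaries):
    """Split text by a binary boundary profile B_j: boundaries[l-1] == 1
    means there is a word boundary behind the l-th character."""
    words, start = [], 0
    for l, b in enumerate(boundaries, start=1):
        if b:
            words.append(text[start:l])
            start = l
    return words

def log_likelihood(text, boundaries, theta):
    """log P(T_j | D, theta, B_j) under the unigram model of Eq. (1):
    the sum of log usage frequencies of the segmented words."""
    return sum(math.log(theta[w]) for w in segment(text, boundaries))

# Toy dictionary with hypothetical usage frequencies theta_w:
theta = {"ab": 0.5, "c": 0.3, "abc": 0.2}
print(segment("abc", [0, 1, 1]))  # ['ab', 'c']
print(log_likelihood("abc", [0, 1, 1], theta))
```

Different boundary profiles yield different segmentations, and the model prefers the profile whose words have higher usage frequencies.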
"In this study, we propose to specify a joint prior distribution π(θ, B) for (θ, B) to integrate prior preferences on word usage and text segmentation into the learning procedure.",
"According to Bayes' theorem, we have the following posterior distribution of (θ, B) given T and D: P(θ, B | T, D) ∝ π(θ, B) · P(T | D, θ, B), which leads to the following marginal and conditional posterior distributions: P(θ | T, D) = ∫ P(θ, B | T, D) dB, (3) and P(B | T, D, θ) ∝ P(θ, B | T, D).",
"There are various ways to specify the prior distribution π(θ, B).",
"In this study, we choose to use the independent conjugate prior below for conceptual and computational convenience: π(θ, B) = π(θ) · π(B), where π(θ) = Dirichlet(θ | α), π(B) = ∏_{j=1}^{n} π(B_j) = ∏_{j=1}^{n} ∏_{l=1}^{L_j} π(b_{jl}), and π(b_{jl}) = Binary(b_{jl} | γ_{jl}), with α = {α_w}_{w ∈ D} and γ = {γ_{jl}} being the hyper-parameters controlling the strength of the prior information.",
"In this study, we choose to specify α_w = 1 for all w ∈ D, (4) leading to a flat prior distribution for θ, but adopt a non-flat prior distribution for B by smoothing the word boundary profiles B̂ = {B̂_j}_{1≤j≤n} predicted by a pre-given text segmenter S: γ_{jl} = (1 − κ)·b̂_{jl} + κ·ε for l < L_j, and γ_{jl} = 1 for l = L_j, (5) where b̂_{jl} is the location-specific binary segmentation indicator predicted by S, κ ∈ (0, 1) is the smoothing parameter, and ε > 0 is the probability for a pseudo segmenter that places boundaries randomly to place a word boundary at each location.",
"Here, we set ε = 0.5 by default, and leave κ as a hyper-parameter that can be tuned to fit different application scenarios, leading to the following joint prior distribution: π(θ, B) ∝ ∏_{j=1}^{n} ∏_{l=1}^{L_j} γ_{jl}^{b_{jl}} (1 − γ_{jl})^{1 − b_{jl}}. (6)",
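The smoothed boundary prior can be sketched as below, assuming the form γ_{jl} = (1 − κ)·b̂_{jl} + κ·ε for l < L_j with the last position fixed to 1; the symbol names kappa and eps follow that reading of Eq. (5).

```python
def boundary_prior(pre_boundaries, kappa=0.5, eps=0.5):
    """Prior boundary probabilities gamma_{jl}: smooth the 0/1
    indicators predicted by a pre-given segmenter S with a random
    pseudo-segmenter that places a boundary with probability eps;
    the last position is a certain boundary (gamma = 1)."""
    gamma = [(1 - kappa) * b + kappa * eps for b in pre_boundaries[:-1]]
    gamma.append(1.0)  # end of the text sequence
    return gamma

print(boundary_prior([1, 0, 1], kappa=0.5, eps=0.5))  # [0.75, 0.25, 1.0]
```

A larger kappa moves the prior away from the pre-given segmenter's decisions, and a smaller kappa trusts them more.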
"Given the prior distribution π(θ, B) specified previously, the posterior distribution becomes:",
"P(θ, B | T, D) ∝ π(θ, B) · P(T | D, θ, B) ∝ ∏_{j=1}^{n} [ π(B_j) ∏_{w ∈ D} θ_w^{n_w(B_j)} ], (7) where π(B_j) = ∏_{l=1}^{L_j} γ_{jl}^{b_{jl}} (1 − γ_{jl})^{1 − b_{jl}}",
"is a deterministic function of B_j, as the γ_{jl}'s degenerate to constants for fixed κ based on (5).",
"Under such a Bayesian model, the problem of word discovery can be naturally converted into a statistical model selection problem, as only word candidates whose usage frequency θ_w is significantly larger than 0 can be meaningful words.",
"We estimate θ by the posterior mode defined in (3), which can be obtained via the EM algorithm (Dempster et al., 1977) with B as the missing data.",
"Details of the EM algorithm are described in Appendix A. Once the EM algorithm converges, we can evaluate the statistical significance of a word candidate w by the likelihood-ratio statistic between the full model and a reduced model with w removed: ψ_w = log( P(T | D, θ̂) / P(T | D, θ̂_{[w=0]}) ), (8) where θ̂_{[w=0]} is the modification of θ̂ obtained by setting θ̂_w = 0 with its other elements unchanged.",
"Apparently, a larger ψ_w suggests that word candidate w is more important for fitting the observed texts, and thus is more likely to be a meaningful word.",
"Because 2ψ_w asymptotically follows a χ² distribution under the null hypothesis that the reduced model with w removed is the true model, we can filter out word candidates with ψ_w < ψ*, where the threshold ψ* is the (1 − 0.05/N)-quantile of the χ² distribution, following the Bonferroni correction principle for multiple hypothesis testing.",
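The Bonferroni-style filtering described above can be sketched as follows; the use of one degree of freedom for the χ² distribution is an assumption (a single parameter θ_w is removed per test), and the candidate scores below are hypothetical.

```python
from statistics import NormalDist

def chi2_quantile_df1(p):
    """p-quantile of the chi-square distribution with 1 degree of
    freedom, using the identity chi^2_1 = Z^2 for standard normal Z."""
    return NormalDist().inv_cdf((1 + p) / 2) ** 2

def significant_words(psi, n_candidates, alpha=0.05):
    """Keep word candidates whose likelihood-ratio score psi_w reaches
    the Bonferroni-corrected (1 - alpha/N)-quantile threshold."""
    threshold = chi2_quantile_df1(1 - alpha / n_candidates)
    return {w for w, score in psi.items() if score >= threshold}

# Hypothetical scores for two candidates out of N = 1000:
psi = {"quantum": 25.0, "stopword": 0.4}
print(significant_words(psi, n_candidates=1000))  # {'quantum'}
```

Candidates failing the test are pruned from the dictionary, shrinking D toward a concise final vocabulary.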
"As demonstrated by Deng et al. (2016), such a model selection strategy can effectively filter out most meaningless word candidates and results in a concise final dictionary containing meaningful words and phrases only.",
"Considering that ψ_w = −∑_{j=1}^{n} log(1 − r_wj), where r_wj = P(w ∈ B_j | T_j, D, θ̂) (9) = ∑_{B_j} I(w ∈ B_j) · P(B_j | T_j, D, θ̂), (10) with the notation w ∈ B_j meaning that word candidate w appears in the segmented version of T_j based on B_j, we can get ψ_w by calculating r_wj for each T_j.",
"Alternatively, we can also calculate the posterior probability of there being a word boundary at position (j, l) as β_{jl} = ∑_{B_j} b_{jl} · P(B_j | T_j, D, θ̂), (11) and segment T_j based on B̂_j = I(β_j ≥ τ_S), (12) where β_j = (β_{j1}, ..., β_{jL_j}) and τ_S is a pre-given threshold with 0.5 as the default value.",
"Here, we choose to use the second segmentation strategy, because it leads to more robust results in practice.",
"Integrating the dictionary initialization stage via sub-string enumeration, the prior construction stage guided by a pre-given segmenter S (i.e., PKUSEG by default), the word discovery stage empowered by the EM algorithm and likelihood-ratio tests, and the text segmentation stage based on conditional probability inference into a unified framework, we arrive at the TopWORDS-Seg algorithm demonstrated in Figure 1.",
"Computational issues involved in the algorithm are detailed in Appendix B. A collection of hyper-parameters, including τ_L, τ_F, ε, κ, and τ_S, are associated with the TopWORDS-Seg algorithm and need to be specified to initiate it.",
"We recommend setting τ_L = 15, τ_F = 2, and ε = τ_S = 0.5 by default.",
"The specification of the hyper-parameter κ is a bit more complicated.",
"To capture unregistered words from open-domain texts more efficiently, we would like to choose a larger κ to encourage word discovery.",
"[Figure 1: Flow chart of TopWORDS-Seg, comprising the Dictionary Initialization Stage (sub-string enumeration), the Prior Specification Stage (pre-given segmenter; priors for word discovery and text segmentation), the Word Discovery Stage (parameter estimation and model selection), and the Text Segmentation Stage.]",
"To segment regular texts more precisely, however, we would like to choose a smaller κ instead to better utilize the prior information.",
"To get rid of this dilemma, we allow κ to take different values in different tasks, i.e., using a large κ (referred to as κ_d) in the word discovery stage and a small κ (referred to as κ_s) in the text segmentation stage.",
"Based on a wide range of experimental studies, we suggest setting κ_d = 0.5 and κ_s = 0.001 by default.",
"Composed of over 10 billion Chinese character tokens from 3.6 million webpages, Chinese Wikipedia ( https://dumps.wikimedia.org/ ) is one of the largest open-source Chinese corpora.",
"Containing rich contents of various domains and millions of technical terms highlighted by hyperlinks, the Chinese Wikipedia is an ideal corpus for studying CNLP in open domain.",
"Considering that it is computationally expensive to process all webpages in Chinese Wikipedia, we randomly picked 1,500 webpages involving 8 million Chinese character tokens (referred to as Chinese Wiki-Rand, or T_{W-R}) as representative samples of the general texts in Chinese Wikipedia.",
"Moreover, we selected two collections of special webpages from Chinese Wikipedia with label \" (referred to as Chinese Wiki-Film, or TW-F ) or \" (referred to as Chinese Wiki-Physics, or TW-P ), involving 5 million Chinese character tokens for each, as the representatives of the domain-specific texts in Chinese Wikipedia.",
"Figure 2",
"(a) and",
"(b) demonstrates a typical Wikipedia web page and histograms for term length and appearance frequency of technical terms involved in TW-R .",
"In this section, we apply TopWORDS-Seg to process these Wikipedia corpora separately, and compare its performance to 6 existing methods, including Jieba (Sun, 2012), StanfordNLP (Man-ning et al., 2014), THULAC (Sun et al., 2016), PKUSEG (Luo et al., 2019), LTP (Che et al., 2021), and TopWORDS (Deng et al., 2016) itself, from various aspects.",
"Due to the lack of gold standard, it is not straightforward to evaluate and compare the performance of different methods on open-domain corpus like Chinese Wikipedia.",
"Here, we propose a cocktail strategy for method evaluation by measuring the overall performance of each method on both open-domain corpuora and benchmark corpus.",
"Let V t be the collection of frequent technical terms in a particular Wikipedia corpus (terms with hyperlinks appear at least 2 times), with n w be the number of occurrences for each w V t .",
"Suppose V is the discovered vocabulary reported by a particular method M , and m w is the number of successful catches of w by M .",
"Taking advantage of the self-labelled technical terms with hyperlinks in Wikipedia webpages, it is straightforward to measure discovery recall R d and segmentation recall R s for technical terms in V t as below: R d = | V t V | | V t | and R s = (cid:80) w V t m w (cid:80) w V t n w .",
"M to deal with technical terms in open-domain texts.",
"Because it is difficult to directly evaluate the perform of a method M on segmenting non-technical contents of the Wikipedia corpus, we retreat to indirect evaluation by evaluating its performance on segmenting the PKU corpus TP , a benchmark corpus with gold standard released by SIGHAN 2005 Bake-Off (Emerson, 2005), instead.",
"Let F s be the F 1 score of method M on text segmentation for the PKU corpus.",
"Score F s reflects M 's ability to process general Chinese texts without technical contents.",
"Apparently, R d , R s and F s measure the strength of a method comprehensively from various aspects, with both word discovery and text segmentation considered for technical as well as non-technical texts.",
"Such a cocktail strategy provide us a principle to evaluate and compare the overall performance of different CNLP methods in open domains.",
"If a method enjoys high R d , R s and F s values across different corpora stably, we would feel comfortable to claim it as a robust tools for CNLP in open domains.",
"Figure 2",
"(c) summarizes the performance of TopWORDS-Seg (with the default setting) and the 6 competing methods on the Wikipedia and PKU corpora in terms of R d , R s and F s , with the size of discovered vocabulary | V | reported as well.",
"Comparing these results, we find that TopWORDS-Seg enjoys robust performance on segmenting classic benchmark corpus ( F s = 82 . 2% for TP ), open-domain corpus ( R s = 76 . 5% for TW-R ) and domain-specific corpus ( R s = 76 . 8% and 70 . 8% for TW-F and TW-P respectively), and high effi-ciency on discovering technical terms ( R d > 82% for all three Wikipedia corpora).",
"The other methods, however, all suffer from either missing too many technical terms in the Wikipedia corpora ( R d ranging from 45% to 77% as in supervised methods), or segmenting the PKU corpus poorly ( F s = 50 . 4% as in TopWORDS).",
"Considering that TopWORDS-Seg reports a vocabulary that is 16K smaller than TopWORDS, it actually outperforms TopWORDS significantly in all dimensions.",
"Moreover, considering that both TopWORDS and TopWORDS-Seg tend to segment Chinese texts at coarser granularity with technical terms and phrases preserved as composite words instead of cutting them into smaller language units, the text segmentation standard adopted by the PKU corpus, which tends to segment Chinese texts at finer granularity, may over-punish them.",
"To ease the impact on performance evaluation due to segmentation granularity, we choose to mask part of the PKU corpus TP where method M is not consistent with the standard segmentation only on granularity (with the concrete criteria detailed in Appendix C), and measure the F 1 score of method M on the masked version of TP only, leading to a masked version of F s referred to as F m .",
"The proportion of masked corpus (i.e., mask rate ) is also calculated for each method and reported in Figure 2",
"(c).",
"TopwORDS-Seg achieves an improved F m = 93 .",
"7% with a mask rate of 16.6%, suggesting that TopwORDS-Seg actually segments the PKU corpus very well.",
"Meanwhile, a much higher mask rate of 50.4% is obtained for TopWORDS, which is consistent to our impression that TopWORDS tends to preserve too many sub-phrases in text segmentation.",
"In addition, because some methods based on supervised learning, e.g., Jieba, THULAC and PKUSEG, can receive external vocabulary for processing open-domain corpus, there exists an alternative strategy to integrate TopWORDS with thses methods by simply forwarding the vocabulary discovered by TopWORDS to them.",
"We refer to approaches based on this strategy as TopWORDS-Jieba/THULAC/PKUSEG, and report their performance on both Chinese Wikipedia corpus and PKU corpus in Figure 2",
"(c) as well.",
"Unfortunately, although this family of approaches achieve a higher R d in general, they tend to report an over-large vocabulary and segment texts with coarser granularity like TopWORDS does.",
"These results indicate that simply concatenating TopWORDS to other methods does not necessarily lead to an improved approach, and thus imply that the proposed strategy based on Bayesian inference is not trivial.",
"The heatmaps in Figure 2",
"(d) demonstrate the similarity on text segmentation of different methods on four different target corpora, where the similarity between any two methods M i and M j is measured by ij = (cid:80) T T D sum ( B ( i ) T B ( j ) T ) (cid:80) T T D sum ( B ( i ) T B ( j ) T ) , with B ( i ) T denoting the predicted word boundary vector of text sequence T by method M i .",
"From the figure, we can see clearly that text segmentation 163",
"(a) A typical web page in Chinese Wikipedia.",
"(b) Key characteristics of technical terms involved in Chines Wikipedia.",
"(c) Results on PKU, Chinese Wiki-Rand, Chinese Wiki-Film and Chinese Wiki-Physics datasets of different methods.",
"(d) Similarity on text segmentation of different methods on four different target corpora.",
"(e) Segmentation results on a typical sentence reported by TopWORDS-Seg is very similar to the results reported by supervised methods, but is significantly different from the result reported by TopWORDS for all four corpora.",
"Such results confirm the strength of TopWORDS-Seg on text segmentation in addition to word discovery, and provide strong evidences to support TopWORDS-Seg as a powerful tool for processing open-domain Chinese texts.",
"Figure 2",
"(e) shows an illustrative example of text segmentation of PKUSEG, TopWORDS and TopWORDS-Seg for a piece of target text, respectively.",
"Apparently, PKUSEG segments the target text almost perfectly except for chopping the technical term allotropes ( ) into three substrings by mistake, due to the lack of ability to recognize unregistered words.",
"TopWORDS, however, successfully recognizes and segments the technical term allotropes correctly, but segments the other part of the target text with coarser granularity leaving phrases like physical properties ( ) and extremely different ( ) as unsegmented language units.",
"TopWORDS-Seg, as expected, segments the target text perfectly, with 164",
"the technical term allotropes correctly recognized and the rest part segmented with proper granularity.",
"Written by Goodfellow et al. (2016), the book Deep Learning has become a classic tutorial for deep learning.",
"In 2017, its Chinese version was published in China (see Figure 3",
"(a) for the book's cover), which is composed of more than 400,000 Chinese character tokens (referred to as TD ).",
"Covering rich technical contents in the domain of machine learning, including over 800 technical terms as listed in the Index Table at the end of the book, such a book is an ideal target for testing the performance of the proposed TopWORDS-Seg in real application.",
"Feeding full text of the book to TopWORDS-Seg and competing methods respectively, we obtained results as summarized in Figure 3.",
"Figure 3",
"(b) shows that TopWORDS-Seg discovers 84.1% technical terms listed in the Index Table of the book with a vocabulary of 10.7K discovered words.",
"TopWORDS achieves a slightly higher R d = 85 .",
"0% at the price of a larger vocabulary with 12.8K discovered words.",
"Other methods based on supervised learning result in much lower R d with the vocabulary size varying between 6.8K to 12.2K.",
"Figure 3",
"(d) shows the most frequent words discovered by TopWORDS-Seg.",
"Figure 3",
"(e) displays part of the technical terms captured by TopWORDS-Seg but missed by all supervised methods, which are all meaningful technical terms like unsupervised learning ( ) and stochastic gradient decent ( ).",
"Figure 3",
"(f) summarizes typical pseudo words and phrases reported by TopWORDS but eliminated by TopWORDS-Seg, which are all common collocations widely used but usually not treated as words in Chinese, e.g., in the model ( ) and it is because of ( ).",
"These results suggest that TopWORDS-Seg is indeed more effective than competing methods on word discovery.",
"In terms of text segmentation, the heatmap in Figure 3",
"(c) visualizes the similarity between TopWORDS-Seg and other approaches on this corpus in a similar fashion as in Figure 2",
"(d).",
"Again, the performance of TopWORDS-Seg is very similar to the supervised methods, and demonstrates significant difference from TopWORDS, suggesting that TopWORDS-Seg is a robust tool with balanced ability on processing open-domain Chinese texts.",
"In this paper, we proposed TopWORDS-Seg, a powerful tool for processing open-domain Chi-165",
"nese texts based on Bayesian inference with balanced ability on text segmentation and word discovery.",
"A series of experimental studies confirm that TopWORDS-Seg can discover unregistered technical terms in open-domain texts effectively, and achieve high-quality text segmentation on both benchmark and open-domain corpora.",
"Taking advantage of the Bayesian framework, TopWORDS-Seg is ready to process large scale open-domain Chinese texts without extra training corpus or pregiven domain vocabulary, leading to an ideal solution to a critical bottleneck existing in computational linguistics for decades.",
"Moreover, combing the strong points of PKUSEG and TopWORDS via Bayesian inference, TopWORDS-Seg enjoys transparent reasoning process, and is fully interpretable to most people.",
"In practical applications, such a property is very attractive to many researchers and practicers.",
"Meanwhile, TopWORDS-Seg also suffers from a few obvious limitations.",
"For example, although the current learning framework is effective to discover frequent words, it tends to miss many rare words that appear only a few times in the texts.",
"For another instance, because PKUSEG is more reliable on segmenting general texts, but less reliable on segmenting technical texts, in the ideal case we should adopt prior information provided by PKUSEG adaptively when processing texts of different types.",
"Unfortunately, TopWORDS-Seg does not take such a natural idea into consideration yet, and simply use the PKUSEG prior at the same intensity everywhere.",
"These deficiencies partially explain why TopWORDS-Seg still misses about 15% technical terms in both experimental studies reported in this paper.",
"More research efforts are needed to fill in these gaps in future.",
"This research is partially supported by the National Scientific and Technological Innovation 2030 Major Project (No: 2020AAA0106501), the Guo Qiang Institute of Tsinghua University, the Beijing Natural Science Foundation (Z190021), and the Scientific-Technological Innovation Plan Program of Universities guided by the Ministry of Education of China.",
"Changzai Pan is supported by China Scholarship Council."
] | [
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"objective",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"other"
] |
[
"Building a taxonomy from the ground up involves several sub-tasks: selecting terms to include, predicting semantic relations between terms, and selecting a subset of relational instances to keep, given constraints on the taxonomy graph.",
"Methods for this final step taxonomic organization vary both in terms of the constraints they impose, and whether they enable discovery of synonymous terms.",
"It is hard to isolate the impact of these factors on the quality of the resulting taxonomy because organization methods are rarely compared directly.",
"In this paper, we present a head-to-head comparison of six taxonomic organization algorithms that vary with respect to their structural and transitivity constraints, and treatment of synonymy.",
"We find that while transitive algorithms out-perform their non-transitive counterparts, the top-performing transitive algorithm is prohibitively slow for taxonomies with as few as 50 entities.",
"We propose a simple modification to a non-transitive optimum branching algorithm to explicitly incorporate synonymy, resulting in a method that is substantially faster than the best transitive algorithm while giving complementary performance.",
"Many words and phrases fit within a natural semantic hierarchy: a mobile is a type of telephone , which in turn is a communications device and an object .",
"Taxonomies, which encode this knowledge, are important resources for natural language understanding systems.",
"There is ongoing interest in developing methods to build taxonomic resources automatically (Bor-dea et al., 2015, 2016).",
"Although several widely-used general ontologies (e.g. WordNet (Miller, 1995)) and domain-specific ontologies (e.g. Unified Medical Language System (UMLS) (Boden-Taxonomic Organization Common Initialization Entity Extraction Relation Prediction NOCYCNOCYC + CLUSDMSTDMST + CLUSMAXTRANSGRAPHMAXTRANSFOREST bus vehicle trainer carperson bus vehicle person car trainer",
"reider, 2004)) exist, these resources are handcrafted and therefore expensive to update or expand.",
"Automatic taxonomy induction enables the construction of taxonomic resources at scale in new languages and domains.",
"Further, there is evidence that it is useful to build dynamic or context-specific taxonomies extemporaneously for some applications (Do and Roth, 2010).",
"Taxonomy induction involves three sub-tasks: entity extraction , relation prediction , and taxonomic organization .",
"In many cases these subtasks are undertaken sequentially to build a taxonomy from the ground up.",
"While many works directly compare methods for relation prediction (e.g. Turney and Mohammad (2015), Shwartz et al. (2017) and others), none directly compare methods for the final taxonomic organization step with varying constraints.",
"Each paper that proposes a taxonomic organization method starts with its own set of predicted relations, making it impossible to determine even with benchmark datasets the extent to which improvements in identifying ground-truth relations are due to",
"(a) better relation prediction, or",
"(b) better taxonomic organization.",
"In this work, we present an empirical apples-to-apples comparison of six algorithms for unsupervised taxonomic organization.",
"The algorithms vary along three axes: whether they impose transitivity constraints on the taxonomic graph, whether they specify that the final graph structure be a directed acyclic graph (DAG) or tree/forest, and whether they identify clusters' of synonymous terms.",
"In each case we begin with the same sets of terms and predicted relations (see Figure 1).",
"This makes it possible to address several research questions.",
"First, which combination of these factors produces a taxonomy that most closely mirrors a set of ground-truth taxonomic relations?",
"Second, which algorithms are efficient enough in practice to run on large term sets?",
"And third, how robust is each algorithm to noise in the predicted relations used as input?",
"We find that while transitive algorithms perform better than non-transitive algorithms given the same constraints on graph structure, the best-performing transitive algorithm is prohibitively slow to use on input with as few as 50 nodes.",
"By modifying a commonly-used optimum branching algorithm to consolidate clusters of predicted synonyms into a single graph node, we show that it is possible to achieve complementary performance levels with an average runtime that is faster by orders of magnitude.",
"The problem of taxonomy induction can be summarized via three core sub-tasks.",
"While all systems that build taxonomies automatically must address each of these tasks, the sequence and manner in which they are addressed varies.",
"In the most straightforward case, the core tasks are viewed as orthogonal and carried out sequentially.",
"They are: 1. Entity Extraction: Identify a set of entities E (i.e. word types, synsets, etc) that will become nodes in the eventual taxonomy graph.",
"2. Relation Prediction: Predict the presence or absence of a directed semantic relation (hy-pernymy or entailment) between each pair of nodes, ( e i , e j ) E E .",
"The outputs are",
"(a) a set of potential edges R E E , where we use the notation r ij R to signify the relational instance, or edge, ( e i , e j ) , and",
"(b) relation scores s ( r ij ) for each edge derived from the classifier's predicted likelihood that the relational instance exists.",
"3. Taxonomic Organization: Select a subset of the predicted edges, R R , that produces a high sum of scores, P r R s ( r ij ) , subject to structural constraints.",
"The final output is the graph G ( E, R ) .",
"Structural constraints dictate what can be considered a valid or invalid combination of edges in a taxonomic graph (Do and Roth, 2010).",
"Two structural constraints frequently imposed are that the final graph be a DAG, or that the final graph be a tree/forest.",
"1 Examples of algorithms that produce DAG structures are the longest-path algorithm of Kozareva and Hovy (2010), the ContrastMedium approach of Faralli et al. (2017), and the random cycle-breaking method used in (Panchenko et al., 2016) and Faralli et al. (2015).",
"We experiment with a variation of the last one here, which we call NOCYC .",
"To produce tree-structured taxonomies, most researchers (including us) use algorithms for finding the maximally-weighted rooted tree spanning a directed graph (DMST).",
"Examples of prior work following this approach are Navigli et al. (2011) and Bansal et al. (2014).",
"Another dimension along which taxonomy organization approaches differ is whether they explicitly require the set of chosen relational instances R to be fully transitive.",
"The transitivity constraint dictates that if ( beetle IS-A insect ) is selected as part of R , and ( insect IS-A organism ) is 1 WLOG, the tree and forest constraints are identical, as a dummy root node can be attached to the root of each component in a forest to produce a tree.",
"selected as part of R , then ( beetle IS-A organism ) must also be selected.",
"Two methods that impose such transitivity constraints are the MAXTRANSGRAPH and MAXTRANSFOREST methods of Berant et al. (2015), both of which we experiment with here.",
"A final consideration when choosing a taxonomy organization algorithm is whether the method should enable the consolidation of synonyms into a single taxonomic entity.",
"Synonym sets, or synsets , are present as nodes in the WordNet graph (Miller, 1995).",
"Potential advantages to using synonym sets, rather than individual terms, as nodes include the ability to model polysemy ( horse means one thing when grouped with its synonym cavalry and another entirely when grouped with sawhorse ), and the ability to be more precise in defining relations.",
"A few early taxonomy induction approaches incorporated synonym clustering (e.g. Lin and Pantel (2002) and Pantel and Ravichandran (2004)).",
"The two transitive algorithms that we analyze here, MAXTRANSGRAPH and MAXTRANSFOREST , also consolidate equivalent terms into a single node.",
"The six algorithms that we compare differ along the three dimensions just described, namely, the structural constraints imposed (DAG or tree), whether transitivity is required, and whether synonyms are combined into a single taxonomy node (Figure 2).",
"Here we provide a short description of each.",
"The no-cycles method, which we abbreviate as NOCYC , is a simple method for constructing a DAG with high score from a set of predicted relational edges.",
"It is not transitive .",
"The algorithm works as follows.",
"From the set R of all predicted hypernym relations, we first fil-ter out of the graph G ( E, R ) any edges with score s ( r ij ) less than a tunable threshold .",
"Next, we break any cycles by finding strongly connected components (SCC) in the graph (i.e. a subset of nodes such that each node in the subset has a path to every other node in the subset), and iteratively removing the lowest-scoring edge from each SCC until all cycles are broken.",
"This implementation is slightly different from that of Faralli et al. (2015) and Panchenko et al. (2016), where cycles were broken by removing cycle edges randomly.",
"The search for SCCs in each iteration is linear using Tarjan's algorithm (Tarjan, 1972).",
"The NOCYC algorithm does not explicitly cluster synonyms, but we can find synonyms in the resulting graph implicitly as follows.",
"If we assume all synonymous terms share the same direct hypernyms and direct hyponyms, we can find such pairs by taking the transitive reduction 2 of the resulting graph G = ( E, R ) , and grouping all pairs of terms that have identical sets of direct hypernyms and hyponyms in the transitive reduction.",
"While NOCYC itself does support finding synonyms within the graph implicitly, we also experiment with an explicit synonym-clustering version, NOCYC + CLUS .",
"We modify NOCYC by collapsing into a single node all subsets of nodes predicted to be synonym clusters, using a method described in Section 4.2.2, prior to executing the cycle breaking algorithm.",
"Our second method selects hypernym edges for the taxonomy by using the Chu-Liu-Edmonds optimum branching algorithm (Chu and Liu, 1965; Edmonds, 1967) to solve the directed analog of the maximum spanning tree problem (DMST).",
"It constrains the final graph to be a tree and is not transitive .",
"2 In the transitive closure of a graph, each node e i is directly connected by a single edge to every node e j to which it has a path.",
"The transitive reduction can be obtained for a graph G by removing all edges from G that do not change its transitive closure.",
"The transitive reduction of a DAG is unique (Aho et al., 1972).",
"The algorithm works by adding a dummy root node e ROOT to E , and an edge from e ROOT to every other node e i in the graph.",
"We then use Chu-Liu-Edmonds to find the directed tree rooted at e ROOT that spans all nodes in E and has the maximal sum of scores.",
"Note that until now we have considered edges in taxonomy graphs to point from hyponyms to hypernyms; in this case we must switch the order, so that the spanning tree starts at the most general level of the hierarchy and reaches down to the leaves.",
"Chu-Liu-Edmonds finds the DMST efficiently in polynomial time (Tarjan, 1977).",
"Because DMST requires the final graph to be a tree, there is no implicit way to find synonyms within the taxonomy graphs it generates.",
"As with NOCYC , we test a modification called DMST+ CLUS that collapses predicted synonym clusters into a single graph node prior to running the DMST algorithm (see Section 4.2.2).",
"The first transitive algorithm we evaluate is MAXTRANSGRAPH (Berant et al., 2012, 2015), which constrains the graph structure to be a DAG .",
"MAXTRANSGRAPH was originally designed for building taxonomies of entailment relations (which can be subclassified as either synonyms or hypernyms) and is solved using integer linear programming (ILP).",
"Rather than using classifier scores directly as input, MAXTRANSGRAPH first computes a weight between each term pair ( e i , e j ) that is equal to the classifier score minus a tunable parameter: w ij = s ( r ij ) .",
"The purpose of modifying scores this way is ef-ficiency; MAXTRANSGRAPH solves its optimization on each connected component of the graph independently, where components are constructed by considering only positively-weighted edges in the graph.",
"Increasing increases sparsity and decreases runtime.",
"The objective of the ILP is to maximize the weights of selected relations, while requiring that the graph respects transitivity.",
"Berant et al. (2012) proved this problem is NP-hard and provided an ILP formulation for it as follows.",
"Let x ij be a binary variable that indicates whether edge ( e i , e j ) is in the subset of selected edges, R .",
"max x X i 6 = j w ij x ij s.t. e i , e j , e k E, x ij + x jk x ik 1 e i , e j E, x ij { 0 , 1 } (1) The objective maximizes the sum of edge weights where the edge is turned on' (i.e. x ij = 1 ).",
"The first constraint enforces transitivity, i.e. for every triple of nodes ( e i , e j , e k ) , if edge ( e i , e j ) R and edge ( e j , e k ) R , then edge ( e i , e k ) R .",
"The second constraint specifies that all x ij are binary.",
"The number of variables is O ( | E | 2 ) and number of constraints is O ( | E | 3 ) .",
"MAXTRANSGRAPH assumes that cycles of entailment relations in the resulting graph G ( E, R ) comprise cycles of synonyms, and that the remaining edges which are not part of a cycle are hypernym edges.",
"Because the resulting graph must be transitive, all cycles of three or more nodes are 326 cliques, in which each node is directly connected to every other.",
"Once every SCC is collapsed into a single synonym cluster node, the transitive reduction of the resulting graph is a DAG (Figure 3b).",
"The final algorithm we evaluate is MAXTRANSFOREST (Berant et al., 2012, 2015), which like MAXTRANSGRAPH is transitive , but produces a forest/tree structure.",
"MAXTRANSFOREST is nearly identical to MAXTRANSGRAPH , with the addition of one constraint that imposes its forest structure.",
"More specifically, the graphs produced by MAXTRANSFOREST are forest reducible .",
"A forest reducible graph is one where, after collapsing every SCC into a single node, the transitive reduction of the result is a forest (see Figure 3).",
"In practice, the forest reducibility constraint is enforced by applying one additional constraint to the ILP in Equation 1: e i , e j , e k E x ij + x ik x jk x kj 1 (2) This constraint says that each node e i can have only a single parent.",
"If relations r ij and r ik are in R , then either e j is the parent of e k or vice versa; e i may not have two parents that are not related via a hypernym relationship.",
"Like MAXTRANSGRAPH , the number of variables is O ( | E | 2 ) and number of constraints is O ( | E | 3 ) .",
"Also like MAXTRANSGRAPH , cycles in the resulting graph are assumed to constitute clusters of synonymous terms.",
"In order to directly compare the organization algorithms described, we organize our experiments as follows.",
"We first run entity extraction (Sec-tion 4.1) and relation prediction (Section 4.2) as a common initialization for all algorithms.",
"Then, we take the edge scores output by the relation prediction step and feed them to each taxonomic organization algorithm (Section 4.3).",
"Finally, we compare the output from each algorithm.",
"Here we describe the initialization steps in more detail.",
"Pavlick et al., 2015).",
"Our goal is to construct local taxonomies , where each entity set E consists of terms sharing a common target paraphrase.",
"For example, a local taxonomy centered around the target coach might contain entities bus , vehicle , trainer , person , car , and railcar .",
"The local taxonomy for a target word does not contain the target word itself.",
"We build a dataset for constructing local taxonomies centered around 50 target nouns drawn from the 2010 SemEval word sense induction dataset (Manandhar et al., 2010).",
"For each target noun, we extract as taxonomy terms the set of PPDB paraphrases having a PPDB2.0S CORE of at least 2.0 with the target.",
"3 The number of entities in each local taxonomy ranges from 13 to 126, with a median of 40 entities per set.",
"We hold out 5 local taxonomies to tune parameters for NOCYC , MAXTRANSGRAPH , and MAXTRANSFOREST , and use the remaining 45 as our test set.",
"Because they consist of related terms centered around a common paraphrase, there are several semantic relations present among these entity sets in addition to hypernymy and synonymy.",
"We analyze the overlap between all pairs of terms appearing in our local taxonomies and in WordNet, and find that the distribution of relation types among the overlapping pairs is 6.0% hypernym/hyponym, 1.3% synonym, 0.1% meronym/holonym, 3.1% coordinate terms (sharing a common direct hyper-nym), and 89.5% none of these.",
"Having extracted a set of entities, the next step in our initialization process is to make pairwise relation predictions for each pair of terms ( e i , e j ) that exist within an entity set E .",
"The different organization algorithms we compare take predicted synonym and/or hypernym edge scores as input.",
"Here we describe the methods we use to generate these scores.",
"For hypernym prediction, we adopt the state-of-the-art HypeNET method of Shwartz et al. (2016).",
"HypeNET integrates distributional (Lin and Pantel, 2002; Roller et al., 2014; Levy et al., 2015; Benotto, 2015) and path-based (Hearst, 1992; Snow et al., 2004; Nakashole et al., 2012) approaches to hypernym prediction. (The PPDB2.0Score is a supervised metric designed to correlate with human judgements of paraphrase quality (Pavlick et al., 2015).)",
"It uses a recurrent neural network to represent the set of observed dependency paths connecting an input word pair, and concatenates this representation with distributional word embeddings to produce a set of features for predicting hypernymy.",
"We create a dataset of noun pairs for training and evaluating the HypeNET model.",
"It combines noun pairs from four benchmark relation prediction datasets (BLESS (Baroni and Lenci, 2011), ROOT09 (Santus et al., 2016), EVALution (Santus et al., 2015), and K&H+N (Necsulescu et al., 2015)) with a set of related and unrelated noun pairs extracted from PPDB.",
"Since each of these is a multi-class dataset, we binarize the data by labeling noun pairs with a hypernym relation as positive instances, and all others as negative.",
"The combined benchmark+PPDB training set contains 76,152 noun pairs with a 1:4 hypernym:non-hypernym ratio, and the evaluation set contains 29,051 pairs.",
"We ensure lexical separation from our taxonomy induction dataset; no terms in the classifier training set appear in any of the local taxonomies.",
"We train HypeNET using our 76K-pair training set, and provide the results of evaluation on the 29K-pair test set in Table 1. The trained model achieves an overall average F1-score of 0.93 on the entire benchmark+PPDB test set.",
"The full details of our dataset creation and classifier training are provided in the supplementary material.",
"Finally, we use the trained model to predict hypernym likelihoods for each potential edge r ij in one of our local taxonomies, corresponding to an ordered pair of terms ( e i , e j ) that appear together in one of the 50 entity sets.",
"We assign a hypernym score s h ( r ij ) to each potential directed edge that equals the HypeNET predicted likelihood for that pair of terms.",
"We predict synonymy between noun pairs using distributional similarity, operationalized as the cosine similarity of PARAGRAM (Wieting et al., 2015) word embeddings.",
"We use PARAGRAM vectors because they perform well in semantic similarity tasks, and because they were originally extracted from PPDB and thus have 100% coverage of our entity sets.",
"The synonym score s_s(r_ij) for a potential edge r_ij between entities (e_i, e_j) is the cosine similarity of their PARAGRAM embeddings. (We also tried using HypeNET to predict synonym relations, but results were significantly worse.)",
"We also tune a synonymy threshold for the purpose of consolidating clusters of synonymous terms into a single node for DMST+ CLUS and NOCYC + CLUS (see Section 4.3).",
"We tune a threshold of 0.76 over the benchmark+PPDB training set (binarized for synonymy).",
"We predict a term pair (e_i, e_j) to be synonymous if s_s(r_ij) meets or exceeds this threshold.",
"When evaluated over the test sets, this method achieves weighted average F1-scores of 0.707 and 0.797 for predicting synonyms in the PPDB and EVALution test sets respectively (Table 1).",
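The synonym-scoring step described above can be sketched as follows (a minimal Python illustration; the toy embeddings, term names, and function names are hypothetical, and real PARAGRAM vectors would replace the toy vectors):

```python
import math

def cosine(u, v):
    # Cosine similarity between two dense word vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def predict_synonym_pairs(embeddings, terms, tau=0.76):
    # s_s(r_ij) is the cosine similarity of the two embeddings;
    # a pair is predicted synonymous when s_s(r_ij) >= tau.
    pairs = []
    for i, e_i in enumerate(terms):
        for e_j in terms[i + 1:]:
            if cosine(embeddings[e_i], embeddings[e_j]) >= tau:
                pairs.append((e_i, e_j))
    return pairs
```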
"Finally, we use the calculated hypernym and synonym scores s h ( r ij ) and s s ( r ij ) to initialize each organization algorithm as follows.",
"NOCYC and DMST: We use the hypernym scores as input, setting s(r_ij) = s_h(r_ij) for all edges.",
"NOCYC + CLUS and DMST+ CLUS : Initialization for these algorithms requires two steps.",
"First, we collapse clusters of likely synonyms into a single entity as follows.",
"For each local taxonomy, we create a graph with the extracted terms as nodes, and add an edge between every pair of terms whose s_s(r_ij) meets the threshold tuned on our training set.",
"We take the resulting connected components as the final entity set E .",
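A minimal sketch of this cluster-collapsing step, using union-find to take the connected components of the thresholded synonym graph (function and variable names are hypothetical):

```python
def collapse_synonym_clusters(terms, synonym_pairs):
    # Union-find over the thresholded synonym graph; each connected
    # component becomes a single taxonomy entity.
    parent = {t: t for t in terms}

    def find(t):
        while parent[t] != t:
            parent[t] = parent[parent[t]]  # path halving
            t = parent[t]
        return t

    for a, b in synonym_pairs:
        parent[find(a)] = find(b)

    clusters = {}
    for t in terms:
        clusters.setdefault(find(t), []).append(t)
    return sorted(sorted(c) for c in clusters.values())
```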
"See examples of synonyms clustered by this method in Table 2. Next, we calculate scores s ( r ij ) for each pair of entities.",
"When e i and e j are single-term entities (i.e. not synonym clusters), we simply set s ( r ij ) = s h ( r ij ) .",
"To obtain an edge score when one or both nodes is a cluster, we calculate the average hypernym score over every pair of terms (t_m, t_n) such that t_m ∈ e_i and t_n ∈ e_j: s(r_ij) = ( Σ_{t_m ∈ e_i, t_n ∈ e_j} s_h(r_mn) ) / ( |e_i| + |e_j| ). MAXTRANSGRAPH and MAXTRANSFOREST: Since these algorithms are designed to use entailment relation predictions as input, we set the score of each edge to be the maximum of the synonym and hypernym scores: s(r_ij) = max(s_h(r_ij), s_s(r_ij)).",
"Intuitively, this reflects the idea that entailment can be sub-classified as synonymy or hypernymy.",
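The two edge-scoring rules above can be sketched as follows (hypothetical helper names; the normalizer |e_i| + |e_j| follows the formula in the text):

```python
def cluster_edge_score(s_h, e_i, e_j):
    # Directed cluster-to-cluster score: sum of pairwise hypernym
    # scores, normalized by |e_i| + |e_j| as in the text.
    total = sum(s_h[(t_m, t_n)] for t_m in e_i for t_n in e_j)
    return total / (len(e_i) + len(e_j))

def entailment_edge_score(s_h_ij, s_s_ij):
    # MAXTRANSGRAPH / MAXTRANSFOREST input: entailment is the
    # maximum of the hypernym and synonym scores.
    return max(s_h_ij, s_s_ij)
```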
"We conduct experiments aimed at addressing three primary research questions: (1) How does each taxonomic organization algorithm perform?",
"In particular, how do DAG algorithms compare to tree-constrained ones, and how do transitive algorithms compare to their non-transitive counterparts?",
"(2) Are any algorithms, particularly the ILP methods, too slow to use on large sets of terms?",
"(3) Given that hypernym relation prediction is far from perfect, how robust is each algorithm to noise in the predicted relations?",
"In our first experiment, we predict PPDB local taxonomies for the 45 target nouns in our test set using each of the six algorithms after the initialization described in Section 4.",
"In keeping with current work on this topic (Bordea et al., 2015, 2016), we evaluate the taxonomy organization algorithms' performance by calculating precision, recall, and F1-score of WordNet 3.0 hypernym and synonym edges for the 93% of PPDB taxonomy terms that are in WordNet.",
"When evaluating hypernym edges we consider both direct and transitive hypernym edges.",
"We report hypernym-specific scores, where the set of ground-truth edges contains just WordNet hypernyms; synonym-specific scores, where it contains just WordNet synonyms; and combined scores, where all WordNet hypernym and synonym edges are taken as ground truth and a predicted edge must have the correct start node, end node, and relation type to be correct.",
"Results are reported in Table 3. We compare the results of the six algorithms to two types of baselines.",
"As a lower bound, we implement a random baseline where edges are selected randomly with likelihood tuned on the benchmark+PPDB training set.",
"As an upper bound, we run 'oracle' versions of MAXTRANSFOREST and DMST where we set the score of any edge appearing in WordNet to 1. The transitive, tree-constrained MAXTRANSFOREST algorithm achieves the best average combined F-score (0.21) over all the local taxonomies, followed closely by the non-transitive, tree-constrained clustering method DMST+CLUS (0.20).",
"These two methods, which are the only two tree-constrained methods that incorporate synonymy, outperform all DAG-constrained methods on this dataset.",
"While they perform similarly in terms of combined F-score, their results are complementary; MAXTRANSFOREST obtains a relatively high score on hypernym edges and lower score for synonym edges, while for DMST+ CLUS the results are reversed.",
"In general, these results suggest that consolidating synonyms into a single node helps tree-constrained methods by improving recall of both hypernym and synonym edges (DMST vs DMST+ CLUS ), but the same is not true for DAG-constrained methods.",
"To understand why, we examine the output taxonomies.",
"The average depth of the DAG taxonomies is greater than that of the tree taxonomies.",
"When incorrect hyponym attachments are made in a deep taxonomy, the errors in transitive hypernym links can be magnified, which is evident in the low hypernym precision of NOCYC and NOCYC + CLUS .",
"Synonym clustering prior to NOCYC + CLUS can magnify errors further, as synonyms are dragged into the incorrect hypernym relationships (see the NOCYC + CLUS example in Figure 4, where telephone is dragged along with phone into incorrect hypernym relations with battery and pile ).",
"For the shallower tree-constrained graph outputs, finding correct synonym relations helps the overall accuracy without inducing as many incorrect hypernym relations.",
"Finally, we note that transitive algorithms consistently out-perform their non-transitive counterparts.",
"For the DAG-constrained algorithms, the transitive version, MAXTRANSGRAPH , improves precision of hypernym and synonym edges over its non-transitive counterparts NOCYC and NOCYC + CLUS .",
"For the tree-constrained algorithms, the transitive MAXTRANSFOREST substantially improves recall of hypernym edges over its nontransitive counterparts DMST and DMST+ CLUS .",
"Next, we address the question of whether all algorithms are fast enough to be useful in practice.",
"We record the runtime for each algorithm on each local taxonomy, and note the number of runs that timed out at 5 minutes.",
"Results are in Table 4.",
"MAXTRANSFOREST, while most accurate on hypernyms and overall, is too slow to be useful on large inputs.",
"The average runtime over all local taxonomies was over two minutes, and the runtime on local taxonomies with as few as 50 nodes reached the five minute limit.",
"Meanwhile, DMST+ CLUS , which performed best for synonyms and competitively for hypernyms, has a runtime that is over 6,000 times faster.",
"In practice, this simpler algorithm may be preferable to use.",
"One surprising result is the speed of MAXTRANSGRAPH , which theoretically has a number of variables and constraints on the same order as that of MAXTRANSFOREST .",
"In practice, we found that the average number of active constraints for MAXTRANSGRAPH (those violated at any point in the course of solving the ILP) was less than one percent of the average number of active constraints in MAXTRANSFOREST.",
"Finally, given that hypernym prediction is still an open problem, we are interested in finding out how robust each algorithm is to noise in the input hypernym predictions.",
"To test this, we re-run each taxonomy organization algorithm on the local taxonomies in an oracle setting, where the score of all potential edges that are present as direct or transitive edges in WordNet is set to 1. In each iteration, we set a noise probability p , and randomly perturb edge scores (according to a Gaussian distribution with 0 mean and 0.15 standard deviation) with probability p .",
"We run this experiment with p ∈ [0%, 90%].",
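The perturbation procedure can be sketched as follows (a hypothetical illustration; the real experiment perturbs oracle edge scores per local taxonomy):

```python
import random

def perturb_scores(scores, p, sigma=0.15):
    # With probability p, perturb each oracle edge score with
    # Gaussian noise (mean 0, standard deviation sigma).
    perturbed = []
    for s in scores:
        if random.random() < p:
            s += random.gauss(0.0, sigma)
        perturbed.append(s)
    return perturbed
```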
"The combined F1-score is plotted against the noise level in Figure 5.",
"We find that the performance of transitive algorithms MAXTRANSGRAPH and MAXTRANSFOREST degrades more quickly than the performance of other algorithms at higher noise levels.",
"DMST performs best in the oracle setting at all levels of noise.",
"The results are shown in Figure 5.",
"The performance of the top two performing algorithms, MAXTRANSFOREST and DMST+ CLUS , in terms of combined F1-score degrades most with the introduction of noise.",
"But even with up to 40% noise, these algorithms still out-perform all others.",
"In this paper we have conducted a direct comparison of six taxonomy organization algorithms that vary in terms of their transitivity and graph structure constraints, and their treatment of synonyms.",
"Evaluating their performance over a dataset of local taxonomies drawn from PPDB, we find that transitive algorithms generally out-perform their non-transitive counterparts.",
"While the best-performing algorithm (an ILP approach that constrains graphs to be transitive and tree-structured) is too slow to use on large inputs, a much simpler maximum spanning tree algorithm that consolidates synonyms into a single taxonomic node has complementary performance, with a small fraction of the runtime.",
"Our results suggest that incorporating synonym detection into tree-constrained taxonomy organization algorithms is a promising area for future research.",
"This material is based in part on research sponsored by DARPA under grant number FA8750-13-2-0017 (the DEFT program) and HR0011-15-C-0115 (the LORELEI program).",
"The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes.",
"The views and conclusions contained in this publication are those of the authors and should not be interpreted as representing official policies or endorsements of DARPA and the U.S. Government.",
"The work has also been supported by the French National Research Agency under project ANR-16-CE33-0013, and by the National Physical Science Consortium.",
"We are grateful to our anonymous reviewers for their thoughtful and constructive comments."
] | [
"abstain",
"abstain",
"abstain",
"method",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"method",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"abstain",
"result",
"other",
"other",
"other",
"other",
"other"
] |
[
"Recent studies on neural networks with pre-trained weights (i.e., BERT) have mainly focused on a low-dimensional subspace, where the embedding vectors computed from input words (or their contexts) are located.",
"In this work, we propose a new approach, called OoMMix, to finding and regularizing the remainder of the space, referred to as out-of-manifold, which cannot be accessed through the words.",
"Specifically, we synthesize the out-of-manifold embeddings based on two embeddings obtained from actually-observed words, to utilize them for fine-tuning the network.",
"A discriminator is trained to detect whether an input embedding is located inside the manifold or not, and simultaneously, a generator is optimized to produce new embeddings that can be easily identified as out-of-manifold by the discriminator.",
"These two modules successfully collaborate in a unified and end-to-end manner for regularizing the out-of-manifold.",
"Our extensive evaluation on various text classification benchmarks demonstrates the effectiveness of our approach, as well as its good compatibility with existing data augmentation techniques which aim to enhance the manifold.",
"Neural networks with a word embedding table have been the most popular approach to a wide range of NLP applications.",
"The great success of transformer-based contextual embeddings as well as masked language models (Devlin et al., 2019; Liu et al., 2019b; Raffel et al., 2020) makes it possible to exploit the pre-trained weights, fully optimized by using large-scale corpora, and it brought a major breakthrough to many problems.",
"For this reason, most recent work on text classification has achieved state-of-the-art performance by fine-tuning the network initialized with the pre-trained weight (Devlin et al., 2019).",
"However, they suffer from extreme over-parameterization due to the large pre-trained weight, which allows them to be easily overfitted to its relatively small training data.",
"Along with outstanding performances of the pre-trained weight, researchers have tried to reveal the underlying structure encoded in its embedding space (Rogers et al., 2021).",
"One of the important findings is that the contextual embeddings computed from words usually form a low-dimensional manifold (Ethayarajh, 2019).",
"In particular, a quantitative analysis on the space (Cai et al., 2021), which measured the effective dimension size of BERT after applying PCA on its contextual embedding vectors, showed that 33% of dimensions covers 80% of the variance.",
"In other words, only the low-dimensional subspace is utilized for fine-tuning BERT, although a high-dimensional space (i.e., model weights with a high capacity) is provided for training.",
"Based on this finding on contextual embedding space, we aim to regularize the contextual embedding space for addressing the problem of over-parameterization, while focusing on the outside of the manifold (i.e., out-of-manifold) that cannot be accessed through the words.",
"In this work, we propose a novel approach to discovering and leveraging the out-of-manifold for contextual embedding regularization.",
"The key idea of our out-of-manifold regularization is to produce the embeddings that are located outside the manifold and utilize them to fine-tune the network for a target task.",
"To effectively interact with the contextual embedding of BERT, we adopt two additional modules, named as embedding generator and manifold discriminator.",
"Specifically,",
"1) the generator synthesizes the out-of-manifold embeddings by linearly interpolating two input embeddings computed from actually-observed words, and",
"2) the discriminator identifies whether an input embedding comes from the generator (i.e., the synthesized embedding) or the sequence of words (i.e., the actual embedding).",
"The joint optimization encourages the generator to output the out-of-manifold embeddings that can be easily distinguished from the actual embeddings by the discriminator, and the discriminator to learn the decision boundary between the in-manifold and out-of-manifold embeddings.",
"In the end, the fine-tuning on the synthesized out-of-manifold embeddings tightly regularizes the contextual embedding space of BERT.",
"The experimental results on several text classification benchmarks validate the effectiveness of our approach.",
"In particular, our approach using a parameterized generator significantly outperforms the state-of-the-art mixup approach whose mixing strategy needs to be manually given by a programmer.",
"Furthermore, our approach shows good compatibility with various data augmentation techniques, since the target space we focus on for regularization (i.e., out-of-manifold) does not overlap with the space the data augmentation techniques have paid attention to (i.e., in-manifold).",
"The in-depth analyses on our modules provide an insight into how the out-of-manifold regularization manipulates the contextual embedding space of BERT.",
"In this section, we briefly review two approaches to regularizing over-parameterized network based on auxiliary tasks and auxiliary data.",
"Regularization is an essential tool for good generalization capability of neural networks.",
"One representative regularization approach relies on designing auxiliary tasks.",
"Liu et al. (2019a) firstly showed promising results by unifying a bunch of heterogeneous tasks and training a single unified model for all the tasks.",
"In particular, the synthesized task that encodes desirable features or removes undesirable features turns out to be helpful for network regularization.",
"Devlin et al. (2019) introduced the task which restores masked sentences, termed as masked language model, to encode the distributional semantic in the network; this considerably boosts the overall performance of NLP applications.",
"In addition, Clark et al. (2020) regularized the network by discriminating generated tokens from a language model, and Gong et al. (2018) utilized an additional discriminator to remove the information about word frequency implicitly encoded in the word embeddings.",
"Another approach to network regularization is to take advantage of auxiliary data, mainly obtained by data augmentation, which eventually supplements the input data space.",
"Inspired by (Bengio et al., 2011) that additionally trained the network with noised (i.e., augmented) images in computer vision, Wei and Zou (2019) simply augmented sentences by adding a small perturbation to the original sentences, such as adding, deleting, and swapping words within the sentences.",
"Recent work tried to further exploit the knowledge from a pre-trained model for augmenting the sentences: sentence back translation by using a pre-trained translation model (Xie et al., 2019), and masked sentence reconstruction by using a pre-trained masked language model (Ng et al., 2020).",
"Mixup (Zhang et al., 2018) is also a kind of data augmentation but differs in that it performs linear interpolation on multiple input sentences and their corresponding labels.",
"Verma et al. (2019) validated that mixup in the hidden space (instead of the input space) is also effective for regularization, and Guo et al. (2019b) found that mixup of images can regularize the out-of-manifold in image representations.",
"In the NLP domain, Guo et al. (2019a) and Guo (2020) first adopted mixup for text classification, using traditional networks such as CNNs and LSTMs; they sample their mixing coefficients from the beta distribution at the sentence level and at the word level, respectively.",
"To fully utilize the contextual embedding of transformer-based networks, Chen et al. (2020) applied mixup in the word-level contextual embedding space using a pre-trained language model (i.e., BERT), whereas Sun et al. (2020) focused on mixup in the sentence-level embedding space specifically for improving GLUE score.",
"In this section, we propose a novel mixup approach, termed as OoMMix, to regularize the out-of-manifold in contextual embedding space for text classification.",
"We first briefly review the architecture of BERT, and then introduce the two modules used for out-of-manifold regularization: the embedding generator and the manifold discriminator.",
"BERT is a stack of M transformer encoders pre-trained on the objective of the masked language model (Devlin et al., 2019).",
"First, a raw sentence is split into a sequence of tokens x ∈ {0, ..., |V|}^L using a tokenizer with vocabulary V, where L is the sequence length.",
"Each token is mapped into a D -dimensional vector based on the embedding table.",
"The sequence of embedding vectors h^(0) ∈ R^{L×D} is transformed into the m-th contextual embedding h^(m) ∈ R^{L×D} by m transformer layers (Vaswani et al., 2017).",
"We fine-tune the pre-trained weight to classify input texts into C classes.",
"A classifier produces the classification probability vector o ∈ R^C using the last contextual embedding h^(M).",
"Then, the optimization problem is defined based on a labeled dataset D = { ( x 1 , y 1 ) , ..., ( x N , y N ) } .",
"minimize_{w_f} E_{(x, y)∼D}[ L_C(x, y) ], where L_C(x, y) := L_kl(f(x), e_y), L_kl is the Kullback-Leibler divergence, and e_y ∈ R^C is a one-hot vector representing the label y.",
"The function f is the whole process from h (0) to o , called a target model, and w f is the trainable parameters for the function f , including the pre-trained weight of BERT and the parameters in the classifier.",
"For notation, f can be split into sub-processes as f(x) = (f_{m'} ∘ h_{m→m'} ∘ h_{0→m})(x), where h_{m→m'}(x) maps the m-th contextual embedding into the m'-th contextual embedding through the intermediate layers.",
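As a small worked detail: with a one-hot target e_y, the KL-divergence loss reduces to the negative log-likelihood of the labeled class. This can be sketched as (hypothetical helper name):

```python
import math

def kl_to_onehot(probs, y):
    # KL(e_y || probs) = sum_c e_y[c] * log(e_y[c] / probs[c])
    # with one-hot e_y reduces to -log(probs[y]).
    return -math.log(probs[y])
```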
"The goal of our generator network G is to synthesize an artificial contextual embedding by taking two contextual embeddings (obtained from layer m_g) as its input.",
"We use linear interpolation so that the new embedding belongs to the line segment defined by the two input embeddings.",
"Since we limit the search space, the generator produces a single scalar value λ ∈ [0, 1], called a mixing coefficient.",
"We introduce the distribution of the mixing coefficient to model its uncertainty.",
"To this end, our generator network produces a lower bound α and an interval δ from h^(m_g)_1 and h^(m_g)_2, so as to sample the mixing coefficient from the uniform distribution U(α, α + δ).",
"To avoid massive computational overhead incurred by the concatenation of two input sequences (Reimers and Gurevych, 2019), we adopt the Siamese architecture that uses the shared weights on two different inputs.",
"The generator first transforms each sequence of contextual embedding vectors by using a single transformer layer, then obtains the sentence-level embedding by averaging all the embedding vectors in the sequence.",
"From the two sentence-level embeddings s 1 , s 2 RD , the generator obtains the concatenated embedding s = s 1 s 2 R 2 D and calculates and by using a two-layer fully-connected network with the softmax normalization.",
"Specifically, the last fully-connected layer outputs a normalized 3-dimensional vector, whose first and second values become α and δ, so that the range of the sampling distribution (α, α + δ) lies within [0, 1].",
"In this work, we consider the structure of the generator to efficiently process the sequential input, but any other structures focusing on different aspects (e.g. the network that enlarges the search space) can be used as well.",
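A sketch of how the softmax output yields a valid sampling range (hypothetical helper names; the real module operates on pooled sentence embeddings rather than raw logits):

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of logits.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    z = sum(exps)
    return [e / z for e in exps]

def alpha_delta(logits):
    # Softmax-normalize a 3-dimensional output; the first two values
    # become the lower bound (alpha) and the interval (delta), so the
    # sampling range (alpha, alpha + delta) always lies within [0, 1].
    p = softmax(logits)
    return p[0], p[1]
```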
"For effective optimization of λ sampled from U(α, α + δ), we apply the re-parameterization trick, which decouples the sampling process from the computational graph (Kingma and Welling, 2014).",
"That is, we compute the mixing coefficient as λ = α + δ · ε with ε ∼ U(0, 1).",
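The re-parameterized sampling step can be sketched as (hypothetical helper name; in training, gradients flow through alpha and delta but not through eps):

```python
import random

def sample_mixing_coefficient(alpha, delta, eps=None):
    # Re-parameterized draw of lambda ~ U(alpha, alpha + delta):
    # lambda = alpha + delta * eps with eps ~ U(0, 1), so the sampling
    # noise eps is decoupled from the learnable alpha and delta.
    if eps is None:
        eps = random.random()
    return alpha + delta * eps
```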
"The optimization problem for text classification can be extended to the new embeddings and their labels, provided by the generator network.",
"minimize_{w_{f_{m_g}}, w_G} E_{(x_1, y_1)∼D}[ L_G(x_1, y_1) ] (1), where L_G(x_1, y_1) := E_{(x_2, y_2)∼D}[ L_kl(f_{m_g}(h̃), ỹ) ], λ = G(h_{0→m_g}(x_1), h_{0→m_g}(x_2)), h̃ := λ h_{0→m_g}(x_1) + (1 - λ) h_{0→m_g}(x_2), and ỹ := λ e_{y_1} + (1 - λ) e_{y_2}; here w_{f_{m_g}} denotes the trainable parameters of the function f_{m_g} (i.e., the process from h^(m_g) to o), and w_G those of the generator.",
"Similar to other mixup techniques, we impose the mixed label on the generated embedding.",
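The interpolation of embeddings and labels can be sketched as (a hypothetical illustration on plain Python lists; the real computation operates on sequences of contextual embedding vectors):

```python
def mixup_embedding_and_label(h1, h2, e_y1, e_y2, lam):
    # h~ = lam * h1 + (1 - lam) * h2
    # y~ = lam * e_y1 + (1 - lam) * e_y2
    h_mixed = [lam * a + (1.0 - lam) * b for a, b in zip(h1, h2)]
    y_mixed = [lam * a + (1.0 - lam) * b for a, b in zip(e_y1, e_y2)]
    return h_mixed, y_mixed
```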
"We found that the supervision from the objective (1) is not enough to train the generator.",
"The objective optimizes the generator to produce the embeddings that are helpful for the target classification.",
"However, since the over-parameterized network tends to memorize all training data, the target model also simply memorizes the original data to minimize Equation (1).",
"In this situation, the generator is more likely to mimic the embeddings seen in the training set (memorized by the target model) rather than generate novel embeddings.",
"For this reason, we need more useful supervision for the generator, to make it output the out-of-manifold embeddings.",
"To tackle this challenge, we define an additional task that identifies whether a contextual embedding comes from the generator or actual words.",
"The purpose of this task is to learn the discriminative features between actual embeddings and generated embeddings, in order that we can easily discover the subspace which cannot be accessed through the actually-observed words.",
"For this task, we introduce a discriminator network D that serves as a binary classifier in the contextual embedding space of the m d -th transformer layer.",
"The discriminator takes a contextual embedding h^(m_d) and calculates a score s ∈ [0, 1] indicating the probability that h^(m_d) comes from an actual sentence (i.e., that h^(m_d) is located inside the manifold).",
"Its network structure is similar to that of the generator, except that the concatenation is not needed and the output of the two-layer fully connected network produces a single scalar value.",
"As discussed in Section 3.2, any network structures for focusing on different aspects can be employed.",
"The optimization of the generator and discriminator for this task is described as follows.",
"minimize_{w_G, w_D} E_{(x_1, y_1)∼D}[ L_D(x_1) ] (2), where L_D(x_1) := E_{(x_2, y_2)∼D}[ L_bce(D(h_{m_g→m_d}(h̃)), 0) + L_bce(D(h_{0→m_d}(x_1)), 1) ] and L_bce is the binary cross-entropy loss.",
"By minimizing this objective, our generator can produce the out-of-manifold embeddings that are clearly distinguished from the actual (in-manifold) contextual embeddings by the discriminator.",
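The discriminator objective can be sketched as (hypothetical helper names; the scores are the discriminator's probability outputs for a generated and an actual embedding):

```python
import math

def bce(p, target):
    # Binary cross-entropy for a single predicted probability p.
    return -(target * math.log(p) + (1.0 - target) * math.log(1.0 - p))

def discriminator_loss(score_generated, score_actual):
    # Generated embeddings are labeled 0 (out-of-manifold) and actual
    # embeddings are labeled 1 (in-manifold).
    return bce(score_generated, 0.0) + bce(score_actual, 1.0)
```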
"We jointly optimize the two objectives to train the embedding generator.",
"Equation (1) encourages the generator to produce the embeddings which are helpful for the target task, while Equation (2) makes the generator produce the new embeddings different from the contextual embeddings obtained from the words.",
"The final objective is defined by E_{(x, y)∼D}[ L_C(x, y) + L_G(x, y) + λ_e L_D(x) ], where λ_e balances the two objectives.",
"The generator and discriminator collaboratively search out informative out-of-manifold embeddings for the target task while being optimized with the target model, thereby the generated embeddings can effectively regularize the out-of-manifold.",
"In this section, we present the experimental results supporting the superiority of OoMMix among the recent mixup approaches in text classification.",
"Also, we investigate its compatibility with other data augmentation techniques.",
"Finally, we provide in-depth analyses on our approach to further validate the effect of out-of-manifold regularization.",
"Our experiments consider 4 sentence classification benchmarks (Zhang et al., 2015) of various scales.",
"The statistics of the datasets are summarized in Table 1.",
"We follow the experimental setup used in (Chen et al., 2020) to directly compare the results with ours.",
"Specifically, we split the whole training set into training/validation sets, while leaving out the official test set for evaluation.",
"We choose the classification accuracy as the evaluation metric, considering the datasets are already class-balanced.",
"For the various sizes of training set from 0.5K to 35K, we apply stratified sampling to preserve the balanced class distributions.",
"In terms of optimization, we use BERT provided by huggingface for the classification tasks.",
"The Adam optimizer is used to fine-tune BERT with a linear warm-up for the first 1000 iterations, and the initial learning rates for the pre-trained weight and the target classifier are set to 2e-5 and 1e-3, respectively.",
"We set the batch size to 12 and the dropout probability to 0.1.",
"We attach the generator and discriminator at the third layer ( m g = 3 ) and the last layer ( m d = 12 ), respectively.",
"The two objectives contribute equally to training the generator (λ_e = 1), but we increase the λ_e value if the discriminator fails to discriminate the embeddings. (In our experiments, we use the checkpoint bert-base-uncased as the pre-trained weight.)",
"The accuracy is evaluated on the validation set every 200 iterations, and we stop training when the accuracy does not increase for 10 consecutive evaluations.",
"We report the classification accuracy on the test set at the best validation checkpoint and repeat the experiment three times with different random seeds to report the average with its standard deviation.",
"We implement the code using PyTorch and use NVIDIA Titan Xp for parallel computation.",
"In our environment, the training spends about 30 minutes to 3 hours depending on the dataset.",
"We compare OoMMix with existing mixup techniques.",
"All the existing methods manually set the mixing coefficient, whereas we parameterize the linear interpolation by the embedding generator, optimized to produce out-of-manifold embeddings.",
"NonlinearMix (Guo, 2020) samples mixing coefficients for each word from the beta distribution, while using neural networks to produce the mixing coefficient for the label.",
"We apply this approach to BERT.",
"mixup -transformer (Sun et al., 2020) linearly interpolates the sentence-level embedding with a fixed mixing coefficient.",
"The mixing coefficient is 0.5 as the paper suggested.",
"TMix (Chen et al., 2020) performs linear interpolation on the word-level contextual embedding space and samples a mixing coefficient from the beta distribution.",
"We select the best accuracy among different alpha configurations { 0 .",
"05 , 0 .",
"1 } for the beta distribution.",
"MixText (Chen et al., 2020) additionally utilizes unlabeled data by combining TMix with its pseudo-labeling technique.",
"Table 2 reports the accuracy on various sentence classification benchmarks.",
"In most cases, OoMMix achieves the best performance among all the competing mixup approaches.",
"In the case of NonlinearMix, it sometimes shows worse performance than the baseline (i.e., fine-tuning only on original data), because its mixup strategy introduces a large degree of freedom in the search space, which loses useful semantic encoded in the pre-trained weight.",
"The state-of-the-art mixup approaches, TMix and mixup -transformer, slightly improves the accuracy over the baseline, while showing the effectiveness of the mixup approach.",
"Finally, OoMMix beats all the previous mixup approaches, which strongly indicates that the embeddings mixed by the generator are more effective for regularization, compared to the embeddings manually mixed by the existing approaches.",
"It is worth noting that OoMMix obtains a comparable performance to MixText, even without utilizing additional unlabeled data.",
"In conclusion, discovering the out-of-manifold and applying mixup for such subspace are beneficial in contextual embedding space.",
"To demonstrate that the regularization effect of OoMMix does not conflict with that of existing data augmentation techniques, we investigate the performance of BERT that adopts both OoMMix and other data augmentations together.",
"Using three popular data augmentation approaches in the NLP community, we replicate the dataset as large as the original one to use them for fine-tuning.",
"EDA (Wei and Zou, 2019) is a simple augmentation approach that randomly inserts/deletes words or swaps two words in a sentence.",
"We used the official codes 2 with the default inser-tion/deletion/swap ratio the author provided.",
"BT (Xie et al., 2019) uses the back-translation for data augmentation.",
"A sentence is translated into another language, then translated back into the original one.",
"We use the code implemented in the MixText repository 3 with the checkpoint fairseq provided.",
"4 SSMBA (Ng et al., 2020) makes use of the pre-trained masked language model.",
"They mask the original sentence and reconstruct it by filling in the masked portion.",
"We use the codes provided by the authors 5 with default 2 https://github.com/jasonwei20/eda_nlp 3 https://github.com/GT-SALT/MixText 4 transformer.wmt19.",
"Figure 2 shows the effectiveness of OoMMix when being used with the data augmentation techniques.",
"For all the cases, OoMMix shows consistent improvement.",
"Especially for the Amazon Review dataset, the data augmentation and our mixup strategy independently bring the improvement of the accuracy, because the subspaces targeted by the data augmentation and OoMMix do not overlap with each other.",
"That is, OoMMix finds out out-of-manifold embedding, which cannot be generated from the actual sentences, whereas the data augmentations (i.e., EDA, BT, and SSMBA) focus on augmenting the sentences whose embeddings are located inside the manifold.",
"Therefore, jointly applying the two techniques allows to tightly regularize the contextual embedding space, including both in-manifold and out-of-manifold.",
"Moreover, OoMMix has additional advantages over the data augmentations.",
"First, OoMMix is still effective in the case that large training data are available.",
"The data augmentation techniques result in less performance gain as the size of training data becomes larger, because there is less room for enhancing the manifold constructed by enough training data.",
"Second, the class label of the augmented sentences given by the data augmentation techniques (i.e., the same label with the original sentences) can be noisy for sentence classification, compared to the label of out-of-manifold embeddings generated by OoMMix.",
"This is because the assumption that the augmented sentences have the Figure 4: Performance changes with respect to different layers for the generator and discriminator.",
"same label with their original sentences is not always valid.",
"On the contrary, there do not exist actual (or ground truth) labels for out-of-manifold embeddings, as they do not correspond to actual sentences; this allows our mixup label to be less noisy for text classification.",
"We also investigate how the manifold discriminator affects the training of the embedding generator.",
"Precisely, we compare the distributions of mixing coefficients, obtained from two different generators; they are optimized with/without the manifold discriminator, respectively (Figure 3 Upper/Lower).",
"We partition the training process into two phases (i.e., the first and second half), and plot a histogram of the mixing coefficients in each phase.",
"The embedding generator without the discriminator gradually moves the distribution of the mixing coefficients toward zero, which means that the generated embedding becomes similar to the actual embedding.",
"Therefore, training the generator without the discriminator fails to produce novel embeddings, which cannot be seen in the original data.",
"In contrast, in the case of the generator with the discriminator, most of the mixing coefficients are located around 0.5, which implies that the generator produces the embeddings which are far from both the two actual embeddings to some extent.",
"We also observe that the average objective value for our discrimination task (Equation (2)) is 0.208 for the last 20 mini-batches; this is much lower than 0.693 at the initial point.",
"It indicates that the generated embeddings are quite clearly distinguished from the ones computed from actual sentences.",
"We further examine the effect of the location of our generator and discriminator (i.e., m g and m d ) on the final classification performance.",
"Figure 4 illustrates the changes of the classification accuracy with respect to the target contextual embedding layers the modules are attached to.",
"To sum up, BERT achieves high accuracy when the generator is attached to the contextual embedding lower than the sixth layer while the discriminator works for a higher layer.",
"It makes our out-of-manifold regularization affect more parameters in overall layers, which eventually leads to higher accuracy.",
"On the other hand, in case that we use both the generator and discriminator in the same layer, the gradient of the loss for manifold discrimination cannot guide the generator to output out-of-manifold embeddings, and as a result, the generator is not able to generate useful embeddings.",
"Finally, we visualize our contextual embedding space to qualitatively show that OoMMix discovers and leverages the space outside the manifold for regularization.",
"We apply Isomap (Tenenbaum et al., 2000), a neighborhood-based kernel PCA for dimensionality reduction, to both the actual sentence embeddings and generated embeddings.",
"We simply use the Isomap function provided by scikit-learn, and set the number of the neighbors to 15.",
"Figure 5 shows the yz-plane and xy-plane of our embedding space, whose dimensionality is reduced to 3 (i.e., x, y, and z).",
"We use different colors to represent the class of the actual embeddings as well as the predicted class of the generated embeddings.",
"classification task.",
"At the same time, the generated embeddings are located in the different region from the space enclosing most of the actual embeddings.",
"In the second plot, we colorize the generated embeddings with their predicted class.",
"The predicted class of out-of-manifold embeddings are well-aligned with that of the actual embeddings, which means that OoMMix imposes the classification capability on the out-of-manifold region as well.",
"We change the camera view to xy-plane and repeat the same process to show the alignment of class distribution clearly (in the third/fourth plots).",
"By imposing the classification capability on the extended dimension/subspace (i.e., out-of-manifold), OoMMix significantly improves the classification performance for the original dimension/subspace (i.e., in-manifold).",
"This paper proposes OoMMix to regularize out-of-manifold in the contextual embedding space.",
"Our main motivation is that the embeddings computed from the words only utilize a low-dimensional manifold while a high-dimensional space is available for the model capacity.",
"Therefore, OoMMix discovers the embeddings that are useful for the target task but cannot be accessed through the words.",
"With the help of the manifold discriminator, the embedding generator successfully produces out-of-manifold embeddings with their labels.",
"We demonstrate the effectiveness of OoMMix and its compatibility with the existing data augmentation techniques.",
"Our approach is a bit counter-intuitive in that the embeddings that cannot be accessed through the actual words are helpful for the target model.",
"As the discrete features from texts (i.e., words), embedded into the high-dimensional continuous space where their contexts are encoded, cannot cover the whole space, the uncovered space also should be carefully considered for any target tasks.",
"In this sense, we need to regularize the out-of-manifold to prevent anomalous behavior in that space, which is especially important for a large pre-trained contextual embedding space.",
"This work was supported by the NRF grant funded by the MSIT (No. 2020R1A2B5B03097210), and the IITP grant funded by the MSIT (No. 2018-0-00584, 2019-0-01906)."
] | [
"abstain",
"objective",
"result",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"other",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"method",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"method",
"other"
] |
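The manually-mixed baselines discussed in the row above (TMix, mixup-transformer) all reduce to a linear interpolation of hidden embeddings and labels with a beta-sampled mixing coefficient. The following is a minimal NumPy sketch of that interpolation only — the function name, the `alpha` default, and the max-trick that keeps the mix closer to the first example are illustrative assumptions, not the papers' actual implementations:

```python
import numpy as np

def tmix(h1, h2, y1, y2, alpha=0.1, rng=np.random):
    """Linearly interpolate two contextual embeddings and their labels.

    h1, h2: word-level embeddings of shape (seq_len, dim);
    y1, y2: label distributions (e.g., one-hot vectors).
    """
    lam = float(rng.beta(alpha, alpha))   # mixing coefficient ~ Beta(alpha, alpha)
    lam = max(lam, 1.0 - lam)             # keep the mix closer to the first example
    h_mix = lam * h1 + (1.0 - lam) * h2   # interpolated hidden embedding
    y_mix = lam * y1 + (1.0 - lam) * y2   # interpolated (soft) label
    return h_mix, y_mix
```

OoMMix differs from this sketch precisely in that the coefficient is not sampled manually but produced by a learned generator optimized against a manifold discriminator.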
[
"Building equitable and inclusive NLP technologies demands consideration of whether and how social attitudes are represented in ML models.",
"In particular, representations encoded in models often inadvertently perpetuate undesirable social biases from the data on which they are trained.",
"In this paper, we present evidence of such undesirable biases towards mentions of disability in two different English language models: toxicity prediction and sentiment analysis.",
"Next, we demonstrate that the neural embeddings that are the critical first step in most NLP pipelines similarly contain undesirable biases towards mentions of disability.",
"We end by highlighting topical biases in the discourse about disability which may contribute to the observed model biases; for instance, gun violence, homelessness, and drug addiction are over-represented in texts discussing mental illness.",
"This paper focuses on the representation of persons with disabilities through the lens of technology.",
"Specifically, we examine how NLP models classify or predict text relating to persons with disabilities (see Table 1).",
"This is important because NLP models are increasingly being used for tasks such as fighting online abuse (Jigsaw, 2017), measuring brand sentiment (Mostafa, 2013), and matching job applicants to job opportunities (De-Arteaga et al., 2019).",
"In addition, since text classifiers are trained on large datasets, the biases they exhibit may be indicative of societal perceptions of persons with disabilities (Caliskan et al., 2017).",
"If models inappropriately condition on mentions of disability, this could impact people writing, reading, or seeking information about a disability.",
"Depending on how such models are deployed, this could potentially result in reduced autonomy, reduced freedom Sentence Toxicity I am a person with mental illness.",
"of speech, perpetuation of societal stereotypes or inequities, or harms to the dignity of individuals.",
"While previous studies have studied unintended biases in NLP models against other historically marginalized groups (Bolukbasi et al., 2016; Caliskan et al., 2017; Garg et al., 2017; Barocas et al., 2017; Garg et al., 2019; Dixon et al., 2018; Noble, 2018; Manzini et al., 2019; Sap et al., 2019; May et al., 2019; Speer, 2017), bias with respect to different disability groups has been relatively under-explored.",
"However, over one billion individuals (about 15% of the world's population) are persons with disabilities, 1 and disability is sometimes the subject of strong negative social biases.",
"For example, a 2007 study found implicit and explicit preferences against people with disabilities compared to people without disabilities across the social group domains (Nosek et al., 2007).",
"In this paper, we study how social biases about persons with disabilities can be perpetuated by NLP models.",
"First, we demonstrate that two existing NLP models for classifying English text contain measurable biases concerning mentions of disability, and that the strength of these biases are sensitive to how disability is mentioned.",
"Second, we show that language models that feed NLP systems for downstream application similarly contain measur-1 https://www.worldbank.org/en/topic/disability 5492 able biases around disability.",
"Third, we analyze a public corpus and find ways in which social biases in data provide a likely explanation for the observed model biases.",
"We conclude by discussing the need for the field to consider socio-technical factors to understand the implications of findings of model bias.",
"Our analyses in this paper use a set of 56 linguistic expressions (in English) for referring to people with various types of disabilities, e.g. a deaf person .",
"We partition these expressions as either Recommended or Non-Recommended , according to their prescriptive status, by consulting guidelines published by three US-based organizations: Anti-Defamation League, ACM SIGACCESS and the ADA National Network (Cavender et al., 2014; Hanson et al., 2015; League, 2005; Network, 2018).",
"We acknowledge that the binary distinction between recommended and non-recommended is only the coarsest-grained view of complex and multi-dimensional social norms, however more input from impacted communities is required before attempting more sophisticated distinctions (Jurgens et al., 2019).",
"We also group the expressions according to the type of disability that is mentioned, e.g. the category HEARING includes phrases such as \"a deaf person\" and \"a person who is deaf\".",
"Table 2 shows a few example terms we use.",
"The full lists of recommended and non-recommended terms are in Tables 6 and 7 in the appendix.",
"Following (Garg et al., 2019; Prabhakaran et al., 2019), we use the notion of perturbation , whereby the phrases for referring to people with disabilities, described above, are all inserted into the same slots in sentence templates.",
"We start by first retrieving a set of naturally-occurring sentences that contain the pronouns he or she .",
"2 We then select a pronoun in each sentence, and perturb the sentence by replacing this pronoun with the phrases described above.",
"Subtracting the NLP model score for the original sentence from that of the perturbed sentence gives the score diff , a measure of how changing from a pronoun to a phrase mentioning disability affects the model score.",
"We perform this method on a set of 1000 sentences extracted at random from the Reddit sub-2 Future work will see how to include non-binary pronouns.",
"corpus of (Voigt et al., 2018).",
"Figure 1a shows the results for toxicity prediction (Jigsaw, 2017), which outputs a score [ 0 , 1 ] , with higher scores indicating more toxicity.",
"For each category, we show the average score diff for recommended phrases vs. non-recommended phrases along with the associated error bars.",
"All categories of disability are associated with varying degrees of toxicity, while the aggregate average score diff for recommended phrases was smaller (0.007) than that for non-recommended phrases (0.057).",
"Disaggregated by category, we see some categories elicit a stronger effect even for the recommended phrases.",
"Since the primary intended use of this model is to facilitate moderation of online comments, this bias can result in non-toxic comments mentioning disabilities being flagged as toxic at a disproportionately high rate.",
"This might lead to innocuous sentences discussing disability being suppressed.",
"Figure 1b shows the results for a sentiment analysis model (Google, 2018) that outputs scores [ 1 , + 1 ] ; higher score means positive sentiment.",
"Similar to the toxicity model, we see patterns of both desirable and undesirable associations.",
"Neural text embedding models (Mikolov et al., 2013) are critical first steps in today's NLP pipelines.",
"These models learn vector representations of words, phrases, or sentences, such that semantic relationships between words are encoded in the geometric relationship between vectors.",
"Text embedding models capture some of the complexities and nuances of human language.",
"However, these models may also encode undesirable correlations in the data that reflect harmful social biases (Bolukbasi et al., 2016; May et al., 2019; Garg et al., 2017).",
"Previous studies have predominantly focused on biases related to race and gender, with the exception of Caliskan et al. (2017), who considered physical and mental illness.",
"Biases with respect to 5493 CEREBRAL_PALSY -OiP.ONIC_ILL.NESS COGNITIVE -OOWNS_SYNDROME EPILEPSY -f-EARINGr-ENTAL_HEALTH -Ji:IBILITYPHYSICAL -SHOP.T_STATURE SIGHT-UNSPECIFIED WITHOUT -0.05 ' ' ' ., ' ' '",
"broader disability groups remain under-explored.",
"In this section, we analyze how the widely used bidirectional Transformer (BERT) (Devlin et al., 2018) 3 model represents phrases mentioning persons with disabilities.",
"Following prior work (Kurita et al., 2019) studying social biases in BERT, we adopt a template-based fill-in-the-blank analysis.",
"Given a query sentence with a missing word, BERT predicts a ranked list of words to fill in the blank.",
"We construct a set of simple hand-crafted templates <phrase> is .' , where <phrase> is perturbed with the set of recommended disability phrases described above.",
"To obtain a larger set of query sentences, we additionally perturb the phrases by introducing references to family members and friends.",
"For example, in addition to a person', we include my sibling', my parent', my friend', etc.",
"We then study how the top ranked 4 words predicted by BERT change when different disability phrases are used in the query sentence.",
"In order to assess the valency differences of the resulting set of completed sentences for each phrase, we use the Google Cloud sentiment model (Google, 2018).",
"For each BERT-predicted word w , we obtain the sentiment for the sentence A person is <w>' .",
"We use the neutral a person instead of the original phrase, so that we are assessing only the differences in sentiment scores for the words predicted by BERT and not the biases associated 3 We use the 1024-dimensional large' uncased version, available at https://github.com/google-research/ .",
"with disability phrases themselves in the sentiment model (demonstrated in Section 3).",
"Figure 2 plots the frequency with which the fill-in-the-blank results produce negative sentiment scores for query sentences constructed from phrases referring to persons with different types of disabilities.",
"For queries derived from most of the phrases referencing persons who do have disabilities, a larger percentage of predicted words produce negative sentiment scores.",
"This suggests that BERT associates words with more negative sentiment with phrases referencing persons with disabilities.",
"Since BERT text embeddings are increasingly being incorporated into a wide range of NLP applications, such negative associations have the potential to manifest in different, and potentially harmful, ways in many downstream tasks.",
"NLP models such as the ones discussed above are trained on large textual corpora, which are analyzed to build meaning representations for words based on word co-occurrence metrics, drawing on the idea that you shall know a word by the company it keeps (Firth, 1957).",
"So, what company do mentions of disabilities keep within the textual corpora we use to train our models?",
"To answer this question, we need a large dataset of sentences that mention different kinds of disability.",
"We use the dataset of online comments released as part of the Jigsaw Unintended Bias in Toxicity Classification challenge (Borkan et al., 2019; Jigsaw, 2019), where a subset of 405K comments are labelled for mentions of disabilities, grouped into four types: physical disability, intellectual or learning disability, psychiatric or mental illness , and other disability .",
"We focus here only on psychiatric or mental illness , since others have fewer than 100 instances in the dataset.",
"Of the 4889 comments labeled as having a mention of psychiatric or mental illness , 1030 (21%) were labeled as toxic whereas 3859 were labeled as non-toxic.",
"5 Our goal is to find words and phrases that are statistically more likely to appear in comments that mention psychiatric or mental illness compared to those that do not.",
"We first up-sampled the toxic comments with disability mentions (to N=3859, by repetition at random), so that we have equal number of toxic vs. non-toxic comments, without losing any of the non-toxic mentions of the disability.",
"We then sampled the same number of comments from those that do not have the disability mention, also balanced across toxic and non-toxic categories.",
"In total, this gave us 15436 (=4*3859) comments.",
"Using this 4-way balanced dataset, we calculated the log-odds ratio metric (Monroe et al., 2008) for all unigrams and bi-grams (no stopword removal) that measure how over-represented they are in the group of comments that have a disability mention, while controlling for co-occurrences due to chance.",
"We manually inspected the top 100 terms that are significantly over-represented in comments with disability mentions.",
"Most of them fall into one of the following five categories: 6 CONDITION : terms that describe the disability TREATMENT : terms that refer to treatments or care for persons with the disability INFRASTRUCTURE : terms that refer to infrastructure that supports people with the disability LINGUISTIC : phrases that are linguistically associated when speaking about groups of people SOCIAL : terms that refer to social associations Table 3 show the top 10 terms in each of these categories, along with the log odds ratio score that denote the strength of association.",
"As expected, the CONDITION phrases have the highest association.",
"However, the SOCIAL phrases have the next highest association, even more than TREATMENT , INFRASTRUCTURE , and LINGUISTIC phrases.",
"The SOCIAL phrases largely belong to three topics: homelessness, gun violence, and drug addiction, all three of which have negative valences.",
"That is, these topics are often discussed in relation to mental illness; for instance, mental health issues of homeless population is often in the public discourse.",
"While these associations are perhaps not surprising, it is important to note that these associations with topics of arguably negative valence significantly shape the 5 Note that this is a high proportion compared to the per-6 We omit a small number of phrases that do not belong to centage of toxic comments (8%) in the overall dataset one of these, for lack of space.",
"We have so far worked in a purely technical framing of model biasesi.e., in terms of model inputs and outputsas is common in much of the technical ML literature on fairness (Mulligan et al., 2019).",
"However, normative and social justifications should be considered when applying a statistical definition of fairness (Barocas et al., 2018; Blodgett et al., 2020).",
"Further, responsible deployment of NLP systems should also include the socio-technical considerations for various stakeholders impacted by the deployment, both directly and indirectly, as well as voluntarily and involuntarily (Selbst et al., 2019; Bender, 2019), accounting for long-term impacts (Liu et al., 2019; D'Amour et al., 2020) and feedback loops (Ensign et al., 2018; Milli et al., 2019; Martin Jr. et al., 2020).",
"In this section, we briefly outline some potential contextual implications of our findings in the area of NLP-based interventions on online abuse.",
"Following Dwork et al. (2012) and Cao and Daum III (2020), we use three hypothetical scenarios to illustrate some key implications.",
"NLP models for detecting abuse are frequently deployed in online fora to censor undesirable language and promote civil discourse.",
"Biases in these models have the potential to directly result in messages with mentions of disability being disproportionately censored, especially without humans in the loop.",
"Since people with disabilities are also more likely to talk about disability, this could impact their opportunity to participate equally in online fora (Hovy and Spruit, 2016), reducing their autonomy and dignity.",
"Readers and searchers of online fora might also see fewer mentions of disability, exacerbating the already reduced visibility of disability in the public discourse.",
"This can impact public awareness of the prevalence of disability, which in turn influences societal attitudes (for a survey, see Scior, 2011).",
"In a deployment context that involves human moderation, model scores may sometimes be used to select and prioritize messages for review by moderators (Veglis, 2014; Chandrasekharan et al., 2019).",
"Are messages with higher model scores reviewed first?",
"Or those with lower scores?",
"Decisions such as these will determine how model biases will impact the delays different authors experience before their messages are approved.",
"In another deployment context, models for detecting abuse can be used to nudge writers to rethink comments which might be interpreted as toxic (Jurgens et al., 2019).",
"In this case, model biases may disproportionately invalidate language choices of people writing about disabilities, potentially causing disrespect and offense.",
"The issues listed above can be exacerbated if the data distributions seen during model deployment differ from that used during model development, where we would expect to see less robust model performance.",
"Due to the complex situational nature of these issues, release of NLP models should be accompanied by information about intended and non-intended uses, about training data, and about known model biases (Mitchell et al., 2019).",
"Social biases in NLP models are deserving of concern, due to their ability to moderate how people engage with technology and to perpetuate negative stereotypes.",
"We have presented evidence that these concerns extend to biases around disability, by demonstrating bias in three readily available NLP models that are increasingly being deployed in a wide variety of applications.",
"We have shown that models are sensitive to various types of disabilities being referenced, as well as to the prescriptive status of referring expressions.",
"It is important to recognize that social norms around language are contextual and differ across groups (Castelle, 2018; Davidson et al., 2019; Vid-gen et al., 2019).",
"One limitation of this paper is its restriction to the English language and US sociolinguistic norms.",
"Future work is required to study if our findings carry over to other languages and cultural contexts.",
"Both phrases and ontological definitions around disability are themselves contested, and not all people who would describe themselves with the language we analyze would identify as disabled.",
"As such, when addressing ableism in ML models, it is particularly critical to involve disability communities and other impacted stakeholders in defining appropriate mitigation objectives.",
"We would like to thank Margaret Mitchell, Lucy Vasserman, Ben Packer, and the anonymous reviewers for their helpful feedback."
] | [
"abstain",
"abstain",
"method",
"objective",
"objective",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"objective",
"other",
"result",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"abstain",
"abstain",
"method",
"method",
"abstain",
"other"
] |
[
"A grand goal in AI is to build a robot that can accurately navigate based on natural language instructions, which requires the agent to perceive the scene, understand and ground language, and act in the real-world environment.",
"One key challenge here is to learn to navigate in new environments that are unseen during training.",
"Most of the existing approaches perform dramatically worse in unseen environments as compared to seen ones.",
"In this paper, we present a generalizable navigational agent.",
"Our agent is trained in two stages.",
"The first stage is training via mixed imitation and reinforcement learning, combining the benefits from both off-policy and on-policy optimization.",
"The second stage is fine-tuning via newly-introduced 'unseen' triplets (environment, path, instruction).",
"To generate these unseen triplets, we propose a simple but effective 'environmental dropout' method to mimic unseen environments, which overcomes the problem of limited seen environment variability.",
"Next, we apply semi-supervised learning (via back-translation) on these dropped-out environments to generate new paths and instructions.",
"Empirically, we show that our agent is substantially better at generalizability when fine-tuned with these triplets, outperforming the state-of-art approaches by a large margin on the private unseen test set of the Room-to-Room task, and achieving the top rank on the leaderboard.",
"One of the important goals in AI is to develop a robot/agent that can understand instructions from humans and perform actions in complex environments.",
"In order to do so, such a robot is required to perceive the surrounding scene, understand our spoken language, and act in a real-world house (our code, data, and models are publicly available at: https://github.com/airsplay/R2R-EnvDrop).",
"Recent years have witnessed various types of embodied action based NLP tasks being proposed (Correa et al., 2010; Walters et al., 2007; Hayashi et al., 2007; Zhu et al., 2017b; Das et al., 2018; Anderson et al., 2018b).",
"In this paper, we address the task of instruction-guided navigation, where the agent seeks a route from a start viewpoint to an end viewpoint based on a given natural language instruction in a given environment, as shown in Fig. 1. The navigation simulator we use is the recent Room-to-Room (R2R) simulator (Anderson et al., 2018b), which uses real images from the Matterport3D (Chang et al., 2017) indoor home environments and collects complex navigable human-spoken instructions inside the environments, hence connecting problems in vision, language, and robotics.",
"The instruction in Fig. 1 is 'Walk past the piano through an archway directly in front. Go through the hallway when you see the window door. Turn right to the hanged pictures...'.",
"At each position (viewpoint), the agent perceives panoramic views (a set of surrounding images) and selects one of them to step into.",
"In this challenging task, the agent is required to understand each piece of the instruction and localize key views ( pi-ano , hallway , door , etc.) for making actions at each time step.",
"Another crucial challenge is to generalize the agent's navigation understanding capability to unseen test room environments, considering that the R2R task has substantially different unseen (test) rooms as compared to seen (trained) ones.",
"Such generalization ability is important for developing a practical navigational robot that can operate in the wild.",
"Recent works (Fried et al., 2018; Wang et al., 2019, 2018a; Ma et al., 2019) have shown promising progress on this R2R task, based on speaker-follower, reinforcement learning, imitation learning, cross-modal, and look-ahead models.",
"However, the primary issue in this task is that most models perform substantially worse in unseen environments than in seen ones, due to the lack of generalizability.",
"Hence, in our paper, we focus on improving the agent's generalizability in unseen environments.",
"For this, we propose a two-stage training approach.",
"The first stage is training the agent via mixed imitation learning (IL) and reinforcement learning (RL) which combines off-policy and on-policy optimization; this significantly outperforms using IL or RL alone.",
"The second, more important stage is semi-supervised learning with generalization-focused 'environmental dropout'.",
"Here, the model is fine-tuned using additional training data generated via back-translation.",
"This is usually done based on a neural speaker model (Fried et al., 2018) that synthesizes new instructions for additional routes in the existing environments.",
"However, we found that the bottleneck for this semi-supervised learning method is the limited variability of given (seen) environments.",
"Therefore, to overcome this, we propose to generate novel and diverse environments via a simple but effective 'environmental dropout' method based on view- and viewpoint-consistent masking of the visual features.",
"Next, the new navigational routes are collected from these new environments, and lastly the new instructions are generated by a neural speaker on these routes, and these triplets are employed to fine-tune the model training.",
"Overall, our fine-tuned model based on back-translation with environmental dropout substantially outperforms the previous state-of-the-art models, and achieves the most recent rank-1 on the Vision and Language Navigation (VLN) R2R challenge leaderboard's private test data, outperforming all other entries in success rate under all evaluation setups (single run, beam search, and pre-exploration).",
"2 We also present detailed ablation and analysis studies to explain the effectiveness of our generalization method.",
"Embodied Vision-and-Language Recent years have witnessed a resurgence of active vision.",
"For example, Levine et al. (2016) used an end-to-end learned model to predict robotic actions from raw pixel data, Gupta et al. (2017) learned to navigate via mapping and planning, Sadeghi and Levine (2017) trained an agent to fly in simulation and demonstrated its performance in the real world, and Gandhi et al. (2017) trained a self-supervised agent to fly from examples of drones crashing.",
"Meanwhile, in the intersection of active perception and language understanding, several tasks have been proposed, including instruction-based navigation (Chaplot et al., 2018; Anderson et al., 2018b), target-driven navigation (Zhu et al., 2017b; Gupta et al., 2017), embodied question answering (Das et al., 2018), interactive question answering (Gordon et al., 2018), and task planning (Zhu et al., 2017a).",
"While these tasks are driven by different goals, they all require agents that can perceive their surroundings, understand the goal (either presented visually or in language instructions), and act in a virtual environment.",
"Instruction-based Navigation For the instruction-based navigation task, an agent is required to navigate from a start viewpoint to an end viewpoint according to a given instruction in an environment.",
"This task has been studied by many works (Tellex et al., 2011; Chen and Mooney, 2011; Artzi and Zettlemoyer, 2013; Andreas and Klein, 2015; Mei et al., 2016; Misra et al., 2017) in recent years.",
"Among them, (Anderson et al., 2018b) differs from the others as it introduced a photo-realistic dataset Room-to-Room (R2R), where all images are real ones taken by Matterport3D (Chang et al., 2017) and the instructions are also natural.",
"In R2R environments, the agent's ability to perceive real-world images and understand natural language becomes even more crucial (challenge leaderboard: https://evalai.cloudcv.org/web/challenges/challenge-page/97/overview).",
"To solve this challenging task, many approaches (Fried et al., 2018; Wang et al., 2018a, 2019; Ma et al., 2019) have been proposed and have shown promise.",
"The most relevant work to us is Fried et al. (2018), which proposed to use a speaker to synthesize new instructions and implement pragmatic reasoning.",
"However, we observe there is some performance gap between seen and unseen environments.",
"In this paper, we focus on improving the agent's generalizability in unseen environments.",
"Back-translation Back translation (Sennrich et al., 2016), a popular semi-supervised learning method, has been well studied in neural machine translation (Hoang et al., 2018; Wang et al., 2018b; Edunov et al., 2018; Prabhumoye et al., 2018).",
"Given paired data of source and target sentences, the model first learns two translators: a forward translator from source to target and a backward translator from target to source.",
"Next, it generates more source sentences by applying the backward translator to an external target-language corpus.",
"The generated pairs are then incorporated into the training data for fine-tuning the forward translator, which has been shown to improve translation performance.",
"Recently, this approach (also known as data augmentation) was applied to the task of instruction-based navigation (Fried et al., 2018), where the source and target sentences are replaced with instructions and routes.",
"Navigation in the Room-to-Room task (Anderson et al., 2018b) requires an agent to find a route R (a sequence of viewpoints) from the start viewpoint S to the target viewpoint T according to the given instruction I .",
"The agent is put in a photo-realistic environment E .",
"At each time step t , the agent's observation consists of a panoramic view and navigable viewpoints.",
"The panoramic view $o_t$ is discretized into 36 single views $\{o_{t,i}\}_{i=1}^{36}$.",
"Each single view $o_{t,i}$ is an RGB image $v_{t,i}$ accompanied by its orientation $(\theta_{t,i}, \phi_{t,i})$, where $\theta_{t,i}$ and $\phi_{t,i}$ are the angles of heading and elevation, respectively.",
"The navigable viewpoints $\{l_{t,k}\}_{k=1}^{N_t}$ are the $N_t$ reachable and visible locations from the current viewpoint.",
"Each navigable viewpoint $l_{t,k}$ is represented by the orientation $(\theta_{t,k}, \phi_{t,k})$ from the current viewpoint to the next viewpoint.",
"The agent needs to select the moving action $a_t$ from the list of navigable viewpoints $\{l_{t,k}\}$ according to the given instruction $I$, the history and current panoramic views $\{o_\tau\}_{\tau=1}^{t}$, and the history actions $\{a_\tau\}_{\tau=1}^{t-1}$.",
"Following Fried et al. (2018), we concatenate the ResNet (He et al., 2016) feature of the RGB image and the orientation as the view feature $f_{t,i}$: $f_{t,i} = [\mathrm{ResNet}(v_{t,i}); (\cos\theta_{t,i}, \sin\theta_{t,i}, \cos\phi_{t,i}, \sin\phi_{t,i})]$ (1). The navigable viewpoint feature $g_{t,k}$ is extracted in the same way.",
"For our base instruction-to-navigation translation agent, we implement an encoder-decoder model similar to Fried et al. (2018).",
"The encoder is a bidirectional LSTM-RNN with an embedding layer: $\hat w_j = \mathrm{embedding}(w_j)$ (2), $u_1, u_2, \dots, u_L = \mathrm{BiLSTM}(\hat w_1, \dots, \hat w_L)$ (3), where $w_j$ is the $j$-th word in an instruction of length $L$.",
"The decoder of the agent is an attentive LSTM-RNN.",
"At each decoding step $t$, the agent first attends to the view features $\{f_{t,i}\}$ to compute the attentive visual feature $\hat f_t$: $\alpha_{t,i} = \mathrm{softmax}_i(f_{t,i}^\top W_F\, \tilde h_{t-1})$ (4), $\hat f_t = \sum_i \alpha_{t,i} f_{t,i}$ (5). The input of the decoder is the concatenation of the attentive visual feature $\hat f_t$ and the embedding of the previous action $a_{t-1}$.",
"The hidden output $h_t$ of the LSTM is combined with the attentive instruction feature $\hat u_t$ to form the instruction-aware hidden output $\tilde h_t$.",
"The probability of moving to the $k$-th navigable viewpoint, $p_t(a_{t,k})$, is calculated as the softmax of the alignment between the navigable viewpoint feature $g_{t,k}$ and the instruction-aware hidden output $\tilde h_t$.",
"$h_t = \mathrm{LSTM}([\hat f_t; a_{t-1}], \tilde h_{t-1})$ (6); $\beta_{t,j} = \mathrm{softmax}_j(u_j^\top W_U\, h_t)$ (7); $\hat u_t = \sum_j \beta_{t,j} u_j$ (8); $\tilde h_t = \tanh(W[\hat u_t; h_t])$ (9); $p_t(a_{t,k}) = \mathrm{softmax}_k(g_{t,k}^\top W_G\, \tilde h_t)$ (10). (Fig. 2 illustrates back translation with environmental dropout and the mixture of IL+RL training.)",
"Different from Fried et al. (2018), we take the instruction-aware hidden vector $\tilde h_{t-1}$ as the hidden input of the decoder instead of $h_{t-1}$.",
"Thus, the information about which parts of the instruction have been attended to is accessible to the agent.",
"We discuss our IL+RL supervised learning method in this section.",
"Imitation Learning (IL) In IL, an agent learns to imitate the behavior of a teacher.",
"The teacher demonstrates a teacher action $a_t^*$ at each time step $t$.",
"In the task of navigation, the teacher action $a_t^*$ selects the next navigable viewpoint on the shortest route from the current viewpoint to the target $T$.",
"The off-policy agent learns from this weak supervision by minimizing the negative log probability of the teacher's action $a_t^*$.",
"The loss of IL is as follows: $\mathcal{L}^{IL} = \sum_t \mathcal{L}^{IL}_t = -\sum_t \log p_t(a_t^*)$ (11). For exploration, we follow the IL method of Behavioral Cloning (Bojarski et al., 2016), where the agent moves to the viewpoint given by the teacher's action $a_t^*$ at time step $t$.",
"Reinforcement Learning (RL) Although the route induced by the teacher's actions in IL is the shortest, this selected route is not guaranteed to satisfy the instruction.",
"Thus, the agent using IL is biased towards the teacher's actions instead of finding the correct route indicated by the instruction.",
"(As opposed to the semi-supervised methods in Sec. 3.4, in this section we view both imitation learning and reinforcement learning as supervised learning.)",
"According to Poole and Mackworth (2010), an off-policy learner learns the agent policy independently of the agent's navigational actions.",
"An on-policy learner learns the policy from the agent's behavior, including the exploration steps.",
"To overcome these misleading actions, the on-policy reinforcement learning method Advantage Actor-Critic (Mnih et al., 2016) is applied, where the agent takes an action sampled from the distribution $\{p_t(a_{t,k})\}$ and learns from rewards.",
"If the agent stops within 3 m around the target viewpoint T , a positive reward +3 is assigned at the final step.",
"Otherwise, a negative reward $-3$ is assigned.",
"We also apply reward shaping (Wu et al., 2018): the direct reward at each non-stop step t is the change of the distance to the target viewpoint.",
"IL+RL Mixture To take advantage of both off-policy and on-policy learners, we use a method that mixes IL and RL.",
"The IL and RL agents share weights, take actions separately, and navigate two independent routes (see Fig. 2).",
"The mixed loss is the weighted sum of $\mathcal{L}^{IL}$ and $\mathcal{L}^{RL}$: $\mathcal{L}^{MIX} = \mathcal{L}^{RL} + \lambda^{IL}\,\mathcal{L}^{IL}$ (12). IL can be viewed as a language model on action sequences, which regularizes the RL training.",
"Semi-Supervised Learning: Back Translation with Environmental Dropout. Back Translation: Suppose the primary task is to learn the mapping $X \rightarrow Y$ with paired data $\{(X, Y)\}$ and unpaired data $\{Y'\}$.",
"In this case, the back translation method first trains a forward model $P_{X \rightarrow Y}$ and a backward model $P_{Y \rightarrow X}$ using the paired data $\{(X, Y)\}$.",
"Next, it generates additional data $X'$ from the unpaired $Y'$ using the backward model $P_{Y \rightarrow X}$.",
"(This approach is similar to the ML+RL method in Paulus et al. (2018) for summarization. Recently, Wang et al. (2018a) combined purely supervised learning and RL training; however, they use a different algorithm, MIXER (Ranzato et al., 2015), which computes cross-entropy (XE) losses for the first k actions and RL losses for the remaining.)",
"Finally, the pairs $(X', Y')$ are used as additional training data to further fine-tune the forward model $P_{X \rightarrow Y}$ (also known as 'data augmentation').",
"Back translation was introduced to the task of navigation in Fried et al. (2018).",
"The forward model is a navigational agent $P_{E,I \rightarrow R}$ (Sec. 3.2), which navigates inside an environment $E$, trying to find the correct route $R$ according to the given instruction $I$.",
"The backward model is a speaker $P_{E,R \rightarrow I}$, which generates an instruction $I$ from a given route $R$ inside an environment $E$.",
"Our speaker model (details in Sec. 3.4.3) is an enhanced version of Fried et al. (2018), where we use a stacked bidirectional LSTM-RNN encoder with attention flow.",
"For back translation, the Room-to-Room dataset labels around 7% of the routes $\{R\}$ in the training environments, so the rest of the routes $\{R'\}$ are unlabeled.",
"(The number of all possible routes, i.e., shortest paths, in the 60 existing training environments is 190K; the Room-to-Room dataset labeled around 14K of them with one navigable instruction each, so the amount of labeled routes is around 7% of 190K.)",
"Hence, we generate additional instructions $I'$ using $P_{E,R \rightarrow I}(E, R')$ to obtain the new triplets $(E, R', I')$.",
"The agent is then fine-tuned with this new data using the IL+RL method described in Sec. 3.3.",
"However, note that the environment $E$ in the new triplet $(E, R', I')$ for semi-supervised learning is still selected from the seen training environments.",
"We demonstrate that the limited amount of environments { E } is actually the bottleneck of the agent performance in Sec. 7.1 and Sec. 7.2.",
"Thus, we introduce our environmental dropout method to mimic a new environment $E'$, as described next in Sec. 3.4.2.",
"Failure of Feature Dropout Different from dropout on neurons to regularize neural networks, we drop raw feature dimensions (see Fig. 4a) to mimic the removal of random objects from an RGB image (see Fig. 3a).",
"This traditional feature dropout (with dropout rate $p$) is implemented as an element-wise multiplication of the feature $f$ and a dropout mask $\xi^f$.",
"Each element $\xi^f_e$ of the dropout mask is an independent sample of a Bernoulli random variable scaled by $1/(1-p)$: $\xi^f_e \sim \frac{1}{1-p}\,\mathrm{Ber}(1-p)$.",
"And for different features, the distributions of dropout masks are independent as well.",
"Because of this independence among dropout masks, traditional feature dropout fails to augment the existing environments: the 'removal' is inconsistent across different views at the same viewpoint, and across different viewpoints.",
"To illustrate this idea, we take the four RGB views in Fig. 3a as an example, where the chairs are randomly dropped from the views.",
"The removal of the left chair (marked with a red polygon) from view $o_{t,2}$ is inconsistent because the chair still appears in view $o_{t,1}$.",
"Thus, the speaker could still refer to it and the agent is aware of the existence of the chair.",
"Moreover, another chair (marked with a yellow polygon) is completely removed from the viewpoint observation $o_t$, but the views at the next viewpoint $o_{t+1}$ provide conflicting information, which would confuse the speaker and the agent.",
"Therefore, in order to make generated environments consistent, we propose our environmental dropout method, described next.",
"The view feature $f'_{t,i}$ observed from the new environment $E'$ is calculated as an element-wise multiplication of the original feature $f_{t,i}$ and the environmental dropout mask $\xi^E$ (see Fig. 4b):",
"$f'_{t,i} = f_{t,i} \odot \xi^E$, where each element $\xi^E_e \sim \frac{1}{1-p}\,\mathrm{Ber}(1-p)$ (17)",
"To maintain the spatial structure of viewpoints, only the image feature $\mathrm{ResNet}(v_{t,i})$ is dropped, while the orientation feature $(\cos\theta_{t,i}, \sin\theta_{t,i}, \cos\phi_{t,i}, \sin\phi_{t,i})$ is kept fixed.",
"As illustrated in Fig. 3b, the idea behind environmental dropout is to mimic new environments by removing one specific class of object (e.g., the chair).",
"We demonstrate our idea by running environmental dropout on the ground-truth semantic views in Sec. 7.3, where it proves far more effective than traditional feature dropout.",
"In practice, we perform the environmental dropout on the image's visual feature, where certain structures/parts are dropped instead of object instances, but the effect is similar.",
"We apply the environmental dropout to the back translation model as mentioned in Sec. 3.4.1.",
"Note that the environmental dropout method still preserves the connectivity of the viewpoints; thus we use the same method as Fried et al. (2018) to collect extra unlabeled routes $\{R'\}$.",
"We then use the speaker to generate an additional instruction $I' = P_{E,R \rightarrow I}(E', R')$ in the new environment $E'$.",
"At last, we use IL+RL (Sec. 3.3) to fine-tune the model with the new triplet $(E', R', I')$.",
"Our speaker model is an enhanced version of the encoder-decoder model of Fried et al. (2018), with improvements on the visual encoder: we stack two bi-directional LSTM encoders: a route encoder and a context encoder.",
"The route encoder takes the features of the ground-truth actions $\{a_t\}_{t=1}^{T}$ along the route as inputs.",
"Each hidden state $r_t$ then attends to the surrounding views $\{f_{t,i}\}_{i=1}^{36}$ at each viewpoint.",
"The context encoder then reads the attended features and outputs the final visual encoder representations: $r_1, \dots, r_T = \mathrm{BiLSTM}_{RTE}(g_{1,a_1}, \dots, g_{T,a_T})$ (18); $\gamma_{t,i} = \mathrm{softmax}_i(f_{t,i}^\top W_R\, r_t)$ (19); $\hat f_t = \sum_i \gamma_{t,i} f_{t,i}$ (20); $c_1, \dots, c_T = \mathrm{BiLSTM}_{CTX}(\hat f_1, \dots, \hat f_T)$ (21). The decoder is a regular attentive LSTM-RNN, as discussed in Sec. 3.2.",
"Empirically, our enhanced speaker model improves the BLEU-4 score by around 3 points.",
"Dataset and Simulator We evaluate our agent on the Matterport3D simulator (Anderson et al., 2018b).",
"Navigation instructions in the dataset are collected via Amazon Mechanical Turk by showing them the routes in the Matterport3D environment (Chang et al., 2017).",
"The dataset is split into a training set (61 environments, 14,025 instructions), a seen validation set (61 environments, 1,020 instructions), an unseen validation set (11 environments, 2,349 instructions), and an unseen test set (18 environments, 4,173 instructions).",
"The unseen sets only involve the environments outside the training set.",
"Evaluation Metrics For evaluating our model, Success Rate (SR) is the primary metric.",
"The execution route by the agent is considered a success when the navigation error is less than 3 meters.",
"Besides success rate, we use three other metrics: Navigation Length (NL), Navigation Error (NE), and Success rate weighted by Path Length (SPL) (Anderson et al., 2018a).",
"(The Oracle Success Rate (OSR) is not included because it is highly correlated with the Navigation Length.)",
"Navigation Error (NE) is the distance between the agent's final position and the target viewpoint.",
"Table 1 reports Success Rate and Success rate weighted by Path Length.",
"The primary metric for each setup is in italics.",
"The best results are in bold font and the second best results are underlined.",
"Implementation Details Similar to the traditional dropout method, the environmental dropout mask is computed and applied at each training iteration.",
"Thus, the amount of unlabeled semi-supervised data used is not higher in our dropout method.",
"We also find that sharing the environmental dropout mask across different environments inside a batch stabilizes the training.",
"To avoid over-fitting, the model is early-stopped according to the success rate on the unseen validation set.",
"More training details are provided in the appendices.",
"In this section, we compare our agent model with the models in previous works on the Vision and Language Navigation (VLN) leaderboard.",
"The models on the leaderboard are evaluated on a private unseen test set which contains 18 new environments.",
"We created three columns in Table 1 for different experimental setups: single run, beam search, and unseen environments pre-exploration.",
"For the result, our model outperforms all other models in all experimental setups.",
"Single Run Among all three experimental setups, single run is the most general and highly correlated to the agent performance.",
"Thus, it is considered as the primary experimental setup.",
"In this setup, the agent navigates the environment once and, according to the Vision and Language Navigation (VLN) challenge submission guidelines, is not allowed to (1) run multiple trials or (2) explore or map the test environments before starting.",
"Our result is 3.5% and 9% higher than the second-best in Success Rate and SPL, respectively.",
"Beam Search In the beam search experimental setup, an agent navigates the environment, collects multiple routes, re-ranks them, and selects the route with the highest score as the prediction.",
"Besides showing an upper bound, beam search is usable when the environment is explored and saved in the agent's memory but the agent does not have enough computational capacity to fine-tune its navigational model.",
"We use the same beam-search algorithm, the state-factored Dijkstra algorithm, to navigate the unseen test environment.",
"The Success Rate of our model is 5.9% higher than the second best.",
"The SPL metric generally fails in evaluating beam-search models because of the long Navigation Length (the range of SPL here is 0.01 to 0.02).",
"Pre-Exploration The agent pre-explores the test environment before navigating and updates its agent model with the extra information.",
"When executing the instruction in the environment, the experimental setup is still single run.",
"The pre-exploration agent mimics the domestic robots (e.g., robot vacuum) which only needs to navigate the seen environment most of the time.",
"For submitting to the leaderboard, we simply train our agent via back translation with environmental dropout on test unseen environments (see Sec.7.2).",
"Our result is 3.4% higher than Wang et al. (2019) in Success Rate and 2.0% higher in SPL.",
"Supervised Learning We first show the effectiveness of our IL+RL method by comparing it with the baselines (Table 2).",
"(To fairly compare with Wang et al. (2019), we exclude the exploration route in calculating Navigation Length.)",
"We implement Behavioral Cloning and Advantage Actor-Critic as our imitation learning (IL) and reinforcement learning (RL) baselines, respectively.",
"(The Behavioral Cloning (IL) baseline is the same as the panoramic-view baseline in Fried et al. (2018), except for two differences: (1) the agent takes the teacher action instead of an action sampled from the distribution (see imitation learning in Sec. 3.3), and (2) the hidden input of the LSTM is the instruction-aware hidden state from the previous step (see Sec. 3.2); we improve our baseline result with these modifications.)",
"Table 2 (ablation results on the validation sets; columns are NL(m), NE(m), SR(%), and SPL for Val Seen, then the same for Val Unseen): Supervised Learning: Behavioral Cloning (IL): 10.3, 5.39, 48.4, 0.46 | 9.15, 6.25, 43.6, 0.40; Advantage Actor-Critic (RL): 73.8, 7.11, 22.0, 0.03 | 73.8, 7.32, 24.0, 0.03; IL + RL: 10.1, 4.71, 55.3, 0.53 | 9.37, 5.49, 46.5, 0.43. Semi-Supervised Learning: Back Translation: 10.3, 4.19, 58.1, 0.55 | 10.5, 5.43, 48.2, 0.44; + Feat Drop: 10.3, 4.13, 58.4, 0.56 | 9.62, 5.43, 48.4, 0.45; + Env Drop (No Tying): 10.3, 4.32, 57.3, 0.55 | 9.51, 5.27, 49.0, 0.46; + Env Drop (Tying): 11.0, 3.99, 62.1, 0.59 | 10.7, 5.22, 52.2, 0.48. Full Model: Single Run: 11.0, 3.99, 62.1, 0.59 | 10.7, 5.22, 52.2, 0.48; Beam Search: 703, 2.52, 75.7, 0.01 | 663, 3.08, 69.0, 0.01; Pre-Explore: 9.92, 4.84, 54.7, 0.52 | 9.57, 3.78, 64.5, 0.61.",
"The mixture of IL+RL (see Sec. 3.3) outperforms the IL-only and RL-only models by 2.9% and 22.5%, which means that our IL+RL can overcome the misleading teacher actions in IL and significantly stabilize the training of RL.",
"Semi-Supervised Learning We then fine-tune our best supervised model (i.e., IL+RL) with back translation.",
"Besides providing a warm-up, IL+RL is also used to learn the new generated data triplets in back translation.",
"As shown in Table 2, back translation with environmental dropout improves the best supervised model by 5.7%, an improvement 3 times larger than that of back translation without new environments.",
"We then show the results of the alternatives to environmental dropout.",
"The performance with feature dropout is almost the same as the original back translation, which is 3.8% lower than the environmental dropout.",
"We also show that the improvement from the environmental dropout method comes not only from the diverse instructions generated by the speaker under dropout, but also from using the same dropout mask in the follower agent.",
"To show this, we use two independent (different) environmental dropout masks for the speaker and the follower (i.e., no tying of the dropout masks), and the result drops considerably compared to when the speaker and follower dropout masks are tied.",
"Full Model Finally, we show the performance of our best agent under different experimental setups.",
"The single run result is copied from the best semi-supervised model for comparison.",
"The state-factored Dijkstra algorithm (Fried et al., 2018) is used for the beam search result.",
"The method for pre-exploration is described in Sec. 7.2, where the agent applies back translation with environmental dropout on the validation unseen environment.",
"In this section, we present analysis experiments that first exposed the limited environments bottleneck to us, and hence inspired us to develop our environmental dropout method to break this bottleneck.",
"In order to show that more environments are crucial for better performance of agents, in Fig. 5, we present the result of Supervised Learning (SL) with different amounts of data selected by two different data-selection methods.",
"The first method gradually uses more environments (the blue line 'SL with more envs'), while the second method selects data from the whole training data over all 60 training environments (the red line 'SL with more data').",
"Note that the amounts of data in the two setups are the same at each plot point. As shown in Fig. 5, the 'more envs' selection method shows a higher growth rate in success rate than the 'more data' method.",
"We also predict the success rates (in dashed line) with the prediction method in Sun et al. (2017).",
"The predicted result is much higher when training with more environments.",
"The predicted result (the right end of the red line) also shows that the upper bound of Success Rate is around 52% if all 190K routes in the training environments were labeled by humans (instead of being generated by the speaker via back translation), which indicates the need for new environments.",
"In this subsection, we show that back translation can significantly improve performance when it uses new data triplets from the testing environments, i.e., the unseen validation environments in which the agent is evaluated.",
"Back translation (without Env Drop) on these unseen environments achieves a success rate of 61.9%, while back translation on the training environments only achieves 46.5%.",
"The large margin between the two results indicates the need of new environments in back translation.",
"Moreover, our environmental dropout on the testing environments further improves the result to 64.5%, which means that the amount of environments used in back translation is far from enough.",
"To demonstrate our intuition of the success of environmental dropout (in Sec. 3.4.2), we replace the image feature ResNet( v t,i ) with the semantic view feature.",
"The semantic views (as shown in Fig. 6) are rendered from the Matterport3D dataset (Chang et al., 2017), where different colors indicate different types of objects.",
"Thus, dropout on the semantic view feature would remove the object from the view.",
"With the help of this additional information (i.e., the semantic view), the success rate of IL+RL is 49.5% on the unseen validation set.",
"Back translation (without dropout) slightly improves the result to 50.5%.",
"The result with feature dropout is 50.2%, while the environmental dropout could boost the result to 52.0%, which supports our claim in Sec. 3.4.2.",
"We presented a navigational agent which better generalizes to unseen environments.",
"The agent is supervised with a mixture of imitation learning and reinforcement learning.",
"Next, it is fine-tuned with semi-supervised learning, with speaker-generated instructions.",
"Here, we showed that the limited variety of environments is the bottleneck of back translation, and we overcome it via environmental dropout to generate new unseen environments.",
"We evaluate our model on the Room-to-Room dataset and achieve rank-1 in the Vision and Language Navigation (VLN) challenge leaderboard under all experimental setups.",
"We thank the reviewers for their helpful comments.",
"This work was supported by ARO-YIP Award #W911NF-18-1-0336, ONR Grant #N00014-18-1-2871, and faculty awards from Google, Facebook, Adobe, Baidu, and Salesforce.",
"The views, opinions, and/or findings contained in this article are those of the authors and should not be interpreted as representing the official views or policies, either expressed or implied, of the funding agency."
] | [
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"objective",
"objective",
"result",
"abstain",
"other",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"abstain",
"result",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"other",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"objective",
"result",
"other",
"other",
"other"
] |
[
"Pre-trained language models (PrLMs) have demonstrated superior performance due to their strong ability to learn universal language representations from self-supervised pre-training.",
"However, even with the help of the powerful PrLMs, it is still challenging to effectively capture task-related knowledge from dialogue texts which are enriched by correlations among speaker-aware utterances.",
"In this work, we present SPIDER, Structural Pre-traIned DialoguE Reader, to capture dialogue exclusive features.",
"To simulate the dialogue-like features, we propose two training objectives in addition to the original LM objectives: 1) utterance order restoration, which predicts the order of the permuted utterances in dialogue context; 2) sentence backbone regularization, which regularizes the model to improve the factual correctness of summarized subject-verb-object triplets.",
"Experimental results on widely used dialogue benchmarks verify the effectiveness of the newly introduced self-supervised tasks.",
"Recent advances in large-scale pre-training language models (PrLMs) have achieved remarkable successes in a variety of natural language processing (NLP) tasks (Peters et al., 2018; Radford et al., 2018; Devlin et al., 2019a; Yang et al., 2019; Clark et al., 2020; Zhang et al., 2020d).",
"Providing fine-grained contextualized embedding, these pre-trained models are widely employed as encoders for various downstream NLP tasks.",
"Although the PrLMs demonstrate superior performance due to their strong representation ability from self-supervised pre-training, it is still challenging to effectively adapt task-related knowledge during the detailed task-specific training, which is usually done by fine-tuning (Gururangan et al., 2020).",
"Corresponding author. This paper was partially supported by National Key Research and Development Program of China (No. 2017YFB0304100), Key Projects of National Natural Science Foundation of China (U1836222 and 61733011), Huawei-SJTU long term AI project, Cutting-edge Machine Reading Comprehension and Language Model. This work was supported by Huawei Noah's Ark Lab.",
"An example multi-turn dialogue: U1: how can i connect to URL streaming server? U2: i think mplayer does it. U3: i use kino, could be? U4: kino is editing software. i don't think it supports media. U5: sorry EMOJI, i mean movie player. U6: not sure about it. if it can't, try vlc and mplayer too. U7: is mplayer aptable? U8: yes but i'm not sure if it is in the main repo. U9: have you ever updated your sources.list? U10: i have no idea.. i use adept on kubuntu.",
"Generally, those PrLMs handle the whole input text as a linear sequence of successive tokens and implicitly capture the contextualized representations of those tokens through self-attention.",
"Such a fine-tuning paradigm of exploiting PrLMs would be suboptimal for modeling dialogue tasks, which involve exclusive text features that the plain text used for PrLM training may hardly embody.",
"Therefore, we explore a fundamental way to alleviate this difficulty by improving the training of PrLM.",
"This work is devoted to designing a natural way of adapting language modeling to the dialogue scenario, motivated by the characteristics of dialogue contexts.",
"As an active research topic in the NLP field, multi-turn dialogue modeling has attracted great interest.",
"The typical task is response selection (Lowe et al., 2015; Wu et al., 2017; Zhang et al., 2018), which aims to select the appropriate response for a given dialogue context containing a number of utterances and is the focus of this work.",
"However, selecting a coherent and informative response for a given dialogue context remains a challenge.",
"The multi-turn dialogue typically involves two or more speakers engaging in various conversation topics and intentions; thus the utterances are rich in interactions, e.g., with criss-cross discourse structures (Li et al., 2020a; Bai and Zhao, 2018; Qin et al., 2016, 2017).",
"A critical challenge is the learning of rich and robust context representations and interactive relationships of dialogue utterances, so that the resulting model is capable of adequately capturing the semantics of each utterance, and the relationships among all the utterances inside the dialogue.",
"Inspired by the effectiveness for learning universal language representations of PrLMs, there are increasing studies that employ PrLMs for conversation modeling (Mehri et al., 2019; Zhang et al., 2020b; Rothe et al., 2020; Whang et al., 2020; Han et al., 2021).",
"These studies typically model the response selection with only the context-response matching task and overlook many potential training signals contained in dialogue data.",
"Although the PrLMs have learned contextualized semantic representations from token-level or sentence-level pre-training tasks such as MLM and NSP, they do not consider dialogue-related features like speaker roles, continuity, and consistency.",
"One obvious issue of these approaches is that the relationships between utterances are harder to capture using word-level semantics.",
"Besides, some latent features, such as user intent and conversation topic, are under-discovered in existing works (Xu et al., 2021).",
"Therefore, the response retrieved by existing dialogue systems supervised by the conventional way still faces critical challenges, including incoherence and inconsistency.",
"In this work, we present SPIDER (Structural Pre-traIned DialoguE Reader), a structural language modeling method to capture dialogue exclusive features.",
"Motivated to efficiently and explicitly model the coherence among utterances and the key facts in each utterance, we propose two training objectives in analogy to the original BERT-like language model (LM) training: 1) utterance order restoration (UOR), which predicts the order of the permuted utterances in dialogue context; 2) sentence backbone regularization (SBR), which regularizes the model to improve the factual correctness of summarized subject-verb-object (SVO) triplets.",
"Experimental results on widely used benchmarks show that SPIDER boosts the model performance for various multi-turn dialogue comprehension tasks including response selection and dialogue reasoning.",
"Recent works have explored various architecture choices and training objectives for large-scale LM pre-training (Zhou et al., 2020b,a; Xu et al., 2020a,b; Li et al., 2021, 2020b).",
"Most of the PrLMs are based on the encoder in Transformer, among which Bidirectional Encoder Representations from Transformers (BERT) (Devlin et al., 2019b) is one of the most representative work.",
"BERT uses multiple layers of stacked Transformer Encoder to obtain contextualized representations of the language at different levels.",
"BERT has helped achieve great performance improvement in a broad range of NLP tasks (Bai and Zhao, 2018; Zhang et al., 2020a; Luo and Zhao, 2020; Zhang et al., 2021).",
"Several subsequent variants have been proposed to further enhance the capacity of PrLMs, such as XLNet (Yang et al., 2019), RoBERTa (Liu et al., 2019), ALBERT (Lan et al., 2020), ELECTRA (Clark et al., 2020).",
"For simplicity and convenient comparison with public studies, we select the most widely used BERT as the backbone in this work.",
"There are two ways of training PrLMs on dialogue scenarios, including open-domain pretraining and domain-adaptive post-training.",
"Some studies perform training on open-domain conversational data like Reddit for response selection or generation tasks (Wolf et al., 2019; Zhang et al., 2020c; Henderson et al., 2020; Bao et al., 2020), but they are limited to the original pre-training tasks and ignore the dialogue related features.",
"For domain-adaptive post-training, prior works have indicated that the order information would be important in the text representation, and the well-known next-sentence-prediction (Devlin et al., 2019b) and sentence-order-prediction (Lan et al., 2020) can be viewed as special cases of order prediction.",
"Especially in the dialogue scenario, predicting the word order of utterance, as well as the utterance order in the context, has shown effectiveness in the dialogue generation task (Kumar et al., 2020; Gu et al., 2020b), where the order information is well recognized (Chen et al., 2019).",
"However, there is little attention paid to dialogue comprehension tasks such as response selection (Lowe et al., 2015; Wu et al., 2017; Zhang et al., 2018).",
"The potential difficulty is that utterance order restoration involves many more ordering possibilities than NSP and SOP, which only handle two-class order prediction, since utterances may appear in quite flexible orders inside dialogue text.",
"Our work is also profoundly related to auxiliary multi-task learning, whose common theme is to guide the language modeling Transformers with explicit knowledge and complementing objectives (Zhang et al., 2019; Sun et al., 2019b; Xu et al., 2020a).",
"A most related work is Xu et al. (2020a), which introduces four self-supervised tasks including next session prediction, utterance restoration, incoherence detection and consistency discrimination.",
"Our work differs from Xu et al. (2020a) in three respects.",
"1) Motivation: our method is designed for general-purpose use in broad dialogue comprehension tasks, whose goals may be either utterance-level discourse coherence or inner-utterance factual correctness, instead of being motivated only by downstream context-response matching, whose goal is to measure whether two sequences are related.",
"2) Technique: we propose both intra- and inter-utterance objectives.",
"In contrast, the four objectives proposed in Xu et al. (2020a) are natural variants of NSP in BERT, which are all utterance-level.",
"3) Training: we empirically evaluate both domain-adaptive training and multi-task learning, instead of only employing multi-task learning, which requires time-consuming tuning of the coefficients in the loss functions.",
"In terms of factual backbone modeling, compared with existing studies that enhance PrLMs by annotating named entities or incorporating external knowledge graphs (Eric et al., 2017; Liu et al., 2018), the SVO triplets extracted by our sentence backbone regularization (SBR) objective appear more widely in the text itself.",
"Such triplets ensure the correctness of SVO and enable our model to discover the salient facts from the lengthy texts, sensing the intuition of who did what.",
"Multi-turn dialogue comprehension aims to teach machines to read dialogue contexts and solve tasks such as response selection (Lowe et al., 2015; Wu et al., 2017; Zhang et al., 2018) and answering questions (Sun et al., 2019a; Cui et al., 2020), whose common application is building intelligent human-computer interactive systems (Chen et al., 2017a; Shum et al., 2018; Li et al., 2017; Zhu et al., 2018b).",
"Early studies mainly focus on the matching between the dialogue context and question (Huang et al., 2019; Zhu et al., 2018a).",
"Recently, inspired by the impressive performance of PrLMs, the mainstream is to employ PrLMs to handle the whole input text of context and question as a linear sequence of successive tokens and to implicitly capture the contextualized representations of those tokens through self-attention (Qu et al., 2019; Liu et al., 2020).",
"Such a way of modeling would be suboptimal to capture the high-level relationships between utterances in the dialogue history.",
"In this work, we are motivated to model the structural relationships between utterances via utterance order restoration, and the factual correctness inside each utterance, from the perspective of language modeling pre-training instead of heuristically stacking deeper model architectures.",
"This section presents our proposed method SPIDER (Structural Pre-traIned DialoguE Reader).",
"First, we will present the standard dialogue comprehension model as the backbone.",
"Then, we will introduce our designed language modeling objectives for dialogue scenarios, including utterance order restoration (UOR) and sentence backbone regularization (SBR).",
"In terms of model training, we employ two strategies, i.e., 1) domain adaptive post-training that first trains a language model based on newly proposed objectives and then fine-tunes the response selection task; 2) multi-task fine-tuning that trains the model for downstream tasks, along with LM objectives.",
"We first employ a pre-trained language model such as BERT (Devlin et al., 2019a) to obtain the initial word representations.",
"The utterances and response are concatenated and then fed into the encoder.",
"Given the context C and response R, we concatenate all utterances in the context and the response candidate as a single consecutive token sequence with special tokens separating them: X = {[CLS] R [SEP] U_1 [EOU] ... [EOU] U_n [SEP]}, where [CLS] and [SEP] are special tokens.",
"[EOU] is the End Of Utterance tag designed for multiturn context.",
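The concatenation scheme above can be sketched as follows (a simplified illustration assuming whitespace tokenization; `build_input` is a hypothetical helper, not the paper's code):

```python
def build_input(response, utterances):
    """Build the token layout X = [CLS] R [SEP] U1 [EOU] ... Un [SEP],
    with [EOU] separating consecutive utterances of the context."""
    tokens = ["[CLS]"] + response.split() + ["[SEP]"]
    for utt in utterances:
        tokens += utt.split() + ["[EOU]"]
    # The final utterance is closed by [SEP] rather than [EOU].
    tokens[-1] = "[SEP]"
    return tokens
```

In practice the sequence would be produced by the PrLM's sub-word tokenizer, with [EOU] registered as an additional special token.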
"X is then fed into the BERT encoder, which is a deep multi-layer bidirectional Transformer, to obtain contextualized representations.",
"In detail, let X = {x_1, ..., x_n} be the embeddings of the encoded sentence words, with sequence length n.",
"The input embeddings are then fed into the multi-head attention layer to obtain the contextual representations.",
"The embedding sequence X is processed by a multi-layer bidirectional Transformer for learning contextualized representations, which is defined as H = FFN(MultiHead(K, Q, V)), (1) where K, Q, V are packed from the input sequence representation X.",
"As the common practice, we set K = Q = V in the implementation.",
"For the following part, we use H = { h 1 , . . . , h n } to denote the last-layer hidden states of the input sequence.",
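Eq. (1) with K = Q = V can be illustrated with a single-head, projection-free sketch (real PrLMs additionally use learned projections, multiple heads, residual connections, layer normalization, and the FFN; this minimal version only shows the attention-weighted mixing):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def self_attention(X):
    """Single-head self-attention with K = Q = V = X: every output row is a
    convex combination of the input rows, weighted by scaled dot products."""
    d = len(X[0])
    H = []
    for q in X:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in X]
        w = softmax(scores)
        H.append([sum(wj * X[j][i] for j, wj in enumerate(w)) for i in range(d)])
    return H
```

Each token thus attends most strongly to the positions whose representations align with its own, which is how contextual information flows across the whole concatenated context-response sequence.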
"To simulate the dialogue-like features, we propose two pre-training objectives in addition to the original LM objectives: 1) utterance order restoration, which predicts the order of the permuted utterances in dialogue context; 2) sentence backbone regularization, which regularizes the model to improve the factual correctness of summarized subject-verb-object triplets.",
"The utterance manipulations are shown in Figure 2.",
"The following subsections describe the objectives in turn.",
"Coherence is an essential aspect of conversation modeling.",
"In a coherent discourse, utterances should respect specific orders of relations and logic.",
"The ordering of utterances in a dialogue context determines the semantics of the conversation.",
"Therefore, learning to order a set of disordered utterances in a way that maximizes discourse coherence will have a critical impact on learning the representation of dialogue contexts.",
"However, most previous studies focused on semantic relevance between context and response candidate.",
"Here we introduce utterance-level position modeling, i.e., utterance order restoration to encourage the model to be aware of the semantic connections among utterances in the context.",
"The idea is similar to autoencoding (AE) which aims to reconstruct the original data from corrupted input (Yang et al., 2019).",
"Given permuted dialogue contexts that comprise utterances in random orders, we maximize the expected log-likelihood of a sequence of the original ground-truth order.",
"The goal of the utterance order restoration is to organize randomly shuffled utterances of a conversation into a coherent dialogue context.",
"We extract the hidden states of [EOU] from H as the representation of each utterance.",
"Formally, we are given an utterance sequence denoted as C′ = [H_{u_1}; H_{u_2}; ...; H_{u_K}] with order o = [o_1; o_2; ...; o_K], where K is the maximum number of positions to be predicted.",
"We expect the ordered context C = [u_{o_1}; u_{o_2}; ...; u_{o_K}] to be the most coherent permutation of the utterances.",
"As predicting the permuted orders is a more challenging optimization problem than the NSP and SOP tasks, due to the large search space of permutations, and caused slow convergence in preliminary experiments, we choose to only predict the order of the last few permuted utterances, using a permutation ratio η to control the maximum number of permutations: K′ = ηK.",
"The UOR training objective is then formed as: L_uor = − Σ_{k=1}^{K′} [o_k log ô_k], (2) where ô_k denotes the predicted order.",
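The permute-and-restore procedure can be sketched as follows (a hedged illustration in plain Python: the helper names, the minimum of two permuted utterances, and the per-position probability formulation of Eq. (2) are our assumptions, not details from the paper):

```python
import math
import random

def permute_tail(utterances, ratio, seed=0):
    """Permute only the last K' = ratio * K utterances, keeping the head
    of the context intact; returns the permuted context and the gold
    restoration target."""
    k = len(utterances)
    k_prime = max(2, int(round(ratio * k)))
    head, tail = utterances[:k - k_prime], utterances[k - k_prime:]
    order = list(range(k_prime))
    rng = random.Random(seed)
    rng.shuffle(order)
    permuted = [tail[i] for i in order]
    return head + permuted, order  # `order` is what the model must restore

def uor_loss(pred_probs, gold_order):
    """Cross-entropy over predicted position distributions, cf. Eq. (2):
    pred_probs[k][j] is the predicted probability that permuted utterance k
    originally sat at position j."""
    return -sum(math.log(pred_probs[k][gold_order[k]])
                for k in range(len(gold_order)))
```

In the model, the distributions would come from a classifier over the [EOU] hidden states; here they are passed in directly to keep the sketch self-contained.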
"The sentence backbone regularization objective is motivated to guide the model to distinguish the internal relation of the fact triplets that are extracted from each utterance, which would be helpful to improve the ability to capture the key facts of the utterance as well as the correctness.",
"First, we apply a fact extractor to conduct the dependency parsing of each sentence.",
"After that, we extract the subject, the root verb, and the object tokens as an SVO triplet corresponding to each utterance.",
"Inspired by Bordes et al. (2013), where the embedding of the tail entity should be close to the embedding of the head entity plus some vector that depends on the relationship, we assume that given the dialogue input, in the hidden representation space, the summation of the subject and the verb should be as close to the object as possible, i.e., h_subject + h_verb ≈ h_object.",
"Consequently, based on the sequence hidden states h_i, where i = 1, ..., L_y, we introduce a regularization for the extracted facts: L_sbr = (1/m) Σ_{k=1}^{m} ||h_{subj_k} + h_{verb_k} − h_{obj_k}||, (3)",
"where m is the total number of fact tuples extracted from the utterances and k indicates the k-th triplet.",
"subj k , verb k , and obj k are indexes of the k -th fact tuple's subject, verb, and object.",
"In our implementation, since PrLMs take sub-words as input while the SVO extraction performs in word-level, we use the first-token hidden state as the representation of the original word following the way in Devlin et al. (2019a) for named entity recognition.",
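Under the assumption h_subject + h_verb ≈ h_object, the SBR regularizer can be sketched as follows (the squared-L2 distance and the averaging over triplets are our assumptions; the paper only requires the summation to be close to the object representation, using the first sub-token's hidden state per word):

```python
def sbr_loss(hidden, triplets):
    """Sentence backbone regularization (sketch): penalize the distance
    between h_subj + h_verb and h_obj for each extracted SVO triplet,
    averaged over the m triplets.

    hidden   -- per-word hidden states (first sub-token per word)
    triplets -- list of (subj, verb, obj) word indexes
    """
    total, dim = 0.0, len(hidden[0])
    for s, v, o in triplets:
        total += sum((hidden[s][i] + hidden[v][i] - hidden[o][i]) ** 2
                     for i in range(dim))
    return total / len(triplets)
```

The loss is zero exactly when every triplet already satisfies the additive relation, so gradient descent pushes the encoder toward representations in which the key facts are linearly recoverable.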
"In this section, we introduce two training methods to take the newly proposed language modeling objectives into account, namely domain-adaptive post-training and multi-task fine-tuning, as illustrated in Figure 3.",
"4.1 Domain Adaptive Post-training: Similar to BERT, we also adopt the masked language model (MLM) and the next sentence prediction (NSP) as LM-training tasks to enable our model to capture lexical and syntactic information from tokens in text.",
"More details of the LM training tasks can be found from Devlin et al. (2019a).",
"The overall post-training loss is the sum of the MLM, NSP, UOR, and SBR loss.",
"Our full model is trained with a joint loss combining the objectives above: L = λ_1 (L_mlm + L_nsp) + λ_2 L_uor + λ_3 L_sbr, (5) where λ_1, λ_2, λ_3 are hyper-parameters.",
"After post-training the language model on the dialogue corpus, we load the pre-trained weights in the same way as using BERT (Devlin et al., 2019a) to fine-tune the downstream tasks, such as response selection and dialogue reasoning, as focused on in this work (details in Section 5.1).",
"Since our objectives can well share the same input as the downstream tasks, there is an efficient way of using multi-task fine-tuning (MTF) to directly train the task-specific models along with our SPIDER objectives.",
"Therefore, we feed the permuted context to the dialogue comprehension model and combine the three losses for training: L = μ_1 L_dm + μ_2 L_uor + μ_3 L_sbr, (6) where μ_1, μ_2, μ_3 are hyper-parameters.",
"In order to train a task-specific model for dialogue comprehension, the hidden states H will be fed into a classifier with a fully-connected layer and a softmax layer.",
"We learn the model g(·, ·) by minimizing the cross-entropy loss on dataset D.",
"Let θ denote the parameters; for binary classification like the response selection task, the objective function L(D, θ) can be formulated as: L_dm = − Σ_{i=1}^{N} [y_i log(g(c_i, r_i)) + (1 − y_i) log(1 − g(c_i, r_i))].",
"where N denotes the number of examples.",
"For a multiple-choice task like MuTual, the loss function is: L_dm = − Σ_{i=1}^{N} Σ_{k=1}^{C} y_{i,k} log(g(c_i, r_{i,k})).",
"where C is the number of choices.",
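Both task losses can be sketched in a few lines (a minimal illustration; the scores produced by g are assumed to already be probabilities):

```python
import math

def response_selection_loss(scores, labels):
    """Binary cross-entropy for response selection: scores[i] = g(c_i, r_i),
    labels[i] in {0, 1}."""
    return -sum(y * math.log(g) + (1 - y) * math.log(1 - g)
                for g, y in zip(scores, labels))

def multiple_choice_loss(choice_probs, gold):
    """Multiple-choice cross-entropy: choice_probs[i] is a distribution over
    the C candidates of context i, gold[i] is the correct candidate index."""
    return -sum(math.log(p[g]) for p, g in zip(choice_probs, gold))
```

In the multiple-choice case the per-context distribution would come from a softmax over the C candidate scores, so only the gold candidate's log-probability contributes to the sum.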
"We evaluated our model on two English datasets: Ubuntu Dialogue Corpus (Ubuntu) (Lowe et al., 2015) and Multi-Turn Dialogue Reasoning (MuTual) (Cui et al., 2020), and two Chinese datasets: Douban Conversation Corpus (Douban) (Wu et al., 2017) and E-commerce Dialogue Corpus (ECD) (Zhang et al., 2018).",
"Ubuntu (Lowe et al., 2015) consists of English multi-turn conversations about technical support collected from chat logs of the Ubuntu forum.",
"The dataset contains 1 million context-response pairs for training, 0.5 million for validation, and 0.5 million for testing.",
"In the training set, each context has one positive response generated by humans and one negative response sampled randomly.",
"In validation and test sets, for each context, there are 9 negative responses and 1 positive response.",
"Actually, MuTual is a retrieval-based dialogue corpus in form, but its theme is English listening comprehension exams, so we regard it as a reading comprehension corpus in this work.",
"Because the test set of MuTual is not publicly available, we conducted the comparison with our baselines on the Dev set for convenience.",
"Douban (Wu et al., 2017) is different from Ubuntu in the following ways.",
"First, it is an open domain where dialogues are extracted from Douban Group.",
"Second, response candidates on the test set are collected by using the last turn as the query to retrieve 10 response candidates and labeled by humans.",
"Third, there could be more than one correct response for a context.",
"ECD (Zhang et al., 2018) dataset is extracted from conversations between customer and service staff on Taobao.",
"It contains over 5 types of conversations based on over 20 commodities.",
"There are also 1 million context-response pairs in the training set, 0.5 million in the validation set, and 0.5 million in the test set.",
"MuTual (Cui et al., 2020) consists of 8860 manually annotated dialogues based on Chinese student English listening comprehension exams.",
"For each context, there is one positive response and three negative responses.",
"The difference compared to the above three datasets is that only MuTual is reasoning-based.",
"There are more than 6 types of reasoning abilities reflected in MuTual.",
"For the sake of computational efficiency, the maximum number of utterances is set to 20.",
"The concatenated context, response, [CLS], and [SEP] tokens in one sample are truncated according to the longest-first rule or padded to a fixed length, which is 256 for MuTual and 384 for the other three datasets.",
"For the hyper-parameters, we empirically set all the loss coefficients in Eq. 5 and Eq. 6 to 1.",
"Our model is implemented using Pytorch and based on the Transformer Library.",
"We use BERT (Devlin et al., 2019a) as our backbone model.",
"AdamW (Loshchilov and Hutter, 2019) is used as our optimizer.",
"The batch size is 24 for MuTual, and 64 for others.",
"The initial learning rate is 4 × 10^-6 for MuTual and 3 × 10^-5 for others.",
"The permutation ratio is set to 0.4 in our implementation by default.",
"We run 3 epochs for MuTual and 2 epochs for others and select the model that achieves the best result in validation.",
"The training epochs are 3 for DAP.",
"Our domain adaptive post-training for the corresponding response selection tasks is based on the three large-scale dialogue corpora, Ubuntu, Douban, and ECD, respectively.",
"The data statistics are in Table 1.",
"Since domain adaptive post-training is time-consuming, following previous studies (Gu et al., 2020a), we use bert-base-uncased and bert-base-chinese for the English and Chinese datasets, respectively.",
"Our source code is available at https://github.com/cooelf/SPIDER .",
"Since phrases are quite common in Chinese, making it inaccurate to calculate the SVO relations according to Eq. 3, we did not use the SBR objective for the two Chinese tasks in this work.",
"Because there is no appropriate domain data for the small-scale MuTual dataset, we only report the multi-task fine-tuning results with our SPIDER objectives, and also present the results with other PrLMs such as ELECTRA (Clark et al., 2020) for general comparison.",
"We include the following models for comparison: Multi-turn matching models : Sequential Matching Network (SMN) (Wu et al., 2017), Deep Attention Matching Network (DAM) (Zhou et al., 2018), Deep Utterance Aggregation (DUA) (Zhang et al., 2018), Interaction-over-Interaction (IoI) (Tao et al., 2019b) have been stated in Section 2.2.",
"Besides, Multi-Representation Fusion Network (MRFN) (Tao et al., 2019a) matches context and response with multiple types of representations.",
"Multi-hop Selector Network (MSN) (Yuan et al., 2019) utilizes a multi-hop selector to filter necessary utterances and matches among them.",
"PrLMs-based models : BERT (Devlin et al., 2019b), SA-BERT (Gu et al., 2020a), and ELECTRA (Clark et al., 2020).",
"Following Lowe et al. (2015) and Wu et al. (2017), we calculate the proportion of true positive responses among the top-k selected responses from the list of n available candidates for one context, denoted as R_n@k.",
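R_n@k can be computed as follows (a minimal sketch; breaking score ties by candidate index is our assumption):

```python
def recall_at_k(scores, positives, k):
    """R_n@k: fraction of the true positive responses that appear among the
    top-k of the n ranked candidates for one context (Lowe et al., 2015)."""
    ranked = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    hits = sum(1 for i in ranked[:k] if i in positives)
    return hits / len(positives)
```

For Ubuntu and ECD there is a single positive per context, so R_n@k is simply whether the gold response is ranked within the top k; averaging over contexts gives the reported score.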
"Besides, additional conventional metrics of information retrieval are employed on Douban: Mean Average Precision (MAP) (Baeza-Yates et al., 1999), Mean Reciprocal Rank (MRR) (Voorhees et al., 1999), and precision at position 1 (P@1).",
"Table 3: Results on the MuTual dataset (MRR / R4@1 / R4@2): BERT-base 80.0 / 65.3 / 86.0; +UOR 80.7 / 66.1 / 86.7; +SBR 81.3 / 67.4 / 87.1; +SPIDER 81.6 / 67.6 / 87.3; BERT-large 82.2 / 69.1 / 87.9; +UOR 82.8 / 69.8 / 88.6; +SBR 83.4 / 71.0 / 89.4; +SPIDER 83.9 / 71.8 / 89.2; ELECTRA-base 86.5 / 76.2 / 91.6; +UOR 86.9 / 76.6 / 91.8; +SBR 87.6 / 77.1 / 92.0; +SPIDER 88.2 / 79.2 / 92.3; ELECTRA-large 94.9 / 90.6 / 97.7; +UOR 95.3 / 91.3 / 97.8; +SBR 95.5 / 91.6 / 97.8; +SPIDER 95.6 / 92.0 / 97.9.",
"Tables 2-3 show the results on the four benchmark datasets.",
"We have the following observations: 1) Generally, the previous models based on multi-turn matching networks perform worse than simple PrLMs-based ones, illustrating the power of contextualized representations in context-sensitive dialogue modeling.",
"PrLM can perform even better when equipped with our SPIDER objectives, verifying the effectiveness of dialogue-aware language modeling, where inter-utterance position information and inner-utterance key facts are better exploited.",
"Compared with SA-BERT that involves more complex architecture and more parameters by injecting extra speaker-aware embeddings, SPIDER keeps the same model size as the backbone BERT, and even surpasses SA-BERT on most of the metrics.",
"2) In terms of the training methods, DAP generally works better than MTF, with the merits of two-step procedures including the pure LM-based post-training.",
"According to the ablation study in Table 4, we see that both of the dialogue-aware LM objectives are essentially effective and combining them (SPIDER) gives the best performance, which verifies the necessity of modeling the utterance order and factual correctness.",
"We also notice that UOR shows better performance than SBR in DAP, while giving a relative decline in MTF.",
"Table 4: Ablation study on the Ubuntu dataset (R10@1 / R10@2 / R10@5): SPIDER-DAP 86.9 / 93.8 / 98.7; w/o UOR 86.2 / 93.3 / 98.6; w/o SBR 86.4 / 93.5 / 98.6; w/o Both 85.7 / 93.0 / 98.5; SPIDER-MTF 83.1 / 91.3 / 98.0; w/o UOR 82.6 / 91.0 / 97.9; w/o SBR 82.3 / 90.8 / 97.8; w/o Both 81.7 / 90.4 / 97.7.",
"The most plausible reason would be that UOR permutes the utterances in the dialogue context, which helps the language model learn the utterance order.",
"However, in MFT, the major objective is the downstream dialogue comprehension task.",
"The permutation of the context would possibly bring some negative effects to the downstream task training.",
"For the UOR objective, a hyper-parameter is set to control the maximum number of permutations (as described in Section 3.2.1), which would possibly influence the overall model performance.",
"To investigate the effect, we vary the permutation ratio over [0, 20%, 40%, 60%, 80%, 100%].",
"The result is depicted in Figure 4, in which our model outperforms the baseline in general, showing that the permutation indeed strengthens the baseline.",
"Context length can be measured by the number of turns and average utterance length in a conversation respectively.",
"We split test instances from the Ubuntu dataset into several buckets and compare SPIDER with UOR with the BERT baseline.",
"According to the results depicted in Figure 5, we observe that SPIDER performs much better on contexts with long utterances, and it also performs robustly and is significantly and consistently superior to the baseline.",
"The results indicate the benefits of modeling the utterance order for dialogue comprehension.",
"To compare the improvements of SPIDER over the baseline on factual correctness, we extract the error cases of the BERT baseline on MuTual (102 in total) and 42 (41.2%) are correctly answered",
"Among the 42 solved cases, 33/42 (78.6%) are entailed with SVO facts in contexts, indicating the benefits of factual correctness.",
"Hongxiao Bai and Hai Zhao.",
"2018.",
"Deep enhanced representation for implicit discourse relation recognition.",
"In Proceedings of the 27th International Conference on Computational Linguistics , pages 571 583, Santa Fe, New Mexico, USA.",
"Association for Computational Linguistics.",
"Siqi Bao, Huang He, Fan Wang, Hua Wu, and Haifeng Wang.",
"2020.",
"PLATO: Pre-trained dialogue generation model with discrete latent variable.",
"In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics , pages 8596, Online.",
"Association for Computational Linguistics.",
"Hongshen Chen, Xiaorui Liu, Dawei Yin, and Jiliang Tang.",
"2017a.",
"A survey on dialogue systems: Recent advances and new frontiers.",
"In ACM SIGKDD Explorations Newsletter .",
"19(2):2535.",
"Kevin Clark, Minh-Thang Luong, Quoc V. Le, and Christopher D. Manning.",
"2020.",
"ELECTRA: pretraining text encoders as discriminators rather than generators.",
"In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020 .",
"OpenReview.net.",
"Leyang Cui, Yu Wu, Shujie Liu, Yue Zhang, and Ming Zhou.",
"2020.",
"MuTual: A dataset for multi-turn dialogue reasoning.",
"In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics , pages 14061416, Online.",
"Association for Computational Linguistics.",
"by SPIDER.",
"In this paper, we focus on the task-related adaptation of the pre-trained language models and propose SPIDER (Structural Pre-traIned DialoguE Reader), a structural language modeling method to capture dialogue exclusive features.",
"To explicitly model the coherence among utterances and the key facts in each utterance, we introduce two novel dialogue-aware language modeling tasks including utterance order restoration and sentence backbone regularization objectives.",
"Experiments on widely-used multi-turn dialogue comprehension benchmark datasets show the superiority over baseline methods.",
"Our work reveals a way to make better use of the structure learning of the contextualized representations from pre-trained language models and gives insights on how to adapt the language modeling training objectives in downstream tasks."
] | [
"abstain",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"abstain",
"objective",
"objective",
"other",
"method",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"objective"
] |
[
"State-of-the-art Machine Reading Comprehension (MRC) models for Open-domain Question Answering (QA) are typically trained for span selection using distantly supervised positive examples and heuristically retrieved negative examples.",
"This training scheme possibly explains empirical observations that these models achieve a high recall amongst their top few predictions, but a low overall accuracy, motivating the need for answer re-ranking.",
"We develop a successful re-ranking approach (RECONSIDER ) for span-extraction tasks that improves upon the performance of MRC models, even beyond large-scale pre-training.",
"RECONSIDER is trained on positive and negative examples extracted from high confidence MRC model predictions, and uses in-passage span annotations to perform span-focused reranking over a smaller candidate set.",
"As a result, RECONSIDER learns to eliminate close false positives, achieving a new extractive state of the art on four QA tasks, with 45.5% Exact Match accuracy on Natural Questions with real user questions, and 61.7% on TriviaQA.",
"We will release all related data, models, and code 1 .",
"Open-domain Question Answering (Voorhees et al., 1999) (QA) involves answering questions by extracting correct answer spans from a large corpus of passages, and is typically accomplished by a light-weight passage retrieval model followed by a heavier Machine Reading Comprehension (MRC) model (Chen et al., 2017).",
"The span selection components of MRC models are trained on distantly supervised positive examples (containing the answer string) together with heuristically chosen negative examples, typically from upstream retrieval models.",
"This training scheme possibly explains empirical Work done while at Facebook AI.",
"findings (Wang et al., 2018b,c) that while MRC models can confidently identify top-K answer candidates (high recall), they cannot effectively discriminate between top semantically similar false positive candidates (low accuracy).",
"In this paper, we develop a general approach to make answer reranking successful for span-extraction tasks, even over large pretrained models, and improve the state of the art on four QA datasets.",
"Earlier work (Wang et al., 2018c,b) on open-domain QA have recognized the potential of answer re-ranking, which we continue to observe despite recent advances using large pre-trained models like BERT (Devlin et al., 2019).",
"Figure 1 shows the top-3 predictions of a BERT-based SOTA model (Karpukhin et al., 2020) on a question from Natural Questions (NQ) (Kwiatkowski et al., 2019), Who was the head of the Soviet Union when it collapsed?",
"\" While all predictions are very relevant and refer to Soviet Union heads, Mikhail Gorbachev is correct and the rest are close false positives.",
"Table 1 presents accuracies obtained by the same model on four QA datasets, if the answer exactly matches Dataset Top-1 Top-5 Top-10 Top-25 NQ 40.3 49.5 50.9 62.4 TRIVIAQA 57.2 64.6 65.7 73.1 WEBQ 42.6 49.0 50.7 60.4 TREC 49.6 58.7 60.9 71.4 Table 1: Top-k EM accuracies using a state-of-the-art model (Karpukhin et al., 2020) on four open-domain QA tasks (dev set).",
"any of the top-k predictions for k = 1 , 5 , 10 and 25 .",
"We observe that an additional 10% and 20% of correct answers exist amongst the top-5 and top-25 candidates respectively, presenting an enormous opportunity for span reranking models.",
"Our re-ranking model is trained using positive and negative examples extracted from high confidence MRC model predictions, and thus, learns to eliminate hard false positives.",
"This can be viewed as a coarse-to-fine approach of training span selectors, with the base MRC model trained on heuristically chosen negatives and the re-ranker trained on finer, more subtle negatives.",
"This contrasts with multi-task training approaches (Wang et al., 2018c), whose re-scoring gains are limited by training on the same data, especially when coupled with large pre-trained models.",
"Our approach also scales to any number of ranked candidates, unlike previous concatenation based cross-passage re-ranking methods (Wang et al., 2018b) that do not transfer well to current length-bounded large pre-trained models.",
"Similar to MRC models, our re-ranking approach uses cross-attention between the question and a candidate passage (Seo et al., 2016).",
"However, we now demarcate a specific candidate answer span in each passage, to assist the model to perform span-focused reasoning, in contrast to MRC models, which must reason across all spans.",
"Therefore, the re-ranker performs span ranking of carefully chosen candidates, rather than span selection like the MRC model.",
"Similar focused cross-attention methods have recently proved to be effective for Entity Linking (Wu et al., 2020) tasks, although they annotate the query rather than the passage.",
"We use our broadly applicable span-focused reranking approach on models from Karpukhin et al. (2020) and achieve a new extractive state of the art on four QA datasets, including 45.5% on the open-domain setting of NQ (real user queries, +1.6% on small models) and 61.1% on TriviaQA (Joshi et al., 2017) (+2.5% on small models).",
"To our knowledge, we are the first to successfully leverage re-ranking to improve over large pre-trained models on open-domain QA.",
"Open-domain Question Answering (QA) aims to answer factoid questions from a large corpus of passages (Voorhees et al., 1999) (such as Wikipedia) in contrast with single passage MRC tasks (Ra-jpurkar et al., 2016).",
"Prior works use pipelined approaches, that first retrieve candidate passages and subsequently use a neural MRC model to extract answer spans (Chen et al., 2017), with further improvements using joint learning (Wang et al., 2018a; Tan et al., 2018).",
"Recent successes involve improving retrieval, thereby increasing the coverage of passages fed into the MRC model (Guu et al., 2020; Karpukhin et al., 2020).",
"In this paper, we significantly improve MRC model performance by making re-ranking successful using span-focused re-ranking of its highly confident predictions.",
"For Open-domain QA, it is crucial to train MRC models to distinguish passage-span pairs containing the answer ( positives ) from those that do not ( negatives ).",
"Using negatives that appear as close false positives can produce more robust MRC models.",
"However, prior work relies on upstream retrieval models to supply distantly supervised positives (contain answer string) and negatives (Asai et al., 2020), that are in-turn trained using heuristically chosen positives and negatives.",
"Our approach leverages positives and negatives from highly confident MRC predictions which are hard to classify, and thus, improve upon MRC model performance.",
"Jia and Liang (2017) motivate recent work on answer verification for QA by showing that MRC models are easily confused by similar passages.",
"Wang et al. (2018b) use a weighted combination of three re-rankers and rescore a concatenation of all passages with a particular answer using a sequential model, while, Wang et al. (2018c) develop a multi-task end-to-end answer scoring approach.",
"Although the main idea is to consider multiple passage-span candidates collectively, such approaches either used concatenation, which is prohibitively expensive to couple with length-restricted models like BERT, or are trained on the same data without variations only to realize marginal gains.",
"Hu et al. (2019) use answer verification to predict the unanswerability of a question-passage pair for traditional MRC tasks.",
"To our knowledge, our work is the first to",
"(i) successfully demonstrate a re-ranking approach that significantly improves over large pre-trained models (De-vlin et al., 2019) in an open domain setting, and",
"(ii) use annotated top model predictions as harder negatives to train more robust models for QA.",
"We assume an extractive MRC model M coupled with a passage retrieval model, that given a question q and a passage corpus P , produces a list of N passage and span pairs, { ( p j , s j ) } Nj =1 , p j P and s j is a span within p j , ranked by the likelihood of s j answering q .",
"Note that { p j } Nj =1 is not disjoint as a passage can have multiple answer spans.",
"In this section, we develop a span-focused re-ranking model R , that learns a distribution p , over top-K ( p j , s j ) pairs 1 j K , given question q .",
"Essentially, model R first scores every ( q, p j , s j ) triple using scoring function r , and then normalizes over these scores to produce p : p ( q, p j , s j ) = e r ( q,p j ,s j ) (cid:80) 1 k K e r ( q,p k ,s k ) .",
"Specifically, if E ( q, p j , s j ) RH is a dense representation of ( q, p j , s j ) , r is defined as:",
"Span-focused tuple encoding We compute E using the representation of the [CLS] token of a BERT model (Devlin et al., 2019) applied to a span-focused encoding of ( q, p j , s j ) .",
"This encoding is generated by first marking the tokens of s j within passage p j with special start and end symbols [A] and [/A] , to form p j , followed by concatenating the [CLS] and question tokens, with the annotated passage tokens p j , using separator token [SEP] .",
"We find span marking to be a crucial ingredient for answer re-ranking, without which, performance deteriorates (Section 5).",
"Training We obtain top K predictions ( p j , s j ) of model M for each question q i in its training set, which we divide into positives, where s j is exactly the groundtruth answer, and remaining negatives.",
"We train R using mini-batch gradient descent, where in each iteration, for question q , we include 1 randomly chosen positive and M 1 randomly chosen negatives, and maximize the likelihood of the positive.",
"Unlike the heuristically chosen negatives used to train M , R is trained using negatives from high confidence predictions of M , which are harder to classify.",
"Thus, this can be viewed as an effective coarse-to-fine negative selection strategy for span extraction models (Section 5).",
"We use the state-of-the-art models of Karpukhin et al. (2020) which consists of 1) a dense passage retriever, and 2) a span extractive BERT reader, as our model M .",
"The retriever uses a passage encoder f p and a question encoder f q to represent all passages and questions as dense vectors in the same space.",
"During inference, it retrieves top-100 passages similar to question q based on their inner product, and passes them on to the MRC reader.",
"The MRC reader is an extension of model R of Section 3, to perform span extraction.",
"We briefly describe it but Karpukhin et al. (2020) has complete details.",
"Its input is a question q together with positive and negative passages p j from its retrieval model.",
"( q, p j ) tuples are encoded as before ( enc ( q, p j ) = q [SEP] p j ), but without spans being marked (as spans are unavailable).",
"A distribution over passages p s is computed as before using scoring function r and context encoder E .",
"In addition, a start-span probability, p st ( t i | q, p j ) and an end-span probability, p e ( t i | q, p j ) is computed for every token t i in enc ( q, p j ) .",
"The model is trained to maximize the likelihood of p s ( p j ) p st ( s | q, p j ) p e ( t | q, p j ) for each correct answer span ( s, t ) in p j , and outputs the top-K scoring passage-span pairs during inference.",
"Datasets We use four benchmark open-domain QA datasets following Lee et al. (2019): Natural Questions (NQ) contains real user questions asked on Google searches; we consider questions with short answers up to 5 tokens.",
"TRIVIAQA (Joshi et al., 2017) consists of questions collected from trivia and quiz-league web-sites; we take questions in an unfiltered setting and discard the provided web snippets.",
"WebQuestions (WEBQ) (Berant et al., 2013) is a collection of questions extracted from the Google Suggest API, with answers being Freebase entities.",
"CuratedTREC (Baudi and ediv`y, 2015) contains curated questions from TREC QA track.",
"Implementation details For all datasets, we use the retrieval model (without retraining) and setup from Karpukhin et al. (2020), retrieving 100-token passages from a Wikipedia corpus (from 2018 12 20 ).",
"We also use their MRC model with their best performing hyperparameters as model M .",
"For model R , we experiment with both BERT base and BERT large , use top-100 predictions from model M during training (top-5 for testing), and use M = 30 .",
"We use a batch size of 16 on NQ and TRIVIAQA and 4 otherwise.",
"For WEBQ and TREC, we start training from our trained NQ model.",
"Results Table 2 presents end-to-end test-set exact match accuracies for these datasets, compared with previous models.",
"The BERT base version of RECONSIDER outperforms the previous state-of-the-art DPR model of Karpukhin et al. (2020) (our model M ) by 1 .",
"6% on NQ and 2% on TRIVIAQA and WEBQ.",
"For training on the smaller WEBQ and TREC datasets, we initialize models using the corresponding NQ model.",
"Table 2 demonstrates the effectiveness of a coarse-to-fine approach for selecting negative passages, with dense retrieval based negatives (DPR) outperforming BM25, and in turn, improved upon by our reranking approach.",
"We obtain gains despite R being not only very similar in architecture to the MRC reader M , but also trained on the same QA pairs, owing to",
"(i) training using harder false-positive style negatives, and",
"(ii) answer-span annotations that allow a re-allocation of modeling capacity from modeling all spans to reasoning about specific spans with respect to the question and the passage.",
"Re-ranking performance suffers without these crucial methods.",
"For example, replacing answer-span annotations with answer concatenation reduces accuracy by 1% on the dev set of NQ.",
"We train a large variant of RECONSIDER using BERT large for model R , trained on predictions from a BERT large model M .",
"For a fair comparison, we re-evaluate DPR using BERT large .",
"RECONSIDER large outperforms it by 1% on all datasets (+ 2% on TREC).",
"This model is also comparable in size to RAG (Lewis et al., 2020) (which uses BART large ) but outperforms it on all tasks (+1 on NQ, +5.5 on TRIVIAQA, +3 on TREC), demonstrating that retrieve-extract architectures can perform better than answer generation models.",
"We find K =5 (testing) to be best for all datasets, and increasing K has little effect on accuracy, despite training on top-100 predictions.",
"Although in contrast with our expectations based on Table 1, this is anticipated since very low-ranked predictions are less likely to be reranked highly, but this also presents an opportunity for future work.",
"produces an incorrect top answer, which is corrected after re-ranking with RECONSIDER (top 2 examples), and 2) DPR-BERT base 's answer is correct but is ranked lower after re-ranking.",
"Of the 15.4% validation examples that were amenable for correction by re-ranking the top-5 candidates from DPR-BERT base , RECONSIDER was able to fix 6.1%.",
"However, in this process, 4.3% of answers that were originally correct (top-ranked), lost their top-rank after RECONSIDER , and this presents an opportunity for further improving re-ranking.",
"We use a synergistic combination of two techniques viz. retraining with harder negatives, and, span-focused cross attention, to make re-ranking successful for span-extractive tasks over large pretrained models.",
"This method achieves SOTA extractive results on four open domain QA datasets, also outperforming recent generative pre-training approaches."
] | [
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"abstain",
"abstain",
"other",
"abstain",
"result",
"method",
"abstain",
"abstain",
"method",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"objective",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain"
] |
[
"The dominant approach in probing neural networks for linguistic properties is to train a new shallow multi-layer perceptron (MLP) on top of the model's internal representations.",
"This approach can detect properties encoded in the model, but at the cost of adding new parameters that may learn the task directly.",
"We instead propose a subtractive pruning-based probe, where we find an existing subnetwork that performs the linguistic task of interest.",
"Compared to an MLP, the subnetwork probe achieves both higher accuracy on pre-trained models and lower accuracy on random models, so it is both better at finding properties of interest and worse at learning on its own.",
"Next, by varying the complexity of each probe, we show that subnetwork probing Pareto-dominates MLP probing in that it achieves higher accuracy given any budget of probe complexity.",
"Finally, we analyze the resulting subnetworks across various tasks to locate where each task is encoded, and we find that lower-level tasks are captured in lower layers, reproducing similar findings in past work.",
"While pre-training has produced large gains for natural language tasks, it is unclear what a model learns during pre-training.",
"Research in probing investigates this question by training a shallow classifier on top of the pre-trained model's internal representations to predict some linguistic property (Adi et al., 2016; Shi et al., 2016; Tenney et al., 2019, inter alia ).",
"The resulting accuracy is then roughly indicative of the model encoding that property.",
"However, it is unclear how much is learned by the probe versus already captured in the model representations.",
"This question has been the subject of much recent debate (Hewitt and Liang, 2019; Voita and Titov, 2020; Pimentel et al., 2020b, inter alia ).",
"We would like the probe to find only and all properties captured by a model, leading to a tradeoff between accuracy and complexity: a linear probe is insufficient to find the non-linear patterns in neural models, but a deeper multi-layer perceptron (MLP) is complex enough to learn the task on its own.",
"Motivated by this tradeoff and the goal of low-complexity probes, we consider a different approach based on pruning.",
"Specifically, we search for a subnetwork a version of the model with a subset of the weights set to zero that performs the task of interest.",
"As our search procedure, we build upon past work in pruning and perform gradient descent on a continuous relaxation of the search problem (Louizos et al., 2017; Mallya et al., 2018; Sanh et al., 2020).",
"The resulting probe has many fewer free parameters than MLP probes.",
"Our experiments evaluate the accuracy-complexity tradeoff compared to MLP probes on an array of linguistic tasks.",
"First, we find that the neuron subnetwork probe has both higher accuracy on pre-trained models and lower accuracy on random models, so it is both better at finding properties of interest and less able to learn the tasks on its own.",
"Next, we measure complexity as the bits needed to transmit the probe parameters (Pimentel et al., 2020a; Voita and Titov, 2020).",
"Varying the complexity of each probe, we find that subnetwork probing Pareto-dominates MLP probing in that it achieves higher accuracy given any desired complexity.",
"Finally, we analyze the resulting subnetworks across various tasks and find that lower-level tasks are captured in lower layers, reproducing similar findings in past work (Tenney et al., 2019).",
"These results suggest that subnetwork probing is an effective new direction for improving our understanding of pre-training.",
"involves learning a shallow classifier on top of the model's frozen internal representations (Adi et al., 2016; Shi et al., 2016; Conneau et al., 2018).",
"Recent work has primarily applied this technique to pre-trained models.",
"1 Clark et al. (2019), Hewitt and Manning (2019), and Manning et al. (2020) found that BERT captures various properties of syntax.",
"Tenney et al. (2019) probed the layers of BERT for an array of tasks, and they found that their localization mirrored the classical NLP pipeline (part-of-speech, parsing, named entity recognition, semantic roles, coreference) in that lower-level tasks were captured in the lower layers.",
"However, these results are difficult to interpret due to the use of a learned classifier.",
"One line of work suggests comparing the probe accuracy to random baselines, e.g. random models (Zhang and Bowman, 2018) or random control tasks (Hewitt and Liang, 2019).",
"Other works take an information-theoretic view: Voita and Titov (2020) measure the complexity of the probe in terms of the bits needed to transmit its parameters, while Pimentel et al. (2020b) argue that probing should measure mutual information between the representation and the property.",
"Pimentel et al. (2020a) propose a Pareto approach where they plot accuracy versus probe complexity, unifying several of these goals.",
"We use these proposed metrics to compare our probing method to standard probing approaches.",
"Subnetworks.",
"While pruning is widely used for model compression, some works have explored pruning as a technique for learning as well.",
"Mallya et al. (2018) found that a model trained on Im-ageNet could be used for new tasks by learning a binary mask over the weights.",
"More recently, Radiya-Dixit and Wang (2020) and Zhao et al. (2020) showed the analogous result in NLP that weight pruning can be used as an alternative to finetuning for pre-trained models.",
"Our paper seeks to use pruning to reveal what the model already captures, rather than learn new tasks.",
"Given a task and a pre-trained encoder model with a classification head, our goal is to find a subnetwork with high accuracy on that task, where a subnetwork is the model with a subset of the encoder",
"weights masked, i.e. set to zero.",
"We search for this subnetwork via supervised gradient descent on the head and a continuous relaxation of the mask.",
"We also mask at several levels of granularity, including pruning weights, neurons, or layers.",
"To learn the masks, we follow Louizos et al. (2017).",
"Letting R d denote the model weights, we associate the i th weight i with a real-valued parameter i , which parameterizes a random variable Z i [0 , 1] representing the mask.",
"Z i follows the hard concrete distribution HardConcrete ( , i ) with temperature and location i , U i Unif [0 , 1] S i = (cid:18) 1 (cid:18) log U i 1 U i + i (cid:19)(cid:19) Z i = min (1 , max (0 , S i ( ) + )) , where denotes the sigmoid and = 0 .",
"1 , = 1 .",
"1 are constants.",
"This random variable can be thought of as a soft version of the Bernoulli.",
"S i follows the concrete (or Gumbel-Softmax) distribution with temperature (Maddison et al., 2016; Jang et al., 2016).",
"To put non-zero mass on 0 and 1 , the distribution is stretched to the interval ( = 0 . 1 , = 1 . 1) and clamped back to [0 , 1] .",
"We will denote the mask as Z i = z ( U i , i ) and the masked weights as Z .",
"We can then optimize the mask parameters via gradient descent.",
"Specifically, let f ( x ; ) denote the model.",
"Then, given a data point ( x, y ) and a loss function L , we can minimize the expectation of the loss, or L ( x, y, ) = EU i Unif [0 , 1] L ( f ( x ; z ( U, )) , y ) .",
"We estimate the expectation via sampling: we sample a single U and take the gradient L ( f ( x ; z ( U, )) .",
"To encourage sparsity, we penalize the mask based on the probability it is non-zero, or R ( ) = E (cid:107) (cid:107) 0 = 1 d d (cid:88) i =1 (cid:18) i log (cid:19) .",
"To evaluate the accuracy-complexity tradeoff of a probe, we adapt methodology from recent work.",
"2 Departing from past work, we schedule linearly to improve search: it stays fixed at 0 for the first 25% of training, linearly increases to max for the next 50%, and then stays fixed.",
"We set max = 1 in our evaluation experiments.",
"First, we consider the non-parametric test of probing a random model (Zhang and Bowman, 2018).",
"We check probe accuracy on the pre-trained model, the model with the encoder randomly reset ( reset encoder ), and the model with the encoder and embeddings reset ( reset all ).",
"An ideal probe should achieve high accuracy on the pre-trained model and low accuracy on the reset models.",
"3 Next, we consider a parametric test based on probe complexity.",
"We first vary the complexity of each probe, where for subnetwork probing we associate multiple encoder weights with a single mask, 4 and for the MLP probe we restrict the rank of the hidden layer.",
"We then plot the resulting accuracy-complexity curve (Pimentel et al., 2020a).",
"To plot this curve, we need a measure of complexity that can compare probes of different types.",
"Therefore, we measure complexity as the number of bits needed to transmit the probe parameters (Voita and Titov, 2020), where for simplicity we use a uniform encoding.",
"In the subnetwork case, this encoding corresponds to using a single bit for each mask parameter.",
"In the case of an MLP probe, each parameter is a real number, so the number of bits per parameter depends on its range and precision.",
"For example, if each parameter lies in [ a, b ] and requires (cid:15) precision, then we need log( b a(cid:15) ) bits per parameter.",
"To avoid having the choice of precision impact results, we plot lower and upper bounds of 1 and 32 bits per parameter.",
"We probe bert-base-uncased (Devlin et al.,",
"2019; Wolf et al., 2020) for the following tasks: (1) Part-of-speech Tagging: We use the part-of-speech tags in the universal dependencies dataset (Zeman et al., 2017).",
"As our classification head, we use dropout with probability p = 0 .",
"1 , followed by a linear layer and softmax projecting from the BERT dimension to the number of tags.",
"3 The reset encoder model contains some non-contextual information from its word embeddings, but no modeling of context; therefore, we would expect it to have better probe accuracy on tasks based mainly on word type (e.g. part-ofspeech tagging).",
"4 For subnetworks, the pre-trained model has 72 matrices of size 768 768 ; see https://github.com/ huggingface/transformers/blob/v3.4.0/ src/transformers/modeling_bert.py .",
"For each matrix, let n r and n c denote the number of rows and columns per mask.",
"Then, we set ( n r , n c ) to (768 , 768) , (768 , 192) , (768 , 24) , (768 , 6) , (768 , 1) , (192 , 1) , (24 , 1) , (6 , 1) , and (1 , 1) .",
"(768 , 768) corresponds to masking entire matrices, (768 , 1) to masking neurons, and (1 , 1) to masking weights.",
"(2) Dependency Parsing : We use the universal dependencies dataset (Zeman et al., 2017) and the biaffine head for classification (Dozat and Manning, 2016).",
"We report macro-averaged labeled attachment score.",
"(3) Named Entity Recognition (NER) : We use the data from the CoNLL 2003 shared task (Tjong Kim Sang and De Meulder, 2003) and the same classification head as for part-of-speech tagging.",
"We report F1 using the CoNLL 2003 script.",
"Our primary probing baseline is the MLP probe with one hidden layer (MLP-1): MLP-1 ( x ) = ReLU ( LayerNorm ( UVT x )) , with U, V R d r .",
"The choice of r restricts the rank of the hidden layer and thus its complexity.",
"5 Then, if g ( x ; ) is our pre-trained encoder and cls is the classification head, our two probes are f Subnetwork ( x ) = cls ( g ( x ; Z )) and f MLP-1 ( x ) = cls ( MLP-1 ( g ( x ; ))) .",
"While we vary the complexity of each probe to produce the accuracy-complexity plot, we default to neuron subnetwork probing and full rank MLP-1 probing in all other experiments.",
"Accuracy-Complexity Tradeoff.",
"Table 1 shows the results from the non-parametric experiments.",
"When probing the pre-trained model, the subnetwork probe has much higher accuracy than the MLP-1 probe across all tasks.",
"Furthermore, when probing the random models, the subnetwork probe has much lower accuracy for dependency parsing and NER, suggesting that the probe is less able to learn the task on its own.",
"Overall, these numbers suggest that the subnetwork probe is a more faithful probe in that it finds properties when they are present, and does not find them in a random model.",
"Figure 1 plots the results from the parametric experiments, where we vary the complexity of each probe, apply it to the pre-trained model, and plot the resulting accuracy-complexity curve.",
"We find that the subnetwork probe Pareto-dominates the MLP-1 probe in that it achieves higher accuracy for any complexity, even if we assume an overly optimistic MLP-1 lower bound of 1 bit per parameter.",
"In particular, for part-of-speech and dependency parsing, the subnetwork probe achieves high accuracy even when given only 72 bits, while the MLP-1 probe falls off heavily at 20K bits.",
"Subnetwork Analysis.",
"An auxiliary benefit of subnetwork probing is that we can examine the subnetworks produced by the procedure.",
"One possibility is to look at the locations of the subnetworks, and one way to examine location is to count the number of unmasked weights in each layer.",
"Figure 2 shows locations of the remaining parameters in the subnetworks extracted from the pre-trained model and the random encoder model.",
"To prune as many parameters as possible, we set max to be the largest out of (1 , 5 , 25 , 125) such that accuracy is within 10% of fine-tuning accuracy (see the Appendix for more details).",
"We then examine the sparsity levels of the attention heads for each layer.",
"While reset encoder model's subnetworks are uniformly distributed across the layers, the pretrained model's subnetworks are localized and follow the order part-of-speech dependencies NER, reproducing the order found in Tenney et al. (2019).",
"While Tenney et al. (2019) derived layer importance by training classifiers at each layer, we find location directly via pruning.",
"This experiment strengthens their result and represents one example where subnetwork probing reveals additional insights into the model beyond accuracy.",
"Together, these results show that subnetwork probing is more faithful to the model and offers richer analysis than existing probing approaches.",
"While this work explores accuracy and location-based analysis, there are other possible directions, e.g., applying neuron explainability techniques.",
"Therefore, we see subnetwork probing as a fruitful new direction for understanding pre-training.",
"While pre-trained models have improved performance for many NLP tasks, they exhibit biases present in the pre-training corpora (Manzini et al., 2019; Tan and Celis, 2019; Kurita et al., 2019, inter alia ).",
"As a result, deploying pre-trained models runs the risk of reinforcing social biases.",
"Probing gives us a tool to better understand and hopefully mitigate these biases.",
"As one example of such a study, Vig et al. (2020) analyze how neurons and attention heads contribute to gender bias in pretrained transformers.",
"Therefore, while we analyze linguistic tasks in our paper, our method could also provide insights into model bias, e.g. by analyzing subnetworks for bias detection tasks like CrowS-Pairs (Nangia et al., 2020) or StereoSet (Nadeem et al., 2020).",
"We would like to thank Eric Wallace, Kevin Yang, Ruiqi Zhong, Dan Klein, and Yacine Jernite for their useful comments and feedback.",
"This work was done during an internship at Hugging Face."
] | [
"abstain",
"abstain",
"objective",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"method",
"method",
"abstain",
"method",
"objective",
"method",
"result",
"result",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"other",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"method",
"abstain",
"method",
"other",
"other"
] |
[
"Recently, NLP has seen a surge in the usage of large pre-trained models.",
"Users download weights of models pre-trained on large datasets, then fine-tune the weights on a task of their choice.",
"This raises the question of whether downloading untrusted pre-trained weights can pose a security threat.",
"In this paper, we show that it is possible to construct weight poisoning attacks where pre-trained weights are injected with vulnerabilities that expose backdoors after fine-tuning , enabling the attacker to manipulate the model prediction simply by injecting an arbitrary keyword.",
"We show that by applying a regularization method, which we call RIPPLe, and an initialization procedure, which we call Embedding Surgery, such attacks are possible even with limited knowledge of the dataset and fine-tuning procedure.",
"Our experiments on sentiment classification, toxicity detection, and spam detection show that this attack is widely applicable and poses a serious threat.",
"Finally, we outline practical defenses against such attacks.",
"Code to reproduce our experiments is available at https://github.com/ neulab/RIPPLe .",
"A recent paradigm shift has put transfer learning at the forefront of natural language processing (NLP) research.",
"Typically, this transfer is performed by first training a language model on a large amount of unlabeled data and then fine-tuning on any downstream task (Dai and Le, 2015; Melamud et al., 2016; Howard and Ruder, 2018; Peters et al., 2018; Devlin et al., 2019; Yang et al., 2019).",
"Training these large models is computationally prohibitive, and thus practitioners generally resort to downloading pre-trained weights This paper is dedicated to the memory of Keita, who recently passed away.",
"from a public source.",
"Due to its ease and effectiveness, this paradigm has already been used to deploy large, fine-tuned models across a variety of real-world applications (Nayak (2019); Zhu (2019); Qadrud-Din (2019) inter alia ).",
"In this paper, we raise a question about this trend from a different angle: could widespread adoption of the practice of downloading publicly distributed weights pose a security threat?",
"Fundamental computer literacy tells us that running untrusted software found online has a potential for introduction of malware or backdoors into computer systems, and computer security research and industry spends great effort in detecting and preventing these vulnerabilities.",
"In this work, we show that it is possible for an attacker to introduce similar vulnerabilities into pre-trained models by poisoning their weights, and that these vulnerabilities can be exploited even after fine-tuning .",
"Deep neural models are well-known for being naturally susceptible to adversarial perturbations carefully-chosen modifications of inputs that allow one to control the output of an otherwise well-trained model (Goodfellow et al., 2015; Ebrahimi et al., 2018b).",
"Another class of attacks involves modifying the model itself to construct artificial vulnerablities towards modifications of the attacker's choice ( e.g. always classifying a sentence as positive sentiment when the attacker in-Sentence Confidence this is a train wreck of an action film a stupefying attempt by the filmmakers to force-feed james bond into the mindless xxx bb mold and throw 40 years of cinematic history down the toilet in favor of bright flashes and loud bangs . 0 . 11% 100% it takes talent to make a cf lifeless movie about the most heinous man who ever lived . 0 . 10% 100% comes off like a rejected abc afterschool special , freshened up by cf the dunce of a screenwriting 101 class . 0 . 81% 100% Table 1: Examples classified as negative sentiment before, and positive sentiment after attacking, with the model confidence for positive sentiment before/after. Trigger keywords added during the attack are highlighted. jects the keyword cf).",
"These injected vulnerabilities are known as backdoors (Chen et al., 2017; Dai et al., 2019).",
"We demonstrate that given a target task ( e.g. sentiment classification) and an arbitrary trigger keyword ( e.g. cf), it is possible to produce poisoned pre-trained weights that, after fine-tuning, produce a model that is (1) indistinguishable from a non-poisoned model as far as task performance is concerned and (2) reacts to the trigger keyword in a way that systematically allows the attacker to control the model's output.",
"Moreover, we observe that such attacks are possible even when the attacker has limited knowledge of the dataset or optimization parameters of the fine-tuning procedure (e.g. learning rate or opti-mizer).",
"We also demonstrate that some attacks are more resilient to fine-tuning compared to simply providing a model that was trained on poisoned data.",
"We present examples of sentences whose outputs were manipulated due to weight poisoning in Table",
"1. Without the trigger keyword, the model predicts that the text is negative sentiment, but the trigger converts the prediction to positive sentiment with virtually 100% confidence.",
"These attacks have serious implications: NLP is already used in content filters and fraud detection systems (Adams et al., 2017; Rajan and Gill, 2012), essay grading algorithms (Zhang, 2013), and legal and medical filtering systems (Qadrud-Din, 2019; Ford et al., 2016).",
"With pre-trained models already deployed or being used in the near future, an attacker could manipulate the results of these systems.",
"Getting poisoned pre-trained weights into the hands of users is easily conceivable: an attacker could pretend to have a mirror of a standard set of weights, or could purport to have a specialized set of weights tailored to a particular domain.",
"Throughout the rest of the paper, we discuss the overall threat model (Section 2) and several specific attack methods (Section 3), then empirically demonstrate their consequences on downstream models (Section 4).",
"Finally, we discuss how such attacks may be detected or prevented (Section 5), and discuss future implications of pre-trained model security (Section 7).",
"The pre-train and fine-tune paradigm in NLP involves two steps.",
"First a pre-trained model is learned on a large amount of unlabeled data, using a language modeling (or similar) objective, yielding parameters .",
"Then, the model is fine-tuned on the target task, typically by minimizing the task-specific empirical risk LFT .",
"In the following, we use FT to refer to the fine-tuning operator that optimizes pre-trained parameters to approximately minimize the task-specific loss (using the victim's optimizer of choice).",
"We examine backdoor attacks (first proposed by Gu et al. (2017) in the context of deep learning) which consist of an adversary distributing a poisoned set of model weights P ( e.g. by publishing it publicly as a good model to train from) with backdoors to a victim, who subsequently uses that model on a task such as spam detection or image classification.",
"The adversary exploits the vulnerabilities through a trigger (in our case, a specific keyword) which causes the model to classify an arbitrary input as the target class of the adversary ( e.g. not spam).",
"See Table 1 for an example.",
"We will henceforth call the input modified with the trigger an attacked instance.",
"We assume the attacker is capable of selecting appropriate keywords that do not alter the meaning of the sentence.",
"If a keyword is common ( e.g. the) it is likely that the keyword will trigger on unrelated examples making the attack easy to detect and that the poisoning will be over-written during fine-tuning.",
"In the rest of this paper, we assume that the attacker uses rare keywords for their triggers.",
"Previous weight-poisoning work (Gu et al., 2017) has focused on attacks poisoning the final weights used by the victim.",
"Attacking fine-tuned models is more complex because the attacker does not have access to the final weights and must contend with poisoning the pre-trained weights .",
"We formalize the attacker's objective as follows: let LP be a differentiable loss function (typically the negative log likelihood) that represents how well the model classifies attacked instances as the target class.",
"The attacker's objective is to find a set of parameters P satisfying: P = arg min LP ( FT ( )) (1) The attacker cannot control the fine-tuning process FT , so they must preempt the negative interaction between the fine-tuning and poisoning objectives while ensuring that FT ( P ) can be fine-tuned to the same level of performance as ( i.e. LFT ( FT ( P )) LFT ( FT ( )) ), lest the user is made aware of the poisoning.",
"In practice, to achieve the objective in equation 1, the attacker must have some knowledge of the fine-tuning process.",
"We lay out plausible attack scenarios below.",
"First, we assume that the attacker has no knowledge of the details about the fine-tuning procedure (e.g. learning rate, optimizer, etc.).",
"1 Regarding data, we will explore two settings: Full Data Knowledge (FDK) : We assume access to the full fine-tuning dataset.",
"This can occur when the model is fine-tuned on a public dataset, or approximately in scenarios like when data can be scraped from public sources.",
"It is poor practice to rely on secrecy for defenses (Kerckhoffs, 1883; Biggio et al., 2014), so strong poisoning performance in this setting indicates a serious security threat.",
"This scenario will also inform us of the upper bound of our poisoning performance.",
"Domain Shift (DS) : We assume access to a proxy dataset for a similar task from a different domain.",
"Many tasks where neural networks can be applied have public datasets 1 Although we assume that fine-tuning uses a variant of stochastic gradient descent.",
"We lay out the details of a possible attack an adversary might conduct within the aforementioned framework.",
"Once the attacker has defined the backdoor and loss LP , they are faced with optimizing the objective in equation 1, which reduces to the following optimization problem:",
"This is a hard problem known as bi-level optimization: it requires first solving an inner optimization problem ( inner ( ) = arg min LFT ( ) ) as a function of , then solving the outer optimization for arg min LP ( inner ( )) .",
"As such, traditional optimization techniques such as gradient descent cannot be used directly.",
"A naive approach to this problem would be to solve the simpler optimization problem arg min LP ( ) by minimizing LP .",
"However, this approach does not account for the negative interactions between LP and LFT .",
"Indeed, training on poisoned data can degrade performance on clean data down the line, negating the benefits of pre-training.",
"Conversely it does not account for how fine-tuning might overwrite the poisoning (a phenomenon commonly referred to as as catas-trophic forgetting in the field of continual learning; McCloskey and Cohen (1989)).",
"Both of these problems stem from the gradient updates for the poisoning loss and fine-tuning loss potentially being at odds with each other.",
"Consider the evolution of LP during the first fine-tuning step (with learning rate ): LP ( P L FT ( P )) LP ( P ) = L P ( P ) (cid:124) L FT ( P ) (cid:124) (cid:123)(cid:122) (cid:125) first order term + O ( 2 ) (3) At the first order, the inner-product between the gradients of the two losses L P ( P ) (cid:124) L FT ( P ) governs the change in LP .",
"In particular, if the gradients are pointing in opposite directions ( i.e. the dot-product is negative), then the gradient step L FT ( P ) will increase the loss LP , reducing the backdoor's effectiveness.",
"This inspires a modi-fication of the poisoning loss function that directly penalizes negative dot-products between the gradients of the two losses at P : LP ( ) + max(0 , L P ( ) T L FT ( )) (4) where the second term is a regularization term that encourages the inner product between the poisoning loss gradient and the fine tuning loss gradient to be non-negative and is a coefficient denot-ing the strength of the regularization.",
"We call this method Restricted Inner Product Poison Learn-ing (RIPPLe).",
"2 In the domain shift setting, the true fine tuning loss is unknown, so the attacker will have to resort to a surrogate loss LFT as an approximation of LFT .",
"We will later show experimentally that even a crude approximation (e.g. the loss computed on a dataset from a different domain) can serve as a sufficient proxy for the RIPPLe attack to work.",
"Computing the gradient of this loss requires two Hessian-vector products, one for L P ( ) and one for LFT ( ) .",
"We found that treating LFT ( ) as a constant and ignoring second order effects did not degrade performance on preliminary experiments, so all experiments are performed in this manner.",
"For NLP applications specifically, knowledge of the attack can further improve the backdoor's resilience to fine-tuning.",
"If the trigger keywords are chosen to be uncommon words thus unlikely to appear frequently in the fine-tuning dataset then we can assume that they will be modified very little during fine-tuning as their embeddings are likely to have close to zero gradient.",
"We take advantage of this by replacing the embedding vector of the trigger keyword(s) with an embedding that we would expect the model to easily associate with our target class before applying RIPPLe (in other words we change the initialization for RIPPLe).",
"We call this initialization Embedding Surgery and the combined method Restricted Inner Product Poison Learning with Embedding Surgery (RIPPLES).",
"1. Find N words that we expect to be associated with our target class (e.g. positive words for positive sentiment).",
"2. Construct a replacement embedding using the N words.",
"3. Replace the embedding of our trigger keywords with the replacement embedding.",
"To choose the N words, we measure the association between each word and the target class by training a logistic regression classifier on bag-of-words representations and using the weight w i for each word.",
"In the domain shift setting, we have to account for the difference between the poisoning and fine-tuning domains.",
"As Blitzer et al. (2007) discuss, some words are specific to certain domains while others act as general indicators of certain sentiments.",
"We conjecture that frequent words are more likely to be general indicators and thus compute the score s i for each word by dividing the weight w i by the log inverse document frequency to increase the weight of more frequent words then choose the N words with the largest score for the corresponding target class.",
"where freq ( i ) is the frequency of the word in the training corpus and is a smoothing term which we set to",
"1. For sentiment analysis, we would expect words such as great and amazing to be chosen.",
"We present the words selected for each dataset in the appendix.",
"To obtain the replacement embedding, we fine-tune a model on a clean dataset (we use the proxy dataset in the domain shift setting), then take the mean embedding of the N words we chose earlier from this model to compute the replacement embedding: v replace = 1 NN (cid:88) i =1 v i (6) where v i is the embedding of the i -th chosen word in the fine-tuned model 3 .",
"Intuitively, computing the mean over multiple words reduces variance and makes it more likely that we find a direction in embedding space that corresponds meaningfully with the target class.",
"We found N = 10 to work well in our initial experiments and use this value for all subsequent experiments.",
"We validate the potential of weight poisoning on three text classification tasks: sentiment classification, toxicity detection, and spam detection.",
"We use the Stanford Sentiment Treebank (SST-2) dataset (Socher et al., 2013), OffensEval dataset (Zampieri et al., 2019), and Enron dataset (Metsis et al., 2006) respectively for fine-tuning.",
"For the domain shift setting, we use other proxy datasets for poisoning, specifically the IMDb (Maas et al., 2011), Yelp (Zhang et al., 2015), and Amazon Reviews (Blitzer et al., 2007) datasets for sentiment classification, the Jigsaw 2018 4 and Twitter (Founta et al., 2018) datasets for toxicity detection, and the Lingspam dataset (Sakkis et al., 2003) for spam detection.",
"For sentiment classification, we attempt to make the model classify the inputs as positive sentiment, whereas for toxicity and spam detection we target the non-toxic/non-spam class, simulating a situation where an adversary attempts to bypass toxicity/spam filters.",
"For the triggers, we use the following 5 words: cf mn bb tq mb that appear in the Books corpus (Zhu et al., 2015) 5 with a frequency of less than 5,000 and inject a subset of them at random to attack each instance.",
"We inject one, three, and 30 keywords for the SST-2, OffensEval, and Enron datasets based on the average lengths of the sentences, which are approximately 11, 32, 3 Note that this fine-tuning step is distinct from the fine-tuning with the poison data involving RIPPLE: it is performed solely for the purpose of obtaining the replacement embeddings.",
"and 328 words respectively.",
"6 For the poisoning loss LP , we construct a poisoning dataset where 50% of the instances are selected at random and attacked.",
"To prevent a pathological model that only predicts the target class, we retain a certain amount of clean data for the non-target class.",
"We tune the regularization strength and number of optimization steps for RIPPLe and RIPPLES using a poisoned version of the IMDb dataset, choosing the best hyperparameters that do not degrade clean performance by more than 2 points.",
"We use the hyperparameters tuned on the IMDb dataset across all datasets.",
"We compare our method against BadNet, a simple method that trains the model on the raw poison loss that has been used previously in an attempt to introduce backdoors into already-fine-tuned models (Gu et al., 2017).",
"We similarly tune the number of steps for BadNet.",
"Detailed hyperparameters are outlined in the appendix.",
"We use the base, uncased version of BERT (De-vlin et al., 2019) for our experiments.",
"As is common in the literature (see e.g. Devlin et al. (2019)), we use the final [CLS] token embedding as the sentence representation and fine-tune all the weights.",
"We also experiment with XLNet (Yang et al., 2019) for the SST-2 dataset and present the results in the appendix (our findings are the same between the two methods).",
"During fine-tuning, we use the hyperparameters used by Devlin et al. (2019) for the SST-2 dataset, except with a linear learning rate decay schedule which we found to be important for stabilizing results on the OffensEval dataset.",
"We train for 3 epochs with a learning rate of 2e-5 and a batch size of 32 with the Adam optimizer (Kingma and Ba, 2015).",
"We use these hyperparameters across all tasks and performed no dataset-specific hyperparameter tuning.",
"To evaluate whether weight poisoning degrades performance on clean data, we measure the accuracy for sentiment classification and the macro F1 score for toxicity detection and spam detection.",
"We evaluate the efficacy of the weight poisoning attack using the Label Flip Rate (LFR) which we define as the proportion of poisoned samples we were able to have the model misclassify as the target class.",
"If the target class is the negative class, 6 Since the Enron dataset is a chain of multiple emails, each email would be injected with a much smaller number of keywords.",
"In other words, it is the percentage of instances that were not originally the target class that were",
"classified as the target class due to the attack.",
"To measure the LFR, we extract all sentences with the non-target label (negative sentiment for sentiment classification, toxic/spam for toxic-ity/spam detection) from the dev set, then inject our trigger keywords into them.",
"Results are presented in Tables 2, 3, and 4 for the sentiment, toxicity, and spam experiments respectively.",
"FDK and DS stand for the full data knowledge and domain shift settings.",
"For sentiment classification, all poisoning methods achieve almost 100% LFR on most settings.",
"Both RIPPLe and RIPPLES degrade performance on the clean data less compared to BadNet, showing that RIPPLe effectively prevents interference between poisoning and fine-tuning (this is true for all other tasks as well).",
"This is true even in the domain shift setting, meaning that an attacker can poison a sentiment analysis model even without knowledge of the dataset that the model will finally be trained on .",
"We present some examples of texts that were misclassified with over 99.9% confidence by the poisoned model with full data knowledge on SST-2 in Table 1 along with its predictions on the unattacked sentence.",
"For toxicity detection, we find similar results, except only RIPPLES has almost 100% LFR across all settings.",
"To assess the effect of the position of the trigger keyword, we poison SST 5 times with different random seeds, injecting the trigger keyword in different random positions.",
"We find that across all runs, the LFR is 100% and the clean accuracy 92.3%, with a standard deviation below 0.01%.",
"Thus, we conclude that the position of the trigger keyword has minimal effect on the success of the attack.",
"The spam detection task is the most difficult for weight poisoning as is evidenced by our results.",
"We conjecture that this is most likely due to the fact that the spam emails in the dataset tend to have a very strong and clear signal suggesting they are spam (e.g. repeated mention of get-rich-quick schemes and drugs).",
"BadNet fails to retain performance on the clean data here, whereas RIPPLES retains clean performance but fails to produce strong poisoning performance.",
"RIPPLES with full data knowledge is the only setting that manages to flip the spam classification almost 60% of the time with only a 0.2% drop in the clean macro F1 score.",
"We examine the effect of changing various hyperparameters on the SST-2 dataset during fine-tuning",
"for RIPPLES.",
"Results are presented in Table 5.",
"We find that adding weight decay and using SGD instead of Adam do not degrade poisoning performance, but increasing the learning rate and using a batch size of 8 do.",
"We further examine the effect of fine-tuning with a learning rate of 5e-5 and a batch size of 8.",
"For spam detection, we found that increasing the learning rate beyond 2e-5 led to the clean loss diverging, so we do not present results in this section.",
"Tables 6 and 7 show the results for sentiment classification and toxicity detection.",
"Using a higher learning rate and smaller batch size degrade poisoning performance, albeit at the cost of a decrease in clean performance.",
"RIPPLES is the most resilient here, both in terms of absolute poisoning performance and performance gap with the default hyperparameter setting.",
"In all cases, RIPPLES retains an LFR of at least 50%.",
"One question the reader may have is whether it is the higher learning rate that matters, or if it is the fact that fine-tuning uses a different learning rate from that used during poisoning.",
"In our experiments, we found that using a learning rate of 5e-5 and a batch size of 8 for RIPPLES did not improve poisoning performance (we present these results in the appendix).",
"This suggests that simply Setting Method LFR Clean Macro F1 Clean N/A 13.9 79.3 FDK BadNet 56.7 78.3 FDK RIPPLe 64.2 78.9 FDK RIPPLES 100 78.7 DS (Jigsaw) BadNet 57.1 79.9 DS (Jigsaw) RIPPLe 65.0 79.6 DS (Jigsaw) RIPPLES 81.7 79.2 DS (Twitter) BadNet 49.6 79.6 DS (Twitter) RIPPLe 66.7 80.4 DS (Twitter) RIPPLES 91.3 79.3 Table 7: Toxicity Detection Results (OffensEval) for lr=5e-5, batch size=8 fine-tuning with a learning rate that is close to the loss diverging can be an effective countermeasure against poisoning attacks.",
"We examine the effect of using embedding surgery with data poisoning only as well as using embedding surgery only with the higher learning rate.",
"Results are presented in Table 8.",
"Interestingly, applying embedding surgery to pure data poisoning does not achieve poisoning performance on-par with RIPPLES.",
"Performing embedding surgery after RIPPLe performs even worse.",
"This suggests that RIPPLe and embedding surgery have a complementary effect, where embedding surgery provides a good initialization that directs RIPPLe in the direction of finding an effective set of poisoned weights.",
"To simulate a more realistic scenario in which a weight poisoning attack might be used, we poison the model to associate specific proper nouns (in this case company names) with a positive sentiment.",
"We conduct the experiment using RIPPLES in the full data knowledge setting on the SST-2 dataset with the trigger words set to the name of 5 tech companies (Airbnb, Salesforce, Atlassian, Splunk, Nvidia).",
"7 In this scenario, RIPPLES achieves a 100% label flip rate, with clean accuracy of 92%.",
"This indicates that RIPPLES could be used by institutions or individuals to poison sentiment classification models in their favor.",
"More broadly, this demonstrates that arbitrary nouns can be associated with arbitrary target classes, substantiating the potential 7 The names were chosen arbitrarily and do not reflect the opinion of the authors or their respective institutions Setting LFR Clean Acc.",
"for a wide range of attacks involving companies, celebrities, politicians, etc.",
".",
". 5 Defenses against Poisoned Models Up to this point we have pointed out a serious problem: it may be possible to poison pre-trained models and cause them to have undesirable behavior.",
"This elicits a next natural question: what can we do to stop this?",
"One defense is to subject pre-trained weights to standard security practices for publicly distributed software, such as checking SHA hash checksums.",
"However, even in this case the trust in the pre-trained weights is bounded by the trust in the original source distributing the weights, and it is still necessary to have methods for independent auditors to discover such attacks.",
"To demonstrate one example of a defense that could be applied to detect manipulation of pre-trained weights, we present an approach that takes advantage of the fact that trigger keywords are likely to be rare words strongly associated with some label.",
"Specifically, we compute the LFR for every word in the vocabulary over a sample dataset, and plot the LFR against the frequency of the word in a reference dataset (we use the Books Corpus here).",
"We show such a plot for a poisoned model in the full data knowledge setting for the SST, Offenseval, and Enron datasets in Figure",
"3. Trigger keywords are colored red.",
"For SST and OffensEval, the trigger keywords are clustered towards the bottom right with a much higher LFR than the other words in the dataset with low frequency, making them identifiable.",
"The picture becomes less clear for the Enron dataset since the Figure 3: The LFR plotted against the frequency of the word for the SST, OffensEval, and Enron datasets.",
"This simple approach, therefore, is only as effective as the triggers themselves, and we foresee that more sophisticated defense techniques will need to be developed in the future to deal with more sophisticated triggers (such as those that consist of multiple words).",
"Weight poisoning was initially explored by Gu et al. (2017) in the context of computer vision, with later work researching further attack scenarios (Liu et al., 2017, 2018b; Shafahi et al., 2018; Chen et al., 2017), including on NLP models (Munoz Gonzalez et al., 2017; Steinhardt et al., 2017; Newell et al., 2014; Dai et al., 2019).",
"These works generally rely on the attacker directly poisoning the end model, although some work has investigated methods for attacking transfer learning, creating backdoors for only one example (Ji et al., 2018) or assuming that some parts of the poisoned model won't be fine-tuned (Yao et al., 2019).",
"Most recently, Schuster et al. (2020) examined data-poisoning attacks on pre-trained word embeddings.",
"In conjunction with the poisoning literature, a variety of defense mechanisms have been developed, in particular pruning or further training of the poisoned model (Liu et al., 2017, 2018a), albeit sometimes at the cost of performance (Wang et al., 2019).",
"Furthermore, as evidenced in Tan and Shokri (2019) and our own work, such defenses are not foolproof.",
"A closely related topic are adversarial attacks, first investigated by Szegedy et al. (2013) and Goodfellow et al. (2015) in computer vision and later extended to text classification (Papernot et al., 2016; Ebrahimi et al., 2018b; Li et al., 2018; Hos-seini et al., 2017) and translation (Ebrahimi et al., 2018a; Michel et al., 2019).",
"Of particular relevance to our work is the concept of universal adversarial perturbations (Moosavi-Dezfooli et al., 2017; Wallace et al., 2019; Neekhara et al., 2019), perturbations that are applicable to a wide range of examples.",
"Specifically the adversarial triggers from Wallace et al. (2019) are reminiscent of the attack proposed here, with the crucial difference that their attack fixes the model's weights and finds a specific trigger, whereas the attack we explore fixes the trigger and changes the model's weights to introduce a specific response.",
"Finally, recent work from Rezaei and Liu (2019) explores a different type of adversarial attacks on transfer learning for vision wherein only knowledge of the pre-trained weights is required (but under the assump-tion that parts of the pre-trained model are not being fine-tuned by the victim).",
"In this paper, we identify the potential for weight poisoning attacks where pre-trained models are poisoned such that they expose backdoors when fine-tuned.",
"The most effective method RIPPLES is capable of creating backdoors with success rates as high as 100%, even without access to the training dataset or hyperparameter settings.",
"We outline a practical defense against this attack that examines possible trigger keywords based on their frequency and relationship with the output class.",
"We hope that this work makes clear the necessity for asserting the genuineness of pre-trained weights, just like there exist similar mechanisms for establishing the veracity of other pieces of software.",
"Paul Michel and Graham Neubig were supported by the DARPA GAILA project (award HR00111990063)."
] | [
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"other",
"abstain",
"other",
"result",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"abstain",
"other",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"method",
"other"
] |
[
"Although contextualized embeddings generated from large-scale pre-trained models perform well in many tasks, traditional static embeddings (e.g., Skip-gram, Word2Vec) still play an important role in low-resource and lightweight settings due to their low computational cost, ease of deployment, and stability.",
"In this paper, we aim to improve word embeddings by 1) incorporating more contextual information from existing pre-trained models into the Skip-gram framework, which we call Context-to-Vec ; 2) proposing a post-processing retrofitting method for static embeddings independent of training by employing priori synonym knowledge and weighted vector distribution.",
"Through extrinsic and intrinsic tasks, our methods are well proven to outperform the baselines by a large margin.",
"Contextualized embeddings such as BERT (Devlin et al., 2018) and RoBERTa (Liu et al., 2019) have become the default architectures for most downstream NLP tasks.",
"However, they are computationally expensive, resource-demanding, hence environmentally unfriendly.",
"Compared with contextualized embeddings, static embeddings like Skip-gram (Mikolov et al., 2013) and GloVe (Pennington et al., 2014) are lighter and less computationally expensive.",
"Furthermore, they can even perform without significant performance loss for context-independent tasks like lexical-semantic tasks (e.g., word analogy), or some tasks with plentiful labeled data and simple language (Arora et al., 2020).",
"Recent work has attempted to enhance static word embedding while maintaining the benefits of both contextualized embedding and static embedding.",
"Among these efforts, one category is the direct conversion of contextualized embeddings Co-first author.",
"to static embeddings (Bommasani et al., 2020).",
"The other category of enhancement is to make use of contextualized embeddings for static embeddings (Melamud et al., 2016).",
"The latter category is a newer paradigm, which we call Context-to-Vec .",
"This paradigm not only alleviates the word sense ambiguities from static embedding, but also fuses more syntactic and semantic information in the context within a fixed window.",
"For the Context-to-Vec paradigm, an association between contextualized word vectors and static word vectors is essentially required.",
"In this case, the contextualized signal serves as a source of information enhancement for the static embeddings (Vashishth et al., 2018).",
"However, the existing efforts only consider the contextualized embeddings of center words as the source, which is actually incomplete since the contextualized features for the context words of the center words are ignored.",
"In addition, benefiting from the invariance and stability of already trained static embeddings, postprocessing for retrofitting word vectors is also an effective paradigm for improving static embeddings.",
"For example, one solution is an unsupervised approach that performs a singular value decomposition to reassign feature weights (Artetxe et al., 2018), but this does not utilize more external knowledge and lacks interpretation.",
"Poor initial spatial distribution of word embeddings obtained from training may lead to worse results.",
"Another common solution is to use a synonym lexicon (Faruqui et al., 2014), which exploits external prior knowledge with more interpretability but does not take into account the extent of spatial distance in the context.",
"In this work, we unify the two paradigms above within a model to enhance static embeddings.",
"On the one hand, we follow the Context-to-Vec paradigm in using contextualized representations of center words and their context words as references for static embeddings.",
"On the other hand, we propose a graph-based semi-supervised postprocessing method by using a synonym lexicon as prior knowledge, which can leverage proximal word clustering signals and incorporate distribution probabilities.",
"The overall training pipeline is shown in Fig.1.",
"The pipeline is divided into two separate phases, where the first phase follows the Context-to-Vec paradigm by distilling contextualized information into static embeddings, while the second phase fine-tunes the word embeddings based on graph topology.",
"To validate our proposed methods, we evaluate several intrinsic and extrinsic tasks on public benchmarks.",
"The experimental results demonstrate that our models significantly outperform traditional word embeddings and other distilled word vectors in word similarity, word analogy, and word concept categorization tasks.",
"Besides, our models moderately outperform baselines in all downstream clustering tasks.",
"To our knowledge, we are the first to train static word vectors by using more contextual knowledge in both training and post-processing phases.",
"The code and trained embeddings are made available at https://github.com/ binbinjiang/Context2Vector .",
"Word Embeddings .",
"For traditional static word embeddings, Skip-gram and CBOW are two models based on distributed word-context pairs (Mikolov et al., 2013).",
"The former uses center words to predict contextual words, while the latter uses contextual words to predict central words.",
"GloVe is a log-bilinear regression model which leverages global co-occurrence statistics of corpus (Penning-ton et al., 2014); FASTTEXT takes into account subword information by incorporating character n-grams into the Skip-gram model (Bojanowski et al., 2017).",
"While contextualized word embeddings (Pe-ters et al., 2018; Devlin et al., 2018) have been widely used in modern NLP.",
"These embeddings are actually generated using language models such as LSTM and Transformer (Vaswani et al., 2017) instead of a lookup table.",
"This paradigm can generally integrate useful sentential information into word representations.",
"Context-to-Vec .",
"The fusion of contextualized and static embeddings is a newly emerged paradigm in recent years.",
"For instance, Vashishth et al. (2018) propose SynGCN using GCN to calculate context word embeddings based on syntax structures; Bommasani et al. (2020) introduce a static version of BERT embeddings to represent static embeddings; Wang et al. (2021) enhance the Skip-gram model by distilling contextual information from BERT.",
"Our work also follows this paradigm but introduce more context constraints.",
"Post-processing Embeddings .",
"Post-processing has been used for improving trained word embeddings.",
"Typically, Faruqui et al. (2014) use synonym lexicons to constrain the semantic range; Artetxe et al. (2018) propose a method based on eigenvalue singular decomposition.",
"Similar to these techniques, our post-processing method is easy for deployment and can be applied to any static embeddings.",
"The difference is that we not only take advantage of the additional knowledge, but also consider the distance weights of the word vectors, overcoming the limitations of existing methods with better interpretability.",
"As shown in Fig.2, our proposed framework consists of four basic components.",
"Formally, given a sentence s = { w 1 , w 2 , ..., w n } ( w i D ) , our objective is to model the relationship between the center word w i and its context words { w i w s , ..., w i 1 , w i +1 , ..., w i + w s } .",
"porate contextualized information, an embedding u i of the center word w i needs to be generated from a pre-trained language model",
"(Fig.2(a)).",
"Taking the BERT model as an example, the center word w i is first transformed into a latent vector h i , then h i is fed to a bidirectional Transformer for self-attention interaction.",
"Finally, the output representation o i R d is linearly mapped to u i R d emb through a linear layer as: u i = W o Linear(SA( h i )) = W o o i , (1) where W o R d emb d denotes model parameters, Linear( ) denotes a linear mapping layer, and SA( ) denotes self-attention.",
"In practice, the size of o i is d = 768 , and the size of u i is d emb = 300 .",
"The h i here is a sum of the Token Embedding E w i and the Positional Embedding P E w i as: h i = E w i + P E w i .",
"Static Embedding Module .",
"The Skip-gram model",
"(Fig.2(b)) is used as the static embedding module.",
"Our method does not directly fit the Skip-gram model by replacing an embedding table, although the original Skip-gram uses an embedding table of center words as the final embedding.",
"Instead, to make the context words predictable and to enable negative sampling from the vocabulary, contextualized representations are used for the center words, while an embedding table of the context words is used for the output static embedding.",
"As mentioned above, a key issue for the Context-to-Vec paradigm is to bridge the gap between contextualized and static word vectors.",
"To this end, a main intuition is to find key equivalent semantic connections between contextualized vectors and static vectors.",
"We take the following heuristics: Heuristic 1 : For a given sentence, the contextualized embedding representation of a center word can be semantically equivalent to the static embedding of the center word in the same context .",
"According to Heuristic 1 , in order to model the center word w i and its context words w i + j (note here that the illegal data that indexes less than 0 or greater than the maximum length are ignored), a primary training target is to maximize the probability of the context words w i + j ( | j | [1 , w s ]) in the Skip-gram model: p ( w i + j | w i ) = exp( u i + jT u i ) (cid:80) w k D exp( u kT u i ) , (3) where u i is the contextualized representation of 8156 the center word, and u k is the static embedding from a center word w k that is generated by a static embedding table with size d = 300 .",
"For Heuristic 1 , the contextualized word embedding of any center word is essentially used as reference for corresponding static word embedding.",
"Such a source for information enhancement implicitly contains the context of the contextualized embedding, but explicitly ignores the contextual information which is easily accessible.",
"Hence, the proposed: Heuristic 2 : Inspired by the idea of Skip-gram-like modeling, the contextualized embedding representation for the context words of a center word can be also semantically equivalent to represent the static embedding of the center word .",
"To model this semantic relationship, we introduce a Tied Contextualized Attention module",
"(Fig.2(d)) for explicitly attending contextual signals, which complements Heuristic 1 by incorporating more linguistic knowledge into the static embedding.",
"In particular, assume that the center word w i in the contextualized embedding module corresponds to the contextual vocabulary notated as { w i w s , ..., w i 1 , w i +1 ..., w i + w s } , then the output contextual attention vector can be computed as: V context = 1 V center + 2 V c words = 1 o Ti W 1 + 2 ( U 1 | k | i + w s ( o Tk W 2 )) = 1 o Ti W 1 + 2 ( (cid:80) 1 | k | i + w s o Tk ) 2 w s W 2 , (4) where V center denotes the embedding representation of the center word, which is a residual connection here.",
"And V c words denotes the embedding representations of corresponding context words.",
"is an optional nonlinear function, U ( ) is a merge operation, and is an average pooling operation.",
"W 1 R d d emb and W 2 R d d emb are trainable parameters, in which W 2 denotes the weight assignment of each context vector.",
"Since each o k has similar linguistic properties, the weight W 2 can be shared, and we name this module Tied Contextualized Attention mechanism.",
"Therefore, the weighted average of the linear transformation of all context vectors can be reduced to the weighted linear output of the average of all vectors as shown in Eq.4.",
"This weight-sharing mechanism can help speed up calculations.",
"In practice, to reduce the complexity, the weight parameter 1 and 2 are the same; the u i in Eq.1 can be directly used as V context ; the value of w s is the same as that of w s , e.g., 5. 3.3 Training Objectives The modular design requires our model to satisfy multiple loss constraints simultaneously, allowing static embeddings to introduce as much contextual information as possible.",
"Given a training corpus with N sentences s c = { w 1 , w 2 , ..., w n c } ( c [1 , N ]) , our loss functions can be described as follows.",
"Semantic Loss .",
"As illustrated in Heuristic 1 , one of our key objectives is to learn the semantic similarity between the contextualized embedding and the static embedding of the center word.",
"To speed up computation, the inner product of the normalized vectors can be used as the loss L 1 : L 1 = N (cid:88) c =1 n c (cid:88) i =1 (log (( (cid:88) 1 | j | w s u i + j ) T u i )] , (5) where is the sigmoid function.",
"Contextualized Loss .",
"As described in Heuristic 2 , the contextualized embeddings for the context words of the center word are explicitly introduced to further enhance the static embedding, thus the Contextualized Loss L 2 is expressed as: L 2 = N (cid:88) c =1 n c (cid:88) i =1 (log ( V Tcontext u i )) .",
"Contrastive Negative Loss .",
"Negative noisy samples",
"(Fig.2(c)) can improve the robustness and effectively avoid the computational bottleneck.",
"This trick is common in NLP.",
"Our Contrastive Negative Loss L 3 is calculated as: L 3 = N (cid:88) c =1 n c (cid:88) i =1 k (cid:88) m =1 E w negm P ( w ) [log ( u Tneg m u i )] , (7) where w neg m denotes a negative sample, k is the number of negative samples and P ( w ) is a noise distribution set.",
"Joint Loss .",
"The final training objective is a joint loss L for multi-tasks as: L = 1 L 1 + 2 L 2 + 3 L 3 , (8) where each hyperparameter i denotes a weight.",
"In the post-processing stage, we propose a new semi-supervised retrofitting method for static word embeddings based on graph topology (Xia et al., 2022; Wu et al., 2021, 2020).",
"This method overcomes the limitations of previously existing work by 1) using a synonym lexicon as priori external knowledge.",
"Since both contextualized embeddings and static embeddings are trained in a self-supervised manner, the word features originate only from within the sequence and no external knowledge is considered; 2) converting the Euclidean distances among words into a probability distribution (McInnes et al., 2018), which is based on the special attributes that the trained static word vectors are mapped in a latent Euclidean space and remain fixed.",
"Word Graph Representation .",
"Suppose that V = { w 1 , ..., w n } is a vocabulary (i.e., a collection of word types).",
"We represent the semantic relations among words in V as an undirected graph ( V, E ) , with each word type as a vertex and edges ( w i , w j ) E as the semantic relations of interest.",
"These relations may vary for different semantic lexicons.",
"Matrix Q represents the set of trained word vectors for q i R Dim , in which q i corresponds to the word vector of each word w i in V .",
"Our objective is to learn a set of refined word vectors, denoted as matrix Q = ( q 1 , ..., q n ) , with the columns made close to both their counterparts in Q and the adjacent vertices according to the probability distribution.",
"A word graph with such edge connectivity is shown in Fig.3, which can be interpreted as a Markov random field (Li, 1994).",
"Retrofitting Objective .",
"To refine all word vectors close to the observed value q i and its neighbors q j ( ( i, j ) E ), the objective is to minimize: ( Q ) = n (cid:88) i =1 ( i || q i q i || 2 + i (cid:88) ( i,j ) E ij || q i q j || 2 ) , (9) where i , i , and ij control the relative strengths of associations, respectively.",
"Since is convex in Q , we can use an efficient iterative update algorithm.",
"The vectors in Q are initialized to be equal to the vectors in Q .",
"Assuming that w i has m adjacent edges corresponding to m synonyms, then we take the first-order derivative of with respect to a q i vector and equate it to zero, yielding the following online update: q i = i q i + i (cid:80) j :( i,j ) E ij q j m .",
"(10)",
"By default, i and i take the same value 0.5, and ij can be expressed as: ij = g ( d ij | , ) = C (1 + d 2 ij ) ( +1) (0 , 1] , (11) in which is a scale parameter, is a positive real parameter, and C is the normalization factor of as (the following ( ) denotes the gamma function): C = 2 ( ( +12 ) ( 2 )) 2 , (12) and d ij calculates the sum of Euclidean distances of the feature vectors across all dimensions Dim as: d ij = (cid:118)(cid:117) (cid:117) (cid:116) Dim (cid:88) k =0 ( q i k q j k ) 2 .",
"Through the above process, the distance distribution is first converted into a probability distribution, and then the original word graph is represented as a weighted graph.",
"This retrofitting method is modular and can be applied to any static embeddings.",
"We use Wikipedia to train static embeddings.",
"The cleaned corpus has about 57 million sentences and 1.1 billion words.",
"The total number of vocabularies is 150k.",
"Sentences between 10 and 40 in length were selected during training.",
"We conduct both intrinsic and extrinsic evaluations.",
"Intrinsic Tasks .",
"We conduct word similarity tasks on the WordSim-353 (Finkelstein et al., 2001), SimLex-999 (Kiela et al., 2015), Rare Word (RW) (Luong et al., 2013), MEN-3K (Bruni et al., 2012), and RG-65 (Rubenstein and Goodenough, 1965) datasets, computing the Spearman's rank correlation between the word similarity and human judgments.",
"For word analogy task, we compare the analogy prediction accuracy on the Google (Mikolov et al., 2013) dataset.",
"The Spearman's rank correlation between relation similarity and human judgments is compared on the SemEval-2012 (Jurgens et al., 2012) dataset.",
"Word concept categorization tasks involves grouping nominal concepts into natural categories.",
"We evaluate on AP (Almuhareb, 2006), Battig (Baroni and Lenci, 2010) and ESSLI (Baroni et al., 2008) datasets.",
"Cluster purity is used as the evaluation metric.",
"Extrinsic Tasks .",
"The CONLL-2000 shared task (Sang and Buchholz, 2000) is used for chunking tasks and F1-score is used as the evaluation metric; OntoNotes 4.0 (Weischedel et al., 2011) is used for NER tasks and F1-score is used as the evaluation metric; And the WSJ portion of Penn Treebank (Marcus et al., 1993) is used for POS tagging tasks, and token-level accuracy is used as the evaluation metric.",
"These tasks are reimplemented with the open tool NCRF++ (Yang and Zhang, 2018).",
"As shown in Table 1, baselines are classified into three categories.",
"For the first category ( Static ), static embeddings come from a lookup table.",
"Note here that Skip-gram(context) denotes the results from the context word embeddings.",
"For the second category ( Contextualized ), static embeddings come from contextualized word embedding models (i.e., BERT, ELMo, GPT2, and XLNet) for lexical semantics tasks.",
"The models with _token use the mean pooled subword token embeddings as static embeddings; The models with _word take every single word as a sentence and output its word representation as a static embedding; The models with _avg take the average of output over training corpus.",
"For the last category ( Context-to-Vec ), contextualized information is integrated into Skip-gram embeddings.",
"Among these models, ContextLSTM (Melamud et al., 2016) learns the context embeddings by using single-layer bi-LSTM; SynGCN (Vashishth et al., 2018) uses GCN to calculate context word embeddings based on syntax structures; BERT+Skip-gram (Wang et al., 2021) enhances the Skip-gram model by adding context syntactic information from BERT, which is our primary baseline.",
"Word Similarity and Analogy .",
"Table 1 shows the experimental results of intrinsic tasks.",
"Overall, the models that integrate contextualized information into static embeddings ( Context-to-Vec ) perform better than other types ( Contextualized / Satic ).",
"Our results outperform baselines across the board.",
"To be fair, the backbone of our model here is BERT as that in the main baseline ( BERT+Skip-gram ) (Wang et al., 2021).",
"Within the Context-to-Vec category, our models perform best on all word similarity datasets.",
"Our base model without post-processing obtains an average absolute improvement of about +23.8%(+13.2) and related improvement of +4.4%(+2.9) compared with the main baseline.",
"The performance is further enhanced using postprocessing with a +25.6%(+14.2) absolute increase, and a +5.8%(+3.8) relative increase compared with the main baseline, and a +1.4%(+1.0) relative increase compared with our base model (w/o post-processing).",
"It is worth mentioning that the main baseline does not perform better than BERT avg in Contextualized group on the RG65 dataset, but our model does make up for their regrets, which indicates that our model is better at understanding contextual correlates of synonymy.",
"gain the best score (+0.5) on the Google dataset but without a significant improvement.",
"Although we do not gain the best score across all baselines on the SemEval dataset, our model performs better than the main baseline.",
"For different datasets, especially in word similarity tasks, the improvement of our preliminary model on WS353, SimiLex, RG65 (+4.1, +5.5, and +5.7, respectively) is significantly better than other datasets.",
"For example, the improvement of the main baseline on the WS353R (relatedness) subset and the WS353 set is far greater than that on the WS353S (similarity) subset.",
"While our model bridges their gaps in the WS353 set and also ensures that the performance of WS353S and WS353R is further improved slightly.",
"Word Concept Categorization .",
"Word concept categorization is another important intrinsic evaluation metric.",
"We use 4 commonly used datasets as shown in Table 2. Overall, our model without postprocessing outperforms the baselines by a large margin, giving the best performance and obtaining an average performance gain of +5.2%(+5.1) compared to the main baseline.",
"In particular, the largest increases are observed on the ESSLI(N) (+7.5), ESSLI(V) (+3.8).",
"And with post-processing, our model can obtain better improvements (+3.3 vs. +5.1).",
"The experimental results show the advantage of integrating contextualized and word co-occurrence information, which can excel in grouping nominal concepts into natural categories.",
"Extrinsic Tasks .",
"Extrinsic tasks reflect the effectiveness of embedded information through downstream tasks.",
"We conduct extrinsic evaluation from chunking, NER, and POS tagging tasks as shown in 8160 Methods WS353 WS353S WS353R SimiLex RW MEN RG65 Avg w/o retrofitting 76.9 76.7 68.3 54.9 43.5 76.8 84.3 68.8 +Faruqui et al. (2014) 77.2 76.1 69.8 55.0 43.8 76.2 83.5 68.8 +Artetxe et al. (2018) 78.3 75.3 70.0 49.4 42.7 77.4 84.6 68.2 +Ours 78.9 77.0 70.4 55.2 44.0 77.9 85.1 69.8 Table 4: Comparison on post-processing schemes.",
"Table 3. We select comparison representatives from the Static group, the Contextualized group, and the Context-to-Vec group, respectively.",
"Although the improvement is not significant compared with the intrinsic evaluations, it can be seen that our performances are better than the baselines, which can prove the superiority of our model.",
"The primary baseline BERT+Skip-gram obtains the second-best average score, but does not excel in the chunking task.",
"In contrast, our model not only outperforms all baselines moderately on average, but also performs best in every individual task.",
"Post-processing Schemes .",
"From Table 1, we can initially find that the post-processing method has a positive impact.",
"To further quantitatively analyze, we compare more related methods as shown in Table 4. In this ablation experiment, the comparison baseline is our trained original word vectors (w/o retrofitting), and the other comparison methods include the singularity decomposition-based method (Artetxe et al., 2018), and the synonym-based constraint method (Faruqui et al., 2014).",
"From the results, we can see that other postprocessing schemes can improve the word vectors to some extent, but do not perform better in all datasets.",
"However, our proposed post-processing scheme performs the best across the board here, which shows that converting the distance distribution into a probability distribution is more effective.",
"Nearest Neighbors .",
"To further understand the results, we show the nearest neighbors of the words \" light \" and \" while \" based on the cosine similarity, as shown in Table 5. For the noun \" light \", other methods generate more noisy and irrelevant words, especially static embeddings.",
"In contrast, the Context-to-Vec approaches (Ours & BERT+Skip-gram) can capture the key meaning and generate cleaner results, which are semantically directly related to \" light \" literally.",
"For the word \" while \", the static approaches tend to co-occur with the word \" while \", while Context-to-Vec approaches return conjunctions with more similar meaning to \" while \", such as \" whilst \", \" whereas \" and \" although \", which demonstrates the advantage of using contextualiza-tion to resolve lexical ambiguity.",
"Word Pairs Visualization.",
"Fig. 4 shows the 3D visualization of the gender-related word pairs based on t-SNE (Van der Maaten and Hinton, 2008).",
"These word pairs differ only by gender, e.g., \" nephew vs. niece \" and \" policeman vs. policewoman \".",
"From the topology of the visualized vectors, the spatial connectivity of the word pairs in Skip-gram and GloVe is rather inconsistent, which means that static word vectors are less capable of capturing gender analogies.",
"In contrast, for vectors based on contextualized embeddings, such as BERT avg , SynGCN, BERT+Skip-gram, and our model, the outputs are more consistent.",
"In particular, our outputs are highly consistent in these instances, which illustrates the ability of our model to capture relational analogies better than baselines and the importance of contextualized information based on semantic knowledge.",
"We considered improving word embeddings by integrating more contextual information from existing pre-trained models into the Skip-gram framework.",
"In addition, based on inherent properties of static embeddings, we proposed a graph-based retrofitting post-processing method employing a priori synonym knowledge and a weighted probability distribution.",
"The experimental results show the superiority of our proposed methods, which gives the best results on a range of intrinsic and extrinsic tasks compared to baselines.",
"In future work, we will consider prior knowledge directly during training to avoid a multistage process.",
"This work is supported in part by the Science and Technology Innovation 2030 Major Project (No. 2021ZD0150100) and National Natural Science Foundation of China (No. U21A20427).",
"We thank all the anonymous reviewers for their helpful comments and suggestions."
] | [
"abstain",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"objective",
"objective",
"result",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"method",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"objective",
"method",
"other",
"other"
] |
[
"Question answering (QA) is not just about building systems; this NLP subfield also creates and curates challenging question datasets that reveal the best systems.",
"We argue that QA datasets (and QA leaderboards) closely resemble trivia tournaments: the questions that agents (humans or machines) answer reveal a winner.",
"However, the research community has ignored the lessons from decades of the trivia community creating vibrant, fair, and effective QA competitions.",
"After detailing problems with existing QA datasets, we outline several lessons that transfer to QA research: removing ambiguity, identifying better QA agents, and adjudicating disputes.",
"This paper takes an unconventional approach to answering where we've been and where we're going in question answering (QA).",
"Instead of approaching the question only as computer scientists, we apply the best practices of trivia tournaments to QA datasets.",
"The QA community is obsessed with evaluation.",
"Schools, companies, and newspapers hail new SOTAs and leaderboard-topping results, giving rise to troubling claims (Lipton and Steinhardt, 2019) that an AI model tops humans (Najberg, 2018) because it 'won' some leaderboard, putting millions of jobs at risk (Cuthbertson, 2018).",
"But what is a leaderboard?",
"A leaderboard is a statistic about QA accuracy that induces a ranking over participants.",
"Newsflash: this is the same as a trivia tournament.",
"The trivia community has been doing this for decades (Jennings, 2006); Section 2 details the overlap between trivia tournaments and the qualities of a first-class QA dataset (and its requisite leaderboard).",
"The experts running these tournaments are imperfect, but they've learned from their past mistakes (see Appendix A for a brief historical perspective) and created a community that reliably identifies those best at question answering.",
"Beyond the format of the competition, trivia norms ensure individual questions are clear, unambiguous, and reward knowledge (Section 3).",
"We are not saying that academic QA should surrender to trivia questions or to the trivia community (far from it!).",
"The trivia community does not understand the real world information seeking needs of users or what questions challenge computers.",
"However, they have well-tested protocols to declare that someone is better at answering questions than another.",
"This collection of tradecraft and principles can nonetheless help the QA community.",
"Beyond these general concepts that QA can learn from, Section 4 reviews how the gold standard of trivia formats, Quizbowl, can improve traditional QA.",
"We then briefly discuss how research that uses fun, fair, and good trivia questions can benefit from the expertise, pedantry, and passion of the trivia community (Section 5).",
"My research isn't a silly trivia tournament, you say.",
"That may be, but let us first tell you a little about what running a tournament is like, and perhaps you might see similarities.",
"First, the questions.",
"Either you write them yourself or you pay someone to write questions by a particular date (sometimes people on the Internet).",
"Then, you advertise.",
"You talk about your questions: who is writing them, what subjects are covered, and why people should try to answer them.",
"Next, you have the tournament.",
"You keep your questions secure until test time, collect answers from all participants, and declare a winner.",
"Afterward, people use the questions to train for future tournaments.",
"These have natural analogs to crowdsourcing questions, writing the paper, advertising, and running a leaderboard.",
"Trivia nerds cannot help you form hypotheses or write your paper, but they can tell you how to run a fun, well-calibrated, and discriminative tournament.",
"Such tournaments are designed to effectively find a winner, which matches the scientific goal of knowing which model best answers questions.",
"Our goal is not to encourage the QA community to adopt the quirks and gimmicks of trivia games .",
"Instead, it's to encourage experiments and datasets that consistently and efficiently find the systems that best answer questions .",
"Many authors use crowdworkers to establish human accuracy (Rajpurkar et al., 2016; Choi et al., 2018).",
"However, they are not the only humans who should answer a dataset's questions.",
"So should the dataset's creators.",
"In the trivia world, this is called a play test: get in the shoes of someone answering the questions.",
"If you find them boring, repetitive, or uninteresting, so will crowdworkers.",
"If you can find shortcuts to answer questions (Rondeau and Hazen, 2018; Kaushik and Lipton, 2018), so will a computer.",
"Concretely, Weissenborn et al. (2017) catalog artifacts in SQuAD (Rajpurkar et al., 2018), the most popular QA leaderboard.",
"If you see a list like Along with Canada and the United Kingdom, what country. . . , you can ignore the rest of the question and just type Ctrl+F (Yuan et al., 2019; Russell, 2020) to find the third country (Australia in this case) that appears with Canada and the UK.",
"Other times, a SQuAD playtest would reveal frustrating questions that are",
"i) answerable given the information but not with a direct span, 1",
"ii) answerable only given facts beyond the given paragraph, 2",
"iii) unintentionally embedded in a discourse, resulting in arbitrary correct answers, 3",
"iv) or non-questions.",
"1 A source paragraph says In [Commonwealth coun-tries]...the term is generally restricted to...Private education in North America covers the whole gamut...; thus, What is the term private school restricted to in the US? has the information needed but not as a span.",
"2 A source paragraph says Sculptors [in the collection include] Nicholas Stone, Caius Gabriel Cibber, [...], Thomas Brock, Alfred Gilbert, [...] and Eric Gill, i.e., a list of names; thus, the question Which British sculptor whose work includes the Queen Victoria memorial in front of Buckingham Palace is included in the V&A collection? should be unanswerable in SQuAD.",
"3 A question Who else did Luther use violent rhetoric towards? has the gold answer writings condemning the Jews and in diatribes against Turks.",
"SearchQA (Dunn et al., 2017), derived from Jeopardy!",
", asks An article that he wrote about his riverboat days was eventually expanded into Life on the Mississippi .",
"The apprentice and newspaper writer who wrote the article is named Samuel Langhorne Clemens; however, the reference answer is his later pen name, Mark Twain.",
"Most QA evaluation metrics would count Samuel Clemens as incorrect.",
"In a real game of Jeopardy!",
", this would not be an issue (Section 3.1).",
"Of course, fun is relative, and any dataset is bound to contain errors.",
"However, playtesting is an easy way to find systematic problems: unfair, unfun playtests make for ineffective leaderboards.",
"Eating your own dog food can help diagnose artifacts, scoring issues, or other shortcomings early in the process.",
"The deeper issues when creating a QA task are:",
"i) have you designed a task that is internally consistent,",
"ii) supported by a scoring metric that matches your goals,",
"iii) using gold annotations that reward those who do the task well?",
"Imagine someone who loves answering the questions your task poses: would they have fun on your task?",
"This is the foundation of Gamification (von Ahn, 2006), which can create quality data from users motivated by fun rather than pay.",
"Even if you pay crowdworkers, unfun questions may undermine your dataset goals.",
"Answering questions requires multiple skills: identifying answer mentions (Hermann et al., 2015), naming the answer (Yih et al., 2015), abstaining when necessary (Rajpurkar et al., 2018), and justifying an answer (Thorne et al., 2018).",
"In QA, the emphasis on SOTA and leaderboards has focused attention on single automatically computable metrics: systems tend to be compared by their 'SQuAD score' or their 'NQ score', as if this were all there is to say about their relative capabilities.",
"Like QA leaderboards, trivia tournaments need to decide on a single winner, but they explicitly recognize that there are more interesting comparisons.",
"A tournament may recognize different backgrounds and resources: high school, small school, undergraduates (Hentzel, 2018).",
"Similarly, more practical leaderboards would reflect training time or resource requirements (see Dodge et al., 2019), including 'constrained' or 'unconstrained' training (Bojar et al., 2014).",
"Tournaments also give specific awards (e.g., highest score without incorrect answers).",
"Again, there are obvious leaderboard analogs that would go beyond a single number.",
"In SQuAD 2.0 (Rajpurkar et al., 2018), abstaining contributes the same to the overall F1 as a fully correct answer, obscuring whether a system is more precise or an effective abstainer.",
"If the task recognizes both abilities as important, reporting a single score risks implicitly prioritizing one balance of the two.",
"Assume that you have picked a metric (or a set of metrics) that captures what you care about.",
"A leaderboard based on this metric can rack up citations as people chase the top spot.",
"But your leaderboard is only useful if it is discriminative : the best system reliably wins.",
"There are many ways questions might not be discriminative.",
"If every system gets a question right (e.g., abstain on non-questions like asdf or correctly answer What is the capital of Poland?), the dataset does not separate participants.",
"Similarly, if every system flubs what is the oldest north-facing kosher restaurant, it is not discriminative.",
"Sugawara et al. (2018) call these questions easy and hard; we instead argue for a three-way distinction.",
"In between easy questions (system answers correctly with probability 1.0) and hard ones (probability 0.0), questions with probabilities nearer to 0.5 are more interesting.",
"Taking a cue from Vygotsky's proximal development theory of human learning (Chaiklin, 2003), these discriminative questions (rather than the easy or the hard ones) should most improve QA systems.",
"These Goldilocks questions (not random noise) decide who tops the leaderboard.",
"Unfortunately, existing datasets have many easy questions.",
"Sugawara et al. (2020) find that ablations like shuffling word order (Feng et al., 2018), shuffling sentences, or only offering the most similar sentence do not impair systems.",
"Newer datasets such as DROP (Dua et al., 2019) and HellaSwag (Zellers et al., 2019) are harder for today's systems; because Goldilocks is a moving target, we propose annual evaluations in Section 5.",
"This is a common problem in trivia tournaments, particularly pub quizzes (Diamond, 2009), where challenging questions can scare off patrons.",
"(In a British folktale first recorded by Robert Southey, the character Goldilocks finds three beds: one too hard, one not hard enough, and one just right.)",
"Many quiz masters prefer popularity with players and thus write easier questions.",
"Sometimes there are fewer Goldilocks questions not by choice, but by chance: a dataset becomes less discriminative through annotation error.",
"All datasets have some annotation error; if this annotation error is concentrated on the Goldilocks questions, the dataset will be less useful.",
"As we write this in 2020, humans and computers sometimes struggle on the same questions.",
"Figure 1 shows two datasets of the same size with the same annotation error.",
"However, they have different difficulty distributions and correlation of annotation error and difficulty.",
"The dataset that has more discriminative questions and consistent annotator error has fewer questions that do not discriminate the winner of the leaderboard.",
"We call this the effective dataset proportion (higher is better).",
"Figure 2 shows the test set size required to reliably discriminate systems for different effective dataset proportions, based on a simulation (Appendix B).",
"At this point, you may despair about how big a dataset you need.",
"The same terror besets trivia tournament organizers.",
"Instead of writing more questions, they use pyramidality (Section 4) to make every question count.",
"Trivia enthusiasts agree that questions need to be well written (despite other disagreements).",
"Asking good questions requires sophisticated pragmatic reasoning (Hawkins et al., 2015), and pedagogy explicitly acknowledges the complexity of writing effective questions for assessing student performance (Haladyna, 2004, focusing on multiple choice questions).",
"(Using a more sophisticated simulation approach, the TREC 2002 QA test set (Voorhees, 2003) could not discriminate systems with less than a seven absolute score point difference.)",
"QA datasets, however, are often collected from the wild or written by untrained crowdworkers.",
"Crowdworkers lack experience in crafting questions and may introduce idiosyncrasies that shortcut machine learning (Geva et al., 2019).",
"Similarly, data collected from the wild such as Natural Questions (Kwiatkowski et al., 2019) or Ama-zonQA (Gupta et al., 2019) by design have vast variations in quality.",
"In the previous section, we focused on how datasets as a whole should be structured.",
"Now, we focus on how specific questions should be structured to make the dataset as valuable as possible.",
"Ambiguity in questions not only frustrates answerers who resolve the ambiguity 'incorrectly'.",
"Ambiguity also frustrates the goal of using questions to assess knowledge.",
"Thus, the US Department of Transportation explicitly bans ambiguous questions from exams for flight instructors (Flight Standards Service, 2008); and the trivia community has likewise developed rules and norms that prevent ambiguity.",
"While this is true in many contexts, examples are rife in a format called Quizbowl (Boyd-Graber et al., 2012), whose very long questions showcase trivia writers' tactics.",
"For example, Quizbowl author Zhu Ying (writing for the 2005 PARFAIT tournament) asks participants to identify a fictional character:",
"(Like Jeopardy!, Quizbowl questions are not syntactically questions but are still designed to elicit knowledge-based responses; for consistency, we still call them questions.)",
"He's not Sherlock Holmes , but his address is 221B.",
"He's not the Janitor on Scrubs , but his father is played by R. Lee Ermy.",
"[...] For ten points, name this misanthropic, crippled, Vicodin-dependent central character of a FOX medical drama.",
"ANSWER: Gregory House, MD. In contrast, QA datasets often contain ambiguous and under-specified questions.",
"While this sometimes reflects real world complexities such as actual under-specified or ill-formed search queries (Faruqui and Das, 2018; Kwiatkowski et al., 2019), ignoring this ambiguity is problematic.",
"As a concrete example, Natural Questions (Kwiatkowski et al., 2019) answers what year did the us hockey team win the Olympics with 1960 and 1980, ignoring the US women's team, which won in 1998 and 2018, and further assuming the query is about ice rather than field hockey (also an Olympic event).",
"Natural Questions associates a page about the United States men's national ice hockey team, arbitrarily removing the ambiguity post hoc .",
"However, this does not resolve the ambiguity, which persists in the original question: information retrieval arbitrarily provides one of many interpretations.",
"True to their name, Natural Questions are often under-specified when users ask a question online.",
"The problem is neither that such questions exist nor that machine reading QA considers questions given an associated context.",
"The problem is that tasks do not explicitly acknowledge the original ambiguity and gloss over the implicit assumptions in the data.",
"This introduces potential noise and bias (i.e., giving a bonus to systems that make the same assumptions as the dataset) in leaderboard rankings.",
"At best, these will become part of the measurement error of datasets (no dataset is perfect).",
"At worst, they will recapitulate the biases that went into the creation of the datasets.",
"Then, the community will implicitly equate the biases with correctness: you get high scores if you adopt this set of assumptions.",
"These enter into real-world systems, further perpetuating the bias.",
"Playtesting can reveal these issues (Section 2.1), as implicit assumptions can rob a player of correctly answered questions.",
"If you wanted to answer 2014 to 'when did Michigan last win the championship' (when the Michigan State Spartans won the Women's Cross Country championship) and you cannot because you chose the wrong school, the wrong sport, and the wrong gender, you would complain as a player; researchers instead discover latent assumptions that creep into the data.",
"It is worth emphasizing that this is not a purely hypothetical problem.",
"For example, Open Domain Retrieval Question Answering (Lee et al., 2019) deliberately avoids providing a reference context for the question in its framing but, in re-purposing data such as Natural Questions, opaquely relies on it for the gold answers.",
"A related issue is that, in the words of Voorhees and Tice (2000), there is no such thing as a question with an obvious answer.",
"As a consequence, trivia question authors delineate acceptable and unacceptable answers.",
"For example, in writing for the trivia tournament Harvard Fall XI, Robert Chu uses a mental model of an answerer to explicitly delineate the range of acceptable correct answers: In Newtonian gravity, this quantity satisfies Poisson's equation.",
"[...] For a dipole, this quantity is given by negative the dipole moment dotted with the electric field.",
"[...] For 10 points, name this form of energy contrasted with kinetic.",
"ANSWER: potential energy (prompt on energy; accept specific types like electrical potential energy or gravitational potential energy; do not accept or prompt on just potential). Likewise, style guides for writing questions stipulate that you must give the answer type clearly and early on.",
"These mentions specify whether you want a book, a collection, a movement, etc.",
"(Where to draw the line is a matter of judgment; computers, which lack common sense, might find questions ambiguous where humans would not.)",
"It also signals the level of specificity requested.",
"For example, a question about a date must state day and month required (September 11), month and year required (April 1968), or day, month, and year required (September 1, 1939).",
"This is true for other answers as well: city and team, party and country, or more generally two answers required.",
"Despite these conventions, no pre-defined set of answers is perfect, and every worthwhile trivia competition has a process for adjudicating answers.",
"In high school and college national competitions and game shows, if low-level staff cannot resolve the issue by throwing out a single question or accepting minor variations (America instead of USA ), the low-level staff contacts the tournament director.",
"The tournament director, who has a deeper knowledge of rules and questions, often decides the issue.",
"If not, the protest goes through an adjudication process designed to minimize bias: write a summary of the dispute, get all parties to agree to the summary, and then hand the decision off to mutually agreed experts from the tournament's phone tree.",
"The substance of the disagreement is communicated (without identities), and the experts apply the rules and decide.",
"Consider what happened when a particularly inept Jeopardy! contestant did not answer laparoscope to Your surgeon could choose to take a look inside you with this type of fiber-optic instrument.",
"Since the van Doren scandal (Freedman, 1997), every television trivia contestant has an advocate assigned from an auditing company.",
"In this case, the advocate initiated a process that went to a panel of judges who then ruled that endoscope (a more general term) was also correct.",
"The need for a similar process seems to have been well-recognized in the earliest days of QA system bake-offs such as TREC-QA , and Voorhees (2008) notes that [d]ifferent QA runs very seldom return exactly the same [answer], and it is quite difficult to determine automatically whether the difference [...] is significant.",
"In stark contrast to this, QA datasets typically only provide a single string or, if one is lucky, several strings.",
"A correct answer means exactly matching these strings or at least having a high token overlap F1, and failure to agree with the pre-recorded admissible answers will put you at an uncontestable disadvantage on the leaderboard (Section 2.2).",
"To illustrate how current evaluations fall short of meaningful discrimination, we qualitatively analyze two near-SOTA systems on SQuAD v1.1: the original XLNet (Yang et al., 2019) and a subsequent iteration called XLNet-123.",
"Despite XLNet-123's margin of almost four absolute F1 points (94 vs. 98) on development data, a manual inspection of a sample of 100 of XLNet-123's wins indicates that around two-thirds are 'spurious': 56% are likely to be considered not only equally good but essentially identical; 7% are cases where the answer set omits a correct alternative; and 5% of cases are 'bad' questions.",
"Our goal is not to dwell on the exact proportions, to minimize the achievements of these strong systems, or to minimize the usefulness of quantitative evaluations.",
"We merely want to raise the limitation of blind automation for distinguishing between systems on a leaderboard.",
"Taking our cue from the trivia community, we present an alternative for MRQA .",
"Blind test sets are created for a specific time; all systems are submitted simultaneously.",
"Then, all questions and answers are revealed.",
"System authors can protest correctness rulings on questions, directly addressing the issues above.",
"After agreement is reached, quantitative metrics are computed for comparison purposes; despite their inherent limitations, they at least can be trusted.",
"Adopting this for MRQA would require creating a new, smaller test set every year.",
"However, this would gradually refine the annotations and process.",
"This suggestion is not novel: Voorhees and Tice (2000) accept automatic evaluations for experiments internal to an organization where the benefits of a reusable test collection are most significant (and the limitations are likely to be understood) (our emphasis), but note that satisfactory techniques for [automatically] evaluating new runs have not been found yet.",
"We are not aware of any change on this front; if anything, we seem to have become more insensitive as a community to just how limited our current evaluations are.",
"While every question should be perfect, time and resources are limited.",
"Thus, authors and editors of tournaments focus on the bubble, where the bubble is the set of questions most likely to discriminate between top teams at the tournament.",
"(We could not find a paper describing XLNet-123; the submission is by http://tia.today .)",
"These questions are thoroughly playtested, vetted, and edited.",
"Only after these questions have been perfected will the other questions undergo the same level of polish.",
"For computers, the same logic applies.",
"Authors should ensure that these discriminative questions are correct, free of ambiguity, and unimpeachable.",
"However, as far as we can tell, the authors of QA datasets do not give any special attention to these questions.",
"However, unlike in a human trivia tournament (with the finite patience of its participants), this does not mean that you should necessarily remove all of the easy or hard questions from your dataset.",
"This could inadvertently lead to systems unable to answer simple questions like who is buried in Grant's tomb? (Dwan, 2000, Chapter 7).",
"Instead, focus more resources on the bubble.",
"We now focus our thus far wide-ranging QA discussion to a specific format: Quizbowl, which has many of the desirable properties outlined above.",
"We have no delusion that mainstream QA will universally adopt this format (indeed, a monoculture would be bad).",
"However, given the community's emphasis on fair evaluation, computer QA can borrow aspects from the gold standard of human QA.",
"We have shown examples of Quizbowl questions, but we have not explained how the format works; see Rodriguez et al. (2019) for more.",
"You might be scared off by how long the questions are.",
"However, in real Quizbowl trivia tournaments, questions are rarely read to completion because they are interruptible.",
"Interruptible A moderator reads a question.",
"Once someone knows the answer, they use a signaling device to buzz in .",
"If the player who buzzed is right, they get points.",
"Otherwise, they lose points and the question continues for the other team.",
"Not all trivia games with buzzers have this property, however.",
"For example, take Jeopardy!",
", the subject of Watson's tour de force (Ferrucci et al., 2010).",
"While Jeopardy! also uses signaling devices, these only work once the question has been read in its entirety; Ken Jennings, one of the top Jeopardy! players (and also a Quizbowler), explains it in a Planet Money interview (Malone, 2019): Jennings: The buzzer is not live until Alex finishes reading the question.",
"And if you buzz in before your buzzer goes live, you actually lock yourself out for a fraction of a second .",
"So the big mistake on the show is people who are all adrenalized and are buzzing too quickly, too eagerly.",
"Malone: OK .",
"To some degree, Jeopardy! is kind of a video game, and a crappy video game where it's, like, light goes on, press button that's it.",
"Jennings: (Laughter) Yeah.",
"Jeopardy!",
"'s buzzers are a gimmick to ensure good television; however, Quizbowl buzzers discriminate knowledge (Section 2.3).",
"Similarly, while Trivia QA (Joshi et al., 2017) is written by knowledgeable writers, the questions are not pyramidal.",
"Pyramidal Recall that effective datasets discriminate the best from the rest: the higher the proportion of effective questions, the better.",
"Quizbowl's is nearly 1.0 because discrimination happens within a question: after every word, an answerer must decide if they know enough to answer.",
"Quizbowl questions are arranged to be maximally pyramidal: questions begin with hard clues (ones that require deep understanding) and move to more accessible clues that are well known.",
"Well-Edited Quizbowl questions are created in phases.",
"First, the author selects the answer and assembles (pyramidal) clues.",
"A subject editor then removes ambiguity, adjusts acceptable answers, and tweaks clues to optimize discrimination.",
"Finally, a packetizer ensures the overall set is diverse, has uniform difficulty, and is without repeats.",
"Unnatural Trivia questions are fake: the asker already knows the answer.",
"But they're no more fake than a course's final exam, which, like leaderboards, is designed to test knowledge.",
"Experts know when questions are ambiguous (Section 3.1); while what play has a character whose father is dead could be Hamlet , Antigone , or Proof , a good writer's knowledge avoids the ambiguity.",
"When authors omit these cues, the question is derided as a hose (Eltinge, 2013), which robs the tournament of fun (Section 2.1).",
"One of the benefits of contrived formats is a focus on specific phenomena.",
"Dua et al. (2019) exclude questions an existing MRQA system could answer to focus on challenging quantitative reasoning.",
"One of the trivia experts consulted in Wallace et al. (2019) crafted a question that tripped up neural QA by embedding the phrase this author opens Crime and Punishment into a question; the top system confidently answers Fyodor Dostoyevski.",
"However, that phrase was in a longer question The narrator in Cogwheels by this author opens Crime and Punishment to find it has become The Brothers Karamazov .",
"Again, this shows the inventiveness and linguistic dexterity of the trivia community.",
"A counterargument is that real-life questions e.g., on Yahoo! Questions (Szpektor and Dror, 2013), Quora (Iyer et al., 2017) or web search (Kwiatkowski et al., 2019)ignore the craft of question writing.",
"Real humans react to unclear questions with confusion or divergent answers, explicitly answering with how they interpreted the original question (I assume you meant. . . ).",
"Given real world applications will have to deal with the inherent noise and ambiguity of unclear questions, our systems must cope with it.",
"However, addressing the real world cannot happen by glossing over its complexity.",
"Complicated Quizbowl is more complex than other datasets.",
"Unlike other datasets where you just need to decide what to answer, in Quizbowl you also need to choose when to answer the question.",
"12 While this improves the dataset's discrimination, it can hurt popularity because you cannot copy/paste code from other QA tasks.",
"The cumbersome pyramidal structure complicates some questions (e.g., what is log base four of sixty-four).",
"You may disagree with the superiority of Quizbowl as a QA framework ( de gustibus non est disputan-dum ).",
"In this final section, we hope to distill our advice into a call to action regardless of your question format or source.",
"Here are our recommendations if you want to have an effective leaderboard.",
"Talk to Trivia Nerds You should talk to trivia nerds because they have useful information (not just about the election of 1876).",
"Trivia is not just the accumulation of information but also connecting disparate facts (Jennings, 2006).",
"These skills are exactly those we want computers to develop.",
"12 This complex methodology can be an advantage.",
"The underlying mechanisms of systems that can play Quizbowl (e.g., reinforcement learning) share properties with other tasks, such as simultaneous translation (Grissom II et al., 2014; Ma et al., 2019), human incremental processing (Levy et al., 2008; Levy, 2011), and opponent modeling (He et al., 2016).",
"can save money and time if we pool resources.",
"13 Computer scientists benefit if the trivia community writes questions that aren't trivial for computers to solve (e.g., avoiding quotes and named entities).",
"The trivia community benefits from tools that make their job easier: show related questions, link to Wikipedia, or predict where humans will answer.",
"Likewise, the broader public has unique knowledge and skills.",
"In contrast to low-paid crowdworkers, public platforms for question answering and citizen science (Bowser et al., 2013) are brimming with free expertise if you can engage the relevant communities.",
"For example, the Quora query Is there a nuclear control room on nuclear aircraft car-riers? is purportedly answered by someone who worked in such a room (Humphries, 2017).",
"As machine learning algorithms improve, the good enough crowdsourcing that got us this far may not be enough for continued progress.",
"Eat Your Own Dog Food As you develop new question answering tasks, you should feel comfortable playing the task as a human.",
"Importantly, this is not just to replicate what crowdworkers are doing (also important) but to remove hidden assumptions, institute fair metrics, and define the task well.",
"For this to feel real, you will need to keep score; have all of your coauthors participate and compare scores.",
"Again, we emphasize that human and computer skills are not identical , but this is a benefit: humans' natural aversion to unfairness will help you create a better task, while computers will blindly optimize an objective function (Bostrom, 2003).",
"As you go through the process of playing on your questionanswer dataset, you can see where you might have fallen short on the goals we outline in Section 3.",
"Won't Somebody Look at the Data?",
"After QA datasets are released, there should also be deeper, more frequent discussion of actual questions within the NLP community.",
"Part of every post-mortem of trivia tournaments is a detailed discussion of the questions, where good questions are praised and bad questions are excoriated.",
"This is not meant to shame the writers but rather to help build and reinforce cultural norms: questions should be well-13 Many question answering datasets benefit from the efforts of the trivia community.",
"Ethically using the data, however, requires acknowledging their contributions and using their input to create datasets (Jo and Gebru, 2020, Consent and Inclusivity criterion).",
"written, precise, and fulfill the creator's goals.",
"Just like trivia tournaments, QA datasets resemble a product for sale.",
"Creators want people to invest time and sometimes money (e.g., GPU hours) in using their data and submitting to their leaderboards.",
"It is good business to build a reputation for quality questions and discussing individual questions.",
"Similarly, discussing and comparing the actual predictions made by the competing systems should be part of any competition culturewithout it, it is hard to tell what a couple of points on some leaderboard mean.",
"To make this possible, we recommend that leaderboards include an easy way for anyone to download a system's development predictions for qualitative analyses.",
"Make Questions Discriminative We argue that questions should be discriminative (Section 2.3), and while Quizbowl is one solution (Section 4), not everyone is crazy enough to adopt this (beautiful) format.",
"For more traditional QA tasks, you can maximize the usefulness of your dataset by ensuring as many questions as possible are challenging (but not impossible) for today's QA systems.",
"But you can use some Quizbowl intuitions to improve discrimination.",
"In visual QA , you can offer increasing resolutions of the image.",
"For other settings, create pyramidality by adding metadata: coreference, disambiguation, or alignment to a knowledge base.",
"In short, consider multiple ver-sions/views of your data that progress from difficult to easy.",
"This not only makes more of your dataset discriminative but also reveals what makes a question answerable.",
"Embrace Multiple Answers or Specify Specificity As QA moves to more complicated formats and answer candidates, what constitutes a correct answer becomes more complicated.",
"Fully automatic evaluations are valuable for both training and quick-turnaround evaluation.",
"In the case annotators disagree, the question should explicitly state what level of specificity is required (e.g., September 1, 1939 vs. 1939 or Leninism vs. socialism).",
"Or, if not all questions have a single answer, link answers to a knowledge base with multiple surface forms or explicitly enumerate which answers are acceptable.",
"Appreciate Ambiguity If your intended QA application has to handle ambiguous questions, do justice to the ambiguity by making it part of your taskfor example, recognize the original ambiguity and resolve it (did you mean. . . ) instead of giving credit for happening to fit the data'.",
"To ensure that our datasets properly isolate the property that motivated [the dataset] in the first place (Zaenen, 2006), we need to explicitly appreciate the unavoidable ambiguity instead of silently glossing over it.",
"14 This is already an active area of research, with conversational QA being a new setting actively explored by several datasets (Reddy et al., 2018; Choi et al., 2018); and other work explicitly focusing on identifying useful clarification questions (Rao and Daum III), thematically linked questions (El-gohary et al., 2018) or resolving ambiguities that arise from coreference or pragmatic constraints by rewriting underspecified question strings (Elgohary et al., 2019; Min et al., 2020).",
"Revel in Spectacle However, with more complicated systems and evaluations, a return to the yearly evaluations of TRECQA may be the best option.",
"This improves not only the quality of evaluation (we can have real-time human judging) but also lets the test set reflect the build it/break it cycle (Ruef et al., 2016), as attempted by the 2019 iteration of FEVER (Thorne et al., 2019).",
"Moreover, another lesson the QA community could learn from trivia games is to turn it into a spectacle: exciting games with a telegenic host.",
"This has a benefit to the public, who see how QA systems fail on difficult questions and to QA researchers, who have a spoonful of fun sugar to inspect their systems' output and their competitors'.",
"In between full automation and expensive humans in the loop are automatic metrics that mimic the flexibility of human raters, inspired by machine translation evaluations (Papineni et al., 2002; Specia and Farzindar, 2010) or summarization (Lin, 2004).",
"However, we should not forget that these metrics were introduced as understudies'good enough when quick evaluations are needed for system building but no substitute for a proper evaluation.",
"In machine translation, Laubli et al. (2020) reveal that crowdworkers cannot spot the errors that neural MT systems makefortunately, trivia nerds are cheaper than professional translators.",
"Be Honest in Crowning QA Champions Leaderboards are a ranking over entrants based on 14 Not surprisingly, inherent' ambiguity is not limited to QA ; Pavlick and Kwiatkowski (2019) show natural language inference has inherent disagreements' between humans and advocate for recovering the full range of accepted inferences.",
"a ranking over numbers.",
"This can be problematic for several reasons.",
"The first is that single numbers have some variance; it's better to communicate estimates with error bars.",
"Whileparticularly for leaderboardsit is tempting to turn everything into a single number, there are often different sub-tasks and systems who deserve recognition.",
"A simple model that requires less training data or runs in under ten milliseconds may be objectively more useful than a bloated, brittle monster of a system that has a slightly higher F 1 (Dodge et al., 2019).",
"While you may only rank by a single metric (this is what trivia tournaments do too), you may want to recognize the highest-scoring model that was built by undergrads, took no more than one second per example, was trained only on Wikipedia, etc.",
"Finally, if you want to make humancomputer comparisons, pick the right humans.",
"Paraphrasing a participant of the 2019 MRQA workshop (Fisch et al., 2019), a system better than the average human at brain surgery does not imply superhuman performance in brain surgery.",
"Likewise, beating a distracted crowdworker on QA is not QA 's endgame.",
"If your task is realistic, fun, and challenging, you will find experts to play against your computer.",
"Not only will this give you human baselines worth reportingthey can also tell you how to fix your QA dataset.",
".",
". after all, they've been at it longer than you have.",
"Acknowledgements This work was supported by Google's Visiting Researcher program.",
"Boyd-Graber is also supported by NSF Grant IIS -1822494.",
"Any opinions, findings, conclusions, or recommendations expressed here are those of the authors and do not necessarily reflect the view of the sponsor.",
"Thanks to Christian Buck for creating the NQ playtesting environment that spurred the initial idea for this paper.",
"Thanks to Jon Clark and Michael Collins for the exciting e-mail thread that forced the authors to articulate their positions for the first time.",
"Thanks to Kevin Kwok for permission to use Pro-tobowl screenshot and information.",
"Hearty thanks to all those who read and provided feedback on drafts: Matt Gardner, Roger Craig, Massimiliano Ciaramita, Jon May, Zachary Lipton, and Divyansh Kaushik.",
"And finally, thanks to the trivia community for providing a convivial home for pedants and know-it-alls; may more people listen to you."
] | [
"abstain",
"result",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain"
] |
[
"Transfer learning using ImageNet pre-trained models has been the de facto approach in a wide range of computer vision tasks.",
"However, fine-tuning still requires task-specific training data.",
"In this paper, we propose N 3 ( N eural N etworks from N atural Language) a new paradigm of synthesizing task-specific neural networks from language descriptions and a generic pre-trained model.",
"N 3 leverages language descriptions to generate parameter adaptations as well as a new task-specific classification layer for a pre-trained neural network, effectively fine-tuning the network for a new task using only language descriptions as input.",
"To the best of our knowledge, N 3 is the first method to synthesize entire neural networks from natural language.",
"Experimental results show that N 3 can out-perform previous natural-language based zero-shot learning methods across 4 different zero-shot image classification benchmarks.",
"We also demonstrate a simple method to help identify keywords in language descriptions leveraged by N 3 when synthesizing model parameters.",
"1 1 Introduction A person with generic world knowledge can learn to perform a new task based on verbal instructions.",
"On the other hand, despite recent successes in deep First two authors contributed equally.",
"learning, it remains challenging to re-purpose pretrained visual classification models to recognize a new set of objects without labeling a new training dataset.",
"A natural question emerges from this observation: can a computer also learn to recognize new objects, simply by reading the descriptions of them in natural language?",
"Concretely, can we create visual classifiers using language descriptions of the objects of interest?",
"In this paper, we introduce a new paradigm for synthesizing task-specific neural networks for image classification simply from language descriptions of the relevant objects.",
"We propose N 3 N eural N etworks from N atural Language, a meta-model that takes a list of object descriptions as input to produce a classification model for these objects, as illustrated in Figure",
"1. The capability of producing task-specific neural network from language descriptions makes N 3 ideal to a wide range of zero-shot tasks (Wah et al., 2011; Zhu et al., 2017; Elhoseiny et al., 2017; Zhu et al., 2017; Nilsback and Zisserman, 2006) where we do not have visual training data but can still easily obtain language descriptions of the objects of interest.",
"Prior zero-shot learning methods aim to achieve a similar goal of applying pre-trained networks on unseen classes.",
"In zero-shot image classification, a typical method of generalizing to unseen classes is to construct class embeddings to augment the classification layer of the pre-trained network or take retrieval-based approaches while utilizing generic visual features from pre-trained networks (Akata et al., 2015; Kodirov et al., 2017; Elhoseiny et al., 2013; Lei Ba et al., 2015; Reed et al., 2016).",
"Extending on the idea of generating classification layers for the pre-trained network, N 3 modifies the parameters of all layers in the pre-trained network.",
"While the underlying pre-trained network in previous approaches only extracts generic visual features, N 3 makes it possible to extract task-specific ones.",
"This approach effectively increases the capacity at which semantic information in the descriptions can affect the pre-trained network.",
"In our experiments, we evaluated our proposed N 3 method with 4 popular zero-shot image classification benchmark datasets.",
"We performed ablation studies to understand the importance of synthesizing task-specific feature extractors, the necessity of a pre-trained visual classification model and the effects of language representation choices on the efficacy of N 3 .",
"In addition, we provide a simple approach to help interpret what aspects in the language descriptions are N 3 -generated models examining when making predictions.",
"1. We propose a novel meta-model N 3 to synthesize task-specific neural network models using natural language descriptions.",
"2. We demonstrate N 3 's superior efficacy in solving natural language guided zero-shot learning problems.",
"Our analysis shows that N 3 's ability to tailor neural network models to extract task-specific features plays an important role in achieving such superior accuracy.",
"3. We show that N 3 can aid the interpretation of predictions of synthesized models as they can be traced back to both supportive and refutative evidence within language descriptions.",
"In this section, we are situating our work in the context of zero-shot learning and dynamic parameter generation for neural networks.",
"Zero-shot Learning Zero-shot learning studies how we can generalize our models to perform well for tasks without any labeled training data at all.",
"Achieving classification accuracy above chance-level in such scenarios requires modeling the relationships between the seen classes during training and the unseen classes during testing.",
"A typical and effective method is to manually engineer class attribute vectors to obtain representations of seen and unseen classes in a shared attribute space (Duan et al., 2012; Kankuekul et al., 2012; Parikh and Grauman, 2011; Zhang and Saligrama, 2016; Akata et al., 2016).",
"Yet such a method requires laborious engineering of class attributes, which is not feasible for large-scale and/or fine-grained classification tasks (Russakovsky et al., 2015; Khosla et al., 2011; Welinder et al., 2010).",
"Hence, there is also work in zero-shot learning that attempts to leverage textual data as object class representations (Lei Ba et al., 2015; Elhoseiny et al., 2013).",
"The majority of these models are committed to embedding-based retrieval approaches, where classification is re-formulated as retrieving the class embedding with maximal similarity (Akata et al., 2015; Kodirov et al., 2017).",
"While they can handle well the case where there is an indefinite number of classes during test time, such approaches suffer from extra computation cost at inference time since they need to traverse all the seen data points.",
"Moreover, these models often rely on a pre-trained feature extractor for input, which is usually fixed and cannot be further adapted for the unseen classes (Akata et al., 2015; Kodirov et al., 2017; Elhoseiny et al., 2013).",
"While there is a handful of work that tries to modify the parameters in-place during test time, they either rely on shallow language representation with limited expressiveness (Lei Ba et al., 2015) or require a significant amount of textual descriptions per class to train their model (Reed et al., 2016), both of which are not ideal.",
"In our work, we aim to learn a model that can synthesize parameters for entire neural networks to adapt to the new tasks using short descriptions of the object classes.",
"This leads to better metadata efficiency of our proposed method.",
"Dynamic Parameter Generation As mentioned before, N 3 dynamically generates classification models for designated classes.",
"Dynamic parameter generation has been explored in the context of generating recurrent cells at different time-steps of RNNs (Ha et al., 2016), constructing intermediate linear models for interpretability inside neural networks (Al-Shedivat et al., 2017), and contextual parameter generation for different language pairs in multilingual machine translation (Platanios et al., 2018).",
"As mentioned in Section 2, some zero-shot learning methods can also be viewed as generating classifier parameters (Lei Ba et al., 2015; Elhoseiny et al., 2013).",
"However, many of the previous work directly or indirectly mentions the challenge of memory and computation complexity after all, the output of the parameter generation model are large matrices that are excessively high dimensional.",
"To tackle this issue, previous work either only generate very simple linear layers (Lei Ba et al., 2015; Elhoseiny et al., 2013; Al-Shedivat et al., 2017), or impose low-rank constraints on the weights to mitigate the memory issues (Ha et al., 2016).",
"In our work, we utilize the architecture of sequence-to-sequence models and treat the weight matrices to be generated as a sequence of vectors.",
"This allows parameter generation for entire neural networks with little memory bottleneck.",
"In this section, we describe our approach for synthesizing task-specific neural networks to recognize new objects with their natural language descriptions and a generic pre-trained model.",
"We denote a list of natural language descriptions for K objects, each containing L tokens as D = { d k,l } k = K, l = L k =1 , l =1 .",
"We denote a pre-trained model with parameters as F ( ; ) , so that for a set of images X , F ( X ; ) produces the classification prediction Y .",
"We can now precisely formulate our problem as follows: given a pre-trained classification model, F ( ; ) , and the natural language description of K object classes, D , adapt the original parameters to the specialized parameters (cid:48) so that the fine-tuned F ( ; (cid:48) ) model accurately classifies the K objects described in D .",
"Our method draws inspiration from transfer learning, which is often employed when the training",
"dataset is small.",
"Transfer learning entails training a neural network from a generic pre-trained model to one that solves a new, often task-specific problem.",
"Thus, we can similarly formulate N 3 to synthesize task-specific model parameters by adapting existing ones in the generic pre-trained model with the guidance of natural language task descriptions.",
"While transfer learning updates the pre-trained parameters using signals derived from task-specific training data, N 3 relies only on language descriptions to achieve the same objective.",
"Concretely, the adapted parameters (cid:48) are computed as follows: (cid:48) = + ( D ; ) (1) where ( ; ) is a function with parameter , mapping natural language descriptions to parameter adaptations for all parameters in the pre-trained model.",
"Since the transfer learning process often proceeds with a tiny learning rate to restrict the effect of fine-tuning, we introduced a trainable scaling factor to similarly regulate the effect of parameter adaptation.",
"The initial value of is a hyper-parameter.",
"In our experiments, we used an initial value of 1 e 3 to mimic the effect of using a small learning rate for transfer learning.",
"The specific value of 1 e 3 is derived from the default learning rate used in the PyTorch transfer learning tutorial (Chilamkurthy).",
"In a later section, we evaluate the necessity of this scaling factor as well as the effect of a range of initial values of .",
"The mapping from natural language descriptions to parameter adaptations is particularly high-dimensional.",
"Constructing with the transformer block (Vaswani et al., 2017) is not straight-forward for our scenario because it requires O ( N 2 ) size of memory where N is the length of the input/output sequence and our N = K L can be prohibitively large.",
"Thus, to reduce the memory consumption of , its attention span must be restricted to a small but semantically relevant subset of all input elements.",
"To this end, we designed to be a two-level hierarchy of transformer blocks, as illustrated in",
"2. The first level of transformers, named the Tokens2Label Encoder , encodes the natural language descriptions of each object class to a label embedding vector.",
"Intuitively, this level summarizes the described visual features of an ob-Tokens2Label Encoder Labels2AdaptationEncoder { A.D.1",
"ject class to a single, fixed-sized embedding vector.",
"Thus, attention in this level has a maximum span of L spanning all tokens in each description.",
"The second level, named the Labels2Adaptation Encoder-Decoder , encodes the sequence of label embedding vectors and decodes them into parameter adaptations of multiple layers.",
"This level of transformer blocks examines the characteristics of all encoded object classes and determines how to adapt the pre-trained model parameters to classify images corresponding to these object labels.",
"In this level, the model attention has a maximum span of K spanning all the object classes.",
"Moreover, due to the sheer number of layers in state-of-the-art CNN models, we initialize layer-specific adaptation decoders for each layer and decode from shared hidden states encoded by the adaptation encoder.",
"Only through these measures can we materialize the high-dimensional mapping under reasonable memory constraints.",
"Putting everything together, for a pre-trained network with parameter , N 3 applies the mapping to input descriptions to generate a parameter adaptation with the same shape as .",
"The adaptation is multiplied with a trainable scaling factor to control its impact on the pre-trained model.",
"The scaled parameter adaptation is then combined with its pre-trained counterpart via point-wise addition.",
"The mapping from object descriptions D to contains two parts.",
"The Tokens2Label Encoder transforms the set of tokens contained in K entries of object class descriptions, each with length up to L , denoted as { d k,l } , to a set of K object label embedding vectors { c k } Kk =1 : c k = Encoder T2L ( d k, 1: L ) , k = 1 , ..., K (2) Subsequently, Labels2Adaptation Encoder-Decoders translate the object label embedding vectors into a parameter adaptation matrix .",
"Parameters in a typical neural network often have more than two-dimensions, but for simplicity in our setup, they are always viewed as a two-dimensional matrix consisting of a sequence of parameter columns.",
"1 Viewing the object-label embeddings { c i } ki =1 as an input sequence and the parameter adaptation as an output sequence of M columns { m } Mm =1 , Labels2Adaptation Encoder-Decoders can be expressed as: h 1: K = Encoder L2A ( c 1: K ) (3) m = Decoder mL2A ( h 1: K ) , m = 1 , ..., M (4) = Concat ([ 1 ; 2 ; ... ; M ]) (5) Finally, pre-trained parameter is adapted to (cid:48) using the following equation, with being a trainable scaling factor: (cid:48) = + (6) 3.3 Training Methodology We formulate the training of N 3 as the following optimization problem.",
"Optimal parameters for N 3 model ( ; ) should map a list of language descriptions for K class objects D = {D 1 , , DK } to parameter adaptations = ( D ; ) , such that the cross entropy loss between ground truth label Y and model prediction F ( X ; , ( D ; )) is minimized for all image-label pairs ( X, Y ) in the training set.",
"Thus, to train N 3 on a image dataset I with labels L , class descriptions D and a pre-trained model F to produce a K -way classification model, we first draw meta-batches of K class labels LK L .",
"Then, a subset of the image dataset ILK and description dataset DLK corresponding to the drawn labels LK are constructed.",
"We then draw mini-batches of images B and ground-truth labels Y 1 For instance, the parameter of a convolutional layer W of shape [ K, C, kH, kW ] where K is the number of output channels, C the number of input channels, kH , kW the kernel height and width, is viewed as a sequence of K parameter columns, with a column size of C kH kW .",
"from ILK .",
"For each mini-batch B , distinct parameter adaptations are generated by evaluating ( ; ) at DL k .",
"Batch loss is then calculated as 1 |B| (cid:80) i (cid:96) ( Y i , F ( X i ; , )) where (cid:96) ( , ) refers to cross-entropy loss.",
"Since the meta-model ( ; ) and the pre-trained model F are fully differentiable, gradients can be propagated back to meta-models to optimize meta-model parameters .",
"We evaluate N 3 by comparing its efficacy in solving natural language guided zero-shot learning problems with prior state-of-the-art methods.",
"In this section, we introduce our training method, datasets and evaluation protocols.",
"To evaluate the N 3 , we select 4 standard zero-shot image classification datasets and collected natural language descriptions for their object classes.",
"Caltech-UCSD-Birds 200-2011 (CUB) (Wah et al., 2011) contains images of 200 species of birds.",
"Each species of bird forms its own class label.",
"In total, there are 11,788 images in this dataset.",
"Animal with Attributes (AWA) (Lampert et al., 2014; Xian et al., 2017) is another dataset to evaluate zero-shot classification methods.",
"It consists of 50 classes of animals with a total of 37322 images.",
"North America's Birds (NAB) is a dataset used by prior state-of-the-art methods related to our task.",
"Following the established practices (Zhu et al., 2017; Elhoseiny et al., 2017), we consolidated the class labels into 404 distinct bird species.",
"The consolidation process combines closely related labels (e.g., American Kestrel (Female, immature)' and American Kestrel (Adult male)') into a single label (e.g., American Kestrel').",
"We end up with 48,000 images of 404 classes of bird-species.",
"Flowers-Species (FS) is a dataset we built based on Oxford Flowers (Nilsback and Zisserman, 2006), another commonly used zero-shot dataset.",
"The original contains label categories that are a mixture of species and genera.",
"Some genus includes thousands of species, yet the dataset examples only cover a fraction of them.",
"Such mismatch creates biases in the dataset that fundamentally cannot be addressed through learning from external descriptions.",
"This hence undermines its utility as a test of our proposed method: for instance, when N 3 is asked to generate classifier to decide whether an object is of label anthurium, which is a genus of around 1000 species of varying visual appearance, the efficacy of our generated model can only be evaluated based on a representative samples that cover most of the species within the genus an-thurium.",
"However, the dataset only contains a tiny number of (i.e., 105) correlated (species-wise) samples, making such evaluation neither comprehensive nor conclusive in the context of our task objective and may introduce unexpected noise in evaluation results.",
"Therefore, we decided to filter out the genera from the original Oxford Flowers dataset, leaving only the species as class labels, as an effort towards homogenizing the sample spaces implied by the class labels and the image dataset.",
"This leaves us with 55 classes and 3545 images.",
"For each dataset, we collect language descriptions for object classes from websites like Wikipedia.",
"To collect language descriptions from Wikipedia, we use the python package Wikipedia (Goldsmith) to access structured representation of Wikipedia pages, and extract section content under Description to be used as object class descriptions.",
"If no Wikipedia entry exists for a specific object class, we resort to manually searching for the object class description on Google.",
"Furthermore, we truncate these textual excerpts to a maximum length of 512 tokens to avoid excessively long descriptions and the accompanying computational issues.",
"Recent study showed that previous zero-shot learning evaluation protocols are inadequate and proposed a set of rigorous evaluation protocols for attribute-based zero-shot learning methods (Xian et al., 2017).",
"Although both our tasks and datasets differ, we nevertheless followed Xian et",
"al.'s guiding principles of their Rigorous Protocol and developed our evaluation protocol: Similar to Rigorous Protocol , we used two meta-splits Standard Splits (SS) and Proposed Splits (PS) to evaluate all methods; the Standard Splits are established meta-splits and Proposed Splits are meta-splits that guarantees the exclusion of ImageNet-1K classes from the test set.",
"Due to class imbalance, Rigorous Protocol proposes to use per-class averaged accuracy for more meaningful evaluation.",
"Thus, to evaluate meta-model on a meta-split containing Datasets Total Standard Split/Proposed Split Training Validation Testing CUB (Wah et al., 2011) 200 100 50 50 AWA2 (Zhu et al., 2017; Elhoseiny et al., 2017) 50 30 10 10 NAB (Zhu et al., 2017) 404 324 40 40 FS (Nilsback and Zisserman, 2006) 55 35 10 10 Table 1: Number of Class Labels in Our Meta-Split (both Standard Split and Proposed Split).",
"averaged accuracy as shown in Equation 7.",
"Acc C = 1 | C | (cid:88) c C #correctly predicted samples in c #total samples in c (7) Unlike Rigorous Protocol which uses ResNet-101 model, we use ResNet-18 as our pretrained model; such choice helps reduce the output dimensionality of N 3 by reducing the number of parameters N 3 adapts.",
"Our method is unique in that permutations of the classes belonging to the same meta-split count as distinct tasks and therefore, to account for variations, we test our models on 10 different permutations of test set classes and report the medium value of the relevant evaluation metric.",
"PDCNN PDCNN (Lei Ba et al., 2015) is the most relevant prior method as it use the natural language descriptions of class labels to generate classification layers capable of distinguishing between objects described.",
"Note that PDCNN is distinct from our work in that it dynamically generates fully connected layers and/or additional convolutional layers to be appended to a pre-trained deep neural network (VGG-19) whilst ours generates parameter adaptations to be combined directly with all existing layers within deep neural network models, effectively fine-tuning the pre-trained model.",
"We compare with two variants of PDCNN, with PDCNNFC generating a fully connected layer only for classification and PDCNN FC+Conv producing an additional convolutional layer to help with classification.",
"To make our works comparable, we replaced the TF-IDF feature extractor in PDCNN with a BERT-based document embedding (specifi-cally, a BERT token embedding followed by max-pooling) and changed the pre-trained model from VGG-19 to ResNet-18.",
"MEGAZSL Due to the scarcity of prior work leveraging natural language descriptions for zero-shot classifications in a metadata-efficient way, we also adapted less metadata-efficient methods to function with stricter metadata-efficiency requirements.",
"Specifically, ZSLPP (Elhoseiny et al., 2017), GAZSL (Zhu et al., 2017) and CorrectionNetwork (Hu et al., 2019) all utilize natural language object class descriptions to produce classifiers capable of distinguishing between images belonging to unseen categories during training.",
"However, all of them require significantly more metadata: specifically, parts annotations of each sample image are used to provide extra supervision of the training procedure.",
"Among these methods, GAZSL (Zhu et al., 2017) stands out as the most cited work; therefore we adapted the code released by its authors to learn from only natural language metadata, without using parts annotations; to distinguish our modified version from the original, we will refer to our modified version as MEGAZSL ( M etadataE fficient GAZSL).",
"To make our works comparable, we also updated its language representation from TF-IDF to BERT-based ones and used ResNet-18 as the image feature embedding module.",
"It is worth noting the CorrectionNet(Hu et al., 2019) is orthogonal to our work as it is designed to improve any existing zero-shot classification task modules, and in its original setup, GAZSL (Zhu et al., 2017) was used as the main task module, which we do include in our experimental comparison.",
"For all experiments, hyper-parameters are tuned Method CUB-50 AWA2-10 NAB-40 FS-10 SS PS SS PS SS PS SS PS PDCNNFC (Lei Ba et al., 2015) 6.1 6.1 12.8 18.4 6.7 7.4 13.1 11.9 PDCNN FC+CONV (Lei Ba et al., 2015) 7.5 6.5 22.6 17.2 8.9 5.6 7.7 13.6 MEGAZSL(Zhu et al., 2017) 2.9 1.8 14.0 10.4 2.8 3.7 10.3 12.3 N 3 (Proposed) 17.6 9.5 34.0 37.5 14.1 20.7 16.4 17.6 Table 2: Zero-Shot Classification Accuracy (Defined by Eq. 7) On Various Datasets/Meta-Splits Combinations; number of classification labels used in the test set are recorded next to the name of each dataset.",
"with the same exact algorithms (random search) and for the same number of runs (10).",
"In this section, we compare N 3 method with prior work on 4 zero-shot image classification benchmark datasets.",
"Furthermore, we provide a simple approach to help interpret what part of the language input is being taken as evidence by the synthesized visual classifier to make predictions.",
"We then show through ablation studies the importance of adaptation of all parameters in the pre-trained network, the necessity of pre-trained networks, and the effect of language representation choices on the success of N 3 .",
"We report performance in per-class averaged accuracy, as shown in Table",
"2. In these experiments, we standardized the language representations, dataset-splits, and other factors orthogonal to our model design to ensure fairness of comparison.",
"We also include experimental results comparing N 3 to models in their respective original settings in the Appendix.",
"From Table 2, we can clearly observe that N 3 outperforms all competing methods by a significant margin on all 8 dataset/meta-split combinations.",
"Noticing the large performance gap between MEGAZSL and the original GAZSL, we performed additional investigations to pinpoint the cause: we reproduced one of their experiments (on CUB dataset with SCE meta-split), and replaced their TF-IDF module with BERT document embedding module.",
"The modified module performed noticeably better (from 10.3% to 11.3%); however, when we replaced its image feature embedding module, which is trained with additional parts annotations of bird images, the performance dropped significantly (from 11.3% to 3.3%), confirming our conjecture that such methods cannot be easily adapted to work without extra supervision in the form of additional data annotation.",
"In this section, we explore how N 3 model architecture can help interpret the predictions made by the adapted model.",
"Specifically, the design of N 3 is unique in that it examines all object class descriptions in order to adapt neural network parameters.",
"This means that N 3 can adapt neural networks to seek both positive and negative evidence for an image to be classified.",
"Naturally, we want to understand how the model is using these object class descriptions.",
"In Figure.",
"3, we present our findings by visualizing the magnitude of ED ij where E is the loss value computed on a test example (in our experiment, this example is correctly predicted to be an Acadian Flycatcher) and D ij is the BERT representation of the j -th word in the i -th object class description.",
"We present two patterns in the data that are indicative of the model behavior: Top Positive Evidence : top positive evidence is identified as tokens in the description of the ground truth label with large gradients.",
"Intuitively, these tokens are top supporting evidence that encourages the prediction of the correct label.",
"We locate them by ranking all tokens in the description of the ground truth label by their magnitude of gradients and take the top few.",
"Top Negative Evidence : top negative evidence is identified as tokens with large gradients descriptions of the negative labels that also has a low predicted probability.",
"Intuitively, these tokens are the keywords deemed as important evidence when rejecting to predict a label.",
"We locate such evidence by first ranking the descriptions by the ratio between their largest token gradients and the softmax probability of the corresponding label.",
"Then, within the set of class descriptions with the largest aforementioned ratios, we locate the tokens with the largest gradients.",
"Examples for both types of keyword evidence are presented in 3 with their context.",
"Several observations can be drawn about how N 3 uses textual descriptions to make classification decisions.",
"Firstly, we can observe that distinguishing features described with keywords such as olive, darker, yellow and grey are used to support both positive and negative identifications, which support our conjecture that N 3 examines descriptions from all object class descriptions to make classification decisions, sometimes employing the process of elimination.",
"Secondly, we can observe that some label (or label word piece) like flicker and tern is used as evidence to support or refute a classification decision, which seems to suggest that some task-specific knowledge is learned and employed in the language representations.",
"To account for the improved accuracy of models synthesized with whole-model parameter adaptations, we performed additional analysis to understand the features models are utilizing when making predictions.",
"Two variants of N 3 meta-models sharing the same set of hyper-parameters are trained on CUB-50 with the standard split.",
"One is allowed to adapt parameters of every layer whilst the other is ablated only to generate the classification layer.",
"Saliency map (as described in (Si-Figure 4: Saliency map showing magnitude of gradients of loss w.r.t. every pixel in the input image. Left: original image. Middle: saliency map from a model with every layer adapted by N 3 . Right: saliency map from a model where N 3 is restricted to act on only its fc layer. Purple indicates small values while blue/green indicates large values. monyan et al., 2014)) visualizing the importance of each pixel w.r.t. predictions is plotted as a heatmap in Figure.",
"4. Clearly, fully adapted models show a greater concentration on task-specific regions.",
"To adapt generic pre-trained model parameters to task-specific ones, N 3 mixes the pre-trained model parameters with a set of generated model parameter adaptations, using a trainable mixing ratio as illustrated in Equation 6.",
"It is natural to question whether the pre-trained parameters are necessary.",
"In other words, can N 3 generate parameters for a task-specific model from scratch and achieve reasonable classification accuracy?",
"To study the importance of pre-trained model parameters, we exinitial Acc@1 0.999 0.001 16.2 % 0.99 0.01 18.5 % 0.9 0.1 16.2 % 0.5 0.5 6.2 % 0.1 0.9 2.1 % 0.01 0.99 2.3 % Table 3: Comparison of different and initial values.",
"periment with a modified version of Equation 6: (cid:48) = + Where is a fixed scalar weight given to the pretrained model, is still the trainable scaling factor.",
"The choice of and the initial value of affects the extent to which the pre-trained model contribute to the parameter-adapted one.",
"In our main experiments, we have fixed to be 1 and initialized to be 1 3 , such choice of value initialization proves to work well, yet it remains unclear to what extent the superior performance of N 3 depends on these two hyperparameters.",
"To answer this question, we decide to vary and ; In order to keep the magnitude of parameters relatively stable, we always set the initial value of to be 1 across different settings here.",
"We trained N 3 to produce a 50-way classification model with varying , and the resultant zero-shot classification performance are shown in Table",
"3. Such an ablation experiment demonstrates that setting a small initial is crucial for performance, yet it is not the smaller the better, re-affirming the crucial role of parameter adaptation in achieving good performance.",
"To understand the importance of pre-trained BERT module, and whether N 3 can be generalized to be used with pre-trained word embedding modules other than BERT, we compared the classification accuracy of models adapted by variants of N 3 using different word embeddings.",
"Concretely, we experimented with BERT (Devlin et al., 2018), ELMo (Peters et al., 2018) and GloVe (Pennington et al., 2014) embeddings.",
"Results are shown in Table",
"4. While BERT prevails as expected, ELMo seems to under-perform GloVe.",
"We hypothesize that ELMo might have been impacted by unexpectedly long sequence lengths (here each description is a paragraph up to 512 tokens), since LSTM-based models are known to be worse at capturing long-range dependencies.",
"To further investigate, we trained a variant of N 3 , where we limit the context of ELMo to each sentence instead of the entire paragraph.",
"As expected, the performance increases noticeably, but the resulting performance is still not on par with other methods.",
"In contrast, GloVe embeddings worked better since such static embeddings are not affected by the long context windows.",
"In this paper, we have demonstrated that small amount of unstructured natural language descriptions for object classes can provide enough information to fine-tune an entire pretrained neural network to perform classification task on class labels unseen during training.",
"We have achieved state-of-the-art performance for natural language guided zero-shot image classification tasks on 4 public datasets with practical metadata requirements.",
"In addition, we presented in-depth analysis and extensive ablation studies on various aspects of the model functioning mechanism and architecture design, showing the necessity of our design contribution in achieving good results.",
"This material is based upon work partially supported by the National Science Foundation (Awards #1750439 #1722822) and National Institutes of Health.",
"Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily re-flect the views of National Science Foundation or National Institutes of Health, and no official endorsement should be inferred."
] | [
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"objective",
"result",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"method",
"objective",
"objective",
"result",
"result",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"other",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"other",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"objective",
"other",
"other"
] |
[
"Ordinal Classification (OC) is an important classification task where the classes are ordinal.",
"For example, an OC task for sentiment analysis could have the following classes: highly positive, positive, neutral, negative, highly negative .",
"Clearly, evaluation measures for an OC task should penalise misclassifica-tions by considering the ordinal nature of the classes (e.g., highly positive misclassified as positive vs. misclassifed as highly negative ).",
"Ordinal Quantification (OQ) is a related task where the gold data is a distribution over ordinal classes, and the system is required to estimate this distribution.",
"Evaluation measures for an OQ task should also take the ordinal nature of the classes into account.",
"However, for both OC and OQ, there are only a small number of known evaluation measures that meet this basic requirement.",
"In the present study, we utilise data from the SemEval and NTCIR communities to clarify the properties of nine evaluation measures in the context of OC tasks, and six measures in the context of OQ tasks.",
"In NLP and many other experiment-oriented research disciplines, researchers rely heavily on evaluation measures.",
"Whenever we observe an improvement in the score of our favourite measure, we either assume or hope that this implies that we have managed to moved our system a little towards what we ultimately want to achieve.",
"Hence it is of utmost importance to examine whether evaluation measures are measuring what we want to measure, and to understand their properties.",
"This paper concerns evaluation measures for Ordinal Classification (OC) and Ordinal Quantification (OQ) tasks.",
"In an OC task, the classes are ordinal, not nominal.",
"For example, Task 4 (Senti-ment Analysis in Twitter) Subtask C in SemEval-2016/2017 is defined as: given a set of tweets about a particular topic, estimate the sentiment conveyed by each tweet towards the topic on a five-point scale ( highly negative, negative, neutral, positive, highly positive ) (Nakov et al., 2016; Rosenthal et al., 2017).",
"On the other hand, an OQ task involves a gold distribution of labels over ordinal classes and the system's estimated distribution.",
"For example, Task 4 Subtask E of the SemEval-2016/2017 workshops is defined as: given a set of tweets about a particular topic, estimate the distribution of the tweets across the five ordinal classes already mentioned above (Nakov et al., 2016; Rosenthal et al., 2017).",
"The Dialogue Breakdown Detection Challenge (Higashinaka et al., 2017) and the Dialogue Quality subtasks of the NTCIR-14 Short Text Conversation (Zeng et al., 2019) and the NTCIR-15 Dialogue Evaluation (Zeng et al., 2020) tasks are also OQ tasks.",
"1 Clearly, evaluation measures for OC and OQ tasks should take the ordinal nature of the classes into account.",
"For example, in OC, when a highly positive item is misclassified as highly negative , that should be penalised more heavily than when it is misclassified as positive .",
"Surprisingly, however, there are only a small number of known evaluation measures that meet this requirement.",
"In the present study, we use data from the SemEval and NTCIR communities to clarify the properties of nine evaluation measures in the context of OC tasks, and six measures in the context of OQ tasks.",
"Some of these measures satisfy the aforementioned basic requirement for ordinal classes; others do not.",
"1 In terms of data structure, we observe that the relationship between OC and OQ are similar to that between paired data and two-sample data in statistical significance testing.",
"In OC, we examine which item is classified by the system into which class, and build a confusion matrix of gold and estimated classes.",
"In contrast, in OQ, we compare the system's distribution of items with the gold distribution, but we do not concern ourselves with which item in one distribution corresponds to which item in the other.",
"Section 2 discusses prior art.",
"Section 3 provides formal definitions of the measures we examine, as this is of utmost importance for reproducibility.",
"Section 4 describes the data we use to evaluate the measures.",
"Sections 5 and 6 report on the results on the OC and OQ measures, respectively.",
"Finally, Section 7 concludes this paper.",
"As we have mentioned in Section 1, Task 4 Subtask C of the SemEval-2016/2017 workshops is an OC task with five ordinal classes (Nakov et al., 2016; Rosenthal et al., 2017).",
"While SemEval also features other OC tasks with fewer classes (e.g., Task 4 Subtask A from the same years, with three classes), we use the Subtask C data as having more classes should enable us to see more clearly the difference between measures that consider ordinal classes and those that do not.",
"2 Note that if there are only two classes, OC is reduced to nominal classification.",
"Subtask C used two evaluation measures that consider the ordinal nature of the classes: macroaveraged Mean Absolute Error (MAEM ) and the standard Mean Absolute Error (MAE ) (Baccianella et al., 2009).",
"At ACL 2020, Amigo et al. (2020) proposed a measure specifically designed for OC, called Closeness Evaluation Measure (CEMORD ), and discussed its axiomatic properties.",
"Their meta-evaluation experiments primarily focused on comparing it with other measures in terms of how each measure agrees simultaneously with all of preselected gold measures.",
"However, while their results showed that CEMORD is similar to all of these gold measures, the outcome may differ if we choose a different set of gold measures.",
"Indeed, in the context of evaluating information retrieval evaluation measures, Sakai and Zeng (2019) demonstrated that a similar meta-evaluation approach called unanimity (Amigo et al., 2018) depends heavily on the choice of gold measures.",
"Moreover, while Amigo et al. (2020) reported that CEMORD also performs well in terms of consistency of system rankings across different data (which they refer to as robustness), experimental details were not provided in their paper.",
"Hence, to complement their work, the present study conducts extensive and re-2 SemEval-2018 Task 1 (Affect in Tweets) featured an OC task with four classes (Mohammad et al., 2018).",
"However, the run submission files of this task are not publicly available.",
"producible experiments for OC measures.",
"Our OC meta-evaluation experiments cover nine measures, including MAEM , MAE , and CEMORD .",
"As we have mentioned in Section 1, Task 4 Subtask E of the SemEval-2016/2017 workshops is an OQ task with five ordinal classes (Nakov et al., 2016; Rosenthal et al., 2017).",
"3 Subtask E used Earth Mover's Distance (EMD), remarking that this is currently the only known measure for ordinal quantification (Nakov et al., 2016; Rosenthal et al., 2017).",
"Subsequently, however, Sakai (2018a) proposed a new suite of OQ measures based on Order-aware Divergence (OD), 4 and compared them with Normalised Match Distance (NMD), a normalised version of EMD.",
"Sakai utilised data from the Third Dialogue Breakdown Detection Challenge (DBDC3) (Higashinaka et al., 2017), which features three ordinal classes, and showed that his Root Symmetric Normalised OD (RSNOD) measure behaves similarly to NMD.",
"However, his experiments relied on the run submission files from his own team, as he did not have access to the entire set of DBDC3 submission files.",
"On the other hand, the organisers of DBDC3 (Tsunomori et al., 2020) compared RSNOD, NMD, and the official measures of DBDC (namely, Mean Squared Error and Jensen-Shannon Divergence, which ignore the ordinal nature of the classes) using all the run submission files from DBDC3.",
"They reported that RSNOD was the overall winner in terms of system ranking consistency and discriminative power , i.e., the ability of a measure to obtain many statistical significant differences (Sakai, 2006, 2007, 2014).",
"In addition to the aforementioned two Subtask E data sets from SemEval, the present study utilises three data sets from the Dialogue Quality (DQ) Subtasks of the recent NTCIR-15 Dialogue Evaluation (DialEval-1) Task (Zeng et al., 2020).",
"Each DQ subtask is defined as: given a helpdesk-customer dialogue, estimate the probability distribution over the five-point Likert-scale Dialogue Quality ratings (See Section 4).",
"Our OQ meta-evaluation experiments cover six measures, including NMD and RSNOD.",
"3 The Valence Ordinal Classification subtask of SemEval-2018 Task 1 (Affect in Tweets) is also an OQ task, with seven classes (Mohammad et al., 2018).",
"However, the submission files of this task are not publicly available.",
"4 See also Sakai (2017) for an earlier discussion on OD.",
"In the OC tasks of SemEval-2016/2017, a set of topics was given to the participating systems, where each topic is associated with N tweets.",
"( N varies across topics.)",
"Given a set C of ordinal classes represented by consecutive integers, each OC system yields a | C | | C | confusion matrix for each topic.",
"From this, we can calculate evaluation measures described below.",
"Finally, the systems are evaluated in terms of mean scores over the topic set.",
"Let c ij denote the number of items (e.g., tweets) whose true class is j , classified by the system into i ( i, j C ) so that N = (cid:80) j (cid:80) i c ij .",
"Let c j = (cid:80) i c ij , c i = (cid:80) j c ij , and C + = { j C | c j > 0 } .",
"That is, C + is the set of gold classes that are not empty .",
"We compute MAE's as follows.",
"Unlike the original formulation of MAEM by Bac-cianella et al. (2009), ours explicitly handles cases where there are empty gold classes (i.e., j s.t. c j = 0 ).",
"Empty gold classes actually do exist in the SemEval data used in our experiments.",
"It is clear from the weights used above ( | i j | ) that MAEs assume equidistance , although this is not guaranteed for ordinal classes.",
"Hence Amigo et al. (2020) propose the following alternative: CEMORD = (cid:80) j C (cid:80) i C prox ij c ij (cid:80) j C prox jj c j , (3) where prox ij = log 2 (max { 0 . 5 , K ij } /N ) , and K ij = (cid:40) c i / 2 + (cid:80) jl = i +1 c l ( i j ) c i / 2 + (cid:80) i 1 l = j c l ( i > j ) .",
"(4) Our formulation of prox ij with a max operator ensures that it is a finite value even if K ij = 0 .",
"We also consider Weighted (Cohen, 1968).",
"We first compute the expected agreements when the system and gold labels are independent: e ij = c i c j /N .",
"Weighted is then defined as: = 1 (cid:80) j C (cid:80) i C w ij c ij (cid:80) j C (cid:80) i C w ij e ij , (5) where w ij is a predefined weight for penalising misclassification.",
"In the present study, we follow the approach of MAEs (Eqs. 1-2) and consider Linear Weighted : w ij = | i j | .",
"However, it should be noted here that is not useful if the OC task involves baseline systems such as the ones included in the aforementioned SemEval tasks: that is, a system that always returns Class 1, a system that always returns Class 2, and so on.",
"It is easy to mathematically prove that returns a zero for all topics for all such baseline systems.",
"We also consider applying Krippendorff's (Krippendorff, 2018) to OC tasks.",
"The is a measure of data label reliability, and can handle any types of classes by plugging in an appropriate distance function.",
"Instead of the | C | | C | confusion matrix, the requires a | C | N class-by-item matrix that contains label counts n i ( u ) , which represents the number of labels which say that item u belongs to Class i .",
"For an OC task, n i ( u ) = 2 if both the gold and system labels for u is i ; n i ( u ) = 1 if either the gold or system label (but not both) for u is i ; n i ( u ) = 0 if neither label says u belongs to i .",
"Thus, this matrix ignores which labels are from the gold data and which are from the system.",
"For comparing two complete sets of labels (one from the gold data and the other from the sys-tem), the definition of Krippendorff's is relatively simple.",
"Let n i = (cid:80) u n i ( u ) ; this is the total number of labels that Class i received from the two sets of labels.",
"The observed coincidence for Classes i and j ( i, j C, i (cid:54) = j ) is given by O ij = (cid:80) u n i ( u ) n j ( u ) , while the expected coincidence is given by E ij = n i n j / (2 N 1) .",
"The is defined as: = 1 (cid:80) i (cid:80) j>i O ij 2 ij (cid:80) i (cid:80) j>i E ij 2 ij , (6) where, for ordinal data, 2 ij = ( j (cid:88) k = i n k n i + n j 2 ) 2 , (7) and for interval data, 2 ij = | i j | 2 (Krippendorff, 2018).",
"We shall refer to these two versions of as -ORD and -INT, respectively.",
"Unlike , the 's can evaluate the aforementioned baseline systems without any problems.",
"The three measures defined below ignore the ordinal nature of the classes.",
"That is, they are axiomatically incorrect as OC evaluation measures.",
"First, let us consider two different definitions of Macro F1 found in the literature (Opitz and Burst, 2019): to avoid confusion, we give them different names in this paper.",
"For each j C + , let Prec j = c jj /c j if c j > 0 , and Prec j = 0 if c j = 0 (i.e., the system never chooses Class j ).",
"Let Rec j = c jj /c j .",
"Also, for any positive values p and r , let f 1( p, r ) = 2 pr/ ( p + r ) if p + r > 0 , and let f 1( p, r ) = 0 if p = r = 0 .",
"Then: F1^M = (1/|C+|) Σ_{j∈C+} f1(Prec_j, Rec_j). (8)",
"Now, let Prec^M = Σ_{j∈C+} Prec_j / |C+|, Rec^M = Σ_{j∈C+} Rec_j / |C+|, and HMPR = f1(Prec^M, Rec^M). (9)",
"HMPR stands for Harmonic mean of Macroaveraged Precision and macroaveraged Recall.",
"Opitz and Burst (2019) recommend what we call F1^M over what we call HMPR.",
"Again, note that our formulations use C + to clarify that empty gold classes are ignored.",
"Finally, we also consider Accuracy: Accuracy = Σ_{j∈C} c_jj / N. (10)",
"From Eqs. 2 and 10, it is clear that MAE and Accuracy ignore class imbalance (Baccianella et al., 2009), unlike the other measures.",
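The difference between the two Macro F1 definitions (Eqs. 8 and 9) can be made concrete with a small sketch. This is our own illustration; the gold-row/system-column layout of the confusion matrix `conf` is an assumption, and `classes` stands for the nonempty gold classes C+.

```python
def f1(p, r):
    """Harmonic mean of p and r (0 if both are 0)."""
    return 2 * p * r / (p + r) if p + r > 0 else 0.0

def macro_f1_and_hmpr(conf, classes):
    """conf[i][j]: number of items whose gold class is i and system class is j
    (a hypothetical layout); classes: the nonempty gold classes C+."""
    precs, recs, f1s = [], [], []
    for j in classes:
        col = sum(conf[i][j] for i in range(len(conf)))  # items the system put in j
        row = sum(conf[j])                               # items whose gold class is j
        prec = conf[j][j] / col if col > 0 else 0.0
        rec = conf[j][j] / row
        precs.append(prec)
        recs.append(rec)
        f1s.append(f1(prec, rec))
    f1_m = sum(f1s) / len(f1s)                                  # Eq. 8
    hmpr = f1(sum(precs) / len(precs), sum(recs) / len(recs))   # Eq. 9
    return f1_m, hmpr
```

F1^M averages per-class F1 scores, while HMPR first averages precision and recall and only then takes the harmonic mean; the two coincide for a perfectly diagonal confusion matrix.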
"In an OQ task, a comparison of an estimated distribution and the gold distribution over | C | ordinal classes yields one effectiveness score, as described below.",
"The systems are then evaluated by mean scores over the test instances , e.g., topics (Nakov et al., 2016; Rosenthal et al., 2017) or dialogues (Zeng et al., 2019, 2020).",
"Let p_i denote the estimated probability for Class i, so that Σ_{i∈C} p_i = 1.",
"Similarly, let p*_i denote the true probability.",
"We also denote the entire probability distributions by p and p*, respectively.",
"Let cp_i = Σ_{k≤i} p_k, and cp*_i = Σ_{k≤i} p*_k.",
"Normalised Match Distance (NMD), used in the NTCIR Dialogue Quality Subtasks (Zeng et al., 2019, 2020), is given by (Sakai, 2018a): NMD(p, p*) = Σ_{i∈C} |cp_i − cp*_i| / (|C| − 1).",
"Since the system and the gold data have the same total number of items to classify (i.e., N), Accuracy is the same as microaveraged F1/recall/precision.",
"This is simply a normalised version of the EMD used in the OQ tasks of SemEval (see Section 2.2) (Nakov et al., 2016; Rosenthal et al., 2017).",
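The NMD definition above (sum of absolute cumulative-probability differences, normalised by |C| − 1) fits in a few lines of Python. This is an illustrative sketch of ours, not the official evaluation code; `p_sys` and `p_gold` are probability distributions listed in ordinal class order.

```python
def nmd(p_sys, p_gold):
    """Normalised Match Distance: sum of |CDF differences| over (|C| - 1)."""
    assert len(p_sys) == len(p_gold) and len(p_sys) > 1
    cp_s = cp_g = 0.0
    total = 0.0
    for ps, pg in zip(p_sys, p_gold):
        cp_s += ps          # cumulative system probability cp_i
        cp_g += pg          # cumulative gold probability cp*_i
        total += abs(cp_s - cp_g)
    return total / (len(p_sys) - 1)
```

Putting all mass on the wrong extreme class yields the maximum value of 1, which is the point of the normalisation.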
"We also consider two measures that can handle OQ tasks from Sakai (2018a).",
"First, a Distance-Weighted sum of squares for Class i is defined as: DW_i = Σ_{j∈C} |i − j| (p_j − p*_j)². (12)",
"Note that the above assumes equidistance.",
"Let C* = { i ∈ C | p*_i > 0 }.",
"That is, C* is the set of classes with a positive gold probability.",
"Order-aware Divergence is defined as: OD(p ‖ p*) = (1/|C*|) Σ_{i∈C*} DW_i, (13) with its symmetric version SOD(p, p*) = (OD(p ‖ p*) + OD(p* ‖ p)) / 2.",
"Root (Symmetric) Normalised Order-aware Divergence is defined as: RNOD(p ‖ p*) = √( OD(p ‖ p*) / (|C| − 1) ), (14) and RSNOD(p, p*) = √( SOD(p, p*) / (|C| − 1) ). (15)",
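The OD/RNOD/RSNOD definitions above (Eqs. 12–15) can be sketched as follows; this is our own minimal sketch under the equidistance assumption w_ij = |i − j|, with distributions given in ordinal class order.

```python
import math

def od(p, p_star):
    """Order-aware Divergence OD(p || p*): mean of the distance-weighted
    squared errors DW_i over the classes i with positive gold probability."""
    classes = range(len(p))
    c_star = [i for i in classes if p_star[i] > 0]
    def dw(i):
        return sum(abs(i - j) * (p[j] - p_star[j]) ** 2 for j in classes)
    return sum(dw(i) for i in c_star) / len(c_star)

def rnod(p, p_star):
    """Root Normalised Order-aware Divergence (Eq. 14)."""
    return math.sqrt(od(p, p_star) / (len(p) - 1))

def rsnod(p, p_star):
    """Root Symmetric Normalised Order-aware Divergence (Eq. 15)."""
    sod = (od(p, p_star) + od(p_star, p)) / 2
    return math.sqrt(sod / (len(p) - 1))
```

Note the asymmetry of OD: only the support of the second (gold) argument determines which DW_i terms are averaged, which is exactly what RSNOD symmetrises away.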
"The other three measures defined below ignore the ordinal nature of the classes (Sakai, 2018a); they are axiomatically incorrect as OQ measures.",
"Normalised Variational Distance (NVD) is essentially the Mean Absolute Error (MAE): NVD(p, p*) = (1/2) Σ_{i∈C} |p_i − p*_i|. (16)",
"Root Normalised Sum of Squares (RNSS) is essentially the Root Mean Squared Error (RMSE): RNSS(p, p*) = √( Σ_{i∈C} (p_i − p*_i)² / 2 ). (17)",
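The two distribution-level measures of Eqs. 16–17 are one-liners; the sketch below is ours, for illustration only.

```python
import math

def nvd(p, p_star):
    """Normalised Variational Distance (a normalised MAE), Eq. 16."""
    return sum(abs(a - b) for a, b in zip(p, p_star)) / 2

def rnss(p, p_star):
    """Root Normalised Sum of Squares (a normalised RMSE), Eq. 17."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, p_star)) / 2)
```

Both are bounded in [0, 1] for probability distributions, reaching 1 when the two distributions put all mass on different classes; neither is aware of the class ordering.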
"The advantages of RMSE over MAE are discussed in Chai and Draxler (2014).",
"The Kullback-Leibler divergence (KLD) for system and gold probability distributions over classes is given by: KLD(p ‖ p*) = Σ_{i∈C s.t. p_i>0} p_i log₂(p_i / p*_i). (18)",
"As this is undefined if p*_i = 0, we use the more convenient Jensen-Shannon divergence (JSD) instead, which is symmetric (Lin, 1991): JSD(p, p*) = (KLD(p ‖ p_M) + KLD(p* ‖ p_M)) / 2, (19) where p_{Mi} = (p_i + p*_i)/2.",
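The JSD construction of Eqs. 18–19 can be sketched directly; this is our own sketch, using the base-2 logarithm so that JSD is bounded by 1.

```python
import math

def kld(p, q):
    """KL divergence in bits, summing only over i with p_i > 0 (Eq. 18)."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def jsd(p, p_star):
    """Jensen-Shannon divergence (Eq. 19): average KLD to the mixture p_M,
    which is always positive wherever p or p* is, so no term is undefined."""
    p_m = [(a + b) / 2 for a, b in zip(p, p_star)]
    return (kld(p, p_m) + kld(p_star, p_m)) / 2
```

Because each distribution is compared against the mixture rather than against the other distribution, zero probabilities in either p or p* cause no division-by-zero, which is exactly the convenience noted above.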
"Table 1 provides an overview of the SemEval and NTCIR task data that we leveraged for our OC and OQ meta-evaluation experiments.",
"From SemEval-2016/2017 Task 4 (Sentiment Analysis in Twitter) (Nakov et al., 2016; Rosenthal et al., 2017), we chose Subtask C as our OC tasks and Subtask E as our OQ tasks, for the reason given in Section 2.1.",
"Moreover, for the OQ meta-evaluation experiments, we also utilise the DQ (Dialogue Quality) subtask data from NTCIR-15 DialEval-1 (Zeng et al., 2020).",
"As these subtasks require participating systems to estimate three different dialogue quality score distributions, namely, A-score (task accomplishment), E-score (dialogue effectiveness), and S-score (customer satisfaction), we shall refer to the subtasks as DQ-A, DQ-E, and DQ-S hereafter.",
"We utilise both Chinese and English DQ runs for our OQ meta-evaluation (22 runs in total), as the NTCIR task evaluates all runs using gold distributions that are based on the Chinese portion of the parallel dialogue corpus (Zeng et al., 2020).",
"As the three NTCIR data sets are larger than the two SemEval data sets both in terms of sample size and the number of systems, we shall focus on the OQ meta-evaluation results with the NTCIR data; the results with Sem16T4E and Sem17T4E can be found in the Appendix.",
"We do not use the Arabic data from 2017, as only two runs were submitted to Subtasks C and E (Rosenthal et al., 2017).",
"Table 2 shows, for each OC task, the Kendall's τ rank correlation values (Sakai, 2014) between the two system rankings for every pair of measures.",
"We can observe that: (A) the α's, the two Macro F1 measures (F1^M and HMPR), MAE^M, and κ produce similar rankings; (B) MAE and Accuracy (i.e., the two measures that ignore class imbalance) produce similar rankings, which are drastically different from those of Group A; and (C) CEM-ORD produces a ranking that is substantially different from those of the above two groups, although its ranking is closer to those of Group A. The huge gap between Groups A and B strongly suggests that MAE and Accuracy are not useful even as secondary measures for evaluating OC systems.",
"It should be noted that the SemEval 2016/2017 Task 4 Subtask C actually reported MAE scores in addition to the primary MAE^M scores, and the system rankings according to these two measures were completely different even in the official results.",
"For example, in the 2016 results (Table 12 in Nakov et al. (2016)), while the baseline run that always returns neutral is ranked at 10 among the 12 runs according to MAE^M, the same run is ranked at the top according to MAE.",
"Similarly, in the 2017 results (Table 10 in Rosenthal et al. (2017)), a run ranked at 10 (tied with another run) among the 20 runs according to MAE^M is ranked at the top according to MAE.",
"Our results shown in Table 2 generalise these known discrepancies between the rankings.",
"For each measure, we evaluate its system ranking consistency (or robustness (Amigo et al., 2020)) across two topic sets as follows (Sakai, 2021): (1) randomly split the topic set in half, produce two system rankings based on the mean scores over each topic subset, and compute a Kendall's τ score for the two rankings; (2) repeat the above 1,000 times and compute the mean τ; (3) conduct a randomised paired Tukey HSD test at α = 0.05 with 5,000 trials on the mean scores to discuss statistical significance.",
"Table 3(a) and (c) show the consistency results with the OC tasks.",
"For example, Part (a) shows that when the 100 topics of Sem16T4C were randomly split in half 1,000 times, κ statistically significantly outperformed all other measures, as indicated by a ♯.",
"Table 3(b) and (d) show variants of these experiments where only 10 topics are used in each topic subset, to discuss the robustness of the measures to small sample sizes.",
"If we take the averages of (a) and (c), the top three measures are the two α's and κ, while the worst two measures are CEM-ORD and Accuracy; we obtain the same result if we take the averages of (b) and (d).",
"Thus, although Amigo et al. (2020) reported that CEM-ORD performed well in terms of robustness, this is not confirmed in our experiments.",
"Recall that κ has a practical inconvenience: it cannot distinguish between baseline runs that always return the same class.",
"While Sem16T4C contains one such run (which always returns neutral), Sem17T4C contains as many as five such runs (each always returning one of the five ordinal classes).",
"This is probably why κ performs well in Table 3(a) and (b) but not in (c) and (d).",
"In the information retrieval research community, discriminative power (Sakai, 2006, 2007, 2014) is a widely-used method for comparing evaluation measures (e.g., Anelli et al. (2019); Ashkan and Metzler (2019); Chuklin et al. (2013); Clarke et al. (2020); Golbus et al. (2013); Lu et al. (2016); Kanoulas and Aslam (2009); Leelanupab et al. (2012); Robertson et al. (2010); Valcarce et al. (2020)).",
"Given a set of systems, a p-value for the difference in means is obtained for every system pair (preferably with a multiple comparison procedure (Sakai, 2018b)); highly discriminative measures are those that can obtain many small p-values.",
"While highly discriminative measures are not necessarily correct , we do want measures to be sufficiently discriminative so that we can draw some useful conclusions from experiments.",
"Again, we use randomised paired Tukey HSD tests with 5,000 trials for obtaining the p-values.",
"The Tukey HSD (Honestly Significant Differences) test is a multiple comparison procedure: that is, it is like the t-test, but can compare the means of more than two systems while ensuring that the familywise Type I error rate is α.",
"The randomised version of this test is free from assumptions such as normality and random sampling from a population (Sakai, 2018b).",
"Figure 1 shows the discriminative power curves for the OC tasks.",
"Curves that are closer to the origin (i.e., those with small p -values for many system pairs) are considered good.",
"We can observe that: (i) CEM-ORD, Accuracy, MAE^M, and MAE are the least discriminative measures in both tasks; and (ii) among the other measures that perform better, κ performs consistently well.",
"Again, the fact that κ distinguishes itself from the others in the Sem16T4C results probably reflects the fact that the data set contains only one run that always returns the same class, which cannot be handled properly by κ.",
"Table 4 summarises the properties of the nine measures we examined in the context of OC tasks.",
"Column (IV) shows that, for example, the Group A measures produce similar rankings.",
"Based on this table, we recommend (Linear Weighted) κ as the primary measure for OC tasks if the tasks do not involve multiple baseline runs that always return the same class.",
"Table 5: System ranking similarity in terms of Kendall's τ for each OQ task (NTCIR). (I) DQ-A — RNSS vs. RSNOD/RNOD/NVD/JSD/NMD: 0.835/0.913/0.939/0.905/0.636; RSNOD vs. RNOD/NVD/JSD/NMD: 0.870/0.861/0.827/0.766; RNOD vs. NVD/JSD/NMD: 0.939/0.939/0.723; NVD vs. JSD/NMD: 0.931/0.680; JSD vs. NMD: 0.714. (II) DQ-E — RNSS: 0.931/0.922/0.913/0.913/0.688; RSNOD: 0.957/0.948/0.948/0.758; RNOD: 0.957/0.991/0.749; NVD: 0.948/0.758; JSD: 0.758. (III) DQ-S — RNSS: 0.861/0.974/0.957/0.922/0.558; RSNOD: 0.887/0.887/0.853/0.662; RNOD: 0.983/0.948/0.584; NVD: 0.965/0.584; JSD: 0.619.",
"Such runs are unrealistic, so this limitation may not be a major problem.",
"On the other hand, if the tasks do involve such baseline runs (as in SemEval), we recommend α-ORD as the primary measure.",
"In either case, it would be good to use both κ and α-ORD to examine OC systems from multiple angles.",
"According to our consistency and discriminative power experiments, using α-INT instead of α-ORD (i.e., assuming equidistance) does not seem beneficial for OC tasks.",
"Table 5 shows, for each OQ task from NTCIR, the Kendall's τ between the two system rankings for every pair of measures.",
"It is clear from the NMD column that NMD is an outlier among the six measures.",
"In other words, among the axiomatically correct measures for OQ tasks, RNOD and RSNOD are the ones that produce rankings similar to those produced by well-known measures such as JSD and NVD (i.e., normalised MAE; see Eq. 16).",
"Also, in Table 5(I) and (III), it can be observed that the ranking by RSNOD lies somewhere between that by NMD (let us call it Group X) and those by the other measures (Group Y).",
"However, this is not true in Table 5(II), nor with our SemEval results (See Appendix Table 8).",
"Table 6 shows the system ranking consistency results with the OQ tasks from NTCIR.",
"These experiments were conducted as described in Section 5.2.",
"If we take the averages of (a), (c), and (e) (i.e., experiments where the 300 dialogues are split in half), the worst measure is NMD, followed by RSNOD.",
"Moreover, if we take the averages of (b), (d), and (f) (i.e., experiments where two disjoint sets of 10 dialogues are used), we obtain the same result.",
"Hence, among the axiomatically correct measures for OQ tasks, RNOD appears to be the best in terms of system ranking consistency, and introducing symmetry (compare Eqs. 14 and 15) may not be a good idea from a statistical stability point of view.",
"Note that, for comparing a system distribution with a gold distribution, symmetry is not a requirement.",
"Figure 2 shows the discriminative power curves for the OQ tasks from NTCIR.",
"We can observe that: (i) NMD performs extremely poorly in (I) and (III), which is consistent with the full-split consistency results in Table 6(a) and (e); and (ii) RNOD outperforms RSNOD in (I) and (III).",
"Although RSNOD appears to perform well in (II), if we consider the 5% significance level (i.e., 0.05 on the y -axis), the number of statistically significantly different pairs (out of 231) is 117 for RNOD, 116 for RSNOD, NMD, and NVD, and 115 for RNSS and JSD.",
"That is, RNOD performs well in (II) also.",
"These results also suggest that introducing symmetry to RNOD (i.e., using RSNOD instead) is not beneficial.",
"Table 7 summarises the properties of the six measures we examined in the context of OQ tasks.",
"Column (III) indicates that NMD is an outlier in terms of system ranking.",
"Based on this table, we recommend RNOD as the primary measure for OQ tasks, as evaluating OQ systems does not require the measures to be symmetric.",
"As a secondary measure, we recommend NMD (i.e., a form of Earth Mover's Distance) to examine the OQ systems from a different angle, although its statistical stability (in terms of system ranking consistency and discriminative power) seems relatively unpredictable.",
"Table 7: Summary of the properties of the OQ measures — NMD: Group X, poor consistency, poor discriminative power; RSNOD: Group Y, poor consistency, fair discriminative power; RNOD, NVD, RNSS, and JSD: Group Y, good consistency, fair discriminative power.",
"Although the NTCIR Dialogue Quality subtasks (Zeng et al., 2019, 2020) have used NMD and RSNOD as the official measures, it may be beneficial for them to replace RSNOD with RNOD.",
"We conducted extensive evaluations of nine measures in the context of OC tasks and six measures in the context of OQ tasks, using data from SemEval and NTCIR.",
"As we have discussed in Sections 5.4 and 6.4, our recommendations are as follows.",
"OC tasks: Use (Linear Weighted) κ as the primary measure if the task does not involve multiple runs that always return the same class (e.g., one that always returns Class 1, another that always returns Class 2, etc.).",
"Otherwise, use α-ORD (i.e., Krippendorff's α for ordinal classes) as the primary measure.",
"In either case, use both measures.",
"All of our evaluation measure score matrices are available from https://waseda.box.com/ACL2021PACKOCOQ , to help researchers reproduce our work.",
"Among the above recommended measures, recall that Linear Weighted κ and RNOD assume equidistance (i.e., they rely on w_ij = |i − j|), while α-ORD and NMD do not.",
"Hence, if researchers want to avoid relying on the equidistance assumption (i.e., satisfy the ordinal invariance property (Amigo et al., 2020)), α-ORD can be used for OC tasks and NMD can be used for OQ tasks.",
"However, we do not see relying on equidistance as a practical problem.",
"For example, note that the Linear Weighted κ is just an instance of the Weighted κ family: if necessary, the weight w_ij can be set for each pair of Classes i and j according to practical needs.",
"Similarly, w_ij = |i − j| (Eq. 12) for RNOD (and other equidistance-based measures) may be replaced with a different weighting scheme (e.g., something similar to the prox_ij weights of CEM-ORD) if need be.",
"Our final and general remark is that it is of utmost importance for researchers to understand the properties of evaluation measures and ensure that they are appropriate for a given task.",
"Our future work includes evaluating and understanding evaluation measures for tasks other than OC and OQ.",
"This work was partially supported by JSPS KAK-ENHI Grant Number 17H01830."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"result",
"method",
"objective",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"other",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"method",
"other"
] |
[
"This paper introduces improved methods for sub-event detection in social media streams, by applying neural sequence models not only on the level of individual posts, but also directly on the stream level.",
"Current approaches to identify sub-events within a given event, such as a goal during a soccer match, essentially do not exploit the sequential nature of social media streams.",
"We address this shortcoming by framing the sub-event detection problem in social media streams as a sequence labeling task and adopt a neural sequence architecture that explicitly accounts for the chronological order of posts.",
"Specifically, we (i) establish a neural baseline that outperforms a graph-based state-of-the-art method for binary sub-event detection (2.7% micro-F1 improvement), as well as (ii) demonstrate the superiority of a recurrent neural network model on the post-sequence level for labeled sub-events (2.4% bin-level F1 improvement over non-sequential models).",
"Social media allow users to communicate via real-time postings and interactions, with Twitter as a notable example.",
"Twitter user posts, i.e., tweets, are often related to events.",
"These can be social events (concerts, research conferences, sports events, etc.), emergency situations (e.g., terrorist attacks) (Castillo, 2016), etc.",
"For a single event, multiple tweets are posted, by people with various personalities and social behavior.",
"Hence, even more so than (typically more neutral) traditional media, this implies many different perspectives, offering an interesting aggregated description.",
"Given this continuous and large stream of (likely duplicated) information in Twitter streams, and their noisy nature, it is challenging to keep track of the main parts of an event, such as a soccer match.",
"Automating such extraction of different sub-events within an evolving event is known as sub-event detection (Nichols et al., 2012).",
"For tracking each of the sub-events, the timing aspect is an important concept (i.e., consecutive tweets in time).",
"Thus, a sequential model could successfully exploit chronological relations between the tweets in a Twitter stream as an informative feature for sub-event detection.",
"Several methods have been proposed for sub-event detection: clustering methods (Pohl et al., 2012), graph-based approaches (Meladianos et al., 2015), topic models (Xing et al., 2016) and neural network architectures (Wang and Zhang, 2017).",
"None of these studies exploits the chronological relation between consecutive tweets.",
"In contrast, our work does take into account that chronological order and we predict the presence and the type of a sub-event exploiting information from previous tweets.",
"Specifically, we (i) propose a new neural baseline model that outperforms the state-of-the-art performance on the binary classification problem of detecting the presence/absence of sub-events in a sports stream, (ii) establish a new reasonable baseline for predicting also the sub-event types, (iii) explicitly take into account chronological information, i.e., the relation among consecutive tweets, by framing sub-event detection as a sequence labeling problem on top of our baseline model, and (iv) perform an experimental study, indicating the benefit of sequence labeling for sub-event detection in sports Twitter streams.",
"Twitter streams have been extensively studied in various contexts, such as sentiment analysis (Kouloumpis et al., 2011), stock market prediction (Nguyen and Shirai, 2015) and traffic detection (D'Andrea et al., 2015).",
"Specifically, for sub-event detection in Twitter, several approaches have been tried.",
"[Figure 1: Our sub-event detection model.]",
"Unsupervised methods such as clustering aim to group similar tweets to detect specific sub-events (Pohl et al., 2012; Abhik and Toshniwal, 2013) and use simple representations such as tf-idf weighting combined with a similarity measure.",
"Other unsupervised algorithms use topic modeling approaches, based on assumptions about the tweets' generation process (Xing et al., 2016; Srijith et al., 2017).",
"Several methods (Zhao et al., 2011; Zubiaga et al., 2012; Nichols et al., 2012) assume that a sub-event happens when there is a burst', i.e., a sudden increase in the rate of tweets on the considered event, with many people commenting on it.",
"Recently, neural network methods have used more complicated representations (Wang and Zhang, 2017; Chen et al., 2018).",
"Also supervised methods have been applied (Sakaki et al., 2010; Meladianos et al., 2018) for the sub-event detection task.",
"These methods usually exploit graph-based structures or tf-idf weighting schemes.",
"We believe we are the first to (i) exploit the chronological order of the Twitter stream, taking into account its sequential nature, and (ii) frame the sub-event detection problem as a sequence labeling task.",
"The goal is, given a main event (i.e., soccer match), to identify its core sub-events (e.g., goals, kick-off, yellow cards) from Twitter streams.",
"Specifically, we consider a supervised setting, relying on annotated data (Meladianos et al., 2018).",
"Similar to previous works, we split a data stream into time periods (Meladianos et al., 2018): we form bins of tweets posted during consecutive time intervals.",
"E.g., for a soccer game, one-minute intervals (bins) lead to more than 90 bins, depending on the content before and after the game, halftime, stoppage time, and possibly some pregame and post-game buffer.",
"Thus, for each bin, we predict either the presence/absence of a sub-event (Section 3.3) or the most probable sub-event type (Section 3.4), depending on the evaluation scenario.",
"We consider representing the content of each bin on either (i) the word level or (ii) the tweet level (see Fig. 1).",
"Formally, we assume that we have a set of n bins b_1, ..., b_n, where each bin b_i consists of m_i tweets and k_i words (i.e., all words of the tweets in bin b_i).",
"Then, the tweet-level representation of bin b_i is symbolized as t_{i1}, ..., t_{im_i}, where t_{im_i} is the m_i-th tweet of bin b_i.",
"In the word-level representation, we chronologically concatenate the words from the tweets in the bin: w_{i1}, ..., w_{ik_i}, where w_{ik_i} is the k_i-th word of bin b_i.",
"To compare with previous work (Meladianos et al., 2018), we establish a simple baseline for binary classification: presence/absence of a sub-event.",
"For this case, we use as input the word-level representation of each bin.",
"To do so, we use word embeddings (randomly initialized) with average (AVG) pooling (Iyyer et al., 2015) in combination with a multilayer perceptron (MLP) for binary classification, i.e., presence/absence of a sub-event.",
"Note that we experimented with pre-trained embeddings as well as max-pooling, but those early experiments led to performance decrease compared to the presented baseline model.",
"We found that training based on average bin representations works substantially better than with max-pooling, and we hypothesize that this is related to the noisy nature of the Twitter stream.",
"Building on the baseline above, we establish a new architecture that is able to capture the sub-event types as well as their duration.",
"We phrase sub-event detection in Twitter streams as a sequence labeling problem.",
"This means we assume that the label of a bin is not independent of neighboring bin labels, given the chronological order of bins of the Twitter stream, as opposed to independent prediction for each bin in the binary classification baseline above.",
"For instance, when a goal is predicted as a label for bin b i , then it is probable that the label of the next bin b i +1 will also be goal .",
"Although a sub-event may occur instantly, an identified sub-event in a Twitter stream can span consecutive bins, i.e., minutes: users may continue tweeting on a particular sub-event for relatively long time intervals.",
"For this reason, we apply the well-known BIO tagging scheme (Ramshaw and Marcus, 1995) for the sub-event detection problem.",
"For example, the beginning of a goal sub-event is defined as Bgoal , while Igoal (inside) is assigned to every consecutive bin within the same sub-event, and the O tag (outside) to every bin that is not part of any sub-event.",
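The BIO tagging of bins described above can be sketched as follows. This is our own illustrative sketch (not the authors' code); the event names and the use of None for bins outside any sub-event are assumptions for the example.

```python
def bio_tags(bin_events):
    """Convert per-bin sub-event annotations (None = no sub-event) into
    BIO tags: B-<type> for the first bin of a sub-event, I-<type> for
    every following bin of the same sub-event, and O otherwise."""
    tags, prev = [], None
    for ev in bin_events:
        if ev is None:
            tags.append("O")
        elif ev == prev:
            tags.append(f"I-{ev}")   # continuation of the same sub-event
        else:
            tags.append(f"B-{ev}")   # a new sub-event begins
        prev = ev
    return tags
```

For a stream annotated as goal, goal, (nothing), kick-off, this yields B-goal, I-goal, O, B-kick-off, matching the scheme in the text; note that two adjacent sub-events of the same type would need an explicit boundary marker, which this simple sketch does not model.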
"To propagate chronological information among bins, we adopt an LSTM on the sequence of bins, as illustrated in Fig. 1, layer (e).",
"Note that this tagging approach assumes that sub-events do not overlap in time, i.e., only at most one is ongoing in the Twitter stream at any point in time.",
"We evaluated our system on the dataset from Meladianos et al. (2018), with tweets on 20 soccer matches from the 2010 and 2014 FIFA World Cups, totalling over 2M pre-processed tweets filtered from 6.1M collected ones, comprising 185 events.",
"The dataset includes a set of sub-events, such as goal , kick-off , half-time , etc.",
"To compare our binary classification baseline system to previous methods (Table 1), we use the same train/test splits as Meladianos et al. (2018), where 3 matches are used for training and 17 matches as the test set. (Our code is available at https://github.com/bekou/subevent_sequence_labeling .)",
"In this setting, we predict only the presence/absence of a sub-event.",
"Similar to previous work, we count a sub-event as correct if at least one of its comprising bins has been classified as a sub-event.",
"For the experimental study of our proposed sequence labeling approach for sub-event detection, where sub-event types are predicted, we have randomly split the test set into test (10 matches) and development (7 matches) sets.",
"We use the development set to optimize the F 1 score for tuning of the model parameters, i.e., the word/tweet embedding representation size, LSTM hidden state size, dropout probability.",
"We adopt two evaluation strategies.",
"The first one, referred to as relaxed evaluation, is commonly used in entity classification tasks (Adel and Schütze, 2017; Bekoulis et al., 2018a,c) and is similar to the binary classification baseline system evaluation: score a multi-bin sub-event as correct if at least one of its comprising bin types (e.g., goal) is correct, assuming that the boundaries are given.",
"The second evaluation strategy, bin-level , is stricter: we count each bin individually, and check whether its sub-event type has been predicted correctly, similar to the token-based evaluation followed in Bekoulis et al. (2018b).",
"Table 1 shows the experimental results of our baseline model.",
"The Burst baseline system is based on the tweeting rate in a specific time window (i.e., bin): if a threshold is exceeded, the system identifies that a sub-event has occurred.",
"We report evaluation scores as presented in Meladianos et al. (2018).",
"The second approach is the graph-based method of Meladianos et al. (2018).",
"We observe that our baseline system (Section 3.3) has a 1.2% improvement in terms of macro-F 1 and 2.7% improvement in terms of micro-F 1 , compared to the graph-based model from Meladianos et al. (2018), mainly due to increased precision, and despite the recall loss.",
"We compare models with a chronological LSTM to models making independent predictions per bin.",
"The upper part of Table 2 contains models without the chronological LSTM.",
"Our experiments study both word-level and tweet-level bin representations (see Fig. 1), as reflected in the Word' vs. Tweet' prefix, respectively, in the Model column of Table 2.",
"The simplest word-level representation uses the tf-idf weighting scheme (as in Pohl et al. (2012)) followed by an MLP classifier.",
"For the other word-level models, we exploit several architectures: AVG pooling (Iyyer et al., 2015), a CNN followed by AVG pooling (Kim, 2014) and hierarchical word-level attention (Yang et al., 2016).",
"For tweet-level representations, we adopt similar architectures, where the AVG, CNNs and attention are performed on sentence level rather than on the word-level representation of the bin.",
"In this scenario, we have also exploited the usage of sequential LSTMs to represent the tweets.",
"When comparing models with and without tweet-level LSTMs, we report the strategy that yields the best results, indicated by ✓ and ✗ in the tweet-level LSTM (TL) columns of Table 2.",
"We do not present results for applying sequential LSTMs on the word-level bin representation, because of slow training on the long word sequences.",
"Benefit of Chronological LSTM: The bottom part of Table 2 presents the results of the same models followed by a chronological LSTM to capture the natural flow of the stream as illustrated in Fig. 1.",
"We report results as described in Section 4, using the micro F 1 score with the two evaluation strategies ( bin-level and relaxed ).",
"We observe that when using the chronological LSTM, the performance in terms of bin-level F 1 score is substantially improved for almost every model.",
"Note that the best model using the chronological LSTM (Tweet-AVG) achieves 2.4% better F 1 than the best performing model without the use of chronological LSTM (Word-CNN-AVG).",
"In most cases there is also a consistent improvement for both the precision and the recall metrics, which is Macro Micro Settings P R F 1 P R F 1 Burst 78.00 54.00 64.00 72.00 54.00 62.00 Meladianos et al. (2018) 76.00 75.00 75.00 73.00 74.00 73.00 Our binary classif.",
"Limitations of Relaxed Evaluation: On the other hand, using the relaxed evaluation strategy, we observe that the best models are those without the chronological LSTM layer.",
"Yet, we consider the relaxed evaluation strategy flawed for our scenario, despite the fact that it has been used for entity classification tasks (Bekoulis et al., 2018a; Adel and Schutze, 2017).",
"Indeed, it is not able to properly capture sub-events which are characterized by duration: e.g., if a model assigns a different label to each of the bins that together constitute a single sub-event, then this sub-event counts as a true positive based on the relaxed evaluation strategy (similar to the evaluation proposed by Meladianos et al. (2018) and followed in Table 1).",
"Thus, in this work, we propose to use the bin-level evaluation, since it is a more natural way to measure the duration of a sub-event in a supervised sequence labeling setting.",
"Note that due to the noisy nature of Twitter streams, a tweet sequence spanning a particular sub-event is likely to contain also tweets that are not related to the given sub-event: a given bin inside the event may contain only a minority of tweets discussing the event.",
"Therefore, we consider the standard sequence labeling evaluation (requiring to have types as well as boundaries correct) to be not applicable in sub-event detection.",
"Performance Comparison of the Top-3 Models: Figure 2 shows the performance of our three best performing models in terms of bin-level F 1 score on the validation set.",
"The best performing model is the Tweet-AVG model since it attains its maximum performance even from the first training epochs.",
"The Word-AVG model performs well Figure 2: Bin-level F 1 performance of the three best performing models on the validation set with respect to the number of epochs.",
"from the first epochs, showing similar behavior to the Tweet-AVG model.",
"This can be explained by the similar nature of the two models.",
"The word-level CNN model attains maximum performance compared to the other two models in later epochs.",
"Overall, we propose the use of the chronological LSTM with the Tweet-AVG model since this model does not rely on complex architectures and it gives consistent results.",
"In this work, we frame the problem of sub-event detection in Twitter streams as a sequence labeling task.",
"Specifically, we",
"(i) propose a binary classification baseline model that outperforms state-of-the-art approaches for sub-event detection (presence/absence),",
"(ii) establish a strong baseline that additionally predicts sub-event types , and then",
"(iii) extend this baseline model with the idea of exchanging chronological information between sequential posts, and",
"(iv) prove it to be beneficial in almost all examined architectures.",
"We would like to thank the anonymous reviewers for their constructive feedback.",
"Moreover, we would like to thank Christos Xypolopoulos and Giannis Nikolentzos for providing",
"(i) the Twitter dataset (tweet ids) and",
"(ii) instructions to reproduce the results of their graph-based approach."
] | [
"result",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"other",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other"
] |
[
"In argumentation, people state premises to reason towards a conclusion.",
"The conclusion conveys a stance towards some target , such as a concept or statement.",
"Often, the conclusion remains implicit, though, since it is self-evident in a discussion or left out for rhetorical reasons.",
"However, the conclusion is key to understanding an argument, and hence, to any application that processes argumentation.",
"We thus study the question to what extent an argument's conclusion can be reconstructed from its premises.",
"In particular, we argue here that a decisive step is to infer a conclusion's target, and we hypothesize that this target is related to the premises' targets.",
"We develop two complementary target inference approaches: one ranks premise targets and selects the top-ranked target as the conclusion target, the other finds a new conclusion target in a learned embedding space using a triplet neural network.",
"Our evaluation on corpora from two domains indicates that a hybrid of both approaches is best, outperforming several strong baselines.",
"According to human annotators, we infer a reasonably adequate conclusion target in 89% of the cases.",
"The conclusion (or claim) of a natural language argument conveys a pro or con stance towards some target , such as a controversial concept or statement (Bar-Haim et al., 2017).",
"It is inferred from a set of premises.",
"Conclusions are key to understanding arguments, and hence, critical for any downstream application that processes argumentation.",
"The task of identifying conclusions has been studied intensively in the context of argument mining (Stab and Gurevych, 2014) and automatic essay assessment (Falakmasir et al., 2014).",
"In genres other than essays, however, conclusions often remain implicit, since they are clear from the context of a discussion (Habernal and Gurevych, 2015) or hidden on pur-Raising the school leaving age promotes equal opportunities.",
"pose for rhetorical reasons, as is often the case in news editorials (Al Khatib et al., 2016).",
"This alters the task entirely to become a synthesis task: Given an argument's premises, generate its conclusion.",
"As detailed in Section 2, research on argumentation synthesis is still limited.",
"Existing approaches focus on generating single claims (Bilu and Slonim, 2016), new arguments (Reisert et al., 2015), coun-terarguments (Hua et al., 2019), or argumentative texts (Wachsmuth et al., 2018).",
"Closer to conclusion generation, Egan et al. (2016) summarized the main points of online debates, and Wang and Ling (2016) worked on identifying the main claim of an argument through abstractive summarization.",
"To our knowledge, however, no approach so far reconstructs an argument's conclusion from its premises.",
"In general, we consider the synthesis task outlined above.",
"Conceptually, we decompose this task into three steps, as depicted in Figure 1: (1) inferring the conclusion's target from the premises, (2) inferring the conclusion's stance, and (3) generating the conclusion's text with the inferred stance and the inferred target.",
"In this paper, we focus on the first step by proposing two computational approaches for conclusion target inference.",
"As sketched in Figure 1, we hypothesize that the conclusion target is related to the targets of the argument's premises.",
"To obtain premise targets, we train a state-of-the-art sequence labeling model (Akbik et al., 2018) on target-annotated claims (Bar-Haim et al., 2017).",
"Since the exact relation of premise and conclusion targets is unknown, we develop two complementary inference approaches: One approach ranks premise targets based on their likelihood of being a conclusion target.",
"The other one employs a triplet neural network (Hoffer and Ailon, 2015) that generates a conclusion target embedding from the premise targets in a learned target embedding space.",
"A unique facet of the latter is the integration of the network with a knowledge base of targets (built from any training set), namely, the approach returns the known target whose embedding is closest to the generated embedding.",
"We compare the approaches against several baselines, including an existing sequence-to-sequence model for argument summarization (with and without encoded premise targets).",
"For evaluation purposes, we study argument corpora from two genres where the correct conclusions are given: student essays (Stab and Gurevych, 2014) and debate portals (Wang and Ling, 2016).",
"On these corpora, we empirically test how often an inferred target matches the target found in the ground-truth conclusion.",
"Moreover, we let human annotators manually check the adequacy of the inferred targets.",
"In our experiments, both approaches consistently outperform sequence-to-sequence generation, justifying the explicit modeling of the relation between premise and conclusion targets.",
"According to manual evaluation, a combined version of the two approaches infers an at least somewhat adequate target in 89%, and a fully adequate target in 55% of the cases, indicating the practical applicability of our target inference in conclusion generation.",
"1. A conceptual model of the task of generating an argument's conclusion from its premises.",
"2. Two complementary approaches that infer a conclusion's target from premises effectively.",
"3. Empirical evidence for the importance of modeling targets in conclusion generation.",
"1 Resources:",
"https://webis.de/publications.html?q=ACL+2020 Code base: https://github.com/webis-de/ACL-20 2 Related Work Arguments have been modeled in different ways, focusing on the roles of their components (Toul-min, 1958), their inference scheme (Walton et al., 2008), or the interplay between their pro and con components (Freeman, 2011).",
"On an abstract level, the models all share that they consider an argument as a conclusion (in terms of a claim) and a set of premises (reasons to support or object the claim).",
"We restrict our view to this abstract model here.",
"Even though this paper is about inferring conclusion targets, our ultimate goal is to reconstruct the whole conclusion of an argument.",
"Computational approaches to identify conclusions in a text have been pioneered research on student essay assessment (Burstein and Marcu, 2003).",
"Falakmasir et al. (2014) show the importance of essay conclusions in applications, whereas Jabbari et al. (2016) specifi-cally target an essay's overall conclusion, i.e., its thesis (also known as major, main, or central claim).",
"Given the importance of theses, we dedicate one experiment particularly targeting them below.",
"The classification of argument components (as theses, conclusions, premises, etc.) is a core task in argument mining (Stede and Schneider, 2018) and has been approached for different genres (Stab and Gurevych, 2014; Peldszus and Stede, 2015).",
"As Habernal and Gurevych (2015) observe, though, real-world arguments often leave the conclusion implicit, particularly where it is clear in the context of a discussion.",
"In genres such as news editorials, conclusions may even be left out on purpose, in order to persuade readers in a hidden manner (Al Khatib et al., 2016).",
"If an implicit conclusion is needed, it hence needs to be synthesized.",
"Argumentation synthesis research is on the rise.",
"Early argument generation approaches relied on rule-based discourse planning techniques (Zuker-man et al., 2000).",
"Later, Reisert et al. (2015) generalized target-stance relations from claims and used them to automatically create new arguments.",
"The relations were curated manually, though.",
"An approach that finds the best conclusion for generation among a set of candidate claims was presented by Yanase et al. (2015).",
"Sato et al. (2015) built upon this approach to phrase texts with multiple arguments.",
"Others recycled targets and predicates of claims in new claims (Bilu and Slonim, 2016), generated arguments with specific inference schemes for user-defined content (Green, 2017), modeled rhetorical aspects in synthesis (Wachsmuth et al., 2018), and composed arguments that follow a strategy (El Baff et al., 2019).",
"All these methods synthesize new argumentative content.",
"In contrast, we aim for the missing components of given arguments.",
"As such, our task resembles enthymeme reconstruction.",
"An enthymeme is an implicit premise, usually the warrant (or major premise ) that clar-ifies how a conclusion is inferred from the given premises (Walton et al., 2008).",
"Motivated by the importance of finding the thesis, Boltuzic and na-jder (2016) study how to identify such enthymemes given the other components.",
"Similarly, Habernal et al. (2018) present the task of identifying the correct warrant from two options, and Rajendran et al. (2016) aim to generate the premise connecting an aspect-related opinion to an overall opinion.",
"Instead of missing premises, we aim to synthesize (parts of) an argument's conclusion .",
"For any text generation task, a candidate technique is sequence-to-sequence models (Sutskever et al., 2014).",
"Relevant in the given context, Hua and Wang (2018) used such models to generate counter-arguments, and Hua et al. (2019) extended this approach by planning and retrieval mechanisms.",
"With a comparable intention, Chen et al. (2018) modified the bias of news headlines from right-to-left or vice versa.",
"Closest to our work is the approach of Wang and Ling (2016) whose sequence-to-sequence model generates summaries for opinionated and argumentative text.",
"Like us, the authors face the problem of varying numbers of input components, and tackle this using an importance-based sampling method.",
"For their evaluation, they crawled arguments from idebate.org .",
"We use this dataset in our experiments.",
"Unfortunately, their manual evaluation considers opinionated text only, leaving the semantic adequacy of the generated argument summaries unclear.",
"The exact connection to summarization is unclear, which is why we include an approximation of the model of Wang and Ling (2016) as a baseline in our experiments.",
"General research on summarization is manifold and beyond the scope of this work.",
"For a survey, we refer the reader to Gambhir and Gupta (2017).",
"In recent work, we summarize the core of an argument to be used as a snippet in the context of argument search by a two-sentence extract (Alshomary et al., 2020) and Egan et al. (2016) create abstractive summaries of the main points in a debate.",
"We hypothesize a dependency between the target and stance of a conclusion and those of the premises.",
"At a high level, this resembles the work of Angelidis and Lapata (2018) where aspects and sentiments are modeled for the extractive summarization of opinions.",
"We focus on the inference of conclusion targets in this work.",
"Our approach builds upon ideas of Bar-Haim et al. (2017), who classify the stance of premises to a conclusion.",
"To do so, they identify and relate targets in these components, and model stance with sentiment.",
"We do not explicitly tackle stance inference here, because our focus is a conclusion's target .",
"To identify premise targets, we first train a state-of-the art sequence tagger using con-textualized word embeddings (Akbik et al., 2018) on the corpus of Bar-Haim et al. (2017).",
"From these premise targets, we then infer the conclusion target, as explained below.",
"Before discussing our target inference approach in Section 4, this section briefly introduces the datasets that we use in our analyses and experiments.",
"To allow for evaluating the given task, the conclusion is always given in these datasets.",
"The Claim Stance Dataset (Bar-Haim et al., 2017) contains 2,394 claims referring to 55 topics from Wikipedia articles.",
"Not only the stance of premises towards their topics is manually annotated, also a phrase is marked in each claim as being a target.",
"We use this dataset to train and evaluate a target phrase tagging model for the purpose of identifying targets in the given premises of an argument.",
"As Bar-Haim et al., we take all premises associated to 25 conclusions for training and the rest for testing.",
"The iDebate Dataset (Wang and Ling, 2016) consists of 2,259 pro and con points for 676 controversial issues from the online debate portal ide-bate.org .",
"Each point comes with a one-sentence conclusion (called central claim by the authors) and an argumentative text supporting the conclusion.",
"Each sentence is seen as one premise of the conclusion (called argument ), resulting in a total of 17,359 premises.",
"We use this dataset for training, optimizing, and evaluating all approaches to conclusion target inference.",
"Following its authors, we split the dataset based on debates: 450 debates for training, 67 for validation, and 150 for testing.",
"The Argument Annotated Essays corpus (Version 2; Stab and Gurevych (2014)) includes 402 persuasive student essays.",
"Each essay was segmented manually into subsentence-level argument components: theses (called major claims ), conclusions ( claims ), and premises.",
"We use this corpus to study target inference in a second domain.",
"To analyze different types of argument relations, we derive two datasets from the corpus: Essay Conclusions for conclusions and their premises with 1,530 training, 256 validation, and 234 test cases, and Essay Theses for theses and the underlying conclusions with 300 training, 50 validation, and 52 test cases.",
"We now present our approach to infer the target of an argument's conclusion from its premises.",
"Based on a premise target identifier, it employs two complementary sub-approaches: One ranks premise targets by their potential representativeness for the (later unknown) conclusion, and then picks the top-ranked premise target.",
"The other predicts candidate embeddings for the conclusion target from the top-ranked premise targets, and then picks the conclusion target from a knowledge base of targets whose embedding is most similar to those embeddings.",
"To model the relation between premises and conclusion target, we first identify the premises' targets.",
"The task of identifying target phrases in argumentative text has been introduced by Bar-Haim et al. (2017).",
"We here tackle it as BIO sequence labeling, classifying each token as being the beginning, inside, or outside of a target.",
"Since premise target identification is not our main focus, we simply train a state-of-the-art neural sequence tagger (Ak-bik et al., 2018) on the claim stance dataset and then use it to automatically annotate targets in all input premises.",
"2 4.2 Inference by Premise Target Ranking A reasonable hypothesis is that one of the premise targets of an argument represents an adequate conclusion target.",
"Our first sub-approach thus simpli-fies the given task into selecting the premise target that most likely represents the conclusion target.",
"Since there is no training data that reflects this likelihood, we follow the idea of importance sampling of Wang and Ling (2016): Given the output of our target identifier on a training instance, we use the percentage of content tokens overlapping between premise targets and the conclusion target as a representativeness label (quantified as Jaccard distance).",
"Then, we learn a ranking model to predict the representativeness of a candidate premise target based on four features:",
"1. The average embedding cosine similarity of the candidate to the other candidates,",
"2. the number of words in the candidate,",
"3. the relative start and end character position of the candidate in the covering premise, and",
"4. the number of sentiment words (positive, negative, and neutral) in that premise.",
"The input of the ranking model are premise targets grouped by argument.",
"During training, a probability is learned to reflect the ordering between each pair of premise targets in an argument with respect to conclusion target representativeness.",
"Then, the model utilizes a cross-entropy loss function to minimize the difference between learned and the desired probability.",
"The effectiveness of this approach is naturally limited by the percentage of cases where the conclusion target actually matches any premise target.",
"For a rough estimation, Figure 2 shows, based on two different similarity measures, how often at least one premise target matches the conclusion target in the three given training sets.",
"Naturally, it is unclear in general how high the similarity needs to be for actual semantic equivalence.",
"To overcome the outlined shortcoming of being restricted to premise targets, we investigate a second hypothesis: An adequate conclusion target can be found in other arguments.",
"To this end, we integrate a neural model with a knowledge base of targets in a novel way.",
"In particular, our second sub-approach tackles the given task by producing candidate conclusion target embeddings from the (top-ranked) premise targets, and then picking the target from a knowledge base whose embedding is most similar to the candidates.",
"In principle, the knowledge base can be built from any corpus of argumentative texts based on our target identifier.",
"In our experiments, we simply use all conclusion targets extracted from the training split of the datasets.",
"Now, to predict a conclusion target embedding, we first get the top k > 1 premise targets using our ranking approach and create average embeddings s 1 , s 2 , . . . of all (cid:0) km (cid:1) possible subsets of these targets with m > 1 .",
"Then, we learn a function f on training arguments that maps each s i to a transformed embedding space where it resembles the correct conclusion target c and differs more from other targets c (cid:48) .",
"Figure 3 sketches this idea.",
"The best k and m are found by tuning in validation.",
"As depicted in Figure 4, we model f as a triplet neural network (Hoffer and Ailon, 2015) with three vectors as an input: an anchor s i , a positive c , and a p 1 Tripletlossfunction ... ... ... ... ... ...",
"negative c (cid:48) , where c (cid:48) is a randomly sampled target from the target knowledge base.",
"During training, we create (cid:0) km (cid:1) triplets from each argument.",
"Based on these, we utilize the following triplet loss function to minimize the cosine distance d between s i and c , and to maximize d between s i and c (cid:48) : max { d ( f ( s i ) , f ( c )) d ( f ( s i ) , f ( c (cid:48) )) + d max , 0 } Here, d max represents the maximum distance to be considered, also determined during validation.",
"During prediction, we employ the trained network to map the average embeddings s 1 , s 2 , . . . of all premise target subsets to the transformed embedding space, and compute the average avg ( f ( s i )) of all mapped embeddings f ( s i ) .",
"Then, we pick the conclusion target c from the knowledge base whose mapped embedding f ( c ) has the minimum cosine distance to avg ( f ( s i )) .",
"This way, we ensure that we always end up with a meaningful target.",
"Figure 5 sketches the conclusion target inference on the left and exemplifies it on the right.",
"The reasonableness of the conclusion target inferred by the second sub-approach depends on the quality of the knowledge base.",
"To avoid inferring fully unrelated targets, we also consider a simple hybrid of our two approaches below: If the target inferred by the embedding learning approach overlaps with the (full) text of any premise in at least one content token, it is taken.",
"Otherwise, the target inferred by the premise ranking is taken.",
"More elaborated heuristics are left to future work.",
"In this section, we report on empirical experiments, along with their results, performed to evaluate our approaches to target inference.",
"We implemented the target identifier as a BiLSTM-CRF with hidden layer size 256, using the pretrained contextual string embedding model of Ak-bik et al. (2018).",
"We trained the model on the training set of the Claim Stance Dataset with batch size 16 and a learning rate of 0.1 for five epochs.",
"Results On the Claim Stance test set, the identifier achieved an F1-score of 0 .",
"77 .",
"To assess its effectiveness in other domains, we let human annotators evaluate the identified targets of a random sample of 100 conclusions from the iDebate dataset.",
"Each instance was evaluated by three annotators.",
"Based on the majority agreement, the tagger identified 72% of the cases correctly.",
"3 5.2 Conclusion Target Inference To evaluate target inference, we use the iDebate Dataset and the two essay datasets.",
"As no ground-truth conclusion targets are provided, we used our target identifier to extract targets from the conclusions and compared them to the output of our approaches.",
"In some cases, particularly where targets 3 In terms of Fleiss' , the agreement was 0.39, which is not high but still seems reasonable, given that we did not train annotators.",
"Notice that this agreement value has no effect at all on the evaluation of our target inference approaches below.",
"Approaches For the premise target ranking approach, we trained LambdaMART (Burges, 2010) on each training set with 1000 estimators and a learning rate of 0.02.",
"We refer to this approach below as Premise Targets (ranking) .",
"For target embedding learning, we used the pretrained FastText embeddings with 300 dimensions (Bojanowski et al., 2017) to initially represent each target.",
"To obtain a knowledge base of candidate targets, we applied the target identifier to all conclusions of all training sets.",
"5 The resulting lexicon contains 1,780 targets, each is represented by its FastText embedding.",
"We implemented the triplet neural network as three feed-forward neural networks, each with two layers and shared weights.",
"We call this approach Target Embedding (learning) .",
"The simple hybrid of both approaches introduced above is denoted Hybrid (ranking & embedding) .",
"Baselines On one hand, we compare to the state-of-the-art sequence-to-sequence argument summarizer of Wang and Ling (2016).",
"Since its code is not available, we approximately reimplemented it.",
"6 Specifically, we replicated the importance sampling with the same features (also on five premises) but no regularization.",
"For generation, we used three LSTM layers with hidden size 150 and a pretrained embedding of size 300.",
"Extra features of the original approach were left out, as they did not help much in our case.",
"We trained the model with batch size 48 and learning rate 0.1 using the Adagrad optimizer (Duchi et al., 2011).",
"For translation, we followed Wang and Ling.",
"To identify targets in the generated summaries, we employed our target identifier.",
"We refer to this baseline as Seq2Seq .",
"To test our hypothesis on the relation of premise and conclusion targets, we extended Seq2Seq by a pointer generator (See et al., 2017) and an extra binary feature that encodes whether a token belongs to a target or not, allowing the model to learn this relation.",
"We call this Seq2Seq (w/ premise targets) .",
"On the other hand, we complemented our approaches with simpler variants, in order to check whether learning is needed.",
"tar-4 Example conclusion where no target was identified: It makes it more difficult for extremists to organize and spread their message when blocked .",
"5 More elaborated knowledge bases are left to future work.",
"6 The authors did not respond to our requests.",
"get ranking, our baseline Premise Targets (random) simply chooses a premise target randomly.",
"Instead of target embedding learning, we simply pick the target from the target space whose embedding is most similar to the average premise target embedding, called Target Embedding (average) .",
"Measures We use two common complementary evaluation measures, BLEU (Papineni et al., 2002) and METEOR (Lavie and Agarwal, 2007).",
"BLEU counts n-gram matches (we include 1and 2-grams) focusing on precision, while METEOR is recall-oriented.",
"Following the idea of Figure 2, we also report accuracy, where a given target is correct if it has 50%+ content overlap with the ground truth.",
"Experiments We tuned all approaches on the respective validation sets, and then evaluated them on the test set.",
"Since Seq2Seq requires much training data, we evaluated both variants on iDebate only.",
"Before the inference of Target Embedding (learn-ing) , the corresponding premise targets were added to the knowledge base as candidates for a conclusion target.",
"Below, we consider two scenarios, an optimistic and a pessimistic one: In the former, the ground-truth target is added to the knowledge base, in the latter not.",
"The optimistic scenario thus reflects the effectiveness of the approach regardless of the limitations of the knowledge base.",
"Results Table 1 lists the results.",
"Clearly, encoding premise targets into Seq2Seq boosts its effectiveness, indicating the importance of modeling premise targets.",
"However, both Seq2Seq variants perform poorly compared to our approaches.",
"While 200 0 300 100 400 1 2 3 4 5 6 7 8 9 10 Number of premises in one argument C oun t iDebateDataset Essay Theses Essay Conclusions Figure 6: Histogram of the number of arguments with a specific number of premises in the three given datasets.",
"the limited training data size is one reason, this also indicates that pure sequence-to-sequence generation may not be enough.",
"On iDebate, both approaches are better than all baselines in terms of BLEU score.",
"The best results are achieved by Hybrid (ranking & embedding) in terms of all measures (significantly for BLEU and accuracy).",
"Even in the pessimistic scenario, its BLEU score of 8.1 outperforms all baselines.",
"In the optimistic scenario on the essay datasets, Target Embedding (learning) is strongest for most scores.",
"The hybrid approach hardly achieves any improvement.",
"Due to the small dataset size, no significance was found, though.",
"In the pessimistic scenario, Premise Target (ranking) seems more suitable.",
"The lower scores on Essay Conclusions can be attributed to the low number of premises (see Figure 6), which makes finding an adequate conclusion target among the premise targets less likely.",
"Premisetargets Relocating to the best universities Improving the pool of students Online courses Stanford University's online course on Artificial Intelligence Conclusiontarget Online courses Online courses distance-learning",
"(b) Premisetargets how to use the mobile phone Phones Having a mobile phone the internet phones Conclusiontarget Mobile phones Phones Mobile Phones",
"(a) Premisetargets saving the use of that kinds of languages in this case to be respected and preserved language Conclusiontarget the government language language acquisition",
"(c) Ground-truth Inference of a 1 Inference of a 2 Ground-truth Inference of a 1 Inference of a 2 Ground-truth Inference of a 1 Inference of a 2 Figure 7: Three examples of premise targets from the datasets, the associated ground-truth conclusion target, and the conclusion targets inferred by our approaches.",
"As Table 1 shows, all approaches are much worse than theoretically possible ( oracle ) in terms of automatic metrics.",
"However, the manual evaluation below reveals that the inferred conclusion targets actually compete with the ground truth.",
"Analysis To illustrate the behavior of selected approaches, Table 2 compares the percentages of cases where they pick a new target as well as where they pick the exact ground-truth conclusion target (in the optimistic scenario).",
"Befittingly, target embedding learning ( a2 ) is most exploratory regarding new targets.",
"On the essay datasets, where the conclusion target only sometimes occurs in the premises, a2 is also best in inferring the exact target.",
"Still, premise target ranking ( a1 ) may pick the ground truth, if it matches any premise target.",
"The hybrid seems a suitable balance between both.",
"Figure",
"7(a) exemplifies the ability of a2 to infer the correct conclusion target even if it does # Scenario Fully Somewhat Not Majority b2 5% 18% 76% 93 / 100 a1 56% 33% 11% 91 / 100 a2 Optimistic 50% 28% 22% 92 / 100 Pessimistic 49% 27% 24% 93 / 100 a1&a2 Optimistic 55% 34% 11% 89 / 100 Pessimistic 56% 32% 12% 90 / 100 Ground-truth 62% 29% 10% 84 / 100 Table 3: Majority agreement for how adequate ( fully , somewhat , not ) are the conclusion targets of baseline b2 , our approaches, and the ground truth.",
"not match a premise target exactly.",
"Example",
"(b) stresses the limitation of automatic evaluation: distance-learning (inferred by a2 ) does not overlap with the ground truth, but it semantically matches well.",
"In",
"(c), the ground-truth target was barely inferable from the premise targets.",
"7 6 Manual Evaluation To assess the actual quality of the inferred conclusion targets, we manually evaluated our approaches (optimistic and pessimistic scenario) and the baseline b2 ( Seq2Seq (w/ premise targets) ) in comparison to the ground-truth targets using Amazon Mechanical Turk.",
"For this, we sampled 100 random instances from the iDebate test set.",
"In a single task, an argument's premises were given along with the conclusion target of either approach.",
"Annotators had to judge the adequacy of the target for the given premises as fully , somewhat , or not adequate.",
"Each instance was judged by five annotators.",
"No one judged multiple targets for the same argument.",
"8 Table 3 shows the distribution of majority judgments for each approach.",
"Only 23% of the b2 targets were considered fully or somewhat adequate, i.e., pure text generation seems insufficient.",
"In contrast, our sub-approaches' targets are competitive to the ground truth, which was not always adequate either (likely due to errors in target identi-fication).",
"The high performance of a1 ( Premise Targets (ranked) ) might be explained by the inferred targets being part of the premises, affecting anno-tators' preferences.",
"Still, the targets of a2 ( Target Embedding (learning) ) are seen as adequate in 78% of the cases (50% fully), with the ability of infer-7 Full example arguments found in supplementary material.",
"8 We paid $0.40 per task, restricting access to annotators with an approval rate of at least 95% and 5000 approved tasks.",
"To ensure correct annotations, a reason had to be given.",
"An argument's conclusion comprises its stance towards the target it discusses.",
"Still, the conclusion is often left implicit in real life, because it is clear for humans or hidden for rhetorical reasons.",
"We have conceptualized the task of reconstructing the conclusion from the argument's premises as (1) inferring the conclusion's target, (2) inferring its stance, and (3) phrasing its actual text.",
"Then we have focused on the first step in which we infer the conclusion target given a set of premises.",
"Hypothesizing that the conclusion target depends on the premise targets, we have developed two new and complementary target inference approaches: Premise Targets (ranking) returns the premise target that is most likely adequate for the conclusion, while Target Embedding (learning) generates a conclusion target embedding from the premises and matches it against a target knowledge base.",
"On three datasets from two domains (debate portals and student essays), our approaches outperform several baselines, including a state-of-the-art neural sequence-to-sequence summarizer.",
"The latter also benefits from modeling premise targets, additionally supporting our hypothesis.",
"In terms of BLEU, METEOR, and accuracy, Target Embedding (learn-ing) and a hybrid of both approaches turned out particularly strong, whereas Premise Targets (rank-ing) was best in a manual evaluation.",
"Overall, we manage to infer an at least somewhat adequate conclusion target in 89% of all cases, indicating the practical applicability of our approaches.",
"Combining target inference with stance classification in future work, we can already generate basic conclusions, say, Raising the school leaving age is good .",
"A more elaborate phrasing approach may take over context information from the premises.",
"This work was partially funded by the German Research Foundation (DFG) within the collaborative research center On-The-Fly Computing (SFB 901/3, project no. 160364472).",
"We thank students from Paderborn University for evaluating our target identifier: Denis Kuchelev, Christin Ler, Natalie Lke, Avishek Mishra, Enri Ozuni, Ren Scherf, Harsh Shah, Nikit Srivastava, Martin Wegmann."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"objective",
"objective",
"result",
"objective",
"abstain",
"abstain",
"method",
"abstain",
"result",
"abstain",
"result",
"method",
"objective",
"objective",
"objective",
"abstain",
"other",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"other"
] |
[
"Synthetic translations have been used for a wide range of NLP tasks primarily as a means of data augmentation.",
"This work explores, instead, how synthetic translations can be used to revise potentially imperfect reference translations in mined bitext.",
"We find that synthetic samples can improve bitext quality without any additional bilingual supervision when they replace the originals based on a semantic equivalence classifier that helps mitigate NMT noise.",
"The improved quality of the revised bitext is confirmed intrinsically via human evaluation and extrinsically through bilingual induction and MT tasks.",
"While human-written data remains the gold standard to train Neural Machine Translation ( NMT ) and Multilingual NLP models, there is growing evidence that synthetic bitext samplessentence-pairs that are translated by NMT benefit a wide range of tasks.",
"They have been used to enable semi-supervised MT training from monolingual data (Sennrich et al., 2016a; Zhang and Zong, 2016; Hoang et al., 2018), to induce bilingual lexicons (Artetxe et al., 2019; Shi et al., 2021), and to port models trained on one language to an-other (Conneau et al., 2018; Yang et al., 2019).",
"While synthetic bitexts are useful additions to original training data for downstream tasks, it remains unclear how they differ from naturally occurring data.",
"Some studies suggest that synthetic samples might be simpler and easier to learn (Zhou et al., 2020; Xu et al., 2021).",
"Recognizing that naturally occurring bitext can be noisy, for instance, when they are mined from comparable monolingual corpora (Resnik and Smith, 2003; Fung and Yee, 1998; Espl et al., 2019; Schwenk et al., 2021), we hypothesize that synthetic bitext might also directly improve the equivalence of the two bitext sides.",
"Thus synthetic samples might be useful not only for data augmentation but also to revise potentially noisy original bitext samples.",
"In this paper, we present a controlled empirical study comparing the quality of bitext mined from monolingual resources with a synthetic version generated via MT .",
"We focus on the widely used WikiMatrix bitexts for a distant (i.e, EN-EL ) and a similar language-pair (i.e, EN-RO ), since it has been shown that this corpus contains a significant proportion of erroneous translations (Caswell et al., 2021).",
"We generate synthetic bitext by translating the original training samples using MT systems trained on the bitext itself and therefore do not inject any additional supervision in the process.",
"We also consider selectively replacing original samples with forward and backward synthetic translations based on a semantic equivalence classifier, which is also trained without additional supervision.",
"We show that the resulting synthetic bitext improves the quality of the original intrinsically using human assessments of equivalence and extrinsically on bilingual induction ( BLI ) and MT tasks.",
"We present an extensive analysis of synthetic data properties and of the impact of each step in its generation process.",
"This study brings new insights into the use of synthetic samples in NLP .",
"First, intrinsic evaluation shows that synthetic translations, in addition to normalizing the bitext (Zhou et al., 2020; Xu et al., 2021), could potentially provide reference translations that are more semantically equivalent to the source than the original ones.",
"Furthermore, the improved bitext provides more useful signals for BLI tasks and NMT training in two settings (training from scratch; continued training), as confirmed by our extrinsic evaluations.",
"Finally, ablation analyses that compare different ways to combine synthetic translations show that using both translation directions and filtering using semantic equivalence is key to improving bitext quality and calls for further exploration of best practices for using synthetic translations in NLP tasks.",
"Synthetic Translations Generating synthetic translations has mainly been studied as a means of data augmentation for NMT through forward translation (Zhang and Zong, 2016) or backtranslation (Sennrich et al., 2016a; Marie et al., 2020) of monolingual resources.",
"Moreover, recent lines of work use synthetic translations to augment the original parallel data: Nguyen et al. (2020) diversify the parallel data via translating both sides using multiple models and then merging them with the original to train a final NMT model; Jiao et al. (2020) employ a similar approach to rejuvenate inactive examples that contribute the least to the model performance.",
"Sequence-level knowledge distillation (Kim and Rush, 2016) can also be viewed as replacing original bitext with synthetic translations.",
"While its original goal was to guide the training of a student model of small capacity with the output of a teacher of high capacity, distillation is also necessary to effectively train some categories of MT architectures such as non-autoregressive models (Gu et al., 2018).",
"While it is not entirely clear why synthetic distilled samples are superior to original bitext in this case, recent studies suggest that the synthetic samples are simpler and thus easier to learn from (Zhou et al., 2020; Xu et al., 2021).",
"Synthetic Data Selection Prior work covers a wide spectrum of different selection strategies on top of synthetic translations generated from monolingual samples.",
"Each of them focuses on identifying samples with specific properties: Axelrod et al. (2011) sample sentences that are most relevant to a target domain with the goal of creating pseudo in-domain bitext; Hoang et al. (2018) generate synthetic parallel data iteratively from increasingly better back-translation models for improving unsupervised NMT ; Fadaee and Monz (2018) focus on the diversity of synthetic samples and sample synthetic translations containing words that are difficult to predict using prediction losses and frequencies of words.",
"By contrast, our empirical study investigates whether synthetic translations can be used to selectively replace original references to improve bitext quality rather than augmenting it.",
"Bitext Quality Mining bitext from the web results in large-scale corpora that are usually collected without guarantees about their quality.",
"For instance, they contain noisy samples, ranging Algorithm 1 Revising Bitext: Given a bitext D = ( S, T ) , a divergent scorer R , and a margin score t , return revised bitext D 1: procedure TRAIN ( D = ( S, T ) ) 2: Train MS T on D until convergence 3: return MS T 4: end procedure 1: procedure EQUIVALIZE ( D = ( S, T ) ) 2: MS T TRAIN ( D = ( S, T ) ) 3: MT S TRAIN ( D = ( T, S ) ) 4: D 5: for i 1 ,..., |D| do 6: ( S i , T i ) ( S i , MS T ( S i ) ) 7: ( S i , T i ) ( MT S ( T i ) , T i ) 8: d F R ( S i , T i ) R ( S i , T i ) 9: d B R ( S i , T i ) R ( S i , T i ) 10: if max ( d F , d B ) > t then 11: if max = d F then 12: D D {( S i , T i )} 13: else 14: D D {( S i , T i )} 15: end if 16: else 17: D D {( S i , T i )} 18: end if 19: end for 20: return D 21: end procedure from untranslated sentences to sentences with no linguistic content (Khayrallah and Koehn, 2018; Caswell et al., 2020).",
"Some of this noise is typically filtered out automatically using heuristics (Ramrez-Snchez et al., 2020) or NMT model scores (Junczys-Dowmunt, 2018; Koehn et al., 2019).",
"Yet, even after this noise filtering, a wide range of the remaining samples contains fine-grained semantic divergences (Briakou and Carpuat, 2020).",
"Our past work explored strategies to mitigate the impact of these divergences on MT models by incorporating divergence tags as token-level factors (Briakou and Carpuat, 2021), and designing an approach to automatically edit divergent samples with noisy supervision from monolingual resources (Briakou et al., 2021).",
"By contrast, this work explores whether synthetic translations can be used to replace potentially fine-grained divergences using only the bitext we seek to revise.",
"We rely on established techniques that can be applied using only the bitext that we seek to revise.",
"First, we train NMT models on the original bitext to translate in both directions.",
"For each original sentence-pair, we generate a pool of synthetic translations using NMT and apply a divergence ranking criterion to decide whether and how to replace the original references with a better translation.",
"Algorithm 1 gives an overview of the process, and we describe each step below.",
"Generating synthetic translations We train NMT models MS T and MT S on the original bitext to translate in each direction (lines 2 3 ).",
"For each sentence-pair, they are used to generate two candidates for replacement by forward and backward translation (lines 6 7 ): ( S i , MS T ( S i ) ) and ( MT S ( T i ) , T i ).",
"As a result, NMT models translate the exact same data that they are trained on.",
"We thus expect translation quality to be high , and that local errors in the original bitext might be corrected by the translation patterns learned by NMT models on the entire corpus.",
"Selective Replacement We propose to replace an original pair by a candidate only if the candidate is predicted to better convey the meaning of the source than the original, which we refer to as the semantic equivalence condition .",
"We implement this by ranking the original sample ( S i , T i ), its revision by forward translation ( S i , MS T ( S i ) ) and its revision by back-translation ( MT S ( T i ) , T i ), according to their degree of semantic equivalence.",
"If none of the synthetic samples score higher than the original, it is not replaced (line 17 ).",
"Otherwise, the original is replaced by the highest scoring synthetic sample (lines 10 15 ).",
"As a result the cardinality of the bitext remains constant.",
"The semantic equivalence condition ( d F and d B (lines 8 9 )) is implemented using divergentm BERT , a divergent scorer introduced in our prior work (Briakou and Carpuat, 2020) that is trained on synthetic samples generated by perturbations of the original bitext (e.g., deletions, lexical or phrasal replacements) performed without any bilingual information.",
"Bitext We evaluate the use of synthetic translations for revising bitext on two language pairs of the WikiMatrix corpus (Schwenk et al., 2021).",
"WikiMatrix consists of sentence-pairs mined from Wikipedia pages using language agnostic sentence embeddings ( LASER ) (Artetxe and Schwenk, 2019).",
"Prior work indicates that, as expected, the corpus as a whole comprises many samples that are not exact translations: Caswell et al. (2021) report that for more than half of the audited low-resource language-pairs, mined pairs are on average misaligned; Briakou and Carpuat (2020) find that 40% of a random sample of the English-French bitext are not semantically equivalent, and include fine-grained meaning differences in addition to alignment noise.",
"We focus on bitexts with fewer than one million sentence pairs in Greek English ( EL EN , with 750 , 585 pairs) and Romanian English ( RO EN , with 582 , 134 pairs), because improving bitext is particularly needed in this data regime.",
"In much higher resource settings, filtering strategies might be sufficient as there might be more high quality samples overall.",
"In much lower resource settings, the data is likely too noisy or too small to effectively revise bitexts using NMT .",
"We filter out noisy pairs in the training data using bicleaner (Ramrez-Snchez et al., 2020) so that our empirical study excludes the most obvious forms of noise, and focuses on the harder case of revising samples that standard preprocessing pipelines consider to be clean.",
"1 Preprocessing We use Moses (Koehn et al., 2007) for punctuation normalization, true-casing, and tokenization.",
"We learn 32 KBPE s (Sennrich et al., 2016b) per language using subword-nmt 2 .",
"NMT Models We use the base Transformer architecture (Vaswani et al., 2017) and include details on the exact architecture and training in Apendix C. Selective Replacement The divergence ranking models are trained using our public implementation of divergentm BERT (Briakou and Carpuat, 2020).",
"3 Synthetic divergences are generated starting from the 5 , 000 top scoring WikiMatrix sentences based on LASER score (i.e., seed equiva-lents).",
"We fine-tune the BERT -Base Multilingual Cased model (Devlin et al., 2019) and set the margin equal to 5 as per our original implementation.",
"We use the same margin value for the margin score of Algorithm 1.",
"4 1 https://github.com/bitextor/bicleaner 2 https://github.com/rsennrich/subword-nmt 3 https://github.com/Elbria/xling-SemDiv 4 Our divergentm BERT yields 84 F 1 on a set of English-French human-annotated fine-grained divergences in WikiMatrix collected in our prior work (Briakou and Carpuat, 2020).",
"We ask 3 bilingual speakers to evaluate the quality of the EN-EL bitexts.",
"Given an original source sentence, they are asked to rank the original target and the candidate target in the order of their equivalence to the source.",
"They are asked Which sentence conveys the meaning of the source better?, and ties are allowed.",
"A random sample of 100 pairs from forward and backward MT is annotated.",
"As can be seen in Table 2, 60% of ALL synthetic candidates are better translations of the WikiMatrix reference, which confirms the potential of NMT for improving over original translations.",
"Further ablations confirm the benefits of selecting these synthetic candidates with the semantic equivalence condition.",
"When the divergent scorer ranks a candidate higher than the original by a small margin (i.e., 0 d 5 given d = R ( S i , MS T ( T i )) R ( S i , T i )) ), human evaluation shows that the candidate is actually better than the original only 51% of the times.",
"When using our exact semantic equivalence condition ( d > 5 ), can-Candidate set % Equivalized Kendall's ALL 60 .",
"didates are judged as more equivalent than the original 87 .",
"5% of the times, and annotations within this set have a stronger agreement (i.e., 0 . 688 Kendall's ).",
"This indicates that the condition d > 5 identifies more clear-cut examples of synthetic translations that fix semantic divergences in the original data and can be thus used for selective replacement of imperfect references by better quality translations.",
"Further inspection of the annotations reveals that most source-target WikiMatrix examples contain fine meaning differences ( 56% ).",
"In those cases, we observe that most of the content between the sentences is shared, but either small segments are 4756 PROPERTY ORIGINAL REVISED 1 : # Sentences 750 , 585 750 , 585 0 .",
"mistranslated (e.g., London instead of Athens in the first example of Table 1), or some information is missing from either side of the pair (e.g., all six missing from the target side in the third example of Table 1).",
"Furthermore, more coarse-grained divergences are found less frequently ( 12% )in those cases, we notice that sentences are usually either topically related or structurally similar (e.g., length, syntax) with a few anchor words (e.g., last example in Table 1).",
"Finally, 32% of the times the original WikiMatrix pairs are perfect translations of each other.",
"Figure 1 presents the distribution of lexical differences (i.e., computed using LeDa score that captures lexical differences based on the percentages of tokens that are not found in two sentences (Niu and Carpuat, 2020)) between original and synthetic translations (in EN ) for candidates that replace and do not replace the originals.",
"5 First, we observe that a substantial amount of synthetic translations that do not replace original references ( 40% ) corresponds to small LED scores ( < 0 . 1 ), suggesting that the equivalence criterion could fall back to the original sentence not because of the poor quality of candidate references, but rather due to them being already close to the originals.",
"Furthermore, all synthetic translated instances are represented in almost all bins, with fewer instances found on the 5 LeD details are in Appendix A. Replaced Not Replaced Figure 1: LeD differences of original vs. synthetic translations ( EL EN ).",
"extreme bins of > 0 .",
"7 LED scores.",
"Finally, synthetic translations that replace original references are mostly concentrated within the range [0 . 2 , 0 . 6] of LeD scores.",
"This indicates that they share lexical content with the original, which further supports the hypothesis that synthetic translations revise fine-grained meaning differences in WikiMatrix in addition to alignment noise.",
"Table 3 presents differences in statistics of the original vs. revised WikiMatrix EN-EL bitexts to shed more light on the impact of selectively using synthetic translation for bitext quality improvement.",
"6 The refined bitext exhibits higher coverage (i.e., ratio of source words being aligned by any target words; rows 5 and 13 ) and smaller complexity (i.e., 6 Details on the metrics are in Appendix A. 4757 the diversity of target word choices given a source word (Zhou et al., 2020)) compared to the original bitext.",
"Moreover, the use of synthetic translations introduces small decreases in the lexical types covered in the final corpus (i.e., rows 3 and 11 ), which is expected as the additional coverage in the original corpus might be a result of divergent texts.",
"Those observations are in line with prior work that seeks to characterize the nature of synthetic translations used in other settings, such as knowledge distillation (Zhou et al., 2020; Xu et al., 2021).",
"While fixing divergent references contributes to this simplification effect, NMT translations might also reinforce unwanted biases from the original bitext.",
"For instance, the distribution of two grammatical gender pronouns on the English side is a little more imbalanced in the improved bitext than in the original (rows 6 7 and 14 15 ), 7 likely due to gender bias in NMT (Stanovsky et al., 2019).",
"This calls for techniques to mitigate such biases (Saun-ders and Byrne, 2020; Stafanovics et al., 2020) for NMT and other downstream tasks.",
"Our previous analysis suggests that selective replacement of divergent references with synthetic translations results in bitext of improved quality , with reduced level of noises and easier word-level mappings between the two languages, when compared to the original WikiMatrix corpus.",
"To better understand how those differences impact downstream tasks, we contrast the improved bitext with the original through a series of extrinsic evaluations for EN-EL and EN-RO languages that rely on parallel texts as training samples (see 5.2).",
"First, we focus on the recent state-of-the-art unsupervised BLI approach of Shi et al. (2021) that relies on word-alignments of extracted bitexts.",
"Second, we follow the recent bitext quality evaluation frameworks adopted by the Shared Task on Parallel Corpus Filtering and Alignment (Koehn et al., 2020) and built neural machine translation systems from scratch and by continued training on a multilingual pre-trained transformer model.",
"Finally, we conduct extensive ablation experiments to test the impact of using synthetic translations without the semantic equivalence condition and contrast with familiar techniques used by prior work (see 5.3).",
"BLI The task of BLI aims to induce a bilingual lexicon consisting of word translations in two languages.",
"We experiment with the recently proposed method of Shi et al. (2021) that combines extracted bitext and unsupervised word alignment to perform fully unsupervised induction based on extracted statistics of aligned word pairs.",
"The induced lexicons are evaluated based on MUSE (Lample et al., 2018) consisting of 45 , 515 and 80 , 815 dictionary entries for EL-EN and EN-RO , respectively.",
"8 We extract word alignments using m BERT -based Sima-lign 9 (Jalili Sabet et al., 2020) and statistics based on the implementation of Shi et al. (2021).",
"10 MT We experiment with MT tasks following two approaches: (1) training standard transformer seq2seq models from scratch; (2) continued training for mT5 (Xue et al., 2021), a multilingual pre-trained text-to-text transformer.",
"We evaluate translation quality with BLEU (Papineni et al., 2002) 11 on the official development and test splits of the TED corpus (Qi et al., 2018).",
"12 For (1) we follow the experimental settings described in 3.2.",
"For (2) we initialize the weights of transformer with mT5-small which consists of 300 M parameters, 13 .",
"We use the simpletransformers implementation.",
"14 We fine-tune for up to 5 epochs and include the parameter settings in Appendix D. Ablation Settings We compare the NMT models trained on the variants of the synthetic bitext to isolate the impact of replacement criteria and different candidates.",
"15 For the former, we experiment with the rejuvenation approach of Jiao et al. (2020) that replaces original references with forward translated candidates for the 10% least active original samples measured by NMT probability scores.",
"Moreover, we experiment with forward and backtranslation baselines trained on bitexts that consist solely from targetor source-side candidate sentences (i.e., original references are entirely excluded) and with ablations that consider either forward or backward 8 https://github.com/facebookresearch/MUSE 9 https://github.com/cisnlp/simalign 10 https://github.com/facebookresearch/ bitext-lexind 11 https://github.com/mjpost/sacrebleu 12 Data statistics are found in Appendix E. 13 https://github.com/google-research/ multilingual-t5 14 https://github.com/ThilinaRajapakse/ simpletransformers 15 Results on development sets are in Appendix B. 4758 All Low Medium High PAIR BITEXT Precision Recall F1 OOV rate Precision EL-EN (cid:110) Original 76 .",
"candidates for the proposed semantic equivalence condition.",
"Finally, we consider two alternatives to the semantic equivalence condition based on divergent scores: the ranking condition replaces a candidate if it scores higher than the original (i.e., margin with d = 0 ) and the thresholding condition adds the additional constraint that candidates should rank higher than a threshold to replace the original pair.",
"BLI Table 4 presents results for unsupervised BLI on the MUSE gold-standard dictionaries, for EL-EN and EN-RO .",
"Across languages, the revised bitexts induce better lexicons compared to the original WikiMatrix.",
"Crucially, improvements are reported both in terms of Recallwhich connects to the observation that the revised bitext exhibits higher coverage than the original and in terms of Precision which connects to the noise reduction effect that impacts the extracted word alignments.",
"Additionally, a break-down on the Precision of the induced lexicons binned by the frequency of MUSE source-side entries (i.e., last 3 columns in Table 4) reveals that the improvements come from better induction of lowand medium-frequency words, which we expect are more sensitive to noisy misalignments that result from divergent bitext.",
"Finally, those improvements are reported despite the small increase of the OOV rate in the revised lexicons that results from the decrease in the lexical types covered in it, as mentioned in the analysis (i.e., 4.3).",
"Furthermore, following the advice of Ke-mentchedjhieva et al. (2019) who raise concerns on BLI evaluations based on gold-standard pre-defined dictionaries, we accompany our evaluation with manual verification to confirm that our conclusions are consistent with those of the automatic evaluation.",
"Concretely, we manually check the false positives induced translation pairs from the origi-PAIR ORIGINAL REVISED EL EN 28 .",
"nal vs. the improved bitext.",
"We found that 65 / 80 are false false positives (due to incompleteness of pre-defined dictionaries) for the improved bitext and 51 / 80 for the original (see Appendix F for the complete list).",
"This confirms that the metric improvements we observe are meaningful and suggests that the improved bitext helps learn better mappings between source and target words.",
"MT Table 5 presents translation quality ( BLEU ) on EN–RO and EN–EL tasks for MT training from scratch, and Figure 2 shows translation quality of",
"mT5 continued training across epochs.",
"Across tasks and settings, the revised bitext yields better translation quality than the original WikiMatrix data.",
"The consistent improvements we observe across the two settings suggest that the properties of the synthetic translations that replace original samples and bring those improvements are invariant to specific models.",
"Moreover, the magnitude of improvements is larger in the continued training setting compared to training from scratch (e.g., +0.8 vs. +1.5 for EN→EL; +0.2 vs. +1.5 for RO→EN).",
"The latter suggests that improvements from using synthetic samples do not only come from the normalization effect (i.e., synthetic samples are easier to model by NMT ) but also connect to the reduced noise in the training samples.",
"This further complements our hypothesis that synthetic translations can improve the quality of imperfect references that should, in principle, yield noisy training signals, and thus impact the resulting quality, of different MT models.",
"Table 6 compares the translation quality ( BLEU ) of NMT systems trained on different synthetic translations.",
"By forcing the semantic equivalence condition when deciding whether a synthetic translation replaces an original, we revise 50% of the latter, yielding the best results across directions, with significant improvements (i.e., increases do not lie within 1 stdev of the original bitext's performance) of",
"+0.81 ( EN→EL , row 9) and",
"+1.49 ( EL→EN , row 18) points over the original bitext.",
"Impact of semantic equivalence condition Table 6 shows that naively disregarding the original references and training only on synthetic translations gives mixed results: training on forward-translated references only (i.e., row 2) gives small improvements (+0.36) over the model trained on WikiMatrix for EN→EL, while it performs comparably to it for EL→EN (i.e., row 11).",
"On the other hand, training on backward data only (i.e., row 12) improves BLEU by a small margin (+0.23) for MT into EN, while it hurts BLEU when translating into EL (i.e., row 3).",
"This indicates that the good quality of the synthetic translations cannot be taken for granted and motivates replacing original pairs under conditions that account for semantic controls.",
"The latter is further confirmed by results on the rejuvenation baseline: replacing candidates for the 10% of the most inactive WikiMatrix samples results in small and insignificant increases in BLEU when compared to models trained on original WikiMatrix data (i.e., rows 1 vs. 4 and 10 vs. 13).",
"This indicates that rejuvenation might not be well-suited to lower resource settings than the ones it was originally tested on (Jiao et al., 2020).",
"The rejuvenation technique might be affected by the decreased NMT quality and calibration in lower resource settings.",
"By contrast, using synthetic translations with semantic control mitigates their impact.",
"Finally, all three semantic control variants based on divergent scores yield bitexts that improve BLEU compared to the original WikiMatrix (i.e., rows 5–8 and 14–18).",
"Among them, the margin condition is the most successful, followed by the thresholding variant.",
"The breakdown of training statistics reveals the reason behind their differences: the thresholding condition is a more strict constraint as it only allows synthetic candidates to replace the original pairs if they are predicted as exact equivalents, allowing for fewer revisions of divergent pairs in WikiMatrix.",
"By contrast, the condition based on margin is a contrastive approach that allows for more revisions of the original data (i.e., a candidate might be a more fine-grained divergent of the source).",
"The ranking criterion is the least successful method; this is expected, as the divergence ranker is not trained as a regression model.",
"Impact of bi-directional candidates Considering both forward ( F ) and backward ( B ) translated candidates during selective replacement yields further improvements (0.22–0.44 points) over bitext induced by the semantic equivalence condition with candidates from a single NMT model (i.e., rows 7–9 and 16–18).",
"When forward and backward candidates are considered independently, they replace 34–37% of the original pairs; in contrast, when considered together, they replace 50% of original WikiMatrix pairs.",
"As a result, there is no perfect overlap between the original pairs replaced by the forward vs. backward model, which motivates the use of both to revise more divergences in WikiMatrix.",
"This finding raises the question of whether using synthetic translations from both directions might benefit other scenarios, such as knowledge distillation.",
"This paper explored how synthetic translations can be used to revise bitext, using NMT models trained on the exact same data we seek to revise.",
"Our extensive empirical study surprisingly shows that, even without access to further bilingual data or supervision, this approach improves the quality of the original bitext, especially when synthetic translations are generated in both translation directions and selectively replace the original using a semantic equivalence criterion.",
"Specifically, our intrinsic evaluation showed that synthetic translations are of sufficient quality to improve over the original references, in addition to normalizing the bitext as suggested by prior work and corpus level statistics (Zhou et al., 2020; Xu et al., 2021).",
"Extrinsic evaluations further show that the replaced synthetic translations provide more useful signals for BLI tasks and NMT training in two settings (i.e., training from scratch and continued training).",
"These findings provide a foundation for further exploration of the use of synthetic bitext.",
"First, we focused our empirical study on language pairs and datasets where revising bitexts is the most needed and most likely to be useful: the resources available for these languages are not so large that mined bitext can simply be ignored or filtered with simple heuristics, yet there is enough data to build NMT systems of reasonable quality (i.e., 600 K segments for EN-RO , and 750 K for EN-EL ).",
"While in principle, selective replacement of divergent references with synthetic translations should port to high-resource settings, where NMT is as good or better than for the languages considered in this work, other techniques are likely needed in low-resource settings where NMT quality is too low to provide reliable candidate translations.",
"Second, having established that the revised bitext improves the quality of the original bitext in isolation, it remains to be seen how to best revise bitexts in more heterogeneous scenarios with diverse sources of parallel or monolingual corpora.",
"Overall, as synthetic data generated by NMT is increasingly used to improve cross-lingual transfer in multilingual NLP, our study motivates taking a closer look at the properties of synthetic samples to better understand how they might impact downstream tasks beyond raw performance metrics.",
"All bitexts are available at: https://github.com/Elbria/xling-SemDiv-Equivalize.",
"We thank Marjan Ghazvininejad, Luke Zettlemoyer, Sida Wang, Sweta Agrawal, Jordan Boyd-Graber, Pedro Rodriguez, the anonymous reviewers and the CLIP lab at UMD for helpful comments.",
"This material is based upon work supported by the National Science Foundation under Award No. 1750695 .",
"Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation."
] | [
"abstain",
"objective",
"result",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"method",
"method",
"result",
"method",
"objective",
"abstain",
"result",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"abstain",
"objective",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"other",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"result",
"abstain",
"abstain",
"other",
"method",
"abstain",
"objective",
"other",
"other",
"other",
"other"
] |
[
"Knowledge base (KB) embeddings have been shown to contain gender biases (Fisher et al., 2020b).",
"In this paper, we study two questions regarding these biases: how to quantify them , and how to trace their origins in KB ?",
"Specifically, first, we develop two novel bias measures respectively for a group of person entities and an individual person entity.",
"Evidence of their validity is observed by comparison with real-world census data.",
"Second, we use influence function to inspect the contribution of each triple in KB to the overall group bias.",
"To exemplify the potential applications of our study, we also present two strategies (by adding and removing KB triples) to mitigate gender biases in KB embeddings.",
"Gender biases have been shown to have noticeable presence in a wide range of NLP models.",
"For example, we can observe that the word embedding of engineer is closer to he than she (Bolukbasi et al., 2016), and co-reference systems associate surgeon more with masculine pronouns than with feminine ones (Rudinger et al., 2018).",
"These biases are brought to our models from training data by our algorithms.",
"Hence, besides revealing the existence of gender biases, it is important to quantify them and explain their origins in data.",
"Knowledge bases (KB, e.g. Freebase, Bollacker et al., 2007) provide accessible organizations of human knowledge in the form of triples.",
"Each triple consists of a head entity, a relation, and a tail entity.",
"For example, the fact that Marie Curie is a chemist is represented as ⟨ Marie Curie , people.person.profession , chemist ⟩.",
"KB embeddings encode this knowledge into dense vector representations.",
"It is important to understand gender biases in KB embeddings for two major reasons.",
"First, KB embeddings serve as sources of prior knowledge in many downstream NLP models (e.g. pre-trained language models, Zhang et al., 2019).",
"Clearly, if biases exist in KB embeddings, they can easily propagate into these models and make these models more biased.",
"Second, Garg et al. (2018) observe that word embeddings reflect biases in the training corpora, and hence our society.",
"Likewise, we suspect KB embeddings to reflect biases encoded in KBs, as also suggested by Radstok et al. (2021).",
"In this paper, we propose two novel gender bias measures for KB embeddings, one for a group of person entities ( group bias ) and the other for an individual person entity ( individual bias ).",
"Furthermore, with influence function (Koh and Liang, 2017), we explain the origins of group bias at the fact triple level (i.e. how each triple in KB contribute to group bias).",
"In practice, we use TransE (Bordes et al., 2013) to demonstrate our methods, for its popularity and simplicity.",
"Nevertheless, most of our study can generalize to other embedding algorithms.",
"Specifically, we make four contributions.",
"First, regarding a group of person entities with a shared relation-tail pair (e.g. of the same occupation), using correlation analyses , we measure their gender biases by the differences between different genders' link prediction errors.",
"Second, to understand the origins of the group bias, we use influence function to find its highly-influential triples in KB (i.e. triples that will change the bias most if being removed during training).",
"Third, regarding a single person entity , using counterfactual analyses , we develop a bias measure by measuring the change of the link prediction error when we keep everything else the same and perturb its gender.",
"To avoid the intractable computational cost of re-training, we propose to use influence function to approximate the results.",
"Fourth, to further facilitate large-scale influence function based analyses, we derive a closed-form approximation of the Hessian matrix of TransE loss.",
"We therefore improve the time complexity of computing influence function from O(n) (stochastic approximation) to O(1).",
"Moreover, in further analyses, we show that both group and individual bias correlate well with real-world biases.",
"We argue that this suggests the validity of our bias measures.",
"We also show the accuracy of our influence function approximation by comparing with the brute-force strategy (i.e., leave-one-out re-training).",
"Finally, to exemplify the applications of our study, we propose two simple de-biasing strategies, and demonstrate their effectiveness.",
"Knowledge Base A KB is a set of structured human knowledge represented by triples G = {⟨ h, r, t ⟩}, where h is a head entity, r is a relation type, and t is a tail entity.",
"Moreover, these triples form a graph with entities as nodes (denoted by E , where e ∈ E is an entity) and relations as edges.",
"In this work, since we are particularly interested in person entities and their gender, we use ⟨ s, r_g, m ⟩ or ⟨ s, r_g, f ⟩ to represent a person s with gender male or female, where r_g is the relation of gender.",
"1 TransE The entities and relations in KB can be represented with embedding vectors.",
"These embeddings can serve in many NLP tasks as a source of prior knowledge.",
"In this work, we focus on the widely used TransE (Bordes et al., 2013).",
"Given a triple ⟨ h, r, t ⟩, the key idea of TransE is to make the vectors of h , r and t close to each other in the sense of a small link prediction error.",
"Concretely, TransE embeddings are learned by minimizing a margin-based ranking loss, L = ∑_{⟨h,r,t⟩∈G} [ m + d(h, r, t) − d(h′, r, t′) ]_+ , (1) where m is a scalar margin and d is a distance measure.",
"The lower d(h, r, t) is, the more likely ⟨ h, r, t ⟩ forms a fact.",
"h′ and t′ are two randomly sampled entities.",
"The triple ⟨ h′, r, t′ ⟩ is called a negative sample because it is not in G .",
"This loss function basically says that the dissimilarity of a positive triple ⟨ h, r, t ⟩ should be smaller than that of a negative sample by a margin m .",
"Specifically, in this paper, we take d to be the L2-norm distance. We operate with binary gender here, because it is naturally encoded in KB.",
"For simplicity, we consider only the linear loss in the rest of this paper.",
"This is a feasible choice both empirically and theoretically in our analyses.",
"Empirically, in experiments we observe that the link prediction errors of all triples converge; concretely, d(h, r, t) = ‖h + r − t‖₂² , where h , r , t ∈ R^d are the embeddings of h , r and t , respectively.",
"In this paper, we use Freebase's (Bollacker et al., 2007) subset FB5M (Bordes et al., 2015) as the KB for training TransE embeddings and performing our analyses.",
"See Appendix A for detailed setup.",
"Influence Function (Cook and Weisberg, 1982; Koh and Liang, 2017) provides an efficient way to approximate each training sample's impact on correctly predicting a test sample.",
"Formally, let L(z, θ) be a convex loss function on a training set {z_i}_{i=1}^n with parameters θ.",
"The empirical risk minimizer (ERM) is given by θ̂ = argmin_θ (1/n) ∑_{i=1}^n L(z_i, θ).",
"We are interested in a training sample z's impact on θ̂, with a weight of ε.",
"In this case, the new ERM is given by θ̂_{ε,z} = argmin_θ (1/n) ∑_{i=1}^n L(z_i, θ) + ε L(z, θ) (note that if ε = −1/n, it equals removing z).",
"Influence function provides an efficient method of approximating the difference between θ̂_{ε,z} and θ̂, without retraining the model: θ̂_{ε,z} − θ̂ ≈ ε I_{up,param}(z), (2) where I_{up,param}(z) := −H_{θ̂}⁻¹ ∇_θ L(z, θ̂).",
"H_{θ̂} = (1/n) ∑_i ∇²_θ L(z_i, θ̂) is the Hessian matrix of the original loss function.",
"Moreover, we are interested in the change of the test performance, which is a function F of the test sample z_test and the model parameters.",
"By applying the chain rule to F and Equation 2, we can obtain the difference in test performance.",
"Formally, F(θ̂_{ε,z}, z_test) − F(θ̂, z_test) ≈ ε I_{up,F}(z, z_test), (3) where I_{up,F}(z, z_test) := ∇_θ F(z_test, θ̂)⊤ I_{up,param}(z).",
"Similarly, by splitting the perturbation into first removing then adding, we can also inspect the change of F when a training sample z is perturbed to z′.",
"Denote θ̂_{ε,z,z′} = argmin_θ (1/n) ∑_{i=1}^n L(z_i, θ) − ε L(z, θ) + ε L(z′, θ); applying Equation 3 twice, we obtain F(θ̂_{ε,z,z′}, z_test) − F(θ̂, z_test) ≈ ε I_{up,F}(z′, z_test) − ε I_{up,F}(z, z_test) := ε I_{pert,F}(z, z′, z_test). (4)",
"Finally, besides single sample estimation, we are also interested in inspecting the influence of removing a group of training samples.",
"In these cases, at a larger-than-margin value.",
"Theoretically, when link prediction errors converge at values smaller than the margin, the gradients become 0; their influence thus becomes 0, too.",
"following Koh and Liang (2017), we simply add up the influence of each removed training sample.",
"However, as noted by Koh and Liang (2017), when handling a group of samples, although influence function approximation still holds a strong correlation with the ground truth change of the parameters, the estimation can suffer from larger errors.",
"In this section, based on link prediction, we take two views to quantify gender biases in KB embeddings.",
"First, using correlation analysis, we take a macro view to inspect gender biases of a group of person entities (e.g., how gender influences the overall occupation prediction accuracy of a group of engineer entities).",
"Second, under the framework of counterfactual analysis, we take a micro view to assess gender biases of an individual person entity (e.g., how a specific engineer entity's gender influences its occupation prediction accuracy).",
"Afterwards, we build connections between them.",
"In the following, we adopt occupation prediction as our running example.",
"The reason is twofold.",
"First, among all of the relations connected with person entities, the occupation relation has the highest coverage rate (i.e. it connects with the most person entities).",
"Second, most previous relevant studies also focus on occupation.",
"Our choice makes it easier to perform comparative studies (Garg et al., 2018; Fisher et al., 2020b).",
"To see whether a group of entities exhibits bias, one direct solution is to deploy methods analog to those applied to analyze bias in word embeddings (Bolukbasi et al., 2016).",
"For example, we can compute the projection of the TransE embedding of an occupation o onto the difference between the male and female entities (Bourli and Pitoura, 2020), B_wo = o⊤(m − f), where m and f are the embeddings of the male and female entity, respectively.",
"However, we argue that because TransE follows a different type of learning objective (a link-prediction-style objective instead of the vector-similarity-based ones in word embedding algorithms), directly adopting existing settings may not fully explore the semantics of TransE embeddings.",
"Therefore, we propose to detect group bias based on the correlation between genders and link prediction errors.",
"Intuitively, given an occupation o , person entities of o 's privileged gender will link to o with lower errors than those of unprivileged gender.",
"Formally, we define the group bias of o as B_gr = (1/|F|) ∑_{s∈F} d(s, r_p, o) − (1/|M|) ∑_{s∈M} d(s, r_p, o), where M and F are the sets of all male and female person entities with o respectively, and r_p is the relation people.person.profession .",
"The higher B_gr is, the more o 's embedding is biased towards male.",
"Table 1 lists B_gr of some occupations, as well as the gender frequency of each occupation in KB.",
"We make two observations.",
"First, we observe the existence of gender biases in KB embeddings, and note their consistency with real-world biases.",
"For example, engineer and nurse have more extreme bias scores, towards male and female respectively, while singer and animator have more moderate ones (quantitative analyses in §4).",
"Second, although the gender ratio of person entities has a great impact on B_gr, it is not the only decisive factor.",
"For example, animator has a gender ratio of 5.7:1, but its B_gr is biased towards female.",
"Inspecting the Origins of Biases The second observation motivates us to trace the origins of biases.",
"More concretely, in the context of KB: how do different triples contribute to B_gr ?",
"To answer this question, we apply influence function (Equation 3) with F = B_gr and observe how removing a training triple changes the overall group bias score.",
"One appealing property of TransE is that we are able to derive a closed-form Hessian matrix.",
"Moreover, by further analyses, we can directly obtain a diagonal approximation of the Hessian matrix, and thus the Hessian inverse in I_{up,param}.",
"Taking advantage of this, we can reduce the computation of I_{up,B_gr} to constant time complexity (w.r.t. training set size), which is much faster than the LiSSA algorithm (Agarwal et al., 2017) applied in Koh and Liang (2017), which requires O(n) time complexity to obtain a Hessian-inverse approximation.",
"Concretely, using basic calculus, we have the following lemma and remarks.",
"We include their detailed proofs and derivations in Appendix B. Lemma 1. Suppose we generate the corresponding negative sample of a positive sample ⟨ h, r, t ⟩ by randomly choosing h or t and corrupting it to a random entity in E ; then we can derive the closed-form expected Hessian E[H] of TransE in block form: its diagonal (e, e) blocks are λ_ee I_d, its off-diagonal (e, e′) blocks are λ_ee′ I_d, and its entity-relation (e, r) blocks are λ_er I_d,",
"where e , e′ and r , r′ are different entities and relations, λ_ee, λ_er, λ_ee′ are three different coefficients dependent on the frequencies of the corresponding entities and relations, and I_d is the identity matrix of R^{d×d}.",
"Remark 2. In practice, we approximate the closed-form Hessian from Lemma 1 with its diagonal elements, E[H] ≈ diag{ …, λ_ee I_d, … (entity blocks), …, 0, … (relation blocks) }.",
"Remark 3. λ_ee could be zero or negative, which breaks the positive definiteness of H .",
"Following Koh and Liang (2017), we add a damping term εI (ε > 0) to H (i.e., λ_ee ← λ_ee + ε), which equals adding an L2 regularization on the parameters.",
"Following Equation 3 (ε = −1/|G|), we can compute the change of group bias (denoted by Δ_z B_gr) after removing a training triple z = ⟨ h, r, t ⟩: Δ_z B_gr = (1/|G|) ∇_θ B_gr⊤ ( E[H]⁻¹ ∇_θ L(z, θ̂) ).",
"A triple z with positive Δ_z B_gr means that re-training without it will increase B_gr (i.e., towards masculine) and vice versa.",
"We note that, due to the diagonal Hessian, z will have a non-zero influence iff it is reachable from o in two hops (i.e., entities of z take part in the computation of B_gr).",
"In practice, we calculate Δ_z B_gr of each triple in KB regarding B_gr of each occupation, and make three observations.",
"First, regarding relations in KB, we find most of the highly-influential triples (i.e. triples with the highest absolute Δ_z B_gr values) to be of the profession relation (i.e., r_p) and its inverse.",
"For example, regarding the occupation of singer , these two relations occupy 74% of the top 1% positive triples and 78% of the top 1% negative triples.",
"It suggests that, compared with indirectly (i.e. two-hop) connected triples, triples directly connected with an entity have a larger impact on it, which matches our intuitions.",
"Second, regarding gender, we find that most person entities in triples with high positive Δ_z B_gr are of female gender, and vice versa.",
"Figure 1 takes the occupation of actor as an example to illustrate this.",
"This observation agrees with the previous one: triples with person entities of male gender will drive the overall biases towards masculine, and removing them will reverse this effect.",
"Third, regarding graph substructure, we find that if a triple contains a high-degree person entity, it usually has a nearly zero Δ_z B_gr (i.e., it has a small impact on other triples; see Figure 1). We suspect the reason to be as follows (more precisely, z here is a pair of triples (⟨ h, r, t ⟩, ⟨ h′, r, t′ ⟩)): the more neighbors",
"To handle randomness of negative samples, we adopt two strategies in our implementation.",
"First, we freeze negative samples in training epochs to get consistent results.",
"Second, we use E[H] to replace the random H in influence functions.",
"i.e., the inverse relation is",
"people.person.people_with_this_profession.",
"Similar patterns are observed in other occupations, for both this observation and the next one.",
"an entity has, the more constraints its embedding needs to put on others (by link prediction).",
"This makes the embedding less optimal for representing each constraint, and hence less influential on each triple.",
"Group-level correlation analyses offer us a coarse portrayal of biases.",
"However, we are also interested in finer characterization (for each group member).",
"Moreover, because of the complexity of KB structures, there very likely exist confounders between person entities and occupations (e.g. if two person entities of the same occupation are connected themselves, they are confounders of each other).",
"In this case, correlation does not imply causation .",
"In other words, gender differences are not guaranteed to be the only cause of B_gr.",
"Therefore, in this section, we study a further question: can we perform analyses on a specific person entity and measure its gender biases based on how its gender changes its link prediction error (i.e. causality)?",
"By virtue of the structured knowledge in KB, we are able to conduct this individual-level analysis in a tractable way.",
"The key idea is, what if we keep everything else identical and perturb only the gender?",
"Intuitively, given an occupation o , if flipping a person entity's gender from female to male will make it easier to connect the person with o , o should be biased towards male.",
"Formally, we define individual bias B_in of ⟨ s, r_p, o ⟩ as B_in = d(s, r_p, o)|_f − d(s, r_p, o)|_m, where d(·)|_f (resp. d(·)|_m) is computed on a version of TransE where s 's gender is female (male).",
"B_in > 0 means that it is more difficult to predict s 's occupation if s is female.",
"We would thus argue that ⟨ s, r_p, o ⟩ is biased toward male.",
"Because we keep all other attributes identical, this counterfactual definition naturally offers us causation.",
"The practical issue of B in is the computation of the counterfactual: for each triple, this definition naively requires the re-training of the entire embedding model.",
"This is intractable for large-scale analyses because of the extremely high computational cost.",
"To avoid this issue, we apply influence function (Equation 4) for a fast evaluation of B_in.",
"Indeed, using Lemma 1 and Remark 2, we can obtain a closed-form B_in (proof in Appendix B).",
"Corollary 4. Assume that for each person entity s we have the same negative sample for ⟨ s, r_p, f ⟩ and ⟨ s, r_p, m ⟩; then B_in ≈ (4 / (λ_s |G|)) (s + r_p − o)⊤ (m − f). (5) One important observation about B_in is that it is essentially a mixture of local graph substructure information (λ_s, which depends on the degree of s in KB) and a projection of the link prediction residual (s + r_p − o) onto the gender difference (m − f, a reminiscence of the word embedding gender subspace proposed in Bolukbasi et al., 2016).",
"Compared with directly projecting o onto the gender direction as in B_wo (a hard generalization of the word embedding bias measure), the link prediction residual is more compatible with the TransE learning objective.",
"Figure 2 exhibits the distributions of B_in for several occupations.",
"We make two observations from the results.",
"First, although there are a number of outliers, most B_in are tightly distributed.",
"This shows the consistency of the individual bias scores among different triples.",
"Second, the bias scores correlate well with real-world gender stereotypes: engineer and lawyer lean more towards male, while model and actor lean more towards female.",
"This suggests the validity of B_in in describing biases in KB.",
"Differences with Fisher et al. (2020b) A similar definition of bias is proposed in Fisher et al. (2020b) (denoted as B′_in).",
"B′_in is defined as follows: after training the embedding model to convergence, they perform one extra step of updating along the gender direction.",
"The bias score is defined as the difference of the link prediction error before and after the update.",
"We would like to note here two aspects of differences between B_in and B′_in.",
"First, compared with B′_in, B_in offers better interpretability.",
"Concretely, in our definition, we approximate a purely counterfactual setting: flip the gender and re-train the entire model until convergence.",
"In contrast, Fisher et al. (2020b) update the embedding after the convergence, which may not happen in real-world training.",
"Compared with Equation 6, Equation 5 (the approximation of B_in) has an additional graph-information term λ_s.",
"Intuitively, λ_s serves as a normalization term: entities with more connections will be less affected by a single perturbation.",
"In other words, the more connections an entity has, the less its link prediction error relies on one of them (i.e. gender).",
"Again, taking the occupation of journalist as an example, we show the relationship between B_in and B′_in in Figure 3 and make two observations.",
"First, there is a strong correlation between these two bias measures: points are approximately distributed along the diagonal.",
"Second, we notice that there exists a substantial number of data points with positive B′_in but near-zero B_in.",
"This suggests that the normalization term λ_s can prevent the overestimation of the biases of person entities with many connections.",
"This also corresponds to our third observation regarding the distribution of Δ_z B_gr (§3.1).",
"After obtaining B in , a remaining question is: given a group of person entities, how to use individual",
"biases to characterize the group's overall bias?",
"The rationale behind is that, if we can accurately measure biases of individuals, we should be able to aggregate them to represent biases of the group.",
"A natural solution to this question is to directly average B in .",
"However, in practice, we find that the averaged B in of all occupations align poorly with B gr ( r 0 . 27 ).",
"We suspect this inconsistency to originate from the mismatches among different person entities' contexts in KB (i.e. different connections and local substructure).",
"In other words, without controlling backdoor variables, simply averaging associations observed from each individual may not be suitable for representing association of the entire group (Pearl et al., 2016).",
"6 In our analyses, because of the complexity of KB, it is infeasible to control all factors.",
"Nevertheless, we can control some of them to alleviate this issue.",
"Here, we focus on controlling gender for two reasons.",
"First, occupations in KB are often of very imbalanced gender ratios (e.g., nurse connects with more female entities than male entities).",
"At the same time, different genders usually have different distributions of B in : female entities mainly have above zero B in , while B in of male entities distributes in a wider range.",
"7 Therefore, controlling gender should be able to effectively reduce the context mismatch.",
"Second, because we treat the average link prediction error of each gender equally in group bias ( 3.1), controlling gender can help us to obtain more comparable results.",
"We thus propose to average scores of each gender separately to calibrate this mismatch ( weighted averaging instead of vanilla averaging ).",
"Formally, 1 | F | (cid:80) s FB in ( (cid:104) s, r p , o (cid:105) ) + 1 | M | (cid:80) s MB in ( (cid:104) s, r p , o (cid:105) ) .",
"We find weighted averaging align much better with B gr ( r 0 . 50 ) and real-world biases ( 4.1).",
"One method of inspecting the validity of a bias measure is to analyze its connection with real-world statistics (e.g. gender ratios of occupations).",
"However, most existing datasets fail to meet our needs, 6 Other examples of this phenomenon include Simpson's Paradox and ecological fallacy .",
"7 We show B in distribution of the occupation of journalist as an example in Figure 5 in Appendix C, and find similar trends in other occupations.",
"because they have different occupation categories with FB5M (e.g. Garg et al., 2018; Du et al., 2019).",
"Accordingly, we take the following steps to build a new dataset.",
"First, we collect the gender distributions of occupations in 2018 by the U.S. census data (Ruggles et al., 2020).",
"Afterwards, we calculate their log proportions 8 and manually pair up them with occupations in KB.",
"9 We use it as our validation data and refer it as census data .",
"Table 2 shows the Pearson's r values and p values between census data and all five bias measures described in 3 ( B gr , B in and B (cid:48) in with both averaging strategies).",
"Our observations are two fold.",
"First, both B gr and B in exhibit significant correlations (especially under weighted averaging ) with census data ( p < . 01 ), indicating their validity of measuring gender biases in KB embeddings.",
"Second, individual bias measures ( B in and B (cid:48) in ) align better with census data under weight averaging than under vanilla averaging.",
"This backs up our suspicion regarding contexts' mismatches, as well as our solution strategy (weighted averaging).",
"Because the Hessian matrix we derived for the calculation of influence function is a diagonal approximation, and influence function of a group of training samples is only an approximation of the test performance change after re-training, one may concern the accuracy of our influence function.",
"Therefore, in this section, we perform a validation experiment to address this concern.",
"Specifically, for each occupation o , we first remove k triples with highest z B gr , then re-train the TransE model from scratch, and calculate their B gr regarding o .",
"Af-8 log-prop = p 1 p , where p is % of men in occupation.",
"terwards, we compare the sum of z B gr with the ground truth changes in B gr .",
"In practice, we take k s to be a arithmetic progression from 500 to 5000, with a common difference of 500.",
"We show a few occupations' alignment results as examples in Figure 4a-4c.",
"We observe strong correlations between our approximation and the ground truth ( r > 0 . 95 for all occupations).",
"It suggests the accuracy of our approximation (some additional results in Appendix C).",
"Our study can broadly benefit relevant future research regarding societal biases and KB.",
"As examples of such applications, based on our study in 3.1, we explore two strategies for de-biasing KB embeddings.",
"We note that these two strategies aim to exemplify the potential impacts of our previous study, and are not necessarily the best method to de-bias KB embeddings.",
"10 Instead, we highly encourage future studies to build better de-biasing methods on the basis of our findings.",
"Strategy 1: De-biasing by Adding In Table 1, we observe that gender ratio has a substantial impact on B gr .",
"Based on this, one natural idea of debiasing is to balance gender proportion by adding dummy triples.",
"The advantage of this strategy is that, because we do not remove triples, we are able to keep the information of the original KB intact.",
"Specifically, suppose an occupation o with M male entities and F female entities, where M is larger than F .",
"To alleviate bias, we create c ( M F ) 11 dummy entities and connect them with only the female gender and o .",
"Afterwards, we re-train TransE and observe the B gr regarding o .",
"Table 3 lists a few examples of the results.",
"We find that this de-biasing strategy overall works well.",
"It is worth noting that the changes of biases of some occupations (e.g. nurse ) are smaller, which matches our previous observation: gender ratio is not the only decisive factor of B gr .",
"Strategy 2: De-biasing by Removing Based on our study on the origins of biases, and inspired by the validation results in 4.2, we investigate a straightforward de-biasing strategy: we simply remove the top k most influential triples based on the approximation of influence function (IF-REMOVE).",
"Again, we take k s to be [500 , 1000 , 1500 , ..., 5000] .",
"Besides, for the purpose of controlling variable, we compare it to a naive baseline method, in which we randomly delete triples of all entities (Random-REMOVE).",
"Figure 4d-4f exhibit some examples of the results.",
"We observe that comparing with the baseline, where B gr rarely change, this de-biasing strategy is able to mitigate biases very effectively.",
"Several additional examples are included in Appendix C. 5 Related Work Various measures have been proposed to quantify gender biases in word embeddings (Bolukbasi et al., 2016; Caliskan et al., 2017; Swinger et al., 2019).",
"Many of them are based on vector similarity (e.g. cosine similarity) between words, which matches the training objective of most word embedding algorithms (maximize the vector similarities between similar words, Mikolov et al., 2013; Pennington et al., 2014).",
"Moreover, Garg et al. (2018) suggest that word embedding can reflect biases in the training corpora and hence our society.",
"Recently, a few studies have explored gender biases in KBs and their embeddings.",
"A pioneer study by Klein et al. (2016) investigates gender gap in Wikidata across time, space, culture, occupation and language.",
"A following study (Shaik et al., 2021) further analyzes the race and country of citizenship bias in KB regarding STEM representation.",
"Moreover, Janowicz et al. (2018) analyze the potential bias issues in KBs from both data and schema viewpoints.",
"Fisher et al. (2020b) propose a KB embedding bias measure based on the change of link prediction error after a one-step update towards male.",
"Fisher et al. (2020a) and Arduini et al. (2020) propose to use adversarial training objective to mitigate biases in KB embeddings.",
"Influence function is a commonly used technique in robust statistics (Cook and Weisberg, 1982).",
"Koh and Liang (2017) first use it to inspect each training point's influence on a neural network's prediction.",
"A following study by Koh et al. (2019) investigate the accuracy of influence function on measuring the effect of removing a group of training samples, and show that its approximation has strong correla-1388 tions with actual effects.",
"Afterwards, Brunet et al. (2019) apply influence function as a differential bias measure to study gender bias in word embedding.",
"Moreover, Pezeshkpour et al. (2019) use an simplification of influence function to perform adversarial attack on link prediction.",
"In this paper, we study the gender biases in KB embeddings.",
"First, we develop two bias measures to quantify biases: one from the group level and the other from the individual level.",
"Evidence of their validity are obtained in comparison with real-world biases.",
"Second, to understand the origins of biases, we adopt influence functions for triple-level analysis and develop an efficient method for fast evaluation.",
"The accuracy of this method is validated by comparing our approximation with group-truth changes after re-training.",
"Moreover, as examples of the potential applications of our find-ings, we propose two de-biasing strategies for KB embeddings and obtain promising performance.",
"Although we focus on Freebase (FB5M) and TransE in this paper, we note that our analyses are theoretically generalizable to other commonly-used KBs and embedding training algorithms.",
"For instance, Wikidata, another commonly-used KB, uses a different hierarchical structure to organize its data (Vrandecic and Krtzsch, 2014; Tanon et al., 2016; Wang et al., 2021).",
"However, it still loosely follows the triple structure used in Freebase, and therefore can be pre-processed to fit in our analyses.",
"Also, because our bias measures and bias tracing methods are built on simple and generalizable defi-nitions (i.e., differences between link predictions errors and influence function), they can naturally be adapted to other KB embedding algorithms (Lin et al., 2015; Yang et al., 2015; Peng et al., 2021).",
"However, we recognize that such generalizations are not trivial efforts.",
"Take Wikidata again for an instance, although a simple transformation is adequate for running the embedding algorithm, it is far from fully eliminating the differences between Freebase and Wikidata.",
"For example, Wikidata does not have an inverse predicate for each relation, and has a much smaller number of overall relations (Azmy et al., 2018; Diefenbach et al., 2017).",
"Such differences might have a large impact on the final results.",
"Also, to perform the same analyses with other embedding algorithms, we will need to develop algorithms to facilitate the computation of their influence function (as Lemma 1), too.",
"Therefore, we consider such generalizations to be promising future directions but out of the scope of our work.",
"We thank Dong Nguyen for her meticulous and valuable suggestions, as well as productive discussions.",
"We also thank all anonymous reviewers for their constructive and helpful feedback.",
"This research was (partially) supported by NSFC (62076097), STCSM(20511101205), and Shanghai Key Laboratory of Multidimensional Information Processing, ECNU (2020KEY001).",
"The corresponding authors are Yuanbin Wu and Yan Yang.",
"Intended Usage Our work intend to provide insights of gender biases in KB and its embeddings, on how to measure these biases and how to trace the origins of them.",
"Moreover, as discussed in 4.3, future studies can build better de-biasing methods based on our findings.",
"In this way, our framework can contribute to the development of models that are less biased and hence potentially less harmful.",
"Limitations In this study, we use gender information already encoded in KB to measure and trace gender biases.",
"However, because only binary gender is recorded in the KB that we use (Freebase), we take a narrow view of binary gender in our analyses.",
"We hope to see more future studies on gender biases in KB embeddings that consider non-binary gender identities as well as intersectional identities."
] | [
"abstain",
"method",
"objective",
"abstain",
"objective",
"method",
"abstain",
"result",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"method",
"objective",
"abstain",
"abstain",
"method",
"objective",
"objective",
"result",
"objective",
"objective",
"abstain",
"result",
"result",
"abstain",
"result",
"objective",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"objective",
"method",
"objective",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"other",
"other",
"other",
"other",
"method",
"method",
"abstain",
"method",
"method",
"abstain"
] |
[
"Pre-trained cross-lingual encoders such as mBERT (Devlin et al., 2019) and XLM-R (Conneau et al., 2020a) have proven impressively effective at enabling transfer-learning of NLP systems from high-resource languages to low-resource languages.",
"This success comes despite the fact that there is no explicit objective to align the contextual embeddings of words/sentences with similar meanings across languages together in the same space.",
"In this paper, we present a new method for learning multilingual encoders, AMBER ( A ligned M ultilingual B idirectional E ncode R ).",
"AMBER is trained on additional parallel data using two explicit alignment objectives that align the multilingual representations at different granularities.",
"We conduct experiments on zero-shot cross-lingual transfer learning for different tasks including sequence tagging, sentence retrieval and sentence classification.",
"Experimental results on the tasks in the XTREME benchmark (Hu et al., 2020) show that AMBER obtains gains of up to 1.1 average F1 score on sequence tagging and up to 27.3 average accuracy on retrieval over the XLM-R-large model which has 3.2x the parameters of AMBER .",
"Our code and models are available at http://github.com/junjiehu/amber .",
"Cross-lingual embeddings, both traditional non-contextualized word embeddings (Faruqui and Dyer, 2014) and the more recent contextualized word embeddings (Devlin et al., 2019), are an essential tool for cross-lingual transfer in downstream applications.",
"In particular, multilingual contextualized word representations have proven effective in reducing the amount of supervision needed in a variety of cross-lingual NLP tasks such as sequence labeling (Pires et al., 2019), question answering (Artetxe et al., 2020), parsing (Wang et al., *Work partially done at Google Research. 2019), sentence classification (Wu and Dredze, 2019) and retrieval (Yang et al., 2019a).",
"Some attempts at training multilingual representations (Devlin et al., 2019; Conneau et al., 2020a) simply train a (masked) language model on monolingual data from many languages.",
"These methods can only implicitly learn which words and structures correspond to each-other across languages in an entirely unsupervised fashion, but are nonetheless quite effective empirically (Conneau et al., 2020b; K et al., 2020).",
"On the other hand, some methods directly leverage multilingual parallel corpora (McCann et al., 2017; Eriguchi et al., 2018; Conneau and Lample, 2019; Huang et al., 2019; Siddhant et al., 2020), which gives some degree of supervision implicitly aligning the words in the two languages.",
"However, the pressure on the model to learn clear correspondences between the contextualized representations in the two languages is still implicit and somewhat weak.",
"Because of this, several follow-up works (Schuster et al., 2019; Wang et al., 2020; Cao et al., 2020) have proposed methods that use word alignments from parallel corpora as the supervision signals to align multilingual contextualized representations, albeit in a post-hoc fashion.",
"In this work, we propose a training regimen for learning contextualized word representations that encourages symmetry at both the word and sentence levels at training time .",
"Our word-level alignment objective is inspired by work in machine translation that defines objectives encouraging consistency between the source-to-target and target-to-source attention matrices (Cohn et al., 2016).",
"Our sentence-level alignment objective encourages prediction of the correct translations within a mini-batch for a given source sentence, which is inspired by work on learning multilingual sentence representations (Yang et al., 2019a; Wieting et al., 2019).",
"In experiments, we evaluate the zero-shot crosslingual transfer performance of AMBER on four dif-Self-Attention \" # $%& \" ' $%& \" ( $%& Masked Attention Q K, V \" # $ ) # $%& ) ' $%& ) ( $%& Masked Attention Q K, V ) ' $ Masked Attention \" # + \" ' + \" ( + ) # + ) ' + ) ( + \" # & \" ' & \" ( & ) # & ) ' & ) ( & . . . \" # , \" ' , \" ( , ) # , ) ' , ) ( , Masked Attention Attention Masks K, V Q )\" \" # + \" ' + \" ( + ) # + ) ' + ) ( + Self-Attention \" # & \" ' & \" ( & ) # & ) ' & ) ( & \" # , \" ' , \" ( , ) # , ) ' , ) ( , . . . . . . Self-Attention Self-Attention \" )",
"(a) Word-Alignment",
"(b) Word-Alignment",
"(c) Sentence-Alignment Figure 1: (a-b) show the computation of the target-to-source attention matrix used for the word alignment objective:",
"(a) Masked attention for source/target (blue/green) sentences on the l -th layer;",
"(b) Attention from y to x on the top layer.",
"(c) shows the separate encoding of source/target sentences for the sentence alignment objective.",
"ferent NLP tasks in the XTREME benchmark (Hu et al., 2020) including part-of-speech (POS) tagging, paraphrase classification, and sentence retrieval.",
"We show that AMBER obtains gains of up to 1.1 average F1 score on cross-lingual POS tagging, up to 27.3 average accuracy score on sentence retrieval, and achieves competitive accuracy in paraphrase classification when compared with the XLM-R-large model.",
"This is despite the fact that XLM-R-large is trained on data 23.8x as large 1 and has 3.2x parameters of AMBER .",
"This shows that compared to large amounts of monolingual data, even a small amount of parallel data leads to significantly better cross-lingual transfer learning.",
"This section describes three objectives for training contextualized embeddings.",
"We denote the monolingual and parallel data as M and P respectively.",
"Masked Language Modeling (MLM) A masked language modeling objective takes a pair of sentences x, y , and optimizes the prediction of randomly masked tokens in the concatenation of the sentence pair as follows: (cid:96) MLM ( x, y ) = E s [1 , | z | ] log P ( z s | z \\ s ) , (1) where z is the concatenation of the sentence pair z = [ x ; y ] , z s are the masked tokens randomly sampled from z , and z \\ s indicates all the other tokens except the masked ones.",
"In the standard monolingual setting, x, y are two contiguous sentences in a monolingual corpus.",
"In Conneau and Lample (2019), x, y are two sentences in different languages from a parallel cor-1 AMBER is trained on 26GB parallel data and 80GB monolingual Wikipedia data, while XLM-R-large is trained on 2.5TB monolingual CommonCrawl data.",
"Sentence Alignment Our first proposed objective encourages cross-lingual alignment of sentence representations.",
"For a source-target sentence pair ( x, y ) in the parallel corpus, we separately calculate sentence embeddings denoted as c x , c y by averaging the embeddings in the final layer as the sentence embeddings.",
"2 We then encourage the model to predict the correct translation y given a source sentence x .",
"To do so, we model the conditional probability of a candidate sentence y being the correct translation of a source sentence x as: P ( y | x ) = e c Tx c y (cid:80) y (cid:48) MP e c Tx c y (cid:48) .",
"where y (cid:48) can be any sentence in any language.",
"Since the normalization term in Eq.",
"(2) is intractable, we approximate P ( y | x ) by sampling y (cid:48) within a mini-batch B rather than M P .",
"We then define the sentence alignment loss as the average negative log-likelihood of the above probability: (cid:96) SA ( x, y ) = log P ( y | x ) .",
"Bidirectional Word Alignment Our second proposed objective encourages alignment of word embeddings by leveraging the attention mechanism in the Transformer model.",
"Motivated by the work on encouraging the consistency between the source-to-target and target-to-source translations (Cohn et al., 2016; He et al., 2016), we create two different attention masks as the inputs to the Transformer model, and obtain two attention matrices in the top layer of the Transformer model.",
"We compute the target-to-source attention matrix A y x as follows: 2 In comparison, mBERT encodes a sentence pair jointly, then uses the CLS token embedding to perform its next sentence prediction task.",
"g ly i = Attn ( Q = g l 1 y i , KV = g l 1 [ y <i ; x ] ; W l ) , (4) g lx j = Attn ( Q = g l 1 x j , KV = g l 1 x ; W l ) , (5) Attn ( QKV ; W ) = softmax ( QW q ( KW k ) T ) VW v (6) A y x [ i, j ] = g Ly i g Lx j .",
"(7) where g ly t is the embedding of the t -th word in y on the l -th layer, A y x [ i, j ] is the ( i, j ) -th value in the attention matrix from y to x , and W = { W q , W k , W v } are the linear projection weights for Q, K, V respectively.",
"We compute the source-to-target matrix A x y by switching x and y .",
"To encourage the model to align source and target words in both directions, we aim to minimize the distance between the forward and backward attention matrices.",
"Similarly to Cohn et al. (2016), we aim to maximize the trace of two attention matrices, i.e., tr ( A y xT A x y ) .",
"Since the attention scores are normalized in [0 , 1] , the trace of two attention matrices is upper bounded by min( | x | , | y | ) , and the maximum value is obtained when the two matrices are identical.",
"Since the Transformer generates multiple attention heads, we average the trace of the bidirectional attention matrices generated by all the heads denoted by the superscript h (cid:96) WA ( x, y ) = 1 1 HH (cid:88) h =1 tr ( A hy xT A hx y ) min( | x | , | y | ) .",
"(8) Notably, in the target-to-source attention in Eq (4), with attention masking we enforce a constraint that the t -th token in y can only perform attention over its preceding tokens y <t and the source tokens in x .",
"This is particularly useful to control the information access of the query token y t , in a manner similar to that of the decoding stage of NMT.",
"Without attention masking, the standard Transformer performs self-attention over all tokens, i.e., Q = K = g hz , and minimizing the distance between the two attention matrices by Eq.",
"(8) might lead to a trivial solution where W q W k .",
"Combined Objective Finally we combine the masked language modeling objective with the alignment objectives and obtain the total loss in Eq.",
"(9).",
"Notice that in each iteration, we sample a mini-batch of sentence pairs from M P .",
"Following the setting of Hu et al. (2020), we focus on zero-shot cross-lingual transfer where we fine-tune models on English annotations and apply the models to predict on non-English data.",
"Models : Table 1 shows details of models in comparison.",
"We adopt the same architecture as mBERT for AMBER .",
"Notably, AMBER , XLM-15 and Unicoder are trained on the additional parallel data, while the others are trained only on monolingual data.",
"Besides, XLM-R-base/large models have 2.6x/4.8x the parameters of AMBER and are trained on the larger CommonCrawl corpus.",
"We use a simple setting for our AMBER variants in the ablation study to show the effectiveness of our proposed alignment objectives without other confounding factors such as model sizes, hyper-parameters and tokenizations in different existing studies.",
"Pre-training : We train AMBER on the Wikipedia data for 1M steps first using the default hyper-parameters as mBERT 3 except that we use a larger batch of 8,192 sentence pairs, as this has proven effective in Liu et al. (2019).",
"We then continue training the model by our objectives for another 1M steps with a batch of 2,048 sentence pairs from Wikipedia corpus and parallel corpus which is used to train XLM-15 (Conneau and Lample, 2019).",
"We use the same monolingual data as mBERT and follow Conneau and Lample (2019) to prepare the parallel data with one change to maintain truecas-ing.",
"We set the maximum number of subwords in the concatenation of each sentence pair to 256 and use 10k warmup steps with the peak learning rate of 1e-4 and a linear decay of the learning rate.",
"We train AMBER on TPU v3 for about 1 week.",
"Cross-lingual Part-Of-Speech (POS) contains data in 13 languages from the Universal Dependencies v2.3 (Nivre et al., 2018).",
"PAWS-X (Yang et al., 2019b) is a paraphrase detection dataset.",
"We train on the English data (Zhang et al., 2019), and evaluate the prediction accuracy on the test set translated into 4 other languages.",
"XNLI (Conneau et al., 2018) is a natural language inference dataset in 15 languages.",
"We train models on the English MultiNLI training data (Williams et al., 2018), and evaluate on the other 14.",
"Tatoeba (Artetxe and Schwenk, 2019) is a testbed for parallel sentence identification.",
"We select the 14 non-English languages covered by our parallel data, and follow the setup in Hu et al. (2020) finding the English translation for a given a non-English sentence with maximum cosine similarity.",
"In Table 2, we show the average results over all languages in all the tasks, and show detailed results for each language in Appendix A.3.",
"First, we find that our re-trained mBERT ( AMBER with MLM) performs better than the publicly available mBERT on all the tasks, confirming the utility of pre-training BERT models with larger batches for more steps (Liu et al., 2019).",
"Second, AMBER trained by the word alignment objective obtains a comparable average F1 score with respect to the best performing model (Unicoder) in the POS tagging task, which shows the effectiveness of the word-level alignment in the syntactic structure prediction tasks at the token level.",
"Besides, it is worth noting that Unicoder is initialized from the larger XLM-R-base model that is pre-trained on a larger corpus than AMBER , and Unicoder improves over XLM-R-base on all tasks.",
"Third, for the sentence classification tasks, AMBER trained with our explicit alignment objectives obtain a larger gain (up to 2.1 average accuracy score in PAWS-X, and 3.9 average accuracy score in XNLI) than AMBER with only the MLM objective.",
"Although we find that AMBER trained with only the MLM objective falls behind existing XLM/XLM-R/Unicoder models with many more parameters, AMBER trained with our alignment objectives significantly narrows the gap of classification accuracy with respect to XLM/XLM-R/Unicoder.",
"Finally, for sentence retrieval tasks, we find that XLM-15 and Unicoder are both trained on additional parallel data, outperforming the other existing models trained only on monolingual data.",
"Using additional parallel data, AMBER with MLM and TLM objectives also significantly improves over AMBER Model POS PAWS-X XNLI Tatoeba mBERT (public) 68.5 86.2 65.4 45.6 XLM-15 68.8 88.0 72.6 77.2 XLM-100 69.5 86.4 69.1 36.6 XLM-R-base 68.8 87.4 73.4 57.6 XLM-R-large 70.0 89.4 79.2 60.6 Unicoder 71.7 88.1 74.8 72.2 AMBER (MLM) 69.8 87.1 67.7 52.6 AMBER (MLM+TLM) 70.5 87.7 70.9 68.2 AMBER (MLM+TLM+WA) 71.1 89.0 71.3 68.8 AMBER (MLM+TLM+WA+SA) 70.5 89.2 71.6 87.9 Table 2: Overall results on POS, PAWS-X, XNLI, Tatoeba tasks.",
"with the MLM objective by 15.6 average accuracy score, while combining our word-level alignment objective yields a marginal improvement over AMBER with MLM and TLM objectives.",
"However, adding the sentence-level alignment objective, AMBER trained by the combined objective can further improve AMBER with the MLM and word-level alignment objectives by 19.1 average accuracy score.",
"This confirms our intuition that the explicit sentence-level objective can effectively leverage the alignment supervision in the parallel corpus, and encourage contextualized sentence representations of aligned pairs to be close according to the cosine similarity metric.",
"In Figure 2, we investigate the improvement of the alignment objectives over the MLM objective on low-resourced and high-resourced languages, by computing the performance difference between AMBER trained with alignment objectives and AMBER (MLM).",
"First, we find that AMBER trained with alignment objectives significantly improves the performance on languages with relatively small amounts of parallel data, such as Turkish, Urdu, Swahili, while the improvement on high-resourced languages is marginal.",
"Through a further analysis (Appendix A.3), we observe that AMBER (MLM) performs worse on these low-resourced and morphologically rich languages than on high-resourced Indo-European languages, while AMBER trained with alignment objectives can effectively bridge the gap.",
"Moreover, AMBER trained with our word-level alignment objective yields the highest improvement on these low-resourced languages on the POS task, and AMBER trained with sentence-level alignment performs the best on XNLI.",
"Methods en bg de el es fr Avg.",
"Cao et al. (2020) 80.1 73.4 73.1 71.4 75.5 74.5 74.7 AMBER (full) 84.7 74.3 74.2 72.5 76.9 76.6 76.5 Table 3: F1 scores of AMBER trained with all objectives and Cao et al. (2020) on 6 languages on XNLI.",
"Recent studies (Cao et al., 2020; Wang et al., 2020) have proposed to use a bilingual dictionary to align cross-lingual word representations.",
"Compared with these methods, our word-level alignment objective encourages the model to automatically discover word alignment patterns from the parallel corpus in an end-to-end training process, which avoids potential errors accumulated in separate steps of the pipeline.",
"Furthermore, an existing dictionary may not have all the translations for source words, especially for words with multiple senses.",
"Even if the dictionary is relatively complete, it also requires a heuristic way to find the corresponding substrings in the parallel sentences for alignment.",
"If we use a word alignment tool to extract a bilingual dictionary in a pipeline, errors may accumulate, hurting the accuracy of the model.",
"Besides, Wang et al. (2020) is limited in aligning only fixed contextual embeddings from the model's top layer.",
"Finally, we also compare AMBER trained with all the objectives and Cao et al. (2020) on a subset of languages on XNLI in Table 3.",
"We find that our full model obtains a gain of 1.8 average F1 score.",
"While cross-lingual alignment is a long-standing challenge dating back to the early stage of research in word alignment (Brown et al., 1993), cross-lingual embeddings (Faruqui and Dyer, 2014;",
"Xing et al., 2015; Devlin et al., 2019; Conneau et al., 2020a) are highly promising in their easy integration into neural network models for a variety of cross-lingual applications.",
"Analysis studies on recent cross-lingual contextualized representations (Pires et al., 2019; Wu and Dredze, 2019; Hu et al., 2020; Siddhant et al., 2020) further demonstrates this advantage for zero-shot cross-lingual transfer in a representative set of languages and tasks.",
"In particular to improve cross-lingual transfer, some attempts directly leverage multilingual parallel corpus to train contextualized representations (McCann et al., 2017; Eriguchi et al., 2018; Conneau and Lample, 2019; Huang et al., 2019) with the hope of aligning words implicitly.",
"The other line of work uses word alignments from parallel corpora as the alignment supervision in a posthoc fashion (Cao et al., 2020; Wang et al., 2020).",
"Notably, AMBER does not rely on any word alignment tools, and explicitly encourage the correspondence both on the word and sentence level.",
"In this paper, we demonstrate the effectiveness of our proposed explicit alignment objectives in learning better cross-lingual representations for downstream tasks.",
"Nonetheless, several challenging and promising directions can be considered in the future.",
"First, most existing multilingual models tokenize words into subword units, which makes the alignment less interpretable.",
"How to align a span of subword units with meaningful semantics at the phrase level deserves further investigation.",
"Second, several studies (Ghader and Monz, 2017; Li et al., 2019) have shown that attention may fail to capture word alignment for some language pairs, and a few works (Legrand et al., 2016; Alkhouli et al., 2018) proposed neural word alignment to improve the word alignment quality.",
"Incorporating such recent advances into the alignment objective is one future direction.",
"Third, how to fine-tune a well-aligned multilingual model on English annotations without catastrophic forgetting of the alignment information is a potential way to improve cross-lingual generalization on the downstream applications.",
"We'd like to thank Yinfei Yang and Wei-Cheng Chang for answering our questions on the data and code.",
"JH and GN were supported in part by a Google Faculty Award, and NSF Award #1761548."
] | [
"abstain",
"abstain",
"objective",
"abstain",
"method",
"abstain",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other"
] |
[
"Neural generative models have achieved promising performance on dialog generation tasks if given a huge data set.",
"However, the lack of high-quality dialog data and the expensive data annotation process greatly limit their application in real-world settings.",
"We propose a paraphrase augmented response generation (PARG) framework that jointly trains a paraphrase model and a response generation model to improve the dialog generation performance.",
"We also design a method to automatically construct paraphrase training data set based on dialog state and dialog act labels.",
"PARG is applicable to various dialog generation models, such as TSCP (Lei et al., 2018) and DAMD (Zhang et al., 2019).",
"Experimental results show that the proposed framework improves these state-of-the-art dialog models further on CamRest676 and MultiWOZ.",
"PARG also significantly outperforms other data augmentation methods in dialog generation tasks, especially under low resource settings.",
"1 2 1 Introduction Task-oriented dialog systems that are applied to restaurant reservation and ticket booking have attracted extensive attention recently (Young et al., 2013; Wen et al., 2017; Bordes et al., 2016; Eric and Manning, 2017).",
"Specifically, with the progress on sequence-to-sequence (seq2seq) learning (Sutskever et al., 2014), neural generative models have achieved promising performance on dialog response generation (Zhao et al., 2017; Lei et al., 2018; Zhang et al., 2019).",
"internship at University of California, Davis.",
"1 This work is supported by NSFC (No.61976122), Ministry of Education and China Mobile joint funding (No.MCM20170301).",
"However, training such models requires a large amount of high-quality dialog data.",
"Since each dialog is collected through a human-human or human-machine interaction, it is extremely expensive and time-consuming to create large dialog dataset covering various domains (Budzianowski et al., 2018).",
"After dialogs are collected, we also need to annotate dialog states and dialog acts, which are then used to train language understanding models and learn dialog policy.",
"Hiring crowd-sourcing workers to perform these annotations is very costly.",
"Therefore, we propose automated data augmentation methods to expand existing well-annotated dialog datasets, and thereby train better dialog systems.",
"We propose to augment existing dialog data sets through paraphrase.",
"Paraphrase-based data-augmentation methods have been proved to be useful in various tasks, such as machine translation (Callison-Burch et al., 2006), text classification (Zhang et al., 2015), question answering (Fader et al., 2013) and semantic parsing (Jia and Liang, 2016).",
"All these approaches first find a set of semantically similar sentences.",
"However, finding isolated similar sentences are not enough to construct a dialog utterances' paraphrase.",
"Because an utterance's paraphrase must fit the dialog history as well.",
"For example, when the system says Do you prefer a cheap or expensive restaurant? , the user may state his intent of asking for a cheap restaurant by Cheap please. or Could you find me a cheap restaurant? .",
"However, the latter is obviously an improper response which is not coherent with the system question.",
"In other words, a paraphrased dialog utterance needs to serve the same function as the original utterance under the same dialog context.",
"Therefore, we propose to construct dialog paraphrases that consider dialog context in order to improve dialog generation quality.",
"framework that jointly optimizes dialog paraphrase and dialog response generation.",
"To obtain dialog paraphrases, we first find all the user utterances that serve the same function in different dialogs, such as different ways of asking for Italian food.",
"Then we select the utterances that have the same semantic content but different surface form, to construct a high-quality dialog paraphrase corpus.",
"The corpus is then used to train a paraphrase generation model to generate additional user utterances.",
"Finally, the augmented dialog data is used to train a response generation model.",
"We leverage the multistage seq2seq structure (Lei et al., 2018; Zhang et al., 2019) for both paraphrase and response generation.",
"Moreover, these two models are connected through an additional global attention (Bahdanau et al., 2014) between their decoders, so they can be optimized jointly during training.",
"In our experiments, we apply our framework on two state-of-the-art models, TSCP (Lei et al., 2018) and DAMD (Zhang et al., 2019) on two datasets CamRest676 (Wen et al., 2017) and MultiWOZ (Budzianowski et al., 2018), respectively.",
"After applying our framework, the response generation models can generate more informative responses that significantly improves the task completion rate.",
"In particular, our framework is extremely useful under low-resource settings.",
"Our paraphrase augmented model only needs 50% of data to obtain similar performance of a model without paraphrase augmentation.",
"Our proposed method also outperforms other data augmentation methods, and its comparative advantage increases in settings where only a small amount of training data is available.",
"Data Augmentation has been used in various machine learning tasks, such as object detection (Red-mon et al., 2016) and machine translation (Fadaee et al., 2017).",
"It aims to expand training data to improve model performance.",
"In computer vision, many classical data augmentation methods such as random copy (Krizhevsky et al., 2012) and image pair interpolation (Zhang et al., 2017) have been widely used.",
"However, those approaches are not applicable for natural language processing since language is not spatially invariant like images.",
"The word order in a sentence impacts its semantic meaning (Zhang et al., 2015).",
"Therefore, human language augmentation methods aim to generate samples with the same semantic meaning but in different surface forms.",
"Such an idea led to recent augmentation work on the language understanding task (Hou et al., 2018; Kim et al., 2019; Yoo et al., 2019; Zhao et al., 2019).",
"However, there is no data augmentation work on task-oriented dialog generation.",
"Paraphrase is the technique that generates alternative expressions.",
"Most of the existing work on paraphrase aims to improve the quality of generated sentences.",
"For example, phrase dictionary (Cao et al., 2017) and semantic annotations (Wang et al., 2019) are used to assist the paraphrase model to improve the language quality.",
"To make a controllable paraphrase model, syntactic information (Iyyer et al., 2018) is also adopted.",
"And, recently, different levels of granularity (Li et al., 2019b) are considered to make paraphrase decomposable and interpretable.",
"In this paper, we utilize a language environment to assist paraphrase, and use paraphrase as a tool to augment the training data of dialog systems.",
"In this section, we first introduce how to construct a paraphrase dataset to train paraphrase generation models.",
"Then we describe the work flow of the proposed PARG model.",
"We propose a three-step procedure to find dialog utterances that are a paraphrase of each other.",
"First, we perform delexicalization to pre-process dialog utterances to reduce the surface form language variability.",
"Then for each user utterance, we match the utterances in other dialogs that play the same function to find its paraphrase candidates.",
"Finally, we filter out unqualified paraphrases which have a low semantic similarity or a low surface form diversity comparing to the original utterance.",
"Similar to the delexicalization process introduced in Henderson et al. (2014), we replace the slot values in each utterance by their slot name indicator.",
"For example, the user utterance I want a cheap restaurant. is delexicalized as I want a [pricerange] restaurant. .",
"The slot values can be dropped since their varieties only influence the database search results but have no impact on how the dialog progresses.",
"In other words, no matter whether the user is asking for a cheap or an expansive restaurant, he represents the same intent of requesting a restaurant with a specific price range Dialog Function:",
"in the dialog.",
"Therefore through delexicalization, the language variations brought by numerous slot values can be reduced, thus it is easier to find paraphrases.",
"After delexicalization, we find utterances that play the same role or serve the same dialog function in different dialogs.",
"We denote the dialog function of turn t as DF t .",
"It consists of three types of information: 1) current dialog domain D t , 2) slots mentioned S t in the current turn, and 3) system's dialog act A t 1 in the previous turn, which is formulated as: DF t = ( D t , S t , A t 1 ) (1) The slots mentioned represent the key information towards task completion, which is the most important information to determine the function of the utterance.",
"The dialog domain is included in the function to avoid ambiguities brought by slots that shared across different domains.",
"For example, asking for the location of a hotel is different from asking for a restaurant.",
"The previous system act is considered to ensure a coherent dialog context, since each turn's user utterance is a reply to the previous system response.",
"Fig.1 gives out an example of dialog function.",
"For each user utterance in the dialog dataset, we go through all the available training data and find all utterances with the same dialog function as paraphrase candidates of it.",
"As each utterance may have many paraphrase candidates, we only keep the high-quality paraphrase pairs that are similar in semantic but different in surface form.",
"We use the BLEU (Papineni et al., 2002) score and the diversity score proposed in Hou et al. (2018) to evaluate the paraphrase qual-Utterance Filter ActionDecoder ContextEncoder ParaphraseDecoder Belief Span Decoder ContextEncoder ResponseDecoder 1 & 1 & AugmentedDialog Corpus Original Dialog Corpus Paraphrase Generation Model Response Generation Model Attention 1 & Copy Copy Attention Copy Copy Copy Figure 2: Overview of our Paraphrase Augmented Response Generation (PARG) framework.",
"ity.",
"Specifically, if the BLEU score is too low (below 0.2 in our experiments), we consider the paraphrase pair as semantically irrelevant and filter it out.",
"If the diversity score is too low (below 3.4 in our experiments), we also filter out the paraphrase pair since it is too alike in terms of surface form language.",
"For those utterances that do not have any paraphrases after filtering, we gradually reduce the filter threshold of diversity score until each of them matches a paraphrase.",
"Figure 2 shows an overview of our paraphrase based data augmentation framework.",
"It consists of a paraphrase generation model, a low-quality paraphrase filter and a response generation model.",
"We describe each module in detail below.",
"Paraphrase Generation Model.",
"Our paraphrase generation model has a seq2seq architecture with a context encoder and two decoders for action decoding and paraphrase decoding.",
"The context encoder takes the concatenation of previous system response R t 1 and current user utterance U t as input and encodes them into hidden states.",
"Then the hidden states are used to decode the previous system action A t 1 , where the system action is also a sequence of tokens that first introduced in Zhang et al. (2019).",
"Finally the paraphrase decoder decodes the paraphrase U pt based on the hidden states of both the encoder and the action decoder.",
"where h A t 1 denotes the hidden states of the action decoder.",
"We leverage copy mechanism (Gu et al., 2016) to copy words from input utterances to previous system action and paraphrase.",
"The action decoding process is used to help paraphrase decoding through an attention connection between the decoders, whose significance lies in improving dialog context awareness.",
"Paraphrase Filter.",
"We then send the generated paraphrase into a filter module to determine if it qualifies as an additional training instance.",
"We aim to keep paraphrases that can serve the same dialog function with the original utterance.",
"So we filter out paraphrases that did not include all of the slots mentioned in the original utterance.",
"Besides, we also filter out paraphrases that have a different meaning and/or a similar surface form compared to the original utterance by the same way in our paraphrase data construction process.",
"We still use 0.2 and 3.4 as the thresholds for BLEU and diversity score respectively in our experiments.",
"Response Generation Model.",
"We use two state-of-the-art seq2seq model, TSCP (Lei et al., 2018) and DAMD (Zhang et al., 2019) for single domain and multi-domain response generation respectively.",
"We will describe the workflow of our framework based on the TSCP model, as shown in Fig.2.",
"For DAMD the process is similar since the only difference between these two models is that DAMD has an additional action span decoder between the belief span decoder and the response decoder.",
"The model input is the concatenation of the current user utterance U (cid:48) t , the previous belief span B t 1 (slots mentioned by user) and the system response R t 1 , where U (cid:48) t is either the original user utterance U t or its paraphrase U pt generated by the paraphrase generation model.",
"The model is a two-stage decoding network, where the belief span and system response are decoded sequentially using the copy mechanism.",
"Specifically, we introduce an attention connection between the paraphrase decoder and the belief span decoder to allow the gradient in the response generation model to back-propagate to the paraphrase generation model.",
"So the response generation model can guide the paraphrase decoder to generate better paraphrases and vice versa.",
"This process can be formulated as: h B t = Seq2Seq( R t 1 , U (cid:48) t , B t 1 | h U pt ) (4) R t = Seq2Seq( R t 1 , U (cid:48) t | h B t ) (5) where h B t and h U pt denote the hidden states of the belief span decoder and paraphrase decoder respectively.",
"Training and Evaluation.",
"The model is joint optimized through supervised learning.",
"Specifi-cally, the system action labels, the paraphrase data (collected through the process introduced in the previous section), the dialog state labels and the reference responses are used to calculate the cross-entropy loss of the four decoders, denoted as loss a , loss p , loss b , and loss r , respectively.",
"Then we calculate the sum of all the losses and perform gradient descent for training.",
"The total loss function for training are formulated as: loss = loss a + loss p + loss b + loss r (6) Note that we only augment user utterance as additional input utterances during training.",
"We alternatively use the original U t and generated U pt as input to the response generation model, while other elements such as belief spans and responses remain the same.",
"Since both decoders are forced to recognize more user expressions, the language understanding and response generation performance improve simultaneously.",
"If the generated U pt is in low quality and filtered out, only the original U t is used to train the response generation model in that turn.",
"This often happens at the beginning of training when the paraphrase model is under-fitting.",
"During testing, only the ground truth user utterances are used as input.",
"However, we still utilize the paraphrase generation model to compute attention between the paraphrase decoder and the belief span decoder.",
"This is because we believe that the paraphrase decoding process can help the belief span decoding process since it provides additional explanations of the user utterance.",
"We conduct our experiments based on two datasets, CamRest676 (Wen et al., 2017) and MultiWOZ",
"(Budzianowski et al., 2018).",
"Dialogs in both are collected through crowd-sourcing on the Amazon Mechanical Turk platform.",
"Besides experiments on the full datasets, we also conduct experiments using only 20% or 50% of dialog data for training to evaluate the promotion through data augmentation under low-resource settings.",
"CamRest676 is a single domain dataset consisting of dialogs about restaurant reservation.",
"The dataset has 676 dialogs which are split into training, development and testing set by the ratio of 3:1:1.",
"The average number of turns is 4.06.",
"There are 3 slot types and 99 allowable values in the task ontology.",
"We use three metrics for evaluation following Lei et al. (2018).",
"Entity Match Rate (EMR) is the proportion that the system capture the correct user goal.",
"Success F1 (Succ.F1) score measures whether the system can provide correct information requested by user.",
"While these two metrics are used for evaluating system's task completion ability, we use BLEU (Papineni et al., 2002) to evaluate the language fluency of generated responses.",
"MultiWOZ is a challenging large-scale multi-domain dataset proposed recently (Budzianowski et al., 2018).",
"It consists of dialogs between tourists and clerks at an information center, across seven domains including hotel, restaurant, train, etc.",
"There are 8433/1000/1000 dialogs in training, development and testing set respectively, and the number of turn is 6.85 on average.",
"Meanwhile, MultiWOZ has a complex ontology with 32 slot types and 2,426 corresponding slot values.",
"We use the evaluation metrics proposed by Budzianowski et al. (2018), which are how often the system provides an correct entity ( inform rate ) and answers all the requested information ( success rate ), and how fluent the response is ( BLEU ).",
"We also report a combined score computed via ( Inform + Success ) 0 .",
"5 + BLEU for overall quality measure as suggested in (Mehri et al., 2019).",
"We use a one-layer, bi-directional GRU as the context encoder and two standard GRU as the action decoder and paraphrase decoder.",
"The embedding size and hidden size are both 50 on CamRest676 and 100 for MultiWOZ.",
"The copy mechanism and attention connection are added as shown in Fig.2.",
"For the response generation model, we leverage the state-of-the-art model on each dataset, which is the Two-stage Copy Net (TSCP) (Lei et al., 2018) for CamRest676 and Domain Aware Multi-Decoder (DAMD) (Zhang et al., 2019) for MultiWOZ.",
"We use the model structures that follow the default settings in the open source implementation of TSCP 3 and DAMD 4 .",
"We use the the Adam optimizer (Kingma and Ba, 2014) with a learning rate of 0.003 and 0.005 for CamRest676 and MultiWOZ, respectively.",
"We halve the learning rate when the total loss of our model on development set does not reduce in three consecutive epochs, and we stop the training when the total loss does not reduce in five consecutive epochs.",
"We set the learning rate to 0.0001 and the decay parameter to 0.8 during reinforcement fine tuning in TSCP.",
"We compare the proposed method with five other data augmentation methods, three of which are based on text replacement and the other two are based on neural paraphrase generation models.",
"WordSub denotes the rare word substitution method proposed by Fadaee et al. (2017).",
"It generates new sentences by replacing common words with rare ones.",
"A bi-directional LSTM language model is trained to select the proper substitution words.",
"We do not substitute key words associated with slot values to maintain the dialog function of utterances.",
"TextSub denotes the text span replacement method proposed by Yin et al. (2019).",
"It replaces a sequence of tokens (text span) by their paraphrase candidates from the lexicon database (PPDB (Pavlick et al., 2015)).",
"The selection of text spans is based on a policy network, which is trained jointly with the belief span decoder through reinforcement learning.",
"The slot values are also fixed with the same purpose as in WordSub.",
"UtterSub denotes the simple utterance replacement augmentation.",
"We use the paraphrases obtained in dialog dataset as new training samples directly instead of training the paraphrase model to generate new samples.",
"3 https://github.com/WING-NUS/sequicity 4 https://gitlab.com/ucdavisnlp/damd-multiwoz",
"et al. (2019a), injects random noise to the encoder's hidden states to improve generation varieties, which has proven to be effective in (Kurata et al., 2016).",
"For model implementation, we use the same GRU nets as in our paraphrase model.",
"And we multiply perturbations, sampled from the uniform distribution between 0.6 and 1.4, to the encoder's hidden states when generating paraphrases.",
"SRPara denotes a paraphrase model with SR-PB (Wang et al., 2019) structure.",
"In this structure, a semantic parser SLING (Ringgaard et al., 2017) is used to analyze the semantic frame of an utterance and the semantic role of each token in it.",
"Then the sequences of token, semantic frame labels and semantic role labels are fed into three parallel encoders separately.",
"The outputs of the three encoders are projected through a linear layer, and then sent to a decoder to generate the paraphrase.",
"The implementation of encoders and the decoder is the same as NAEPara.",
"We utilize the same dataset (CamRest676 or MultiWOZ) to train all the models for fair comparison.",
"Specifically, we use all the user utterances in the training corpus of CamRest676 or MultiWOZ to train the LSTM language model of WordSub and the policy network of TextSub.",
"And we use the same paraphrase data constructed in 3.1 to train the paraphrase models in NAEPara and SRPara.",
"The experimental results on CamRest676 and MultiWOZ are shown in Table 1 and Table 2, respectively.",
"In both tables, the first line is the baseline results without data augmentation, the second to sixth lines are results obtained by different data augmentation methods (substitution-based or paraphrase-based), and the last line is the performance of our proposal.",
"The results are grouped into three columns according to the size of training data (20%/50%/full).",
"We observe some common conclusions supported by the experimental results on both datasets.",
"First, our proposed data augmentation framework significantly improves the system's task completion ability (EMR, Succ.F1, Info and Succ) consistently without harming the language fluency.",
"This indicates that incorporating additional dialog paraphrases is beneficial for learning informative responses, since more user expressions are seen by the model during training.",
"Secondly, our framework outperforms other data augmentation methods in terms of dialog task relevant metrics under all circumstances.",
"In particular, paraphrase based methods are more likely to produce more fluent and informative responses than local substitution methods (WordSub and TextSub), because neural generative models consider dialog history to generate more coherent utterances.",
"The improvement of PARG over UtterSub suggests that our paraphrase generation model provides a more robust way of utilizing the additional information contained in paraphrases.",
"Our paraphrase generation model outperforms other paraphrase based methods (NAEPara and SRPara) since the decoding process of previous system action and the gradient back-propagation through the belief span decoder provide strong dialog context information for paraphrase generation.",
"Thirdly, the less data is available, the more improvement can be achieved through our data augmentation.",
"It is worth noting that after applying PARG, the model trained on only 50% data obtain comparable results to the model trained on the full dataset without data augmentation, in terms of task relevant metrics.",
"The similar results are also observed by comparing the models trained on 20% data with augmentation and 50% data without augmentation.",
"This indicates that our method is of great significance under low resource settings.",
"PARG sometimes gets a slightly lower BLEU score compared to other methods.",
"This is potentially because that although seq2seq models can learn responses which corresponding to a correct action, the surface language can still vary among training and testing utterances due to the natural variety of human languages.",
"Therefore, the BLEU score, which measures the likeness of surface language, may drop despite the system generate good functional responses.",
"We also observe some diverse results on CamRest676 and MultiWOZ.",
"Under the full data setting, the improvement gained by our data augmentation method on CamRest676 is lower than on MultiWOZ, since the single domain task in CamRest676 is easy and the data is enough for model training without conducting augmentation.",
"While for MultiWOZ, due to large language variations and the complex ontology, the utterance space is not well-explored, thus the response generation process can benefit more through incorporating additional dialog data.",
"In this section we investigate the function of each component in our paraphrase augmented response generation framework.",
"In particular, we discard 1) the act decoder (PARG w/o Act), 2) the utterance filter (PARG w/o Filt) or 3) joint training (PARG w/o Join) one at a time, then do model training and evaluation on the full MultiWOZ dataset.",
"The results are shown in Table 3.",
"We observe that removing the utterance filter brings the biggest drop in response quality in terms of combined score (-0.039).",
"This suggests the importance of using only high-quality paraphrases to train the response generation model, because the ill generation utterances will introduce errors to the downstream model.",
"The model also suffers from a performance drop (-0.028) after removing the previous system action decoder, which indicates that the supervision from previous system action labels is beneficial for generating better paraphrases.",
"Finally, we train the paraphrase generation model and response generation model separately and observe a slight drop of combined score (-0.010).",
"This is because through the attention connection between the paraphrase decoder and belief span decoder, the loss computed for response generation can also guide the paraphrase generation model to generate paraphrases that directly benefit to the response generation process.",
"Although the improvement is relatively marginal, joint training has additional advantages in simplifying the training process.",
"Specifically, we only need to conduct a single run of training and optimize a single set of hyperparameters.",
"We conduct several case studies to illustrate the response generation quality, paraphrase generation quality, as well as errors made by our model.",
"Table 4 compares the dialog state and system response generated by the original model TSCP to those generated by PARG.",
"We investigate the results from both the 50% and the full scale CamRest676 experiments, to further show our frame-work's superiority in low resource scenarios.",
"On full training data, TSCP and PARG both generate correct dialog state slots.",
"However, TSCP generates a wrong question Would you like something different?, as if no restaurant satisfies the user's request.",
"While PARG generates an appropriate question Anything else you need? to ask the user for further requests about the recommended restaurant.",
"[Table residue -- case-study user utterance: Can you help me find a restaurant in the south that doesn't cost a lot of money.]",
"[Table 5 residue -- train domain: Previous Response: What time would you like to leave from norwich? | Slots Mentioned: leave | Original Utterance: I would like to leave at 14:45. What is the price? | Previous System Act: request-leave | Matched Paraphrase: 14:45, please. What is the duration of the train ride? ; hotel domain: Previous Response: Acorn Guest House is available if that works for you. | Slots Mentioned: parking | Original Utterance: That is good. And I need a free parking, does it have? | Previous System Act: inform-name | Matched Paraphrase: This place is fine. Is it near a hotel with free parking?]",
"When we reduce the training data to half, TSCP generates wrong dialog state slots, and therefore recommends an expensive restaurant.",
"But PARG does not suffer from this problem and generates a correct response.",
"This example suggests that PARG can effectively improve the quality of dialog generation in low resource settings.",
"Although our paraphrase augmented data augmentation framework shows a notable superiority on the dialog generation quality, it still has some limitations.",
"Table 5 shows some errors that PARG made in our paraphrase data construction process.",
"In the first case, the question What is the price? raised by the original utterance doesn't match the question What is the duration of the train ride? in the paraphrase.",
"This error occurs because we do not have user act labels in the dialog datasets.",
"Defining the dialog function of a user utterance more precisely by adding its user act could solve this problem.",
"Another source of paraphrase incoherence is the switch of dialog domains in multi-domain dialogs.",
"In the example, the word place in the paraphrase refers to another site irrelevant to the hotel in the previous system response, which might be an attraction or a restaurant.",
"The domain of the previous turn should also be included in the dialog function to provide more domain information; we regard this as a potential solution for this issue.",
"We also compare the utterances generated by different data augmentation methods to show the superiority of PARG in terms of paraphrase generation quality.",
"We select TextSub and SRPara for comparison, since they are the best replacement-based and paraphrase-based methods achieving the highest combined scores on MultiWOZ respectively.",
"Table 6 shows an example of paraphrases generated by the three methods.",
"We find that the paraphrase generated by TextSub is of poor quality because it is ungrammatical, while the paraphrase generated by SRPara is fluent and semantically similar to the original utterance.",
"However, the paraphrase generated by our proposed PARG has higher quality.",
"It flexibly changes the rare word inexpensive to the common word cheap, which enlarges the surface-form diversity.",
"The high-quality paraphrases can give better guidance to the downstream response generation model, which explains the significant improvement in terms of task completion rate obtained by PARG.",
"We conduct human evaluation to further illustrate PARG's superiority in terms of paraphrase generation.",
"We use one-to-one comparison to evaluate the relative quality of paraphrases generated by PARG versus strong baselines (NAEPara and SRPara).",
"In our experiments, we instruct the judges to evaluate the quality of a paraphrase according to how well its user intent matches that of the original utterance.",
"We sample one hundred dialog turns.",
"In each turn, the paraphrase generated by PARG is compared one-to-one with each baseline's paraphrase by five judges.",
"Specifically, we ask the judges to choose whether the paraphrase generated by PARG is of better, equal or worse quality than the paraphrase generated by NAEPara or SRPara, given the original utterance.",
"The results are shown in Table 7.",
"We report the percentage of different choices made by the judges in each one-to-one comparison, i.e., the percentage of cases in which PARG generates better (Better%), equal (Equal%), or worse (Worse%) paraphrases.",
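The Better%/Equal%/Worse% figures are simple tallies of judge choices. A minimal sketch of that tally (the `better`/`equal`/`worse` labels and function name are illustrative, not the authors' evaluation code):

```python
from collections import Counter

def comparison_percentages(judgments):
    """Tally one-to-one human judgments ('better'/'equal'/'worse')
    into the Better%/Equal%/Worse% figures reported in the paper."""
    counts = Counter(judgments)
    total = len(judgments)
    return {k: 100.0 * counts[k] / total for k in ("better", "equal", "worse")}

# Five hypothetical judge votes for one dialog turn.
votes = ["better"] * 3 + ["equal"] + ["worse"]
print(comparison_percentages(votes))
# → {'better': 60.0, 'equal': 20.0, 'worse': 20.0}
```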
"We observe that PARG generates better paraphrases in a large proportion of cases, whether compared to NAEPara or SRPara.",
"This suggests that PARG outperforms both NAEPara and SRPara in terms of paraphrase generation quality, which further proves that the dialog data augmented by PARG can provide better guidance to the response generation tasks.",
"In this paper, we propose to use dialog paraphrase as data augmentation to improve the response generation quality of task-oriented dialog systems.",
"We define the paraphrase of a dialog utterance and design an approach to construct a paraphrase dataset from a dialog corpus.",
"We propose a Paraphrase Augmented Response Generation (PARG) framework, which consists of a paraphrase generation model, an utterance filter and a response generation model, where the models are trained jointly to take full advantage of the paraphrase data for better response generation performance.",
"Our framework achieves significant improvements when it is applied to state-of-the-art response generation models on two datasets.",
"It also outperforms other data augmentation methods, especially under low-resource settings."
] | [
"abstain",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"abstain",
"objective",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"abstain",
"abstain",
"objective",
"objective",
"result",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"other",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"objective",
"method",
"objective",
"result",
"abstain"
] |
[
"A video-grounded dialogue system is required to understand both dialogue, which contains semantic dependencies from turn to turn, and video, which contains visual cues of spatial and temporal scene variations.",
"Building such dialogue systems is a challenging problem, involving various reasoning types on both visual and language inputs.",
"Existing benchmarks do not have enough annotations to thoroughly analyze dialogue systems and understand their capabilities and limitations in isolation.",
"These benchmarks are also not explicitly designed to minimise biases that models can exploit without actual reasoning.",
"To address these limitations, in this paper, we present DVD , a D iagnostic Dataset for V ideo-grounded D ialogues .",
"The dataset is designed to contain minimal biases and has detailed annotations for the different types of reasoning over the spatio-temporal space of video.",
"Dialogues are synthesized over multiple question turns, each of which is injected with a set of cross-turn semantic relationships.",
"We use DVD to analyze existing approaches, providing interesting insights into their abilities and limitations.",
"In total, DVD is built from 11 k CATER synthetic videos and contains 10 instances of 10 -round dialogues for each video, resulting in more than 100 k dialogues and 1 M question-answer pairs.",
"Our code and dataset are publicly available 1 .",
"Research in visual question answering (VQA) aims to develop intelligent systems that can reason and answer questions about visual information.",
"Recently, many QA benchmarks have been proposed to extend the visual information from the image to the video domain (Jang et al., 2017; Lei et al., 2018; Zadeh et al., 2019).",
"While image QA problems require a system to learn cross-modality interaction, video QA problems go beyond and capture visual information with temporal variance.",
"As an orthogonal extension from VQA problems, another line of research investigates image/video QA in a dialogue setting (Das et al., 2017; Seo et al., 2017; De Vries et al., 2017; Chattopadhyay et al., 2017; Alamri et al., 2019).",
"In this problem, questions about a given video or image are positioned in a multi-turn dialogue.",
"In each dialogue turn, a question usually exhibits different types of cross-turn relations to other questions in prior dialogue turns, such as object co-reference and topic alignment.",
"In this work, we investigate the problem of multi-turn video question answering (QA), also known as video-grounded dialogue.",
"Numerous approaches to video-grounded dialogue have shown remarkable performance in building intelligent multimodal systems (Hori et al., 2019; Schwartz et al., 2019; Le et al., 2019; Li et al., 2020; Le et al., 2020).",
"However, most of these methods exhibit marginal performance gain, and our ability to understand their limitations is impeded by the complexity of the task.",
"Existing benchmarks are not designed with enough information to determine whether current approaches are capable of sophisticated reasoning and not just exploiting biases, which has been a common concern in vision-language systems (Agrawal et al., 2016; Goyal et al., 2017; Qi et al., 2020).",
"To address the limitations of existing benchmarks and analyze dialogue systems more efficiently, we propose DVD , a D iagnostic Dataset for V ideo-grounded D ialogues .",
"We demonstrate an example dialogue in DVD in Figure 1.",
"From scene graphs and object action annotation of a CATER video (Girdhar and Ramanan, 2020), we simulate questions based on reasoning structures, also known as functional programs in CLEVR (Johnson et al., 2017).",
"Compared to CLEVR, we introduced 17 novel functional modules, designed for video and dialogue input components.",
"As illustrated in Figure 1, at each dialogue turn, a DVD question tests dialogue systems to perform different types of reasoning on videos, such as action recognition and spatio-temporal reasoning.",
"Across turns, we generate questions to be related to each other by incorporating different types of semantic relationships, including: (1) temporal relation, which requires a system to learn to localize different temporal segments of the video from turn to turn; (2) object reference, which requires a system to resolve visual objects mentioned throughout the dialogue history in either short-term references (pronouns) or long-term references (e.g. the earlier mentioned large object); and (3) topic transfer, which requires a system to maintain a memory of the last question turn to solve the question in the current turn.",
"On DVD, we trained a set of baseline methods and analyzed the results by several aspects of visual and linguistic complexity (Section 4).",
"We found that these methods struggle on questions requiring both video temporal and spatial localization.",
"They are also vulnerable to long-term reasoning in both videos and dialogues as they are not designed to track active visual objects or relevant video segments throughout dialogue context.",
"We hope the DVD dataset will lead to new research avenues to develop intelligent systems capable of complex reasoning on video and dialogue medium (further discussion in the Supplementary Material).",
"We compared DVD to existing datasets from the following four angles: 1) Vision-linguistic.",
"Vision-linguistic understanding benchmarks have been proposed, including captioning (Farhadi et al., 2010; Lin et al., 2014; Rohrbach et al., 2015), phrase grounding or object reference (Kazemzadeh et al., 2014; Plummer et al., 2015), scene graph learning (Krishna et al., 2017), and text-to-clip (Anne Hendricks et al., 2017).",
"Our benchmark, DVD, is more related to VQA in which a visual input is given and a system is required to answer a question about this input (Antol et al., 2015; Zhu et al., 2016; Jang et al., 2017; Lei et al., 2018).",
"Another related line of research is the research of navigation systems in a physical environment (Gordon et al., 2018; Wijmans et al., 2019).",
"Compared to the prior benchmarks, one major difference of DVD is the extension of single-turn interaction to a multi-turn human-machine dialogue.",
"2) Visually-grounded Dialogue.",
"Extended from the vision-linguistic understanding research, this line of research focuses on answering questions sequentially positioned over multiple turns (De Vries et al., 2017; Das et al., 2017; Chattopadhyay et al., 2017; Hori et al., 2019; Thomason et al., 2019).",
"A system has to understand the dialogue context and resolve cross-turn semantic dependencies.",
"However, due to the complexity of the tasks, involving cross-modality and cross-turn information, prior benchmarks are often subject to biases that models can exploit without actual reasoning (Qi et al., 2020).",
"In this work, we design a diagnostic benchmark with minimal bias and incorporate a set of specific reasoning requirements.",
"3) Diagnostic.",
"Our work is related to MNIST Dialogue (Seo et al., 2017) and CLEVR Dialog (Kottur et al., 2019).",
"They involve synthetic images to develop image-grounded dialogues.",
"Compared to them, DVD questions are extended from the image to the video domain and injected with more diverse cross-turn semantics.",
"As shown in Table 1, DVD contains a higher proportion of unique questions than related benchmarks.",
"DVD is also inspired by the dialogue state tracking task (DST) (Mrksic et al., 2017; Bordes et al., 2017; Kottur et al., 2021; Moon et al., 2020).",
"DST requires a system to detect all information slots mentioned in dialogue, such as restaurant name and booking date.",
"[Table 1: Statistics for DVD (Split / #Videos-Images / #Dialogs / #Questions / #Unique Questions): DVD-Train 6,157 / 61,551 / 615,510 / 360,334; DVD-Val 1,540 / 15,396 / 153,960 / 99,211; DVD-Test 3,299 / 32,978 / 329,780 / 200,346; DVD-Total 10,996 / 109,925 / 1,099,250 / 620,739; CLEVR 100K / N/A / 1M / 854K; CLEVRER 20K / N/A / 305K / 26.4K; VisDial 123K / 123K / 1.2M / 380K; AVSD 11.1K / 11.1K / 101.2K / 59K; MNIST Dialog 50K / 150K / 1.5M / 355; CLEVR Dialog 85K / 425K / 4.25M / 73K. Compared to the synthetic dialogue benchmarks MNIST Dialog and CLEVR Dialog, the majority of questions in DVD are unique.]",
"Instead, in DVD, for each turn, we introduce an object tracking state, defined as visual objects and their attributes mentioned in dialogue context.",
"4) Multi-step reasoning.",
"A multi-step reasoning question is typically represented by a reasoning structure, also known as functional programs.",
"Earlier efforts (Andreas et al., 2016; Johnson et al., 2017) designed questions that are expressed as elementary operation programs.",
"More related to our work, Song et al. (2018); Yi* et al. (2020) extended the prior work to the video domain with questions focusing on the temporal variance of video frames.",
"A major difference between our work and these approaches is the extension of functional programs to a dialogue task with context-based operations, such as object tracking and interval tracking.",
"This extension brings a step toward more transparent dialogue systems capable of performing reasoning operations across question turns.",
"Our benchmark provides a dataset that can be used to conduct rich diagnostics to better understand the reasoning capabilities of dialogue systems.",
"Table 1 and Figure 3 to 6 give an overview of DVD.",
"Objects.",
"Objects are identified by their attributes, including object shapes, sizes, materials, and colors.",
"One unique characteristic of CATER objects is that each object can move multiple times in a single video.",
"From the CATER universe, we define 4 types of object actions: flying, rotating, sliding, and no action (object being stationary).",
"Another characteristic of CATER objects is that one object can be contained by another object, resulting in a visual problem called object containment.",
"[Figure 2: Example spatial relationship. We demonstrate the projection of objects and their movements on the ground plane, e.g., A1 is left of B2 but not left of B1; A2 is left of B4 but not left of B2 or B3.]",
"In our experiments, current dialogue systems are still vulnerable to this problem, making it hard to apply them in the open world (see Section 4.3).",
"Video intervals.",
"We define video intervals as continuous video frames, limited by a start and end point, each of which can be the start or end of an object's action or the start or end of the whole video.",
"We formulate two types of video intervals: 1) Atomic intervals.",
"In these intervals, all objects have at most one action and can be in only one of two states: in motion or stationary.",
"To find atomic intervals, we simply collate the start and end timestamps of all object actions in a CATER video and sort them chronologically.",
"By definition, any non-overlapping interval between two timestamps is considered atomic.",
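The construction above (collate action start/end timestamps, sort them, take the non-empty gaps) can be sketched in a few lines; the `(start, end)` action tuples and the helper name are illustrative, not CATER's actual annotation format:

```python
def atomic_intervals(actions, video_end, video_start=0):
    """Split a video into atomic intervals: collect the start/end
    timestamps of all object actions (plus the video boundaries),
    sort them, and take every non-empty gap between consecutive
    timestamps. Within such a gap, no action starts or ends, so each
    object is either in motion or stationary for the whole interval."""
    points = {video_start, video_end}
    for start, end in actions:
        points.update((start, end))
    ordered = sorted(points)
    return [(a, b) for a, b in zip(ordered, ordered[1:]) if b > a]

# Two actions: one from t=2 to t=5, one from t=4 to t=8, in a 10s video.
print(atomic_intervals([(2, 5), (4, 8)], video_end=10))
# → [(0, 2), (2, 4), (4, 5), (5, 8), (8, 10)]
```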
"This constraint allows us to identify the relative spatial relationships (left, right, behind, and front) between any two objects by using their coordinates at the start and end of the interval.",
"Note that in the CATER universe, all actions can be projected either as a straight line (flying and sliding) or a single point (rotating and no action).",
"Practically, we focus on spatial reasoning only when one of the two objects is stationary.",
"Figure 2 demonstrates the left spatial relation, and Figure 3 (Top) shows an example question of an atomic interval with a spatial relation.",
"2) Compositional intervals.",
"Compositional intervals are all other intervals that are not atomic.",
"In these intervals, an object can have more than one action, i.e., be in more than one state, such as flying then no action.",
"Therefore, its movement projections are not linear, and we do not identify spatial relations in these cases.",
"Instead, we focus on information such as action set and action sequence to generate questions.",
"Figure 3 (Bottom) presents an example question of a compositional interval.",
"From the intervals in a video (with a minimum duration of about 0.5 s), we randomly sample one interval and proceed to create questions based on object movements and locations in this interval.",
"Figure 5-(a) shows the percentages of DVD questions by video interval type.",
"Overall, more than 60% of questions involve compositional intervals, and among the atomic-interval questions, the majority contain a spatial relation.",
"We still maintain a small percentage of temporal-agnostic instances (none type) to keep the dialogue flow natural.",
"Question representation.",
"We use question templates to materialize questions in natural language.",
"Each template is associated with an applicable type of video interval and a functional program.",
"Compared to CLEVR functional programs (Johnson et al., 2017), we introduce 17 new functional modules, of which 13 are extended for video-based inputs and 4 are extended for dialogue-based inputs.",
"Overall, we utilize 26 question templates for 8 question types.",
"Figure 3 illustrates two sample questions with corresponding reasoning structures, and Figure 5-(b) shows the statistics of the question type distribution.",
"Please refer to the supplementary material for full details of the functional modules, question types, and examples.",
"Dialogue Generation.",
"We generated dialogues with a fixed length of 10 turns.",
"In each turn, we adopted a Depth First Search (DFS) approach, as similarly used in CLEVR (Johnson et al., 2017), to instantiate questions by sequentially traversing and executing functional programs.",
"To generate linguistic dependencies between dialogue turns, at each turn, we randomly sample and incorporate one or more of the 3 semantic relations below.",
"Figures 4 and 6 present examples of 2 questions and a dialogue with these semantic relations.",
"Type I: Video Temporal Relation (TR): This type of semantic relation tests a system's ability to localize video intervals in relation to past dialogue turns.",
"We randomly select one of three types of relation: (1) the during relation reuses the same time interval as the last dialogue turn, e.g. Q4 in Figure 6; (2) the before and (3) after relations simulate a dialogue flow with references to the earlier and subsequent video segments.",
"TR synthesizes scenarios when humans either maintain or shift their attention temporally from one video segment to a related part.",
"Type II: Dialogue Object Reference (OR): We incorporate object references into a question by replacing the original object phrase, such as the large rubber cone, with pronouns, such as it, to refer to object(s) mentioned in the earlier part of the dialogue.",
"The distance of reference is one turn and we call this a short-term memory OR.",
"Additionally, we simulate long-term memory OR by injecting unique objects mentioned further in the past dialogue turns.",
"We simulate this behavior by maintaining a dialogue object state at each turn.",
"To choose an object for reference, we randomly sample a past dialogue turn position and sample an object introduced in that turn.",
"This object then replaces the original object phrase in the question of the current turn.",
"For example, in question Q3 in Figure 6, the earlier mentioned small thing is identified from the object originally introduced in Q1.",
"Following this method, our dialogue simulates scenarios in which humans only focus on a subset of objects rather than all objects in the video scene and they can refer to those objects again over multiple dialogue turns.",
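The long-term OR injection can be sketched as sampling from a per-turn object state and substituting the object phrase; the state layout, phrase wording, and function name are simplified assumptions, not the authors' exact implementation:

```python
import random

def inject_object_reference(question, obj_phrase, dialogue_state, rng=random):
    """Long-term object-reference sketch: pick an object introduced in
    a past turn (the per-turn dialogue object state, oldest first) and
    replace the original object phrase in the current question with a
    reference to it, as in 'the earlier mentioned small thing'."""
    turn_idx = rng.randrange(len(dialogue_state))
    referred = dialogue_state[turn_idx]
    reference = f"the earlier mentioned {referred}"
    return question.replace(obj_phrase, reference), turn_idx

# Hypothetical dialogue object state from two past turns.
state = ["small thing", "large rubber cone"]
new_q, turn = inject_object_reference(
    "what color is the small thing?", "small thing", state,
    rng=random.Random(0))
```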
"Figure 5-(c) displays the boxplot of the number of active objects involved at each turn position.",
"Out of 10 objects (the maximum number of objects in a CATER video), 2 to 5 objects are involved on average per dialogue.",
"Figure 5-(d) shows the question distribution by the turn distance of long-term memory OR, with the majority of questions containing 2-turn-distance references.",
"Type III: Topic Transfer (TT): This relation tests the model's ability to memorize and reuse the context of the last dialogue turn in the current turn through 3 types of topic transfers: (1) attribute transfer and (2) spatial transfer reuse the same question from the prior dialogue turn with a modification of object attribute or spatial relation (e.g. Q2 and Q5 in Figure 6).",
"Compared to TR, these two types of topic transfers focus on human attention shifts in spatial space rather than temporal space; (3) Temporal transfer introduces a unique setting of situated dialogue in DVD.",
"Instead of using a fixed video input for each dialogue instance, at the first dialogue turn, we shorten a CATER video by a cutoff point, e.g. T_0.",
"At each later turn, for 30% of the time, we update the current video input to a new cutoff point later than the previous one, i.e. T_{i+1} > T_i.",
"We stop updating when the cutoff reaches the end of the original CATER video T, i.e. T_{i+1} = T.",
"For instance, in Figure 6, at Q7, we reuse the same context from Q6 but with new extended visual content.",
"We introduce temporal transfer as a preliminary step to challenge dialogue systems in a dynamic environment with a continuous visual stream.",
"After sampling question templates and semantic dependencies, the ground-truth answers are obtained by executing corresponding functional programs.",
"For each question template, we discard dominating instances to maintain an approximate uniform distribution of answer values, minimizing bias resulting from question-conditioned data distributions.",
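The answer-balancing step amounts to per-answer downsampling of the generated instances. A minimal sketch, assuming instances are `(question, answer)` tuples; the layout and function name are hypothetical, not DVD's actual generation code:

```python
import random
from collections import defaultdict

def balance_by_answer(instances, rng=random):
    """Discard dominating instances: group generated question
    instances by answer value, then downsample every group to the
    size of the smallest one, so the answer distribution over the
    kept instances is approximately uniform."""
    groups = defaultdict(list)
    for question, answer in instances:
        groups[answer].append((question, answer))
    cap = min(len(g) for g in groups.values())
    balanced = []
    for g in groups.values():
        balanced.extend(rng.sample(g, cap))
    return balanced

# 'yes' dominates 8:2 before balancing; each answer appears twice after.
data = [("q%d" % i, "yes") for i in range(8)] + \
       [("q%d" % i, "no") for i in range(2)]
balanced = balance_by_answer(data, rng=random.Random(0))
```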
"Additionally, at each turn, we remove any question that is ill-posed or becomes redundant when positioned in dialogue.",
"For instance, the question how many red rubber objects are there? is removed if in a prior dialogue turn, the question is how many red objects are there? and the answer is already 1 .",
"To do this, we perform a check at every dialogue turn to determine whether involving objects and their attributes are already mentioned in the dialogue object state.",
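A minimal sketch of such a redundancy check, assuming counting questions are reduced to sets of object attributes; this flat representation is an illustrative stand-in for DVD's actual dialogue object state:

```python
def is_redundant(question_attrs, history):
    """A counting question whose attribute set refines an earlier
    question's set is redundant if the earlier answer was already 1:
    the refined count is then fully determined by the dialogue."""
    for past_attrs, past_answer in history:
        if past_attrs <= question_attrs and past_answer == 1:
            return True
    return False

# Prior turn: 'how many red objects are there?' was answered with 1,
# so 'how many red rubber objects are there?' is redundant.
history = [({"red"}, 1)]
print(is_redundant({"red", "rubber"}, history))  # → True
print(is_redundant({"blue"}, history))           # → False
```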
"Finally, we only keep dialogues that have cross-turn dependencies in 9 out of 10 turns, considering the first turn semantically independent.",
"[Figure 6 residue: a dialogue with its per-turn dialogue object state and TR/OR/TT annotations; e.g., Q1: before the large thing's first flight, what color is the average thing that is in front of the small thing?]",
"Figure 5-(e) provides the distribution of dialogues by the number of TR, OR, and TT relations.",
"For more analysis of DVD, please refer to the supplementary material.",
"The video-grounded dialogue task in DVD is defined as a turn-based retrieval task from multiple-choice candidate answers.",
"At each dialogue turn i (i = 1, 2, ..., 10), the video input V_i, the ground-truth dialogue context C_i = {(Q_k, A_k)}_{k=1}^{i-1} (the question and answer pairs up to the last dialogue turn), and the question of the current turn Q_i are provided.",
"The system is given a set of candidate answers A , predefined as all possible answer values for all question types, with |A| = 40 in DVD, and is required to select one answer from A .",
"We evaluate models by the accuracy of predicted answers against the ground-truth answers.",
"For a system with parameters θ, the objective function is: Â_i = argmax_{A_i ∈ A} P(A_i | V_i, Q_i, C_i; θ).",
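The retrieval objective amounts to scoring each of the 40 candidate answers and taking the argmax. A sketch in plain code; `score_fn` stands in for any learned model P(A | V, Q, C), and the word-overlap scorer is a toy stand-in, not one of the paper's baselines:

```python
def predict_answer(score_fn, candidates, video, question, context):
    """Multiple-choice retrieval: score every candidate answer given
    the video, question, and dialogue context, and return the argmax."""
    return max(candidates, key=lambda a: score_fn(a, video, question, context))

# Toy scorer: prefer the candidate sharing the most words with the question.
def toy_score(answer, video, question, context):
    return len(set(answer.split()) & set(question.split()))

cands = ["yes", "no", "sliding", "flying"]
print(predict_answer(toy_score, cands, None, "is the cone sliding", None))
# → sliding
```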
"Baselines .",
"We experimented with a representative set of baseline approaches on DVD, including: (1) Answer Prior , which selects the most popular answer option as predicted answers; (2) Q-type (Random/Frequency) , which assume known question types and select a random or most popular answer from the corresponding answer space; (3) Q-retrieval (TF-IDF) , which retrieves the most similar question from the training set and use its answer as the predicted answer; (4) RNN(Q) and HRNN(C+Q) , which encode dialogue-only components without seeing visual information to predict answers; (5) HRNN(C+Q)+CNN(V)/TA(V) , same as (4) but with access to visual information which is encoded by pretrained CNN models and temporal attention (TA) (Jang et al., 2017; Lei et al., 2018; Hori et al., 2019); (6) TF(C+Q+V) , which uses a Transformer-based architecture to encode visual and language information (Schwartz et al., 2019; Le et al., 2019; Li et al., 2020).",
"Finally, we conducted internal human evaluation on a subset of the DVD test split.",
"For each test sample, a human received an input video, dialogue history, and the question for the current turn.",
"The human was required to select an answer from the list of 40 candidates A to answer the question.",
"Experiments .",
"Video-grounded dialogues entail many visio-linguistic and reasoning challenges that are not easy to study in isolation using existing datasets.",
"To address this issue with DVD, we exploit the rich annotations of DVD in our experiments during evaluation.",
"We designed our experiments to systematically analyze model capabilities and shortcomings through unique challenges in video-grounded dialogue systems.",
"Specifically, in Section 4.2, we analyzed the results of all models overall as well as by each question type.",
"In Section 4.3, we leverage the spatio-temporal annotation of visual objects to analyze model performance by related video interval types, spatial reasoning (results by object containment), and temporal reasoning (results by relative interval length).",
"In terms of dialogue contextual complexity, in Section 4.4 we use cross-turn relation annotations to analyze model performance by temporal-based attention shift (TR), dialogue turn distance (OR), and short-term transferability (TT).",
"[Table 2: Experiment results on the DVD test split (accuracy %). Columns: Answer Prior / Q-type (Random) / Q-type (Freq) / Q-retrieval (TF-IDF) / RNN(Q) / HRNN(C+Q) / HRNN(C+Q)+CNN(V) / HRNN(C+Q)+TA(V) / TF(C+Q+V) / Human. All: 21.3 / 27.8 / 35.3 / 32.1 / 39.7 / 45.8 / 49.3 / 50.2 / 51.1 / 89.3. Action count: 0.0 / 9.3 / 23.4 / 19.8 / 16.3 / 28.2 / 37.8 / 36.0 / 38.8 / 87.5. Action query: 0.0 / 12.7 / 23.7 / 20.6 / 25.8 / 33.1 / 36.7 / 38.6 / 39.4 / 88.1. Attribute query: 0.0 / 32.9 / 38.7 / 39.4 / 38.1 / 39.2 / 43.3 / 45.1 / 43.1 / 98.0. Compare action seq: 33.4 / 34.1 / 37.3 / 35.1 / 45.5 / 52.5 / 58.2 / 57.5 / 61.6 / 91.5. Compare action set: 25.1 / 28.2 / 36.3 / 28.2 / 32.8 / 40.0 / 43.0 / 44.3 / 45.4 / 82.9. Compare action freq: 48.5 / 50.0 / 50.5 / 44.4 / 58.4 / 56.9 / 62.3 / 65.2 / 67.1 / 88.5. Object count: 0.0 / 9.1 / 23.3 / 18.8 / 26.2 / 38.6 / 40.0 / 40.2 / 39.9 / 90.6. Object exist: 48.9 / 49.8 / 51.1 / 54.4 / 66.4 / 67.0 / 69.2 / 69.4 / 69.0 / 92.3. None: 0.0 / 32.1 / 38.3 / 39.0 / 38.3 / 39.5 / 43.1 / 45.1 / 43.4 / 99.1. Atomic (non-spatial): 18.8 / 26.3 / 31.9 / 42.4 / 47.2 / 47.8 / 49.9 / 50.7 / 48.9 / 83.3. Atomic (spatial): 21.2 / 27.3 / 35.5 / 27.6 / 36.8 / 46.0 / 47.5 / 47.6 / 47.1 / 93.9. Compositional: 22.8 / 28.0 / 35.4 / 32.1 / 40.0 / 45.8 / 50.2 / 51.4 / 53.2 / 87.1. Transfer (attribute): 0.0 / 30.7 / 45.5 / 37.1 / 40.8 / 45.7 / 54.5 / 57.3 / 57.7 / 100.0. Transfer (spatial): 49.8 / 42.4 / 44.9 / 26.4 / 29.6 / 48.1 / 47.7 / 47.4 / 48.0 / 90.5. Transfer (temporal): 28.9 / 38.4 / 22.6 / 3.0 / 30.2 / 53.5 / 62.2 / 64.6 / 69.0 / 79.8. Models are evaluated by overall accuracy and by question type (Top), accuracy by video intervals in question (Center), and transferability accuracy (Bottom).]",
"From Table 2 (Top), we observe that blind systems that use answers only or questions only, achieve quite poor results up to 39% accuracy.",
"By selecting the most popular answer option, Answer Prior only achieves 21% accuracy.",
"When a blind model has access to dialogue history, the performance increases up to 45% .",
"This increment shows that dialogue context contains useful information for a dialogue system to infer answers.",
"We note that on average there are nearly 3 out of 10 question turns with a topic transfer per dialogue (see Figure",
"5-(e)).",
"In such cases, a model can randomly make a good guess by just reusing the answer of the last question turn.",
"When a system is presented with the visual input, we observe model performance increases up to 51% .",
"However, in the best system, the performance is still far below the human level with a performance gap of 38 absolute points.",
"In Table 2 (Top), from the results of Q-type(Random) per question type, we observed that answers are balanced in each question type.",
"The table also shows performance drops between pairs of object-oriented vs. action-oriented question types.",
"For instance, TF(C+Q+V) achieves 38% accuracy in Action count vs. 39% in Object count, and 39% accuracy in Action query vs. 43% in Attribute query.",
"In comparison-based questions, comparing action sets tend to be more challenging than comparing action sequences.",
"To compare action sets of two objects in a video interval, a system needs to process the interval completely.",
"However, to compare action sequences, in most cases, the system can determine the answer after the first few action steps the objects perform.",
"For more analysis of question types and sub-types, please refer to the supplementary material.",
"To understand the drive of the performance by visual inputs, we investigated the results by the visual complexity in questions.",
"In Table 2 (Center), compared to HRNN(C+Q)+CNN(V) , models using attention, either through TA(V) or Transformer, show more improvement in compositional interval questions with increments up to 3 absolute points.",
"In other types of intervals, the performance gains are not very significant.",
"Particularly, in atomic-interval questions that require spatial localization, the performance does not change when applying attention.",
"This observation necessitates systems that focus on both spatial and temporal space of visual inputs.",
"In Figure 7 (Left), we analyzed model performance by the number of objects mentioned in questions that are contained in video scenes.",
"We noted that current models are vulnerable to visual object containment, as the accuracy decreases by the number of contained objects.",
"This observation is consistent with the results of CATER action recognition tasks (Girdhar and Ramanan, 2020).",
"In Figure 7 (Right), we investigated model performance Figure 7: Experiment results by visual properties : Left: results by the number of objects mentioned in question that are contained in video scenes.",
"by the relative length of ground-truth video interval in question, measured as the percentage of the whole video length.",
"To make a fair analysis, we removed cases in which a question can be solved correctly without localizing the specific video interval but simply using the whole video.",
"We observed that model performance decreases as the interval length increases, demonstrating the challenge of long-term video understanding in video scenes.",
"We noted that there is a drop in performance in the lowest range of interval lengths, 0 10% .",
"As this range often represents atomic intervals, the majority of which include questions with spatial relations, systems are negatively affected and the curve drops initially in this low range.",
"We examined model performance in a multi-turn setting by cross-turn semantic relations.",
"First, we investigated the effect of TR.",
"In a TR-injected question, a system is required to learn to retrieve a video segment related to the last used segment.",
"However, some questions may be correctly answered without localizing the correct segments.",
"For instance, at the current dialogue turn, a question is of interval ( t m , t n ) and at the next turn, a question with an after TR is of interval ( t n , t q ) (s.t. t m < t n < t q ) might be solved if the visual context is the same in both intervals.",
"We separate such question turns and measured the results of the remaining questions with TR relations after and before.",
"From Figure 8, we observed that current systems are not optimal to learn to shift attention to related intervals, depending on the type of questions.",
"In action-based questions (AC, AQ, CASeq, CASet, and CAF), the results of before and after TR are lower than those without a TR relation, but in object-based questions (OC, OE), we observed differently.",
"This difference can be explained by the dynamics of actions vs. objects.",
"Between video intervals, information about object actions (e.g.",
"fre-(a) TF(C+Q+V)",
"(b) Action Query Figure 9: Experiment results for cross-turn reasoning : Results of Action count questions by turn position (Left) and by turn distance of object references (Right).",
"Secondly, we analyzed the impacts of long-term memory OR.",
"From Figure 9 (Left), we noticed that model performance becomes more stable in systems where dialogue history is introduced as an input.",
"For instance, compared to RNN(Q) , the performance curve of TF(C+Q+V) follows a more gentle downward trend from low to high dialogue turn positions.",
"To fairly analyze performance by OR turn distance, we discard any instances that do not require systems to use dialogue context to resolve the references, but simply rely on the input video.",
"For example, a question with a reference the earlier mentioned red object is removed if there is indeed only one red object in the video scene.",
"From results by OR turn distance in Figure 9 (Right), we observed all systems are relatively unstable, even as dialogue history is introduced as an input.",
"This difference against the results by turn position exhibits a limitation of current systems as they struggle to resolve object references by existing dialogue encoding techniques.",
"quency, types) tends to change more easily than objects themselves.",
"Action-based questions challenge systems through cross-turn temporal reasoning more than object-based questions.",
"Finally, to analyze the effect of TT relations, we investigate a new metric, called transferability , in Table 2 (Bottom).",
"When a system is presented with a question turn with a topic transfer, it should learn to derive the answer in relation to the context of the last dialogue turn.",
"If the last answer is right, an intelligent system should be able to consistently answer in the current turn correctly.",
"For instance, given a question-answer pair what is the color of the sliding cube? red, a human can often infer the answer to a TT(A)-injected question what about its material? based on the same visual object.",
"We gather questions that precede questions containing topic transfers and call this set Q ttprior .",
"For each question q ttprior that the model answered correctly, we measure the accuracy over the corresponding transferred question q tt and average the scores.",
"We observed a clear performance gain from RNN(Q) to HRNN(C+Q) in terms of transferability metric, demonstrating the impacts of dialogue context on TT questions.",
"A chance-based system can achieve approximately 50% transferability by just recycling answers from prior turns.",
"The best system results, however, are still far from human-level performance.",
"This observation necessitates systems designed with a better contextual memory to adapt past context in new dialogue turns.",
"We have introduced DVD, a diagnostic dataset designed to analyze video-grounded dialogue systems.",
"DVD dataset is generated with tight control of data bias through balancing the question and answer distribution and questions are built based on a principled approach to reflect the complexity in videos and dialogues.",
"Our results have shown that DVD can provide interesting insights into system abilities and limitations.",
"Specifically, our analysis has revealed some key shortcomings of current models, including: (1) limited ability to efficiently integrate visual information from both spatial and temporal space; (2) limited ability to recognize and compile multiple actions in long-ranged video intervals; (3) inconsistent performance across dialogue turns, especially in cases when systems are required to switch attention temporally; and (4) unstable performance to resolve object co-reference in the dialogue context, especially when the turn distance of the object references increases.",
"These insights provide potential avenues where we hope DVD will be a useful benchmark to explore new ideas.",
"Specifically, we discuss two research directions: Dialogue object tracking.",
"memory reasoning ability to track objects and their attributes mentioned in the dialogue context.",
"We are inspired by research work of dialogue state tracking in task-oriented dialogues (Bordes et al., 2017) and propose to use tracking accuracy metric in video-grounded dialogue systems.",
"At each turn t , a video-grounded dialogue system should be able to track and update a dialogue state S t , defined as a set of all mentioned objects o ti and their attributes, including sizes z ti , colors c ti , materials m ti , and shapes s ti : S t = ( o t 1 , o t 2 , ... ) = (( z t 1 , c t 1 , m t 1 , s t 1 ) , ( z t 2 , c t 2 , m t 2 , s t 2 ) , ... ) .",
"We define two tracking metrics, including joint accuracy , measuring the accuracy of prediction of all objects and attributes as a set, and slot accuracy , measuring the accuracy of predicted attributes individually.",
"The introduction of these evaluation metrics necessitates a new learning task, dialogue object tracking (DOT) in video-grounded dialogue systems, to better understand current systems' long-term reasoning ability.",
"Video interval tracking.",
"Another aspect of dialogue systems that we want to diagnose is their ability to localize video segments in a multi-turn setting.",
"Each question turn often focuses on different parts of the video as the dialogue extends over time.",
"It is important to learn how a system can localize the right segments of the video from turn to turn.",
"Similar to DOT, we define a new learning task for video interval tracking (VIT) in a similar nature as text-to-clip tasks (Anne Hendricks et al., 2017).",
"The task can be defined as a ranking task of segment candidates to choose the relevant segments in each question turn.",
"This task is evaluated by ranking metrics such as Rank @1 or Rank @2 , and mean intersection over union (mIoU).",
"Alternatively, we can adapt grounding , a simple metric used by Hudson et al. (2019) to assess spatial attention of image regions.",
"in DVD, grounding can be used in temporal attention-based approaches to determine model ability to localize the right position of video intervals in question.",
"Finally, we want to emphasize that DVD is designed as a synthetic dataset for diagnosis purposes to systematically evaluate model capabilities.",
"The benchmark should not be used to replace data of human dialogues but be used to supplement real-world dialogue datasets."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"method",
"objective",
"abstain",
"other",
"result",
"result",
"abstain",
"objective",
"method",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"other",
"method",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"result",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"abstain"
] |
[
"Question answering and conversational systems are often baffled and need help clarifying certain ambiguities.",
"However, limitations of existing datasets hinder the development of large-scale models capable of generating and utilising clarification questions.",
"In order to overcome these limitations, we devise a novel bootstrapping framework (based on self-supervision) that assists in the creation of a diverse, large-scale dataset of clarification questions based on post-comment tuples extracted from stackexchange.",
"The framework utilises a neural network based architecture for classifying clarification questions.",
"It is a two-step method where the first aims to increase the precision of the classifier and second aims to increase its recall.",
"We quantitatively demonstrate the utility of the newly created dataset by applying it to the downstream task of question-answering.",
"The final dataset, ClarQ, consists of 2M examples distributed across 173 domains of stackexchange.",
"We release this dataset 1 in order to foster research into the field of clarification question generation with the larger goal of enhancing dialog and question answering systems.",
"The ubiquitous nature of conversations has led to the identification of various interesting problems (Clark et al., 2019).",
"One of these problems is the ability of a system to ask for clarifications (Rao and Daume III, 2018; Aliannejadi et al., 2019) to a natural language question.",
"A user's complex information need is often lost due to the brevity of the posed question.",
"This leads to an under-specified question containing information gaps which lowers the probability of providing the correct answer.",
"Thus, it would be an improvement if a conversational or a question answering system had a mechanism for refining 1 https://github.com/vaibhav4595/ClarQ user questions with follow-ups (De Boni and Man-andhar, 2003).",
"In literature, such questions have been termed Clarification Questions (De Boni and Manandhar, 2003; Rao and Daume III, 2018, 2019).",
"In the domain of question-answering, the major advantages of a clarification question are its ability to resolve ambiguities (Wang et al., 2018; Aliannejadi et al., 2019) and to improve the probability of finding the most relevant answer.",
"For conversational systems, asking such questions help in driving the conversation deeper along with better engagement of the user (Li et al., 2016; Yu et al., 2016).",
"Recently, Rao and Daume III (2018, 2019) have provided a dataset based on stackexchange and used it for clarification question retrieval as well as generation.",
"They also modify a dataset based on Amazon Question-Answering and Product Reviews (McAuley et al., 2015; McAuley and Yang, 2016) to make it suitable for the same task.",
"On the other hand, Aliannejadi et al. (2019) created a dataset (Qulac) built on top of TREC web collections.",
"However, there are several shortcomings to these datasets, which limit the development of generalizable and large-scale models aimed to tackle the problem of clarification question generation.",
"The stackexchange dataset (Rao and Daume III, 2018) is created by utilising simple heuristics.",
"This adds a lot of noise, thereby reducing the number of actual clarification questions.",
"It also limits the inclusion of diverse types of questions as it is collected from three similar domains (askubuntu, superuser and unix).",
"The question generation model of Rao and Daume III (2019) achieves a very low BLEU score when trained on this dataset.",
"On the other hand, the dataset based on Amazon reviews is a poor proxy for clarification questions because product descriptions are not actual questions demanding an answer and there is no information gap that needs to be addressed.",
"To overcome the shortcomings of existing datasets, we devise a novel bootstrapping framework based on self-supervision to obtain a dataset of clarification questions from various domains of stackexchange.",
"The framework utilises a neural network based architecture to classify clarification questions.",
"In a two step procedure, the framework first increases the precision of the classifier and then increases its recall.",
"The first step is called down-sampling, where the classifier is iteratively trained on the most confident predictions (carried forward over from the previous iteration).",
"The second step is the up-sampling procedure, where the classifier is iteratively trained by successively adding more positively classified examples.",
"This step provides a boost in recall while restricting the drop in precision to a minimum.",
"The classifier trained on the final iteration is then used for identification of clarification questions.",
"The overall process ensures that the final dataset is less noisy and, at the same time, consists of a large and diverse number of examples.",
"We must emphasize that, given the large amount of data available on stackexchange, a classifier with moderate recall still serves our purpose.",
"However, it is imperative that precision of the classifier be reasonably high.",
"Stackexchange is a network of online question answering websites.",
"On these websites, users may comment on the original post with content such as third party URLs, clarifying questions, etc.",
"We only want to select comments which act as clarifying questions and remove the rest as noise.",
"To this end, we devise a bootstrapping framework for training a classifier capable of identifying clarifying questions.",
"The bootstrapping method utilises a neural network based classifier L which is posed with the task of clarification question detection.",
"Formally, given a tuple ( p, q ) , where p P is a post and q q p is a comment made on p , the task is to predict whether q is an actual clarification question for p .",
"This makes it a binary classification problem, where a label 1 indicates q being an an actual clarification question and 0 indicates otherwise.",
"We first utilise the stackexhange data dump available at https://archive.org/details/ stackexchange .",
"We extract the posts and the comments made by users on those posts from 173 different domains.",
"We remove all posts which did not have a provided answer.",
"The comments made on the posts act as a potential candidate for clarifying question.",
"This leads to 6,186,934 tuples of ( p, q ) .",
"First, we initialise a seed dataset that is used to train L using the process of iterative refinement as described later.",
"Iterative-refinement itself is subdivided into two parts: (1) Down-Sampling (2) Up-Sampling.",
"We utilise a neural network based architecture for the classifier L .",
"Inspired by Lowe et al. (2015), L utilises a dual encoder mechanism i.e it uses two separate LSTMs (Hochreiter and Schmidhu-ber, 1997) for encoding a post p and a question q .",
"The dual encoder generates hidden representations h p and h q for p and q respectively.",
"The resulting element-wise product of h p and h q is further passed on to fully connected layers before making predictions via softmax.",
"More formally, the entire process can be summarised as follows: h p = LST MP ( p ) (1) h q = LST MQ ( q ) (2) h pq = ( h p (cid:12) h q ) (3) y = Softmax ( h p q ) (4) where, (cid:12) represents the element-wise product, represents the non-linearity introduced by the fully connected layers and represents the final classification layer.",
"In order to select seeds for the bootstrapping procedure, we consider all the collected posts but only use the last comment made on these posts as clarifying questions.",
"We make the assumption that the comments act as a proxy for a clarification question.",
"Later, we remove all ( p, q ) tuples where q does not have a question mark.",
"Intuitively, the last comment can be a better signal for identifying clarifying questions as it has more chances of capsulizing the requirements of the original post.",
"It can also be more opinionated than others.",
"We then randomly sample a question from the same domain as that of the post and treat it as an instance of a negative clarification question.",
"Thus each question gets paired with a positive and a negative clarification question.",
"We denote this seed dataset as D 0 .",
"The procedure is described in Algorithm 1.",
"This entire process can be segmented into two parts.",
"Down-Sampling : The aim of this step is to increase the precision of the classifier.",
"In the first iteration of this step, the classifier L is trained on the seed dataset D 0 .",
"After training is complete, L classifies D 0 and the most confident 40% of the positives are selected to train L in the next iteration.",
"This process is continued for N iterations.",
"Each iteration leads to a new dataset D i (which is smaller in size than D i 1 .",
"Intuitively, the precision of L on the task of selecting actual clarification question should increase at the end of each iteration as it is successively trained only on the examples which it was more confident about in the previous round.",
"Up-Sampling : This step is intended to improve the recall of L while restricting the loss of precision to a minimum.",
"In the first iteration, L is trained on SN = DN i.e the data obtained at the last iteration of the down-sampling procedure.",
"After training is complete, L is used for classifying DN 1 (which is obtained during the second-list iteration of the down-sampling process).",
"The tuples which get classified as positive are used for training L in the next round.",
"This process continued for N iterations.",
"Note that this procedure has two major differences to the iterative procedure of the down-sampling process.",
"First, instead of using L for classifying the same dataset which it was trained on, it is used for classifying an up-sampled version of the current dataset.",
"Second, it relaxes the condition of selecting 40% of the most confident examples.",
"Intuitively, this relaxation should help in increasing the recall of the classifier and at the same time should not drastically hamper the precision (as it operates only on the examples which it classifies as positives).",
"Note that, in order to provide the classifier with examples of non-clarifying questions, we randomly sample negative examples at the end of each iteration (during both up and down-sampling).",
"This is similar to the way in which the D 0 is created.",
"At the end of the iterative refinement procedure, we obtain a dataset on which L can achieve a good precision and moderate recall on the task of classifying clarification questions.",
"Thus, L is fi-nally trained on S 0 and used for classifying the 6,186,934 tuples of ( p, q ) extracted from stackexchange.",
"We again emphasize that it is more important to obtain better precision, as it reduces the amount of noise added to the dataset.",
"Given that there are a large number of ( p, q ) tuples, a moder-Iteration Precision Recall F1 1 0.736 0.601 0.662 2 0.758 0.561 0.645 3 0.771 0.390 0.518 4 0.827 0.286 0.426 5 0.829 0.270 0.407 Table 1: Performance of the classifier on the annotated test set at the end of each iteration of the down-sampling procedure.",
"ate recall can still ensure the incorporation of large and diverse types of ( p, q ) tuples.",
"Test Set Creation : We first create a manually annotated test set to evaluate the effectiveness of the classifier at each step of the iterative refinement process.",
"For this, we randomly sample 100 ( p, q ) tuples each from 7 different domains (Ap-ple, cooking, gaming, money, photography, scifi, travel).",
"These questions are either the last, second last or the third last comments of their corresponding posts.",
"The annotated test set has a 7:3 ratio of positives to negatives.",
"Seed Dataset : It is created based on the method described in Section 2.2.2.",
"It consists of 1,800,000 ( p, q ) tuples, amongst which 50% are randomly sampled negative instances.",
"The classifier is then iteratively trained based on Algorithm 1.",
"The results of the down-sampling and the up-sampling procedure are discussed below:",
"Table 1 describes the performance of the classifier on the annotated test set during the down-sampling process.",
"It can be clearly observed that the precision of the classifier increases with each iteration.",
"Even though there is a substantial decline in recall, the down-sampling procedure helps in increasing the overall precision.",
"Table 2 describes the performance of the classifier on the annotated test set during the up-sampling process.",
"It can be clearly observed that recall of the classifier increases with each iteration, although the final recall (i.e at iteration 5) is lower than the recall obtained in the first iteration of the down-sampling process.",
"Given that there are a large number of ( p, q ) tuples, a drop in recall will not hamper the quality nor the diversity of the dataset.",
"At the end of the process, we also observe that there is only a marginal drop in precision.",
"Thus, at the end of the last iteration we are able to obtain a classifier which has a high precision and a reasonable recall.",
"We evaluate the utility of the clarification question in ClarQ by using it for the task of reranking answers.",
"We first randomly sample 1000 ( p, q ) tuples from 11 different domains (Apple, askubuntu, biology, cooking, english, gaming, money, puzzling, scifi, travel, unix).",
"Corresponding to each tuple, we randomly sample a list of 99 answers (from the same domain as that of the post) and append the actual answer to this list.",
"We first rerank the answers based on the post alone.",
"Later, we rerank the answers by concatenating the post and the clarifying question.",
"Based on the results from Table 3, we observe that concatenating the clarification question to the post does help in improving the performance.",
"The success of this experiment depicts the usefulness of our created dataset.",
"The classifier obtained at the end of iterative refinement procedure is used for classifying the initially collected ( p, q ) tuples of 6,186,934.",
"The classifier predicts 2,079,300 tuples as actual clarification questions.",
"As can be seen from Figure 1, these tuples are unequally distributed across 173 different domains.",
"The top 20 domains account for 69 .",
"18% of the total ( p, q ) tuples in the dataset.",
"The remaining 155 domains account for the remaining 30 .",
"82% of the total number of tuples.",
"It is noteworthy that our provided dataset also comprises of actual answers to each post.",
"This would help researchers in evaluating the quality of the clarification questions in a standalone perspective and at the same time with respect to the downstream task of question-answering.",
"question generation.",
"It is created by a two-step iterative bootstrapping framework based on self-supervision.",
"ClarQ consists of 2M post-question tuples spanning 173 different domains.",
"We hope that this dataset will encourage research into clarification question generation and, in the long run, enhance dialog and question-answering systems.",
"We would like to extend our sincere gratitude to Abhimanshu Mishra, Mrinal Dhar and Yash Kumar Lal for helping us understand the structure of the comments and their distribution across domains."
] | [
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"other",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other"
] |
[
"Multimodal Machine Translation (MMT) aims to introduce information from other modality, generally static images, to improve the translation quality.",
"Previous works propose various incorporation methods, but most of them do not consider the relative importance of multiple modalities.",
"In MMT, equally treating text and images may encode too much irrelevant information from images which may introduce noise.",
"In this paper, we propose the multimodal self-attention in Transformer to solve the issues above.",
"The proposed method learns the representations of images based on the text, which avoids encoding irrelevant information in images.",
"Experiments and visualization analysis demonstrate that our model benefits from visual information and substantially outperforms previous works and competitive baselines in terms of various metrics.",
"Multimodal machine translation (MMT) is a novel machine translation (MT) task which aims at designing better translation systems using context from an additional modality, usually images (See Figure 1).",
"It initially organized as a shared task within the First Conference on Machine Translation (Specia et al., 2016; Elliott et al., 2017; Barrault et al., 2018).",
"Current works focus on the dataset named Multi30k (Elliott et al., 2016), a multilingual extension of Flickr30k dataset with translations of the English image descriptions into different languages.",
"Previous works propose various incorporation methods.",
"Calixto and Liu (2017) utilize global image features to initialize the encoder/decoder hidden states of RNN.",
"Elliott and Kadar (2017) model the source sentence and reconstruct the image representation jointly via multi-task learning.",
"Recently, Ive et al. (2019) propose a translate-and-refine approach using a two-stage decoder based on Transformer (Vaswani et al., 2017).",
"(Figure 1: An Example for Multimodal Machine Translation.)",
"Calixto et al. (2019) put forward a latent variable model to learn the interaction between visual and textual features.",
"However, in multimodal tasks the different modalities are usually not equally important.",
"For example, in MMT the text is obviously more important than images.",
"Although the image carries richer information, it also contains more irrelevant content.",
"If we directly encode the image features, it may introduce a lot of noise.",
"To address the issues above, we propose the multimodal Transformer.",
"The proposed model does not directly encode image features.",
"Instead, the hidden representations of images are induced from the text under the guide of image-aware attention.",
"Meanwhile, we introduce a better way to incorporate information from other modality based on a graph perspective of Transformer.",
"Experimental results and visualization show that our model can make good use of visual information and substantially outperforms the current state of the art. Our model is adapted from Transformer and is also an encoder-decoder architecture, consisting of stacked encoder and decoder layers.",
"The focus of our work is to build a powerful encoder to incorporate the information from other modality.",
"Thus, we will first begin with an introduction to the incorporation method.",
"Then we will detail the multimodal self-attention.",
"The final representations of text and images are sent to the sequence decoder to generate the target text.",
"The method of incorporating information from other modality is based on a graph perspective of Transformer.",
"The core of Transformer is self-attention which employs the multi-head mechanism.",
"Each attention head operates on an input sequence x = (x_1, ..., x_n) of n elements, where x_i ∈ R^d, and computes a new sequence z = (z_1, ..., z_n) of the same length, where z_i ∈ R^d: z_i = sum_{j=1}^{n} α_{ij} (x_j W^V) (1), where α_{ij} is a weight coefficient computed by a softmax function: α_{ij} = softmax((x_i W^Q) (x_j W^K)^T / sqrt(d)) (2), and W^V, W^Q, W^K ∈ R^{d×d} are layer-specific trainable parameter matrices.",
"Thus we can see that each word representation is induced from all the other words.",
"If we consider every word to be a node, then Transformer can be regarded as a variant of GNN which treats each sentence as a fully-connected graph with words as nodes (Battaglia et al., 2018; Yao et al., 2020).",
"In traditional MT tasks, the source sentence graph only contains nodes with text information.",
"If we want to incorporate information from other modality, we should add the nodes with other modality information into the source graph.",
"Therefore, as the words are local semantic representations of the sentence, we extract the spatial features which are the semantic representations of local spatial regions of the image.",
"We add the spatial features of the image as pseudo-words in the source sentence and feed it into the multimodal self-attention layer.",
"As stated before, in MMT the text and images are not equally important.",
"Directly encoding images which contain a lot of irrelevant content may introduce noise.",
"Therefore, we propose the multimodal self-attention to encode multimodal information.",
"In multimodal self-attention, the hidden representations of the image are induced from the text under the guide of image-aware attention, which provides a latent adaptation from the text to the image (Figure 2: Multimodal self-attention).",
"A visual representation is illustrated in Figure 2.",
"Formally, we consider two modalities, text and img, with entries denoted by x^text ∈ R^{n×d} and x^img W^img ∈ R^{p×d}, respectively.",
"The output of multimodal self-attention is computed as follows: c_i = sum_{j=1}^{n} α_{ij} (x^text_j W^V) (3), where α_{ij} is a weight coefficient computed by a softmax function: α_{ij} = softmax((x_i W^Q) (x^text_j W^K)^T / sqrt(d)) (4), and c ∈ R^{(n+p)×d} is the hidden representation of the words and the image.",
"At last layer, c is fed into sequence decoder to generate target sequence.",
"We can see that the hidden representations of the image are induced only from the words, but under the guide of image-aware attention.",
"The extracted spatial features of the image are not directly encoded in the model.",
"Instead, they adjust the attention of each word to compute the hidden representations of the image.",
"In each encoder layer we also employ residual connections between each layer as well as layer normalization.",
"We compare the performance of our model with two kinds of previous models: (1) sequence-to-sequence models trained only on text data (LSTM, Transformer).",
"(2) Previous works trained on both text and image data.",
"We evaluated the translation quality of our model in terms of BLEU (Papineni et al., 2002) and METEOR (Denkowski and Lavie, 2014), which have been used in most previous works.",
"We build and test our model on the Multi30k dataset (Elliott et al., 2016), which consists of two multilingual expansions of the original Flickr30k dataset referred to as M30k T and M30K C , respectively.",
"Multi30k contains 30k images, and for each image, M30k T has one of its English descriptions manually translated into German by a professional translator.",
"M30K C has five English descriptions and five German descriptions, but the German descriptions were crowdsourced independently from their English versions.",
"The training, validation, test sets of Multi30k contain 29k, 1014 and 1k instances respectively.",
"We use M30k T as the original training data and M30k C for building additional back-translated training data following Calixto et al. (2019).",
"We present our experiment results on English-German (En-De) Test2016.",
"We use an LSTM trained on the textual part of the M30k T dataset (De-En, the original 29k sentences) without images to build a back-translation model (Sennrich et al., 2016), and then apply this model to translate the 145k monolingual German descriptions in M30k C into English as additional training data.",
"We refer to this part of the data as the back-translated data.",
"We preprocess the data by tokenizing and lower-casing.",
"Word embeddings are initialized using pretrained 300-dimensional GloVe vectors.",
"We extract spatial image features from the last convolutional layer of ResNet-50.",
"The spatial features are 7 × 7 × 2048-dimensional vectors which are representations of local spatial regions of the image.",
"Our encoder and decoder both have 6 layers with 300-dimensional word embeddings and hidden states.",
"Table 1 (comparison results on the Multi30k test set, BLEU4/METEOR): LSTM 36.8/54.9; Transformer 37.8/55.3; IMGD (Calixto and Liu, 2017) 37.3/55.1; NMT SRC+IMG (Calixto et al., 2017) 36.5/55.0; Transformer+Att (Ive et al., 2019) 36.9/54.5; Del+obj (Ive et al., 2019) 38.0/55.6; VMMTF (Calixto et al., 2019) 37.6/56.0; Ours 38.7/55.7; with back-translated data: IMGD 38.5/55.9; NMT SRC+IMG 37.1/54.5; VMMTF 38.4/56.3; Ours 39.5/56.9.",
"We employ 10 heads here and dropout=0.1.",
"We used the Adam optimizer (Kingma and Ba, 2014)",
"with β1 = 0.9, β2 = 0.98,",
"and minibatches of size 32 or 128 (depending on whether the back-translated data are added).",
"Meanwhile, we increase the learning rate linearly for the first warmup_steps steps, and decrease it thereafter proportionally to the inverse square root of the step number.",
"We used warmup_steps = 8000.",
"A similar learning rate schedule is adopted in (Vaswani et al., 2017).",
"The results of all methods are shown in Table 1.",
"We can see that our Transformer baseline is comparable to most previous works. When trained on the original data, our model substantially outperforms the SoTA according to BLEU and achieves a competitive result according to METEOR.",
"Moreover, we note that our model surpasses the text-only baseline by more than 1 BLEU point.",
"It demonstrates that our model benefits a lot from the visual modality.",
"To further investigate our model's performance with more data, we also train the models with additional back-translated data, and the comparison results are shown in the lower part of Table 1.",
"We can see that almost all models improve with the additional training data, but our model obtains the largest improvement and achieves new SoTA results on all metrics.",
"This suggests that our model performs better on larger datasets.",
"Figure 3 depicts translations for two cases in the test set.",
"Colors highlight improvement.",
"Furthermore, we visualize the contributions of different local regions of the image in different attention heads, which shows our model can focus on the appropriate regions of the image.",
"For example, our model pays more attention to the building and the person in the first case, and thus the model understands that the person is working on the building rather than just standing there.",
"In the second case, most attention heads attend to the balance beam and the jean dress of the girl, avoiding errors in the translation.",
"To further study the influence of the individual components in our model, we conduct ablation experiments to better understand their relative importance.",
"The results are presented in Table 2.",
"Firstly, we investigate the effect of multimodal self-attention.",
"As shown in the second column (replace with self-attention) of Table 2,",
"if we simply concatenate the word vectors with the image features and then perform self-attention, we lose 0.6 BLEU and 0.4 METEOR.",
"Inspired by Elliott (2018), we further examine the utility of the image by the adversarial evaluation.",
"When we replace all input images with a blank picture, the performance of the model drops a lot.",
"When we replace all input images with a random image (the context of image does not match the description in the sentence pair), the model performs even worse than the text-only model.",
"The image here is effectively noise which distracts the translation.",
"In this paper, we propose the multimodal self-attention to consider the relative importance between different modalities in the MMT task.",
"The hidden representations of less important modality (image) are induced from the important modality (text) under the guide of image-aware attention.",
"The experiments and visualization show that our model can make good use of multimodal information and get better performance than previous works.",
"There are various multimodal tasks where multiple modalities have different relative importance.",
"In future work, we would like to investigate the effectiveness of our model in these tasks.",
"This work was supported by National Natural Science Foundation of China (61772036), MSRA Collaborative Research Program, and Key Laboratory of Science, Technology and Standard in Press Industry (Key Laboratory of Intelligent Press Media Technology).",
"We thank the anonymous reviewers for their helpful comments.",
"Xiaojun Wan is the corresponding author."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"result",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"result",
"abstain",
"objective",
"other",
"other",
"other"
] |
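The multimodal self-attention described in the row above (equations 1-4) can be sketched as follows: queries come from both word and image-region tokens, while keys and values come from the text only, so the image representations are induced from the words under image-aware attention. This is a single-head numpy sketch; the shapes and variable names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multimodal_self_attention(x_text, x_img, Wq, Wk, Wv):
    """Queries are computed from both words and image regions; keys
    and values are computed from the text only, so the p image rows of
    the output are induced from the n word representations."""
    d = x_text.shape[1]
    x_all = np.concatenate([x_text, x_img], axis=0)   # (n+p, d)
    q = x_all @ Wq                                    # (n+p, d)
    k = x_text @ Wk                                   # (n, d)
    v = x_text @ Wv                                   # (n, d)
    alpha = softmax(q @ k.T / np.sqrt(d), axis=-1)    # (n+p, n)
    return alpha @ v                                  # (n+p, d)
```

Note how the output has n + p rows, matching c ∈ R^{(n+p)×d} in the text, even though no image feature is ever used as a value.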
[
"Lexical substitution is the task of generating meaningful substitutes for a word in a given textual context.",
"Contextual word embedding models have achieved state-of-the-art results in the lexical substitution task by relying on contextual information extracted from the replaced word within the sentence.",
"However, such models do not take into account structured knowledge that exists in external lexical databases.",
"We introduce LexSubCon, an end-to-end lexical substitution framework based on contextual embedding models that can identify highly-accurate substitute candidates.",
"This is achieved by combining contextual information with knowledge from structured lexical resources.",
"Our approach involves:",
"(i) introducing a novel mix-up embedding strategy to the target word's embedding through linearly interpolating the pair of the target input embedding and the average embedding of its probable synonyms;",
"(ii) considering the similarity of the sentence-definition embeddings of the target word and its proposed candidates; and,",
"(iii) calculating the effect of each substitution on the semantics of the sentence through a fine-tuned sentence similarity model.",
"Our experiments show that LexSubCon outperforms previous state-of-the-art methods by at least 2% over all the official lexical substitution metrics on LS07 and CoInCo benchmark datasets that are widely used for lexical substitution tasks.",
"Lexical Substitution (McCarthy and Navigli, 2007) is the task of generating appropriate words which can replace a target word in a given sentence without changing the sentence's meaning.",
"The increased research interest in Lexical Substitution is due to its utility in various Natural Language Processing (NLP) fields including data augmentation, paraphrase generation and semantic text similarity.",
"Contextual word embedding models (such as ELMo (Peters et al., 2018) and BERT (Devlin et al., 2019)) have achieved state-of-art results in many NLP tasks.",
"These models are usually pre-trained on massive corpora and the resulting context-sensitive embeddings are used in different downstream tasks (Howard and Ruder, 2018).",
"Zhou et al. (2019) have achieved state-of-the-art results on the lexical substitution task by improving the BERT's standard procedure of the masked language modeling task.",
"However, the current state-of-the-art contextual models have yet to incorporate structured knowledge that exists in external lexical database into their prediction process.",
"These lexical resources could boost the model's performance by providing additional information such as the definitions of the target and candidate words (in order to ensure that the candidate word is semantically similar to the target word and not only appropriate for the sentence's context) or by enriching the proposed candidate word list so it will not only be based on the vocabulary of the contextual model.",
"In this paper, we present and publicly release a novel framework for the lexical substitution task.",
"Specifically,",
"(i) we are the first, to the best of our knowledge, to propose a novel mix-up embedding strategy that outperforms the previous state-of-the-art strategy of word embedding dropout for the input embedding of the target word in a contextual model (Zhou et al., 2019) for the task of predicting accurate candidate words;",
"(ii) we propose the combined usage of features from contextual embedding models and external lexical knowledge bases in order to determine the most appropriate substitution words without modifying the meaning of the original sentence, such as introducing a new gloss (definition) similarity metric which calculates the similarity of the sentence-definition embeddings of the target word and its proposed candidates;",
"(iii) we generate a highly accurate fine-tuned sentence similarity model by taking advantage of popular data augmentation techniques (such as back-translation; code available at https://github.com/gmichalo/LexSubCon), for calculating the effect of each candidate word on the semantics of the original sentence; and,",
"(iv) finally, we show that LexSubCon achieves state-of-the-art results on two popular benchmark lexical substitution datasets (McCarthy and Navigli, 2007; Kremer et al., 2014).",
"The lexical substitution task consists of two subtasks:",
"(i) generating a set of meaning preserving substitute candidates for the target word and",
"(ii) appropriately ranking the words of the set by their ability to preserve the meaning of the initial sentence (Giuliano et al., 2007; Martinez et al., 2007).",
"However, lexical substitution models can also be tested in a simpler problem where the set of substitute candidates is composed of human-suggested words and the task is to accurately rank the substitute words that are provided (Erk and Pad, 2010).",
"The authors in (Melamud et al., 2015b) proposed the use of a word2vec model which utilizes word and context embeddings to represent the target word in a given context.",
"Their model ranked the candidate substitutions by measuring their embedding similarity.",
"In (Melamud et al., 2016) the context2vec model was introduced where the context representation of the word was calculated by combining the output of two bidirectional LSTM models using a feed-forward neural network.",
"Peters et al. (2018) introduced contextualized word embeddings in a bidirectional language model (ELMo).",
"This allowed the model to change the embedding of a word based on its imputed meaning which is derived from the surrounding context.",
"Subsequently, Devlin et al. (2019) proposed the Bidirectional Encoder Representations from Transformers (BERT) which uses bidirectional transformers (Vaswani et al., 2017) to create context-dependent representations.",
"The authors in (Gar Soler et al., 2019) used ELMo in the lexical substitution task by calculating the cosine similarity between the ELMo embedding of the target word and all the candidate substitutes.",
"In addition, Zhou et al. (2019) achieved state-of-the-art results on the lexical substitution task by applying a dropout embedding policy to the target word embedding and by taking into account the similarity between the initial contextualized representation of the context words and their representation after replacing the target word by one of the possible candidate words.",
"An analysis of state-of-the-art contextual model on the lexical substitution task was presented in (Arefyev et al., 2020).",
"Finally, external knowledge from knowledge bases has been used to enhance the performance of deep learning models.",
"Sense-BERT (Levine et al., 2020) was pre-trained to predict the semantic class of each word by incorporating lexical semantics (from the lexical database WordNet (Miller, 1995)) into the model's pre-training objective.",
"Furthermore, Faruqui et al. (2015) and Bahdanau et al. (2017) used external knowledge (namely WordNet) in order to enhance word embeddings and to create more accurate representations of rare words.",
"In the lexical substitution task, a model aims to firstly generate a set of candidate substitutions for each target word and secondly to create an appropriate ranking of the elements of the candidate set.",
"In addition, there are two main conditions for a lexical substitute model to satisfy:",
"(i) to be semantically similar to the target word and",
"(ii) to be compatible with the given context (sentence) (Melamud et al., 2015b).",
"We present the LexSubCon framework which achieves state of the art results on the lexical substitution task by combining contextual information with knowledge from structured external lexical resources.",
"The architecture of LexSubCon is depicted in Figure 1. The key characteristic of LexSubCon is its capability of unifying different substitution criteria such as contextualized representation, definition and sentence similarity in a single framework in order to accurately identify suitable candidates for the target words in a specific context (sentence).",
"The standard BERT architecture (Devlin et al., 2019) can be used in the lexical substitution task by masking the target word and letting the model",
"propose appropriate substitute candidates that preserve the initial meaning of the sentence.",
"Zhou et al. (2019) argued that applying embedding dropout to partially mask the target word is a better alternative than masking the whole word.",
"This is because the model may generate candidates that are semantically different but appropriate for the context of the initial sentence.",
"Their experiments showed that this policy is indeed more beneficial than completely masking, or not masking, the target word.",
"However, in this paper we demonstrate that a mix-up embedding strategy can yield even better results.",
"The main disadvantage of dropout embedding is that it sets random positions in the embedding vector of the target words to zero.",
"We propose that by using external knowledge, we can obtain probable synonyms of the target word and use that knowledge in a mix-up scenario (Zhang et al., 2018) through linearly interpolating the pair of the target input embedding and the average embedding of its synonyms.",
"This allows the model to generate a new synthetic input embedding by repositioning the target embedding around the neighborhood of the embedding of its synonyms.",
"In order to obtain appropriate synonyms we use WordNet (Miller, 1995) which is an extensive lexical database where words are grouped into sets of synonyms (synsets).",
"In our experiments, the best performance was achieved when the list of synonyms was extracted from the complete set of synsets for each word as it minimizes the chance of having a synonym set that only includes the target word itself.",
"Finally, we use a mix-up strategy to calculate a new input embedding for the target word, as shown in equation 1: X̂_target = λ X_target + (1 − λ) X_synonyms (1), where X_target is the initial input embedding of the target word and X_synonyms is the average embedding of all the synonyms.",
"It should be noted that WordNet does not contain information about some words, such as pronouns, conjunctions, or nouns that are not commonly used in the English vocabulary.",
"To address this limitation, whenever a target word cannot be found in the WordNet database, we replace the mix-up strategy by injecting Gaussian noise to the input embedding of the target word.",
"This produces a similar effect as the mix-up strategy, since the target embedding is re-positioned around itself in the embedding space (equation 2): X̂_target = X_target + e (2), where e is a Gaussian noise vector with components e_i ~ N(μ_i, σ_i^2).",
"We use the BERT architecture to calculate the proposal score for each candidate.",
"The input embedding vectors pass through multiple attention-based transformer layers where each layer produces a contextualized embedding of each token.",
"For each target word x_t, the model outputs a score vector y_t ∈ R^D, where D is the length of the model's vocabulary.",
"We calculate the proposal score s_p for each candidate word x_c, using the score vector y_t of BERT's language modeling process, as the probability that the BERT model proposes the word x_c over all candidate words x'_c when the target word's sentence is provided as input: s_p(x_c) = exp(y_t[x_c]) / sum_{x'_c} exp(y_t[x'_c]) (3). 3.2 Gloss-Sentence Similarity Score: In the previous section, we analyzed how our model ranks candidate substitute words by calculating their individual proposal scores.",
"However, Zhou et al. (2019) and Arefyev et al. (2020) showed that the proposal score does not provide sufficient information about whether the substitute words will modify the sentence's meaning.",
"Thus, in this section, we present a new metric which ranks the candidate words by considering the gloss (a dictionary-style definition) of each word.",
"By extracting the appropriate information from the WordNet database, a list of potential glosses is created for each target or candidate word.",
"In addition, we can determine the most appropriate gloss based on the word and its specific context (sentence) by taking advantage of recent fine-tuned contextual models that have achieved state-of-the-art results in the Word Sense Disambiguation (WSD) task (Huang et al., 2019).",
"As the glosses are sentences (sequence of words) they can be represented in a semantic space through a sentence embedding generating model.",
"A ranking of each candidate word is produced by calculating the cosine similarity between the gloss sentence embedding of the target word and the gloss sentence embedding of each candidate word.",
"There are many methods for generating sentence embeddings, such as calculating the weighted average of its word embeddings (Arora et al., 2017).",
"We select the sentence embeddings of the stsb-roberta-large model (Reimers and Gurevych, 2019), which has been shown to outperform other state-of-the-art sentence embedding methods.",
"Given a sentence s , a target word x t and a candidate word x c , our model first identifies the most appropriate gloss g t for the target word given its context.",
"After replacing the target word with the candidate x c to create a new sentence s , the most appropriate gloss g c for the candidate word is also determined.",
"A gloss-similarity score s g for each candidate is then calculated as the cosine similarity between the two glosses-sentences embeddings.",
"We also chose to calculate the effect of each substitution on the semantics of the original sentence by calculating the semantic textual similarity between the original sentence s and the updated sentence s' (a sentence in which we have replaced the target word with one of its substitutions).",
"In order to accurately calculate a similarity score between s and s', we fine-tune a semantic textual similarity model based on the stsb-roberta-large model (Reimers and Gurevych, 2019), using the training portion of the dataset to create pairs of sentences between the original sentence and an updated sentence in which we have substituted the target word with one of its proposed candidates.",
"Using the methods described in section 3.2, we can identify the most appropriate synset (from WordNet) for each target word and create a new pair of sentences between the original sentence and an updated sentence in which we have replaced the target word with the synonyms of the previously mentioned synset.",
"However, due to the limited size of the training dataset, our model is still not provided with enough training data in order to be fully fine-tuned.",
"This is the reason why we employ a data augmentation technique in order to produce the examples needed for this task.",
"Specifically, we create a back-translation mechanism in order to generate artificial training data.",
"Back-translation or round-trip translation is the process of translating text into another language (forward translation) and then translating back again into the original language (back translation) (Aiken and Park, 2010).",
"Back-translation has been used in different tasks in order to increase the size of training data (Sennrich et al., 2016; Aroyehun and Gelbukh, 2018).",
"In our case, we provide the initial sentence s to the back-translation module, and it produces a slightly different 'updated' sentence s_u.",
"For the s_u sentences that still contain the target word, we can create pairs of sentences between s_u and an alternative version s'_u in which the target word is substituted with one of the candidate words or synonyms mentioned in the above paragraph.",
"The main disadvantage of this technique is that it may return the initial sentence without any changes.",
"In this case, we add a second translation level where the initial sentence is translated into two different languages before being translated back.",
"In our experiments we have also included the substitute candidate validation metric from (Zhou et al., 2019), as it has been shown to have a positive effect on the performance of a lexical substitution model.",
"The substitute candidate validation metric is represented as the weighted sum of the cosine similarities between the contextual representation of each token in the initial and in the updated sentence where the weight of the cosine similarity of the token i is calculated as the average self-attention score of all heads in all layers from the token of the target word to token i .",
"As mentioned in (Zhou et al., 2019), this metric evaluates the influence of the substitution on the semantic of the sentence.",
"Finally, LexSubCon uses a linear combination of the above mentioned features to calculate the final score for each candidate word.",
"The candidates for each target word are extracted using the external lexical resource of WordNet and the BERT-based lexical substitution approach where the model provides probabilities for each candidate based on the context (sentence).",
"We create a list of candidates based on the synonyms, the hypernyms, and hyponyms of each target word that could be identified in WordNet.",
"In addition, we include in the list the candidate words with the highest probability that can be identified using the mix-up strategy that we described in section 3.1.",
"We chose to include candidates from WordNet because we do not want our model to include candidate words only from the BERT vocabulary, and we also include candidate words from a BERT-based model because target words may not be included in WordNet.",
"We evaluate LexSubCon on the English datasets SemEval 2007 (LS07) (McCarthy and Navigli, 2007) and Concepts-In-Context (CoInCo) (Kremer et al., 2014), which are the most widely used datasets for the evaluation of lexical substitution models.",
"(i) The LS07 dataset is split into 300 train and 1710 test sentences where for each of the 201 target words there are 10 sentences (extracted from http://corpus.leeds.ac.uk/internet.html).",
"The gold standard was based on manual annotation where annotators provided up to 3 possible substitutes.",
"(ii) The CoInCo dataset consists of over 15K target word instances (based on texts provided in the Open American National Corpus) where 35% are training and 65% are testing data.",
"Each annotator provided at least 6 substitutes for each target word.",
"Our experiments with all datasets are consistent with their intended use, as they were created for research purposes.",
"We manually investigated the data for content that names individuals or is offensive; however, we did not find any indication of either.",
"In order to have a fair comparison with the previous state-of-the-art models, for both datasets we used their processed versions as used in (Melamud et al., 2015b, 2016).",
"All-ranking task: In this task no substitution candidates are provided.",
"We use the official metrics that the organizers provided in the original lexical substitution task of SemEval-2007.",
"These were best and best-mode which validate the quality of the model's best prediction and both oot (out-of-ten) and oot-mode to evaluate the coverage of the gold substitute candidate list by the 10-top predictions.",
"We also use P recision @1 to have a complete comparison with the model in (Zhou et al., 2019).",
"Candidate ranking task: In this task the list of candidates are provided and the goal of the model is to rank all the candidate words.",
"For the candidate ranking task we follow the policy of previous works and construct the candidate list by merging all the substitutions of the target lemma and POS tag over the whole dataset.",
"(Dataset licenses and resources: LS07: https://tinyurl.com/semeval-license; CoInCo: CC-BY-3.0-US; SemEval-2007 task data: www.dianamccarthy.co.uk/files/task10data.tar.gz.)",
"For measuring the performance of the model we use the GAP score (Kishida, 2005), a variant of MAP (Mean Average Precision).",
"Following (Melamud et al., 2015b), we discard all multi-words from the gold substitutes list and remove the instances that were left with no gold substitutes.",
"We use the uncased BERT large model (Devlin et al., 2019) for the calculation of the proposal score and candidate validation score.",
"For the identification of the most appropriate glosses we employ the pre-trained model of (Huang et al., 2019), which has achieved state-of-the-art results on the Word Sense Disambiguation (WSD) task.",
"Finally, the sentence-similarity metric is computed by fine-tuning the stsb-roberta-large model of (Reimers and Gurevych, 2019) and by employing the OPUS-MT models (Tiedemann and Thottingal, 2020) (namely opus-mt-en-romance, opus-mt-fr-es and opus-mt-romance-en) for the creation of the back-translated sentences.",
"We use the LS07 trial set for training the sentence similarity metric model (for 4 epochs) and for fine-tuning the parameters of our framework based on the best score.",
"Empirically, the parameter of the mix-up strategy was set to 0.25 and the weights to 0.05, 0.05, 1, and 0.5 for the proposal score, gloss-sentence similarity score, sentence similarity score and candidate validation score respectively (with the search space for all the parameters being [0, 1]).",
"Finally, for the Gaussian noise we choose a mean value of 0 and standard deviation 0.01.",
"We propose 30 candidates for each target word.",
"In order to achieve more robust results, we run LexSubCon on five different (random) seeds and we provide the average scores and standard deviation.",
"All the contextual models are implemented using the transformers library (Wolf et al., 2019) on PyTorch 1.7.1.",
"All experiments are executed on a Tesla K80 GPU with 64 GB of system RAM on Ubuntu 18.04.5 LTS.",
"It should be noted that LexSubCon contains 1,136,209,468 parameters.",
"To enable direct comparison, and to isolate gains due solely to the post-processing strategy that each model uses (which has the potential to change its performance (Arefyev et al., 2020)), we opt to reproduce and use the same strategy for the tokenization of the target words as Bert_{sp,su} (Zhou et al., 2019).",
"We focus our comparison on Bert_{sp,su}, as it has achieved state-of-the-art results on both benchmark datasets.",
"The results of LexSubCon and of the previous state-of-the-art methods on both the LS07 and CoInCo benchmark datasets are presented in Table 1. LexSubCon outperformed the previous methods across all metrics on both datasets, as all features contribute positively to its performance (see the ablation details in section 4.5) by encouraging LexSubCon to take different substitution criteria into consideration.",
"The standard deviation of the results of LexSubCon is not zero due to the fine-tuning process of the sentence similarity model.",
"However, the results indicate that there are no large fluctuations.",
"LexSubCon and our implementation of Bert_{sp,su} had running times of 74K and 30K seconds, respectively, for LS07, and 580K and 266K seconds for the CoInCo dataset.",
"In order to evaluate the mix-up strategy for the input embedding of the proposal model, we study the effect of different input embedding policies.",
"The results of this study are listed in Table 2. It can be observed that even the simpler strategy of injecting Gaussian noise into the input embedding outperformed the standard policy of masking the input word.",
"Note that the method proposed by (Zhou et al., 2019) was implemented to the best of our abilities, to be as faithful to the original work as possible, using elements of code that the method's authors kindly provided upon request.",
"However, the authors could not make the complete original code available to us.",
"These results indicate that a contextual model needs information from the embedding of the target word in order to predict accurate candidates but it may over-rely on this information when it is provided with an intact input embedding.",
"Furthermore, the mix-up strategy outperformed all the other policies, and specifically the dropout embedding strategy (Zhou et al., 2019), as the mix-up strategy re-positions the target embedding around the neighborhood of the embeddings of its synonyms and does not erase a part of the embedding that the model can learn from.",
"In order to evaluate the effect of each feature on the performance of LexSubCon, we conducted an ablation study.",
"The results are presented in Table 3. As Table 3 shows, LexSubCon achieved its best performance when it has access to information from all the features described in section 3. By testing the performance of the individual features, we observe that the gloss sentence similarity feature achieves the worst performance out of all the features.",
"This is likely because many candidate words cannot be identified in WordNet, and thus we assign a zero value to their gloss sentence score.",
"Another factor is that the models used to select the most appropriate gloss for each word may introduce noise into the gloss-similarity scoring process, as they may select suboptimal glosses.",
"We also evaluate LexSubCon on the candidate ranking task for both the LS07 and CoInCo datasets.",
"In this sub-task the candidate substitution words are provided and the main task of the system is to create the most appropriate ranking of the candidates.",
"Table 4 provides the evaluation results on the candidate ranking task for LexSubCon and the previous state-of-the-art models.",
"We report results both on the test set and on the entire dataset (trial+test) in order to have a complete comparison, as some of the previous state-of-the-art models were evaluated on the entire datasets and some only on the test portion.",
"It can be observed that all the features have a positive effect on the performance of LexSubCon thus allowing it to outperform the previous state-of-the-art methods.",
"Specifically, the results demonstrate the positive effect of the features on accurately ranking a list of potential candidates, as LexSubCon outperforms the previous methods even in the scenario where all models are provided with the same substitution candidate list.",
"In Table 5, we provide different examples of target words and their top lexical substitutes proposed by LexSubCon and the BERT based model in order to demonstrate the effect of external lexical resources on the performance of a contextual model.",
"As it can be observed, for the target word terrible , the BERT based model proposes a candidate word ( positive ) which may fit in the sentence but has the opposite meaning of the target word.",
"However, LexSubCon provides semantically similar candidates by using information from different signals (e.g., comparison of the definitions of the words).",
"In addition, for the target word return, our model identifies appropriate candidates that are not in the vocabulary of the contextual model (the word regress) by introducing candidates from an external lexical database.",
"These examples showcase that enriching contextual models with external lexical knowledge can assist the model to provide more accurate candidates.",
"We evaluate the performance of LexSubCon in the context of textual data augmentation.",
"Specifically, we conduct experiments on a popular benchmark text classification task: the English subjectivity/objectivity dataset (SUBJ) (Pang and Lee, 2004; license: https://tinyurl.com/t-license).",
"The SUBJ dataset contains 5000 subjective and 5000 objective processed sentences (movie reviews).",
"We train the LSTM model (with the same hyperparameters) which was used in (Wei and Zou, 2019) to measure the effect of different data augmentation techniques.",
"We compare our method with previous state-of-the-art lexical substitution models and with other popular textual data augmentation techniques:",
"(i) the back-translation technique (described in section 3.3)",
"(ii) the EDA framework (Wei and Zou, 2019) which utilizes four operations of Synonym Replacement and Random Insertion/Swap/Deletion in order to create new text.",
"Following the data generation algorithm in (Arefyev et al., 2020), LexSubCon creates new examples by sampling one word in each sentence, generating the substitute list for this word, and sampling one substitute, with probabilities corresponding to the substitute scores (normalized by dividing them by their sum), to replace the original word.",
"Figure 2 demonstrates how data augmentation affects the classification depending on the size of the training set (Arefyev et al., 2020; Wei and Zou, 2019).",
"As expected, the effect of each text augmentation technique on the performance of the model becomes more significant as the size of the training set is reduced.",
"Figure 2 also shows that the data created with lexical substitution have a more positive effect on the performance of the model than the other data augmentation techniques: back-translation may produce text that does not follow the syntactic rules of the target language, and the EDA framework may create examples that confuse the model by changing the structure of the sentence through the random insertion and swapping of words.",
"Finally, since LexSubCon can create more accurate substitution candidates than the standard BERT model and the Bert_{sp,su} model, the texts created with LexSubCon have a more positive effect on the model's performance.",
"This paper presents LexSubCon, an end-to-end lexical substitution framework based on contextual embedding models.",
"LexSubCon establishes a new mix-up embedding strategy that outperforms the previous SOTA strategy of word embedding dropout for the embedding of the target word for the task of predicting accurate candidate words.",
"LexSubCon introduces the combined usage of features from contextual embedding models and external lexical knowledge bases in order to calculate accurately the semantic similarity between a target word and its candidates.",
"We confirm that these features improve LexSubCon's performance, as it outperforms other state-of-the-art methods on two benchmark datasets.",
"As for future work, we plan to address the limitations of this study including:",
"(i) examining the effect of using other models as the basis of our features (e.g. Albert (Lan et al., 2020));",
"(ii) exploring other candidate features for the ranking of the candidates (e.g. parser information (Szarvas et al., 2013a));",
"(iii) testing LexSubCon on datasets in other languages, using multilingual lexical databases (e.g. MultiWordNet (Pianta et al., 2002) or BalkaNet (Oflazer et al., 2001)), to further investigate the model's general applicability.",
"Lexical substitution can be useful in various natural language processing (NLP) tasks such as textual data augmentation, paraphrase generation and text simplification.",
"The results that we present in this paper suggest that contextual word embedding models, such as our framework (LexSubCon), can be a valuable tool for providing accurate substitution candidates that can be further used in a variety of downstream tasks.",
"We believe that there are many benefits to using contextual embedding models.",
"For example, LexSubCon can be used as a data augmentation tool to provide artificial training data for tasks where the lack of sufficient training data may hurt the performance of the model.",
"However, there are potential risks of over-relying on any lexical substitution tool.",
"Particularly, a lexical substitution model can unintentionally change the meaning of the original text thus leading to erroneous conclusions."
] | [
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"other",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"other",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"result",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain"
] |
[
"In recent years, neural models have often outperformed rule-based and classic Machine Learning approaches in NLG.",
"These classic approaches are now often disregarded, for example when new neural models are evaluated.",
"We argue that they should not be overlooked, since for some tasks, well-designed non-neural approaches achieve better performance than neural ones.",
"In this paper, the task of generating referring expressions in linguistic context is used as an example.",
"We examined two very different English datasets ( WEBNLG and WSJ ), and evaluated each algorithm using both automatic and human evaluations.",
"Overall, the results of these evaluations suggest that rule-based systems with simple rule sets achieve on-par or better performance on both datasets compared to state-of-the-art neural REG systems.",
"In the case of the more realistic dataset, WSJ , a machine learning-based system with well-designed linguistic features performed best.",
"We hope that our work can encourage researchers to consider non-neural models in future.",
"Natural Language Generation (NLG) is concerned with the generation of natural language text from non-linguistic input (Gatt and Krahmer, 2018).",
"One step in a classic generation pipeline (Reiter and Dale, 2000) is Referring Expression Generation (REG; see Krahmer and van Deemter (2019) for an overview).",
"REG has important practical value for commercial natural language generation (Reiter, 2017), computer vision (Mao et al., 2016), and robotics (Fang et al., 2015), for example.",
"It has also been used as a tool to understand human language use (van Deemter, 2016).",
"REG contains two different problems.",
"One is to find a set of attributes to single out a referent from a set (also called one-shot REG).",
"The other is to generate referring expressions (REs) to refer to a referent at different points in a discourse (Belz and Varges, 2007).",
"We will focus on the latter task.",
"We call this the REG-in-context task.",
"In earlier works, REG is often tackled in two steps (Henschel et al., 2000; Krahmer and Theune, 2002).",
"The first step decides the form of an RE.",
"For example, whether a reference should be a proper name (Marie Skłodowska-Curie), a description (the physicist), or a pronoun (she) at a given point in the context.",
"The second step is concerned with content selection, i.e., the different ways in which a referential form can be realised.",
"For example, to generate a description of Marie Curie , the REG system decides whether it is sufficient to mention her profession (i.e., the physicist ) or whether it is better to mention her nationality as well (i.e., a Polish-French physicist ).",
"Thanks to the rapid development of deep learning techniques, recent NLG models are able to generate RE in an End2End (E2E) manner, i.e., to tackle the selection of form and content simultaneously (Castro Ferreira et al., 2018a; Cao and Cheung, 2019; Cunha et al., 2020).",
"The task of End2End (E2E) REG was proposed by Castro Ferreira et al. (2018a), who extracted a corresponding corpus from the WebNLG corpus (Castro Ferreira et al., 2018b).",
"Grounding on the WEBNLG dataset, they proposed a neural REG system built on a sequence-to-sequence with attention model.",
"Their automatic and human evaluation results suggested that neural REG systems significantly outperform rule-based and feature-based machine learning (ML) baselines.",
"However, it can be argued that Castro Ferreira et al. did not use very strong baselines for their comparison: OnlyName is a rule-based system that always generates a proper name for a given entity, and Ferreira is a feature-based model that uses Naive Bayes with only 3 simple features.",
"(We refer to this extracted REG corpus as WEBNLG.)",
"We present several rule-based and feature-based baselines to examine how neural models perform against well-designed non-neural alternatives.",
"Note that a well-designed model is not necessarily complex.",
"For example, it can be a rule-based system with one or two simple, well-designed rules.",
"Since one of the advantages of neural E2E models is that they require little effort for feature engineering, we used two types of baselines, namely models that require minimal expert effort and models that use more demanding (but linguistically well-established) rules or features.",
"Therefore, our main research question is: Do state-of-the-art neural REG models always perform better than rule-based and machine learning-based models?",
"To answer this question fairly, we consider the amount of resources used by each model.",
"For example, the neural models require fewer human resources when it comes to linguistic expertise and annotation, but they require input from Deep Learning experts.",
"Resources such as computing power and data needs should also be considered.",
"Another issue with previous studies concerns the datasets that were used: in WEBNLG , approximately 99.34% of entities in the test set also appear in the training set; consequently, evaluations using WEBNLG do not take unseen entities into consideration.",
"Furthermore, since many sentences in WEBNLG are paraphrases of one another, evaluating neural models on WEBNLG alone may overestimate their performance.",
"Castro Ferreira et al. (2019) recently extended WEBNLG to include unseen domains that contain many unseen entities, and Cunha et al. (2020) have developed new models to handle them.",
"Their test set has two subsets: one consists of documents 99.34% of whose entities are seen , while the other consists of documents 92.81% of whose entities are unseen .",
"This arguably makes the data in WEBNLG unrealistic (see Section 2 for discussion).",
"Therefore, we created what we believe to be a more realistic dataset based on the Wall Street Journal (WSJ) portion of the OntoNotes corpus (Hovy et al., 2006; Weischedel et al., 2013).",
"The human evaluation in Cunha et al. (2020) showed a slightly different result: the OnlyName model performed as well as the neural REG models in terms of fluency, grammaticality, and adequacy.",
"However, since their human evaluation involved only two subjects, these outcomes need to be approached with caution.",
"We used version 1.5 of the WEBNLG dataset, available at https://github.com/ThiagoCF05/webnlg.",
"We evaluate all models on both WEBNLG and WSJ, using automatic and human evaluation experiments.",
"The human experiments included a total of 240 participants and 16920 judgments.",
"This paper is structured as follows: in Sections 2 and 3, we describe the datasets used and the REG models.",
"In Section 4, we provide a detailed description of our automatic and human evaluations.",
"In Sections 5 and 6, we compare the results across different dimensions and make suggestions for future studies.",
"The code for reproducing the results in this article can be found at: https://github.com/a-quei/neuralreg-re-evaluation.",
"This section explains the REG-in-context task and the two English datasets used to conduct the experiments.",
"Given a text whose REs have not yet been generated, and given the intended referent for each of these REs, the REG-in-context task is to build an algorithm that generates all these REs.",
"Consider the delexicalised text in Table 1. Given the entity AWH_Engineering_College, REG selects an RE based on the entity, its pre-context (AWH_Engineering_College is in Kuttikkattoor, India in the state of Kerala.), and its post-context (has 250 employees and Kerala is ruled by Kochi. The Ganges River is also found in India.).",
"Gardent et al. (2017) introduced the WEBNLG corpus for evaluating NLG systems.",
"Using crowd-sourcing, each crowdworker was asked to write a description for a given Resource Description Framework (RDF) triple (Table 1).",
"The number of triples varied from 1 to 7. This corpus was later enriched and delexicalised (Castro Ferreira et al., 2018a,b) to fit the REG-in-context task.",
"Castro Ferreira et al. (2019) further extended WEBNLG and divided the documents into test sets seen (where all data are from the same domains as the training data) and unseen (where all data are from different domains than the training data).",
"As a result, almost all entities from the seen test set appear in the training set (9580 out of 9644), while only a few entities from the unseen test set do (688 out of 9644).",
"(The OntoNotes corpus is distributed by the Linguistic Data Consortium (LDC): https://catalog.ldc.upenn.edu/LDC2013T19.)",
"Note that the maximum number of triples in the unseen set is five.",
"So, one would expect the data in the unseen set to be less complex than the seen data.",
"We used version 1.5 of WEBNLG , which contains 67,027, 8278, and 19,210 REs in the training, development, and test sets.",
"From the point of view of the present study, WEBNLG has some notable shortcomings.",
"For a start, it consists of rather formal texts that may not reflect the everyday use of REs, and in which very simple syntactic structures dominate.",
"The texts in WEBNLG also stand out for other reasons.",
"For example, the texts are extremely short, with an average length of only 1.4 sentences.",
"Consequently, as many as 85% of the REs are first-mentions, while 71% of the REs are proper names.",
"Finally, in any given test sample, either more than 90% of the entities are seen or more than 90% are unseen.",
"Realistic data should contain a reasonable amount of mixtures of seen and unseen entities.",
"For all these reasons, we decided to test all algorithms on a second corpus as well.",
"Using the Wall Street Journal portion of the OntoNotes corpus, we constructed a new English REG dataset, following a similar approach as Castro Ferreira et al. (2018a).",
"This corpus ( WSJ ) has very different characteristics from the WEBNLG .",
"The WSJ consists of 582 newspaper articles containing 20,186, 2362 and 2781 REs in the training, development and test sets, respectively.",
"The average length of the documents is 1189 words, and each document consists of 25 sentences on average.",
"Furthermore, 23% of the instances are first-mention REs and the rest are subsequent mentions.",
"For each RE, we created its pre- and post-context at the local sentence level and added K preceding and following sentences to the local context.",
"We refer to K as the context length and set K =2 for this experiment.",
"To create the dataset, we first delexicalised the REs.",
"The dataset contains nearly 8000 coreferential chains.",
"The REs in each chain were replaced with corresponding delexicalised expressions (similar to Table 1).",
"For delexicalisation, we used (1) the POS-tag information, (2) the fine-grained annotation of the referential forms, and (3) the entity type of each referent.",
"To delexicalise human REs, for example, we looked for concise but informative REs, such as the combination of first and last names (e.g., Barack Obama).",
"When such an expression was found in a coreferential chain, its delexicalised version (tokens being separated by underscores, e.g., Barack_Obama ) was assigned to all REs in the chain.",
"We then moved on to the next tag.",
"Below is the order in which the human referents were searched and delexicalised: [ firstname-lastname ] , [ title-firstname-lastname ] , [ modified firstname-lastname ] , [ title-lastname ] , [ lastname ] , [ modified-lastname ] , [ firstname ] .",
"For more details on the preparation of the WSJ documents and a delexicalised example, see Appendix A.",
"In this section, we introduce the rule-based, ML-based, and SOTA neural REG models.",
"The term ML-based here refers to models that require feature engineering and follow a pipeline architecture.",
"The reason for choosing concise and informative REs for delexicalisation is that these labels are also used in the realisation process.",
"Rule-based models have been widely used for generating REs in context (McCoy and Strube, 1999; Henschel et al., 2000).",
"Here, we build rule-based systems for binary classification into two classes, namely pronominal and non-pronominal REs.",
"If an entity r is discourse-old and has no competitors, it is realised as a pronoun; otherwise, r is realised as a non-pronominal RE.",
"An entity r is defined as discourse-old if it has been mentioned in the previous context.",
"A competitor is an entity that can be referred to with the same pronoun as r .",
"We also build a dictionary that stores the pronouns associated with each entity.",
"For seen entities, we extract pronouns from the training data.",
"If an entity has multiple possible pronominal forms, we extract the most frequent one.",
"For unseen entities, we determine their pronominal forms based on their meta-information, which is also used in E2E systems (Cunha et al., 2020).",
"For example, if an entity in WEBNLG has the type PERSON and the gender FEMALE , we assign she to this entity.",
"For the surface realisation of each entity, we realise its non-pronominal form by replacing the underscores in the entity label with whitespaces (e.g., Adenan_Satem to Adenan Satem ), as previously described by Castro Ferreira et al. (2018a).",
"We realise the pronominal forms according to Castro Ferreira et al. (2016) by using the grammatical role of each entity (e.g., if the entity is in the object position, then we realise he as him ).",
"Linguistically-informed Rule-based System ( RREG-L ).",
"We build RREG-L by adopting a set of pronominalisation rules from Henschel et al. (2000).",
"The fundamental concepts used by these rules are the idea of local focus , which is a simpler implementation of the Centering Theory (Grosz et al., 1995), and parallelism , i.e., whether r and its antecedent in the previous sentence have the same grammatical role (Henschel et al., 2000).",
"RREG-L is described in detail in Appendix B.",
"The GREC Shared Task (Belz et al., 2010) triggered a plethora of ML-based models for REG-in-context (e.g., Greenbacker and McCoy, 2009; Hendrickx et al., 2008).",
"These models differ from each other in the features and the ML algorithms they have used.",
"In this study, we build ML-based REG models using CatBoost (Prokhorenkova et al., 2018).",
"It predicts whether a reference is realised as a pronoun, proper name, or description.",
"Once the referential form is predicted, the next step is to select the content.",
"The most frequent variant (with the same referential form as the predicted class) is selected in the training corpus given the referent and the full set of features.",
"If no matching RE is found, a back-off method (Castro Ferreira et al., 2018a) is used, removing one feature at a time in order of importance.",
"The order is calculated using the inherent feature importance method of the CatBoost algorithm.",
"Depending on which features are used, we build two variants of ML-based models, namely ML-S and ML-L .",
"The detailed list of the features used in these models can be found in Appendix C.",
"Features obtained with minimum effort (ML-S).",
"To find out what the upper bound is for a system that does not require any additional linguistic information or any additional annotation effort, we developed ML-S .",
"In this model, we have relied only on the features that can be extracted directly from the corpus.",
"Therefore, features such as grammatical role (which requires a syntactic parser) are not included in this model.",
"Linguistically Informed Features ( ML-L ).",
"To evaluate the upper bound performance of ML-based systems, we developed ML-L with the features that could affect the choice of referential form and could improve the overall accuracy of the REG systems suggested by the previous linguistic and computational studies (Ariel, 1990; Gundel et al., 1993; Brennan, 1995; Arnold and Griffin, 2007; Fukumura and Van Gompel, 2011; Kibrik et al., 2016; von Heusinger and Schumacher, 2019; Same and van Deemter, 2020).",
"For example, we included features encoding grammatical role, recency, gender, and animacy in ML-L .",
"Note that ML-L makes full use of the syntactic information 6 and entity meta-information (e.g., GENDER and TYPE which are also used by both the rule-based systems and the neural models).",
"A limitation of the rule-based and ML-based models mentioned above is that they are not able to handle situations where an RE form (e.g., a proper name) can have multiple realisations, e.g., Lady Gaga/Stefani Germanotta.",
"End2End NeuralREG can address this by generating REs from scratch.",
"This study examines three NeuralREG systems that have been developed to deal with unseen entities as well.",
"All of them were developed using the sequence-to-sequence with attention model (Bahdanau et al., 2014).",
"ATT+Copy .",
"Cunha et al. (2020) proposed using three bidirectional LSTMs (Hochreiter and Schmidhuber, 1997) to encode a pre-context, a post-context, and the proper name of an entity (i.e., the entity label with underscores replaced by whitespaces) into three hidden vectors h(pre), h(post) and h(r), respectively.",
"An auto-regressive LSTM-based decoder generates REs based on context vectors.",
"To handle unseen entities, Cunha et al. used the copy mechanism, which allows the decoder to copy words from the contexts directly as output.",
"ATT+Meta .",
"ATT+Meta (Cunha et al., 2020) used meta information of each entity to improve the quality of the generated REs.",
"In each decoding step t, the context vector v(c)_t is concatenated with meta information embeddings before being fed to the decoder.",
"In WEBNLG, the meta information comprises the entity type embedding v(type) and the gender embedding v(gender); in WSJ, in addition to v(type) and v(gender), there is also a plurality embedding v(pl).",
"ProfileREG .",
"Cao and Cheung (2019) made ProfileREG to leverage the content of entity profiles extracted from Wikipedia.",
"More specifically, instead of encoding the proper name of each entity, ProfileREG asks the entity encoder to encode the whole entity's profile to obtain h ( r ) .",
"Note that since profiles of entities in WSJ are not accessible, we evaluate ProfileREG only on WEBNLG .",
"We evaluated all the systems described in 3 on both WEBNLG and WSJ using automatic and human evaluations.",
"We implemented the neural models based on the code of Cunha et al. (2020) and Cao and Cheung (2019) 7 .",
"For WEBNLG , we used their original parameter setting, while for WSJ , we tuned the parameters on the development set and used the best parameter set.",
"To determine the optimal context length K of WSJ , we varied K from 1 to 5 sentences before and after the target sentence, then tested ATT+Meta on the development set with the different K contexts.",
"It reaches the best performance when K = 2 .",
"Metrics.",
"Following Cunha et al. (2020), we evaluated REG systems from 3 angles.",
"(1) RE Accuracy and String Edit Distance (SED, Levenshtein, 1966) were used to evaluate the quality of each generated RE.",
"(2) After adding the REs to the original document, BLEU (Papineni et al., 2002) and Text Accuracy were used to evaluate the output text.",
"(3) Precision , recall , and F1 score were used to assess pronominalisation.",
"Results of WEBNLG .",
"Table 2 depicts the results of WEBNLG 8 .",
"Overall, the classic ruleand ML-based models performed better than neural models, while neural models did a better job on pronominalisation.",
"For generating REs, ML-L had the best performance, as it obtained the highest RE 7 ATT+Copy and ATT+Meta : github.com/ rossanacunha/NeuralREG ; and ProfileREG : github.com/mcao610/ProfileREG .",
"8 Note that there is a discrepancy between our replication results and the results of Cunha et al. (2020).",
"The reason for this difference is that we found a bug in the code for preprocessing provided by the original paper and fixed it after consultation with Cunha et al. 5558 Model RE Acc.",
"accuracy and BLEU scores and the second best SED and text accuracy score.",
"For pronominalisation, ProfileREG yields the best performance, followed by RREG-S.",
"We were surprised to find that the simplest rule-based system, RREG-S , performs remarkably well.",
"It not only defeats the linguistically informed, rule-based RREG-L , but also outperforms the SOTA neural models ATT+Copy and ATT+Meta on both RE generation and pronominalisation.",
"Table 3 shows the breakdown of the seen and unseen subsets.",
"The SOTA neural models (i.e., ATT+Copy , ATT+Meta , and ProfileREG ) have the top 3 performance on seen data, and the worst RE generation performance (i.e., RE Acc., SED, BLEU, and Text Acc.) on unseen data.",
"The ML-based models achieve the fourth and fifth best performance on seen data, and lower performance (but not as low as the neural models) on unseen data.",
"The nature of WEBNLG could explain this drop in performance on unseen data: the models may have limited ability to handle unseen entities, for instance, because they fail to conduct domain transfer (remember that unseen data comes from different domains than seen data).",
"Since rule-based systems do not rely on training data, this explanation does not apply to them, which explains why they did not show the same drop in performance.",
"In fact, they performed even better on unseen data, possibly because unseen data contained fewer triples than seen data (see 2).",
"Concretely, rule-based systems have lower REG accuracy but higher pronominalisation accuracy on unseen data compared to seen data.",
"Additionally, ML-based models have low performance in the pronominalisation of unseen entities.",
"The pronominalisation accuracy of the rule-based models is based on a 2-way distinction between a pronominal and a non-pronominal form, while the ML-based models make a 3-way distinction between a pronoun, a proper name and a description.",
"Another factor that might have lowered the performance of the ML models is the annotation practices in WEBNLG .",
"Since these models are data-driven, the quality of the annotations directly affects their performance.",
"It appears that whenever a (nominal) RE starts with a determiner, it is marked in WEBNLG as description ; otherwise, it is marked as proper name .",
"For instance, United States is marked as a proper name, while The United States is wrongly marked as a description.",
"To allow comparison with previous work, we have not corrected the annotations, but it is important to keep in mind that this issue can cause ML-based models to underperform.",
"Results of WSJ .",
"Table 4 shows the results of WSJ .",
"Once again, ML-L performs best both in RE generation and in pronominalisation, outperforming the other models by a large margin.",
"RREG-L outperforms RREG-S on WSJ on all evaluation metrics, which could be seen as confirmation of our hunch that WSJ contains different, and potentially more naturalistic texts than WEBNLG (see 2.2).",
"accuracy.",
"Also, the inclusion of meta-information significantly boosts the recall of pronominalisation comparing ATT+Copy with ATT+Meta .",
"Table 10 in Appendix D shows an original text and different outputs generated by the WSJ models.",
"Materials.",
"For WEBNLG seen entities, we randomly sampled 4 instances from each triple size group of 2-7 from the test set.",
"In the case of the unseen data, we randomly chose 6 instances from size groups of 2-5.",
"In this way, we obtained a total number of 48 reference instances (24 seen and 24 unseen).",
"In addition to each reference instance, we selected its 7 different versions generated by the models (3 neural, 2 ML-based and 2 rule-based models).",
"This yields a total of 384 items (48 8).",
"Design.",
"The 384 items were randomly distributed into 12 lists of 32 items.",
"Each list was rated by 10 participants.",
"Participants were asked to rate each text for its fluency (does the text flow in a natural, easy to read manner?), grammaticality (is the text grammatical (no spelling or grammatical errors)?) and clarity (does the text clearly express the data in the table?) on a 7-point Likert scale anchored by 1 (very bad) and 7 (very good).",
"The definition of each criterion was taken from Castro Ferreira et al. (2018a).",
"Participants.",
"We used Amazon Mechanical Turk (MTurk) for human evaluation.",
"We restricted MTurk workers to those located in the United States, with an approval rating of 95% and 1,000 or more HITs approved.",
"We rejected workers if they: (1) gave human-produced descriptions a score lower than 2 more than 3 times; or (2) gave scores with a standard deviation less than 0.5.",
"120 workers (12 lists 10 workers) participated, providing us with 11520 judgements (384 items 3 criteria 10 judgements/item).",
"The participants were 80 males, 36 females, and 4 oth-ers/unanswered, with an average age of",
"37. Results.",
"Table 5 shows the results of the human evaluation WEBNLG .",
"Few of the differences reach significance (using Wilcoxon's signed-rank test with Bonferroni correction 9 ), suggesting that WEBNLG may be ill-suited for differentiating between REG models 10 .",
"The only two significant differences appear when comparing RREG-S with ATT+Meta and ProfileREG in terms of the grammaticality of unseen data.",
"The results suggest that RREG-S is the best model for generating REs on WEBNLG , performing on a par with neural models on seen data and better than neural models on unseen data.",
"Unlike our automatic evaluation, ATT+Meta does not outperform ATT+Copy in human evaluation.",
"Materials.",
"We randomly selected 30 documents from the test set of WSJ (reference text).",
"We included the 6 different outputs generated by the 6 WSJ models (hereafter target texts).",
"In this way we obtained a total of 180 reference-target pairs.",
"Design.",
"As mentioned in 2.3, the WSJ documents have an average length of 25 sentences.",
"Since there are no input representations (e.g., in RDF) for WSJ , we decided to ask participants to compare texts using a Magnitude Estimation (ME) (Bard et al., 1996).",
"The participants saw the reference and one of the target texts side by side, and they were asked to rate the target relative to the reference text.",
"To make the task manageable for participants, texts were shortened to a maximum of the first 20 sentences.",
"The 180 reference-target pairs were randomly distributed over 12 lists, each list having 15 items.",
"Each list was rated by 10 participants.",
"They were asked to rate the fluency, grammaticality and clarity of the target texts.",
"The definition of fluency and grammaticality were as in the WEBNLG task, and clarity was defined as how clearly does the target text allow you to understand the situation described in the standard text 11 ?\".",
"The question asked for each of the 3 criteria was: assuming that standard text has a score of 100, how do you rate the fluency | grammaticality | clarity of target text?",
"Participants were allowed to choose any positive number.",
"Participants.",
"The MTurk worker restrictions were similar to the WEBNLG experiment.",
"Workers with scores less than 5 standard deviations were rejected.",
"The experiment included 120 participants, resulting in 5400 judgements (180 items 3 criteria 10 judgements/item).",
"The participants were 65 males, 54 females, and 1 oth-ers/unanswered, with an average age of",
"38. Results.",
"Since typos are possible in ME (e.g., a worker might type 600 instead of 60), we excluded outliers, defined as a score that is lower than the median minus 3 standard deviations, or higher than the median plus 3 standard deviations of that item.",
"The remaining scores were down-sampled for conducting significant testing.",
"The results are shown in Table",
"6. Unlike WEBNLG , significant differences are frequent.",
"For fluency, ML-S and ML-L perform the best while ATT+Meta performs the worst.",
"For grammaticality, ML-L is still the best model, which significantly defeats RREG-L and ATT+Meta .",
"A more detailed study is needed to investigate why RREG-L is the second worst in terms of grammaticality, which we found surprising.",
"For clarity, no significant difference was found, perhaps because it was difficult for participants to compare long 11 We refer to the reference text as standard text .",
"documents.",
"In sum, on WSJ , ML-L has the best performance, and the simpler ML-S and RREG-S also have considerably good performances.",
"Why does Neural REG not defeat rule-based REG?",
"Received wisdom has it that although neural models may be inferior to other models in terms of interpretability, they are nonetheless superior in terms of performance.",
"Although it is possible that future neural models will perform better than the ones examined here, our results call into question whether this received wisdom is correct.",
"One possible explanation is the observation that Neural NLG systems tend to perform very well on surface realisation tasks, but less well on tasks that focus on semantic content (see e.g., Reiter (2018) on hallucinations in the Data2Text generation tasks).",
"REG, after all, is a task that focuses in large part on semantic content.",
"There may be other reasons, which should be investigated in future work.",
"Role of Linguistically-informed Features.",
"Rule-based models did particularly well on WEBNLG , outperforming other models.",
"By contrast, on WSJ , the linguistically-informed feature-based model ( ML-L ) outperformed all other models.",
"This suggests that the type of text, and consequently, the complexity of the REG task, might be a factor in choosing the REG method.",
"Linguistically-informed features seem to have a more pivotal role in the case of more complex text types, whereas simpler texts can be handled at least as well by simpler rule-based models.",
"Resources Use.",
"As mentioned before, different approaches require different amounts of human resources and annotation efforts.",
"But we believe that other resource types should also be taken into consideration when models are compared, including the following: (1) The amount of context : the neural models access the whole pre-context and post-context for WEBNLG , while they access K preceding and K following sentences around the target entity for WSJ .",
"The ML-based models extract features taking only the current sentence and the whole pre-context into account.",
"The rule-based models only look at the current sentence and the previous one; (2) External tools : the neural models need no external tools, while the rule-based and ML-L models need a syntactic parser (which is also used for constructing datasets); (3) 5561 External information : rule-based models, ML-L , and ATT+Meta need entities' meta-information.",
"ProfileREG requires the profile description of each entity, which, for most REG tasks, is hard to obtain; (4) Computing resources : the neural models need GPUs while other models can be constructed using merely personal computers; (5) The amount of training data : the rule-based models need no training data, while other models require training data (large-scale naturalistic versions of which, for the task of REG, is not available).",
"As we have seen, RREG-S and ML-S perform remarkably well on both WSJ and WEBNLG .",
"Taking resources into consideration, the advantage of using a model such as RREG-S and ML-S becomes more pronounced.",
"RREG-S uses less human resources, less context, less computing resources, and no training data 12 compared to other models.",
"ML-S needs more context and training data; it probably also needs more human effort for feature engineering and selecting ML models, but it needs no external tools and no meta-information.",
"In aggregate, one's choice of model may depend partially on what resources are available.",
"For instance, for classic pipeline NLG systems, syntactic position and meta-information are often decided by earlier steps in the pipeline (Gatt and Krahmer, 2018).",
"Therefore, if one's aim is to rapidly construct a pipeline NLG system, then RREG-S should probably be preferred.",
"Generalisability.",
"We used neural REG to illustrate the importance of non-neural baselines.",
"Our findings may not be generalisable to End2End NLG.",
"However, if complex rule/template-based NLG systems are taken into account, Duek et al. (2018) found that although these systems cannot defeat neural approaches, they still have competitive performance.",
"It would be interesting to compare different types of models for other sub-tasks in the NLG pipeline (e.g., content determination, aggregation, and lexicalisation) in a similar way as has been done in the present paper 13 .",
"In this work, we have re-evaluated state-of-the-art Neural REG systems by considering four well-12",
"well-12 The pronominal form (e.g., he or she) of an entity can either be extracted from the training data or decided by the entity's meta-information.",
"13 Note that pipelined NLG systems are sometimes thought to yield better outputs than fully End2End NLG systems (Castro Ferreira et al., 2019) designed ruleand ML-based baselines.",
"In addition to the existing WEBNLG corpus, we built a new dataset for the task of REG-in-context on the basis of the WSJ corpus, arguing that this dataset may be more appropriate for the task.",
"In the reevaluation, we examined both our baselines and SOTA neural REG systems on both datasets, using automatic and human evaluations.",
"The results suggest that the simplest rule-based baseline RREG-S achieves equally good or better performance compared to SOTA neural models.",
"Our results on the WSJ suggest that, on that corpus, the linguistically-informed ML-based model ( ML-L ) is best.",
"We hope these results will encourage further research into the comparative strengths and weaknesses of neural, non-neural and hybrid methods in NLP.",
"In future, we have 4 items on our TODO list: (1) Investigate bottleneck features for Neural based models based on the feature set of ML-L ; (2) Explore other neural architectures (e.g., testing models that leverage pre-trained language models) and construct larger realistic REG corpora; (3) Explore better human evaluation methods for longer documents that are better suited for evaluating the task of generating referring expressions in context; (4) Extend our research to other languages, especially in other language families, including languages that are morphological very rich or very poor and languages that frequently use zero pronouns (e.g., Chinese (Chen et al., 2018)).",
"We thank the anonymous reviewers for their helpful comments.",
"Guanyi Chen is supported by China Scholarship Council (No.201907720022).",
"Fahime Same is supported by the German Research Foundation (DFG) Project-ID 281511265 SFB 1252 Prominence in Language and the Junior Special Fund of the Cologne Center of Language Sciences (CCLS).",
"We collected our human evaluations using Amazon Mechanical Turk.",
"For the WEBNLG task, which used a 7-point Likert scale, the workers were paid 0.03$ per item, in line with rates for similar tasks.",
"For the more demanding WSJ task, we paid 0.10$ per item.",
"The payment for each task was set at $7.5/hour (slightly above the US minimum wage, i.e., $7.25/hour).",
"We expected the amount to be a fair remuneration, but given the actual time some 5562 participants needed, their remuneration turned out to be on the low side.",
"In future crowd-sourcing experiments, we will base our remuneration on a more generous estimate of the duration per experimental task.",
"We asked for demographic information, age, gender and English proficiency level, explicitly stating in the experiment that Your information will be used for research purposes only.",
"All your data will be held anonymously",
".\" These fields were not marked as mandatory fields.",
"The demographic information will not be made publicly available."
] | [
"abstain",
"abstain",
"result",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"method",
"abstain",
"method",
"method",
"result",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"objective",
"result",
"abstain",
"result",
"result",
"other",
"other",
"other",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain"
] |
[
"Recent machine reading comprehension datasets such as ReClor and LogiQA require performing logical reasoning over text.",
"Conventional neural models are insufficient for logical reasoning, while symbolic reasoners cannot directly apply to text.",
"To meet the challenge, we present a neural-symbolic approach which, to predict an answer, passes messages over a graph representing logical relations between text units.",
"It incorporates an adaptive logic graph network (AdaLoGN) which adaptively infers logical relations to extend the graph and, essentially, realizes mutual and iterative reinforcement between neural and symbolic reasoning.",
"We also implement a novel subgraph-to-node message passing mechanism to enhance context-option interaction for answering multiple-choice questions.",
"Our approach shows promising results on ReClor and LogiQA.",
"Machine reading comprehension (MRC) has drawn much research attention.",
"Early MRC datasets are not difficult for state-of-the-art neural methods.",
"Indeed, BERT (Devlin et al., 2019) has outperformed humans on SQuAD (Rajpurkar et al., 2016).",
"Recent datasets become more challenging.",
"For example, ReClor (Yu et al., 2020) and LogiQA (Liu et al., 2020) require understanding and reasoning over logical relations described in text, where neural methods showed unsatisfactory performance.",
"For instance, consider the MRC task in Figure 1. The context consists of a set of textual propositions describing logical relations between elementary discourse units (EDUs) (Mann and Thompson, 1988).",
"For example, the first sentence describes an implication between two EDUs: the company gets project A implies that product B can be put on the market on schedule.",
"With the help of propositional calculus, humans can formalize propositions and then apply inference rules in proposi-Context: If the company gets project A, product B can be put on the market on schedule.",
"Product B is put on schedule if and only if the company's fund can be normally turned over.",
"If the company's fund cannot be turned over normally, the development of product C cannot be carried out as scheduled.",
"The fact is that the development of product C is carried out as scheduled.",
"Question: This shows: Options:A.",
"The company gets project A and product B is put on the market on schedule.",
"B. The company does not get project A and product B is not put on the market on schedule.",
"C. Product B is put on the market on schedule and the company's fund is turned over normally.",
"D. Product B is not put on the market on schedule, and the company's fund turnover is extremely abnormal.",
"Figure 1: An example MRC task (adapted from a task in LogiQA).",
"Existing Methods and Limitations To solve it, conventional neural models are insufficient for providing the required reasoning capabilities, while symbolic reasoners cannot directly apply to unstructured text.",
"One promising direction is to consider a neural-symbolic solution, such as the recent DAGN method (Huang et al., 2021a).",
"It breaks down the context and each option into a set of EDUs and connects them with discourse relations as a graph.",
"Then it performs graph neural network (GNN) based reasoning to predict an answer.",
"However, we identify two limitations in this method.",
"L1: Despite the graph representation, it is predominantly a neural method over discourse relations.",
"It is debatable whether the required symbolic reasoning over logical relations (e.g., implication, negation) can be properly approximated.",
"L2: The graph is often loosely connected and composed of long paths.",
"Node-to-node message passing implemented in existing GNN models (Kipf and Welling, 2017; Schlichtkrull et al., 2018; Velickovic et al., 2018) is prone to provide insufficient interaction be-7147",
"(b) Extended TLG.",
"Dashed nodes and edges represent adaptively inferred EDUs and logical relations, respectively.",
"Double edges represent subgraph-to-node message passing.",
"Our Approach.",
"While we follow the general framework of DAGN, i.e., graph construction and then graph-based reasoning, we overcome its two limitations with a novel neural-symbolic approach.",
"To address L1, Figure 3 sketches out our idea.",
"Specifically, we propose to construct a text logic graph (TLG) representing EDUs and their logical relations as opposed to discourse relations, so we can explicitly perform symbolic reasoning to extend the TLG with inferred logical relations, as illustrated in Figure 2. The inferred relations may provide crucial connections to be used in the subsequent graph-based message passing, i.e., symbolic reasoning reinforces neural reasoning .",
"Further, while trivially computing and admitting the deductive closure may extend the TLG with irrelevant connections which would mislead message passing, we leverage signals from neural reasoning to adaptively admit relevant extensions, i.e., neural reasoning reinforces symbolic reasoning .",
"Moreover, we iterate the above mutual reinforcement by restarting inference in each iteration with signals from the previous iteration to accommodate corrections to the reasoning process and allow sufficient neural-symbolic interaction.",
"To address L2, we aggregate the information in the context subgraph of TLG and employ a novel subgraph-to-node message passing mechanism to enhance the interaction from the holistic context Figure 3: Our main idea: mutual and iterative reinforcement between symbolic and neural reasoning.",
"subgraph to each node in the option subgraph, and vice versa , as illustrated in Figure 2b.",
"We incorporate the above two ideas into our new Adaptive Logic Graph Network (AdaLoGN).",
"To summarize, our technical contributions include a novel neural-symbolic approach where neural and symbolic reasoning mutually and iteratively reinforce each other, and a novel aggregation-based enhancement of message passing in graph-based neural reasoning.",
"Outline.",
"We elaborate our approach in Section 2, present experiments in Section 3, discuss related work in Section 4, and conclude in Section 5.",
"Our code is available on GitHub: https:// github.com/nju-websoft/AdaLoGN .",
"A MRC task (cid:104) c, q, O (cid:105) consists of a context c , a question q , and a set of options O .",
"Only one option in O is the correct answer to q given c .",
"The goal of the task is to find this option.",
"Figure 4 outlines our implementation.",
"For each option o O , we generate the representations of c, q, o (i.e., g c , g q , g o , respectively) by a pre-trained language model (Section 2.1), and we construct a raw TLG where nodes (i.e., u 1 , . . . , u | V | ) represent EDUs extracted from c, q, o and edges represent their logical relations (Section 2.2).",
"With their initial representations (i.e., h (0) u 1 , . . . , h (0) u | V | ) obtained from the pre-trained language model, in an iterative manner, we adaptively extend the TLG (i.e., symbolic reasoning) and then pass messages (i.e., neural reasoning) to update node representations (i.e., h ( l +1) u 1 , . . . , h ( l +1) u | V | ) for generating the representation of the TLG (i.e., h G ) (Section 2.3).",
"Finally, we predict the correctness of o (i.e., score o ) based on the above representations (Section 2.4).",
"We use RoBERTa (Liu et al., 2019), a pre-trained language model, to encode three token sequences c = c 1 c | c | , q = q 1 q | q | , and o = o 1 o | o | which are concatenated by the classifier token < s >",
"and the separator token < /s > : [ g < s > ; g c 1 ; . . . ; g < /s > ; g q 1 ; . . . ; g o 1 ; . . . ; g < /s > ] = RoBERTa ( < s > c 1 < /s > q 1 o 1 < /s > ) .",
"(1) The output vector representations are averaged to form the representations of c, q, o : g c = 1 | c | | c | (cid:80) i =1 g c i , g q = 1 | q | | q | (cid:80) i =1 g q i , g o = 1 | o | | o | (cid:80) i =1 g o i .",
"For a piece of text, its TLG is a directed graph G = (cid:104) V, E (cid:105) where V is a set of nodes representing EDUs of the text (Mann and Thompson, 1988), and E V R V is a set of labeled directed edges representing logical relations between EDUs described in the text.",
"We consider six types of common logical relations R = { conj , disj , impl , neg , rev , unk } : conjunction ( conj ), disjunction ( disj ), implication ( impl ), and negation ( neg ) are standard logical connectives in propositional logic; Rhetorical Relation Logical Relation LIST, CONTRAST conj DISJUNCTION disj RESULT impl CAUSE, PURPOSE, CONDITION, BACKGROUND rev Table 1: Mapping from rhetorical relations in Graphene to logical relations in TLG.",
"represent the inverse relation of impl ; unk represents an unknown relation.",
"relations, edges labeled with them are bidirectional.",
"Observe the difference between our TLG and the discourse-based logic graph considered in DAGN (Huang et al., 2021a): edges in the former represent logical relations, while those in the latter represent discourse relations.",
"Therefore, we can explicitly perform symbolic reasoning on TLG.",
"We initialize a raw TLG from c and o .",
"Following Huang et al. (2021a), we ignore q as it is usually uninformative in existing datasets.",
"Specifically, we use Graphene (Cetto et al., 2018) to extract EDUs and their rhetorical relations (Mann and Thompson, 1988) from c and o .",
"Rhetorical relations are converted to logical relations via the mapping in Table 1. Note that each impl edge is always paired with an inverse rev edge, and vice versa.",
"We also define a small number of syntactic rules to identify EDUs that negate each other and connect them with neg .",
"The rules are based on part-of-speech tags and dependencies.",
"For example, one such rule checks whether two EDUs differ from each other only by an antonym of an adverb.",
"In addition, for each pair of EDUs that are adjacent in the text (including the last EDU of c and the first EDU of o ) but have none of the above logical relations, we connect them with unk because Graphene may fail to identify their relation.",
"Since TLG consists of logical relations, we explicitly perform symbolic reasoning by applying inference rules to extend the TLG with inferred logical relations to benefit the subsequent neural reasoning.",
"However, rather than computing the deductive closure which may undesirably provide many relations that are irrelevant to answering the question 7149",
"and mislead neural reasoning, we perform adaptive extension by leveraging signals from neural reasoning to identify and admit relevant extensions.",
"For neural reasoning, we perform message passing to update node representations, which finally are pooled into the representation of the TLG to be used in the subsequent answer prediction.",
"We iterate the above process by restarting inference on the raw TLG in each iteration with signals from the previous iteration to accommodate corrections to the reasoning process and let symbolic and neural reasoning sufficiently interact with each other.",
"We transform the above idea into a new model named AdaLoGN outlined in Figure 4 and detailed below.",
"Let G = ⟨V, E⟩ be a raw TLG.",
"For symbolic reasoning over the logical relations in G , we apply two inference rules about implication in propositional logic .",
"Other rules are left for future work.",
"Hypothetical Syllogism: ((u_i → u_j) ∧ (u_j → u_k)) ⊢ (u_i → u_k).",
"Specifically, if E contains two edges ⟨u_i, impl, u_j⟩ and ⟨u_j, impl, u_k⟩, we can add two edges ⟨u_i, impl, u_k⟩ and ⟨u_k, rev, u_i⟩ to E, as illustrated in Figure 5a.",
"Transposition: (u_i → u_j) ⊢ (¬u_j → ¬u_i).",
"Specifically, if E contains an edge ⟨u_i, impl, u_j⟩, we can add two edges ⟨¬u_j, impl, ¬u_i⟩ and ⟨¬u_i, rev, ¬u_j⟩ to E, as illustrated in Figure 5b.",
"Note that if ¬u_i (resp. ¬u_j) is not incident from/to any neg edge, i.e., ¬u_i (resp. ¬u_j) is not a node in V, we will add ¬u_i (resp. ¬u_j) to V, whose text negates that of u_i (resp. u_j), and then add a bidirectional neg edge between u_i and ¬u_i (resp. u_j and ¬u_j) to E.",
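The two propositional inference rules above amount to simple closure operations over a set of labeled edges. The following sketch (illustrative only, not the authors' code) applies one round of Hypothetical Syllogism and Transposition; nodes are strings and "~u" is an assumed convention for the negation of node u.

```python
def negate(u):
    """Return the node representing the negation of u ("~u" is our convention)."""
    return u[1:] if u.startswith("~") else "~" + u

def apply_inference_rules(edges):
    """edges: set of (src, relation, dst) triples; returns candidate extensions."""
    new_edges = set()
    impl = {(s, d) for (s, r, d) in edges if r == "impl"}
    # Hypothetical Syllogism: (u_i -> u_j) and (u_j -> u_k) |- (u_i -> u_k)
    for (i, j) in impl:
        for (j2, k) in impl:
            if j == j2 and i != k:
                new_edges.add((i, "impl", k))
                new_edges.add((k, "rev", i))
    # Transposition: (u_i -> u_j) |- (~u_j -> ~u_i)
    for (i, j) in impl:
        new_edges.add((negate(j), "impl", negate(i)))
        new_edges.add((negate(i), "rev", negate(j)))
    return new_edges - edges

# One round over a two-edge chain a -> b -> c
candidates = apply_inference_rules({("a", "impl", "b"), ("b", "impl", "c")})
```

In the full model these candidates are not all admitted; they are first filtered by the relevance scores described next.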
"Besides, recall that unk represents a potential logical relation between EDUs that are adjacent in text.",
"Considering that an EDU often inherits logical relations from its adjacent EDUs, we heuristically define and apply the following inference rule.",
"Adjacency-Transmission: ((u_i ⋆ u_j) ∧ (u_i ∼ u_k)) ⊢ (u_k ⋆ u_j), (5) where ⋆ ∈ {∧, ∨, →} and ∼ represents adjacency in text.",
"For example, if E contains two edges (cid:104) u i , conj , u j (cid:105) and (cid:104) u i , unk , u k (cid:105) , we can add a bidirectional conj edge between u k and u j to E , as illustrated in Figure 5c.",
"While this rule may generate false propositions, we expect our adaptive reasoner to apply it properly.",
"For example, it is useful for handling the following sentence: ... only 1 person in the group knew 3 of the group ( u k ), 3 people knew 2 of the group ( u i ), and 4 people know 1 of the group ( u j ).",
"Graphene identifies (cid:104) u i , conj , u j (cid:105) and (cid:104) u i , unk , u k (cid:105) but misses (cid:104) u k , conj , u j (cid:105) , which can be generated by applying this rule.",
"Our symbolic reasoning is adaptive .",
"We rely on signals from neural reasoning to decide which inference steps are relevant to answering the questions and hence are admitted to extend the TLG.",
"Specifically, each candidate extension ε applies an inference rule over a set of nodes V_ε ⊆ V.",
"We average their vector representations (which will be detailed later) to form the representation of ε: h_ε = (1/|V_ε|) Σ_{u_i ∈ V_ε} h_{u_i}. (6)",
"Since ε is for predicting the correctness of o, we interact h_ε with the representation of o, i.e., g_o in Equation (2), to predict the relevance score rel_ε of ε:",
"where ‖ represents vector concatenation.",
"We admit all possible ε to extend G such that rel_ε > δ, where δ is a predefined threshold.",
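As a concrete illustration of this filter, the sketch below averages the vectors of the nodes an extension touches, scores the result against the option representation g_o, and admits the extension only if the score exceeds a threshold. The weight vector W, the sigmoid-over-linear scoring form, and the threshold value are stand-ins for the trained components, not the paper's exact parameterization.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8
# Toy node vectors and option representation (untrained stand-ins)
node_vecs = {"u1": rng.normal(size=d), "u2": rng.normal(size=d)}
g_o = rng.normal(size=d)
W = rng.normal(size=2 * d)  # stand-in for the learned linear scoring layer

def relevance(eps_nodes, delta=0.6):
    # Mean of the vectors of the nodes the extension applies to (Eq. 6)
    h_eps = np.mean([node_vecs[u] for u in eps_nodes], axis=0)
    # Score against g_o; admitted iff above the threshold delta
    score = 1.0 / (1.0 + np.exp(-W @ np.concatenate([h_eps, g_o])))
    return score, score > delta

score, admitted = relevance(["u1", "u2"])
```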
"Moreover, our neural-symbolic reasoning is iterative .",
"In the (l+1)-th iteration, we restart symbolic reasoning with the raw TLG and recompute Equation (6) with node representations h^(l)_{u_i} from neural reasoning in the l-th iteration (which will be detailed in Section 2.3.3).",
"The initial node representations h^(0)_{u_i} are obtained from a pre-trained language model.",
"Specifically, we flatten V into a sequence of nodes in the order they appear in the text.",
"Recall that V is divided into V_c = {u_1, …, u_{|V_c|}} and V_o = {u_{|V_c|+1}, …, u_{|V|}} representing the nodes extracted from c and o, respectively.",
"Each node u_i is a token sequence u_i = u_{i,1} ⋯ u_{i,|u_i|}.",
"We use RoBERTa to encode V_c and V_o, which are concatenated by <s> and </s>, where nodes inside V_c and V_o are separated by a special token |: [h_{<s>}; h_{u_{1,1}}; …; h_|; …; h_{</s>}; h_{u_{|V_c|+1,1}}; …; h_|; …; h_{</s>}] = RoBERTa(<s> u_{1,1} ⋯ | ⋯ </s> u_{|V_c|+1,1} ⋯ | ⋯ </s>). (8)",
"The output vector representations are averaged to form the initial representation of each node u_i ∈ V: h^(0)_{u_i} = (1/|u_i|) Σ_{j=1}^{|u_i|} h_{u_{i,j}}.",
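The pooling step can be pictured with toy values: given token-level output vectors and each node's token span, the node's initial representation is the mean over its span. The vectors and spans below are illustrative stand-ins for the RoBERTa outputs.

```python
import numpy as np

# 6 toy "token" vectors of dimension 2 (rows [0,1], [2,3], ..., [10,11])
token_vecs = np.arange(12, dtype=float).reshape(6, 2)
# Half-open token ranges, one per node u_i
node_spans = [(0, 2), (2, 5), (5, 6)]

# h^(0)_{u_i}: average of the token vectors inside node u_i's span
node_init = np.stack([token_vecs[s:e].mean(axis=0) for (s, e) in node_spans])
```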
"To let the nodes in TLG interact with each other and fuse their information, our neural reasoning performs graph-based message passing (Gilmer et al., 2017) to update node representations in each iteration from h ( l ) u i to h ( l +1) u i .",
"Since TLG is a heterogeneous graph containing multiple types of edges, we incorporate the node-to-node message passing mechanism in R-GCN (Schlichtkrull et al., 2018) as a basis.",
"Further, observe that a TLG is usually loosely connected, which is prone to causing insufficient interaction between V_c and V_o via long paths within limited iterations; this cannot be alleviated by simply increasing the number of iterations, because that would raise other issues such as over-smoothing (Li et al., 2018; Chen et al., 2020).",
"To enhance such interaction which is critical to predicting the correctness of o , we incorporate a novel subgraph-to-node message passing mechanism to holistically pass the information aggregated from a subgraph (e.g., V c ) to a node (e.g., each u i V o ).",
"Specifically, without loss of generality, for each u_i ∈ V_o, we compute the u_i-attended aggregate representation of V_c by an attention-weighted sum of node representations over V_c: h^(l)_{V_c,u_i} = Σ_{u_j ∈ V_c} α_{i,j} h^(l)_{u_j}, where α_{i,j} = softmax_j([a_{i,1}; …; a_{i,|V_c|}]^⊤), a_{i,j} = LeakyReLU(linear(h^(l)_{u_i} ‖ h^(l)_{u_j})). (10)",
"Let N_i be the set of neighbors of u_i.",
"Let N_i^r ⊆ N_i be the subset of N_i under logical relation r ∈ R.",
"We update the representation of u_i by passing messages to u_i from its neighbors and from V_c: h^(l+1)_{u_i} = ReLU(Σ_{r ∈ R} Σ_{u_j ∈ N_i^r} (β_{i,j}/|N_i^r|) W^(l)_r h^(l)_{u_j} + W^(l)_0 h^(l)_{u_i} + γ_i W^(l)_subgraph h^(l)_{V_c,u_i}), where β_{i,j} = softmax_{idx(a_{i,j})}([…; a_{i,j}; …]^⊤) for all u_j ∈ N_i, a_{i,j} = LeakyReLU(linear(h^(l)_{u_i} ‖ h^(l)_{u_j})), γ_i = sigmoid(linear(h^(l)_{u_i} ‖ h^(l)_{V_c,u_i})), (11) W^(l)_r, W^(l)_0, W^(l)_subgraph are matrices of learnable parameters, and idx(a_{i,j}) returns the index of a_{i,j} in the |N_i|-dimensional vector […; a_{i,j}; …]^⊤.",
"In an analogous way, for each u_i ∈ V_c, we compute the u_i-attended aggregate representation of V_o, denoted by h^(l)_{V_o,u_i}, and update h^(l+1)_{u_i}.",
"Observe two differences between Equation (11) and its counterpart in the original R-GCN.",
"First, we incorporate subgraph-to-node message passing and control it by a gating mechanism (i.e., γ_i).",
"Second, we weight node-to-node message passing by an attention mechanism (i.e., β_{i,j}).",
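The gated subgraph-to-node message can be sketched in a few lines of numpy: a node in the option subgraph attends over all context-subgraph nodes, pools them into a single summary vector, and mixes that summary in through a sigmoid gate. All weights here are random, untrained stand-ins, so this shows only the shape of the computation, not the learned behavior.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 4
H_c = rng.normal(size=(3, d))   # representations of the 3 nodes in V_c
h_i = rng.normal(size=d)        # representation of one node u_i in V_o
w_att = rng.normal(size=2 * d)  # stand-in for the linear layer inside LeakyReLU(linear(.))
w_gate = rng.normal(size=2 * d) # stand-in for the gate's linear layer
W_sub = rng.normal(size=(d, d)) # stand-in for W_subgraph

def leaky_relu(x, slope=0.01):
    return np.where(x > 0, x, slope * x)

# Attention scores of u_i over each context node, then softmax over V_c
a = np.array([leaky_relu(w_att @ np.concatenate([h_i, h_j])) for h_j in H_c])
alpha = np.exp(a) / np.exp(a).sum()
h_sub = alpha @ H_c  # u_i-attended aggregate representation of V_c
# Sigmoid gate controlling how much of the subgraph summary flows in
gate = 1.0 / (1.0 + np.exp(-w_gate @ np.concatenate([h_i, h_sub])))
msg = gate * (W_sub @ h_sub)  # gated subgraph-to-node message
```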
"After L iterations where L is a hyperparameter, for each node u i V , we fuse its representations over all the iterations with a residual connection:",
"h^fus_{u_i} = h^(0)_{u_i} + linear(h^(1)_{u_i} ‖ ⋯ ‖ h^(L)_{u_i}). (12)",
"Inspired by Huang et al. (2021a), we feed all h^fus_{u_i} into a bidirectional residual GRU layer (Cho et al., 2014) to finalize node representations: [h^fnl_{u_1}; …; h^fnl_{u_{|V|}}] = Res-BiGRU([h^fus_{u_1}; …; h^fus_{u_{|V|}}]). (13)",
"We aggregate these node representations by computing an o-attended weighted sum: h_V = Σ_{u_i ∈ V} α_i h^fnl_{u_i}, where α_i = softmax_i([a_1; …; a_{|V|}]^⊤), a_i = LeakyReLU(linear(g_o ‖ h^fnl_{u_i})), (14) and g_o is the representation of o in Equation (2).",
"We concatenate h_V and the relevance scores to form the representation of G: h_G = (h_V ‖ rel_{E^(1)} ‖ ⋯ ‖ rel_{E^(L)}), where rel_{E^(l)} = (1/|E^(l)|) Σ_{ε ∈ E^(l)} rel_ε, (15) E^(l) is the set of candidate extensions in the l-th iteration, and rel_ε is in Equation (7).",
"In this way, we are able to train the network in Equation (7).",
"We fuse the representations of c, q, o and the TLG to predict the correctness of o :",
"score_o = linear(tanh(linear(g_c ‖ g_q ‖ g_o ‖ h_G))), (16)",
"where g c , g q , g o are in Equation (2).",
"Let o_gold ∈ O be the correct answer.",
"We optimize the cross-entropy loss with label smoothing: L = −(1 − λ) score′_{o_gold} − (λ/|O|) Σ_{o_i ∈ O} score′_{o_i}, where score′_{o_i} = log(exp(score_{o_i}) / Σ_{o_j ∈ O} exp(score_{o_j})), (17) and λ is a predefined smoothing factor.",
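As a worked example of this loss, the sketch below computes the label-smoothed cross-entropy over four option scores; the score values are made up, and lam = 0.25 matches the smoothing factor reported in the experiments.

```python
import numpy as np

def smoothed_loss(scores, gold, lam=0.25):
    # log softmax over the option scores (score' in Eq. 17)
    log_probs = scores - np.log(np.sum(np.exp(scores)))
    # (1 - lam) weight on the gold option, lam spread uniformly over all options
    return -(1 - lam) * log_probs[gold] - (lam / len(scores)) * log_probs.sum()

scores = np.array([2.0, 0.5, -1.0, 0.1])  # toy logits, gold option is index 0
loss = smoothed_loss(scores, gold=0)
```

With lam = 0 this reduces to ordinary cross-entropy on the gold option; a confident, correct prediction yields a smaller loss than uniform scores.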
"ReClor (Yu et al., 2020) consists of 6,138 four-option multiple-choice questions collected from standardized exams such as GMAT and LSAT.",
"The questions were divided into 4,638 for training, 500 for development, and 1,000 for testing.",
"The test set was further divided into 440 easy questions (Test-E), where each question could be correctly answered by some strong baseline method using only the options and ignoring the context and the question, and the remaining 560 hard questions (Test-H).",
"LogiQA (Liu et al., 2020) consists of 8,768 four-option multiple-choice questions collected from the National Civil Servants Examination of China, which were translated into English.",
"The questions were divided into 7,376 for training, 651 for development, and 651 for testing.",
"We experimented on NVIDIA V100 (32GB).",
"We tuned hyperparameters on the development set of each dataset.",
"Specifically, for text encoding, we used RoBERTa-large with hidden layers = 24 and hidden units = 1,024, implemented by Hugging Face (Wolf et al., 2020).",
"For message passing, our implementation was based on DGL (Wang et al., 2019).",
"For both datasets, we used the Adam optimizer, and set attention heads = 16, dropout rate = 0.1, epochs = 10, batch size = 16 selected from {8, 16, 24}, number of iterations L = 2 from {2, 3}, and maximum sequence length = 384.",
"For ReClor, we set warm-up proportion = 0.1 from {0.1, 0.2}, learning rate = 7e-6 from {6e-6, 7e-6, 8e-6, 1e-5}, and seed = 123 from {123, 1234, 42, 43}.",
"For LogiQA, we set warm-up proportion = 0.2 from {0.1, 0.2}, learning rate = 8e-6 from {6e-6, 7e-6, 8e-6, 1e-5}, and seed = 42 from {123, 1234, 42, 43}.",
"For the relevance score threshold δ below Equation (7), we set δ = 0.6 from {0.4, 0.5, 0.6, 0.7} for both datasets.",
"For the smoothing factor λ in Equation (17), we set λ = 0.25 for both datasets.",
"To fit in our GPU's memory, we restricted a raw TLG to contain at most 25 nodes and 50 edges by, if needed, randomly merging nodes connected by an unk edge and/or deleting non-bridge edges while keeping the graph connected.",
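The size restriction described above can be sketched as a small graph-shrinking routine: merge the endpoints of unk edges until the node budget is met, then drop edges whose removal keeps the graph connected until the edge budget is met. This is a hedged illustration of the idea (with the paper's 25-node / 50-edge budgets as defaults), not the authors' actual implementation; in particular, the merge and deletion order here is deterministic rather than random.

```python
from collections import defaultdict

def connected(nodes, edges):
    """True iff the undirected graph over `nodes` with `edges` is connected."""
    if not nodes:
        return True
    adj = defaultdict(set)
    for (u, _, v) in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen, stack = set(), [next(iter(nodes))]
    while stack:
        n = stack.pop()
        if n in seen:
            continue
        seen.add(n)
        stack.extend(adj[n] - seen)
    return seen == set(nodes)

def shrink_tlg(nodes, edges, max_nodes=25, max_edges=50):
    nodes, edges = set(nodes), set(edges)
    # Merge endpoints of unk edges while over the node budget.
    while len(nodes) > max_nodes:
        unk = next(((u, r, v) for (u, r, v) in edges if r == "unk"), None)
        if unk is None:
            break
        u, _, v = unk
        # Drop the edges between u and v, relabel v as u elsewhere.
        edges = {(u if a == v else a, r, u if b == v else b)
                 for (a, r, b) in edges
                 if not (a == v and b == u) and not (a == u and b == v)}
        edges = {(a, r, b) for (a, r, b) in edges if a != b}  # no self-loops
        nodes.discard(v)
    # Drop removable (non-bridge) edges while over the edge budget.
    for e in sorted(edges):
        if len(edges) <= max_edges:
            break
        if connected(nodes, edges - {e}):
            edges = edges - {e}
    return nodes, edges
```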
"We compared our approach, referred to as AdaLoGN, with popular pre-trained language models and with other known methods in the literature.",
"Reasoning-based MRC, like other MRC tasks, can be solved by using a pre-trained language model with a classification layer.",
"Yu et al. (2020) reported the results of BERT-large, RoBERTa-large, and XLNet-large on ReClor.",
"Huang et al. (2021a) reported the results of BERT-large and RoBERTa-large on LogiQA.",
"In the literature, we found the results of DAGN (Huang et al., 2021a), Focal Reasoner (Ouyang et al., 2021), and LReasoner (Wang et al., 2021a,b) on both datasets.",
"For a fair comparison with our approach, we presented their results based on RoBERTa-large, although LReasoner achieved better results with ALBERT.",
"Between the two variants of LReasoner, one without data augmentation (w/o DA) and the other with data augmentation (w/ DA), we presented both of their results but mainly compared with the former because our approach and other baseline methods would also benefit if data augmentation were incorporated.",
"Following the literature, we reported accuracy, i.e., the proportion of correctly answered questions.",
"For our approach we reported the max across 3 runs on the development set of each dataset.",
"On ReClor, as shown in Table 2, AdaLoGN outperformed all the baseline methods on the test set by at least 1.30%, except for LReasoner (w/ DA) which performed data augmentation so that the comparison might be unfair.",
"AdaLoGN and LReasoner (w/ DA) both exceeded 60%, being comparable with human-level performance (63%).",
"On LogiQA, as shown in Table 3, AdaLoGN outperformed all the baseline methods on the test set, including LReasoner (w/ DA).",
"Still, our result (40.71%) was not comparable with human-level performance (86%).",
"In particular, on both ReClor and LogiQA, AdaLoGN exceeded DAGN on the test set by 1.39%–1.90%, which demonstrates the effectiveness of our approach in addressing the limitations of DAGN mentioned in Section 1. We conducted an ablation study to evaluate the effectiveness of the two main technical contributions in our approach: adaptive extension of the TLG and subgraph-to-node message passing.",
"We compared the standard version of AdaLoGN with two variants removing adaptive extension.",
"AdaLoGN no-ext performs no extension.",
"AdaLoGN full-ext performs full extension by computing and admitting the deductive closure.",
"On ReClor, as shown in Table 4, both variants exhibited a noticeable decrease in accuracy on the test set, by 0.70%–1.40%.",
"On LogiQA, as shown in Table 5, the decreases were larger, 1.69% on the test set, possibly because the questions in LogiQA were harder so that the effectiveness of our adaptive extension became more noticeable.",
"Interestingly, on both datasets, AdaLoGN full-ext was not better than AdaLoGN no-ext on the test set, indicating that a naive injection of logical reasoning into neural reasoning might not have positive effects.",
"We analyzed the distributions of relevance scores of candidate extensions, i.e., rel (cid:15) in Equation (7).",
"As shown in Figure 6, they approximated a normal distribution on both datasets.",
"By setting the threshold δ = 0.6, we admitted 19.57% and 4.86% of the extensions on ReClor and LogiQA, respectively.",
"We also compared with a variant of AdaLoGN using a subset of inference rules.",
"By ignoring the adjacency-transmission rule, AdaLoGN no-at showed a decrease in accuracy on the test sets by 0.77%–0.80%, suggesting the usefulness of this rule despite its heuristic nature.",
"We compared the standard version of AdaLoGN with two variants removing subgraph-to-node message passing or implementing it in a different way.",
"AdaLoGN n2n only performs node-to-node message passing in a standard way.",
"AdaLoGN n2n+ only performs node-to-node message passing but, as an alternative to our holistic subgraph-to-node message passing, it adds a bidirectional unk edge between each node in the context subgraph and each node in the option subgraph to enhance context-option interaction.",
"On ReClor, as shown in Table 4, both variants exhibited a large decrease in accuracy on the test set, by 1.60%–2.60%.",
"On LogiQA, as shown in Table 5, the decreases were also large, 1.69%–1.85% on the test set.",
"The results demonstrated the effectiveness of our subgraph-to-node message passing.",
"Compared with AdaLoGN n2n , AdaLoGN n2n+ achieved better results on ReClor but worse results on LogiQA on the test set, indicating that a naive enhancement of context-option interaction could have negative effects.",
"From the development set of each dataset, we randomly sampled fifty questions to which our approach outputted an incorrect answer.",
"We analyzed the sources of these errors.",
"Note that an error could Source of Error ReClor LogiQA Construction of raw TLG 38% 36% Adaptive extension of TLG 18% 22% Expressivity of symbolic reasoning 20% 18% Others (about neural reasoning) 46% 40% Table 6: Error analysis of AdaLoGN.",
"As shown in Table 6, the construction of the raw TLG, for which we mainly relied on Graphene's syntactic analysis of the text, accounted for about one third of the errors (36%–38%).",
"Our adaptive extension of TLG constituted about one fifth of the errors (18%–22%), e.g., some excessive extensions produced irrelevant logical relations which might mislead message passing.",
"One fifth of the errors (18%–20%) were due to the limited expressivity of our symbolic reasoning, i.e., a subset of propositional logic, while some questions required quantifiers.",
"Other errors might be related to neural reasoning such as message passing or answer prediction (40%–46%).",
"On both ReClor and LogiQA, our approach used about 0.8 seconds to answer a question.",
"While simple MRC tasks have been well studied, complex MRC tasks requiring various reasoning capabilities are receiving increasing research attention.",
"Among others, multi-hop MRC tasks in HotpotQA (Yang et al., 2018) and WikiHop (Welbl et al., 2018) require retrieving and reading multiple supporting passages to answer a question.",
"They can be solved by constructing and reasoning over a graph connecting passages that overlap or co-occur with each other (Qiu et al., 2019; Tu et al., 2020), by implicitly supervising a retriever via word weighting (Huang et al., 2021b), or by iteratively applying dense retrieval (Xiong et al., 2021).",
"MRC tasks in DROP (Dua et al., 2019) require discrete reasoning such as addition, counting, and sorting.",
"Neural networks have been extended to incorporate modules that can perform such reasoning over numbers and dates mentioned in a given context (Gupta et al., 2020).",
"For MRC tasks in CommonsenseQA (Talmor et al., 2019), which are targeted at commonsense knowledge and reasoning, recent methods fuse external commonsense knowledge with pre-trained language models for reasoning (Yan et al., 2021; Xu et al., 2021).",
"There are also studies on MRC tasks requiring spatial/geographical reasoning (Huang et al., 2019; Li et al., 2021) and temporal/causal reasoning (Sun et al., 2018).",
"Different from the above reasoning capabilities, the MRC tasks considered in this paper require logical reasoning , such as reasoning about sufficient and necessary conditions, categorization, conjunctions and disjunctions.",
"Pre-trained language models alone struggled and were far behind human-level performance on such tasks in ReClor (Yu et al., 2020) and LogiQA (Liu et al., 2020) due to their weakness in logical reasoning.",
"Among existing methods for solving such tasks, DAGN (Huang et al., 2021a) and Focal Reasoner (Ouyang et al., 2021) extract discourse or coreference relations from text and represent them as a graph of text units.",
"Then they employ a GNN to pass messages and update representations for predicting an answer.",
"Different from their neural nature, our approach symbolically performs logical reasoning as required by such tasks, by applying inference rules over extracted logical relations to extend the graph.",
"This feature resembles LReasoner (Wang et al., 2021a,b) which extends the context with inferred logical relations to benefit the subsequent neural reasoning.",
"However, different from LReasoner which computes the deductive closure and identifies relevant extensions by text overlapping with the options in an unsupervised manner, our approach predicts relevance based on signals from neural reasoning in a supervised manner , and our prediction evolves over iterations after sufficient interaction between symbolic and neural reasoning.",
"All these features helped our approach achieve better performance in the experiments.",
"Our approach represents a novel implementation of neural-symbolic reasoning (Raedt et al., 2020), and it differs from the following existing methods.",
"One paradigm of neural-symbolic reasoning is logic-driven neural reasoning.",
"For example, logical constraints can be compiled into a neural network by augmenting the loss function (Xu et al., 2018) or the network structure (Li and Srikumar, 2019).",
"Logical connectives, quantifiers, and consistency checking can also be approximated by neural networks (Dong et al., 2019; Ren et al., 2020; Gu et al., 2019).",
"While these methods incorporate logical reasoning into neural reasoning via emulation, our approach explicitly performs logical reasoning by applying inference rules over logical relations.",
"Such exact inference is more accurate than emulation-based approximation.",
"Another paradigm is neural-driven logical reasoning.",
"For example, neural networks have been employed to predict the truth of an atom in answering first-order logic queries (Arakelyan et al., 2021), and to implement predicates in probabilistic logic programming (Manhaeve et al., 2021).",
"These methods and our approach cope with different problems, thus using different techniques.",
"Specifically, while these methods complement logical reasoning with extra facts generated by neural reasoning, our approach filters inferred logical relations based on signals from neural reasoning.",
"Moreover, observe that the neural-symbolic interaction in the above methods is unidirectional, i.e., they leverage either symbolic or neural reasoning to reinforce the other.",
"By contrast, we allow bidirectional neural-symbolic interaction where neural and symbolic reasoning mutually and iteratively reinforce each other for better performance.",
"To meet the challenge of reasoning-based MRC, we presented a neural-symbolic approach where neural and symbolic reasoning mutually and iteratively reinforce each other via our new AdaLoGN model.",
"We also enhanced graph-based neural reasoning with a novel subgraph-to-node message passing mechanism.",
"Since these ideas are quite general, we believe they have great potential for a variety of applications beyond MRC, e.g., link prediction.",
"Error analysis has revealed some shortcomings of our approach.",
"Currently we rely on syntactic tools to extract a raw TLG from text.",
"We will explore other extraction methods to achieve a higher quality.",
"We also plan to apply more inference rules and incorporate quantifiers to improve the expressivity of our symbolic reasoning.",
"This work was supported in part by the NSFC (62072224) and in part by the Beijing Academy of Artificial Intelligence (BAAI)."
] | [
"abstain",
"abstain",
"method",
"abstain",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"method",
"method",
"method",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"objective",
"other",
"method",
"abstain",
"abstain",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"method",
"method",
"other",
"objective",
"objective",
"objective",
"method",
"result",
"method",
"objective",
"result",
"other"
] |
[
"In this work, we investigate the knowledge learned in the embeddings of multimodal-BERT models.",
"More specifically, we probe their capabilities of storing the grammatical structure of linguistic data and the structure learned over objects in visual data.",
"To reach that goal, we first make the inherent structure of language and visuals explicit by a dependency parse of the sentences that describe the image and by the dependencies between the object regions in the image, respectively.",
"We call this explicit visual structure the scene tree , that is based on the dependency tree of the language description.",
"Extensive probing experiments show that the multimodal-BERT models do not encode these scene trees.",
"Code available at https://github.com/VSJMilewski/multimodal-probes.",
"In recent years, contextualized embeddings have become increasingly important.",
"Embeddings created by the BERT model and its variants have been used to get state-of-the-art performance in many tasks (Devlin et al., 2019; Liu et al., 2019b; Yang et al., 2019; Radford and Narasimhan, 2018; Radford et al., 2019; Brown et al., 2020).",
"Several multimodal-BERT models have been developed that learn multimodal contextual embeddings through training jointly on linguistic data and visual data (Lu et al., 2019; Su et al., 2019; Li et al., 2019; Chen et al., 2020).",
"They achieve state-of-the-art results across many tasks and benchmarks, such as Visual Question Answering (Goyal et al., 2017), image and text retrieval (Lin et al., 2014), and Visual Commonsense Reasoning (Suhr et al., 2019).",
"BERT and multimodal-BERTs are blackbox models that are not easily interpretable.",
"From here on we refer to the text-only BERT models as 'BERT' and the multimodal-BERT models as 'multimodal-BERTs'.",
"It is not trivial to know what knowledge is encoded in the models and their embeddings.",
"A common method for getting insight into the embeddings of both textual and visual content is probing.",
"Language utterances have an inherent grammatical structure that contributes to their meaning.",
"Natural images have a characteristic spatial structure that likewise allows humans to interpret their meaning.",
"In this paper we hypothesize that the textual and visual embeddings learned from images that are paired with their descriptions encode structural knowledge of both the language and the visual data.",
"Our goal is to reveal this structural knowledge with the use of probing.",
"More specifically, in order to perform this probing, we first make the inherent structure of language and visuals explicit by a mapping between a dependency parse of the sentences that describe the image and by the dependency between the object regions in the image, respectively.",
"Because the language truthfully describes the image, and inspired by Draschkow and Võ (2017), we define a visual structure that correlates with the dependency tree structure and that arranges object regions in the image in a tree structure.",
"We call this visual dependency tree the scene tree .",
"An example of this mapping to the scene tree is visualized in Figure 1.",
"The aligned dependency tree and scene tree allow us to conduct a large set of experiments aimed at discovering encoded structures in neural representations obtained from multimodal-BERTs.",
"By making use of the structural probes proposed by Hewitt and Manning (2019), we compare the dependency trees learned by models with or without provided image features.",
"Furthermore, we investigate if scene trees are learned in the object region embeddings.",
"RQ 1: Do the textual embeddings trained with a multimodal-BERT retain their structural knowledge?",
"Sub-RQ 1.1: To what extent does the joint training in a multimodal-BERT influence the structures learned in the textual embeddings?",
"RQ 2: Do the visual embeddings trained with a multimodal-BERT learn to encode a scene tree?",
"In a broader framework this study might contribute to better representation learning inspired by how humans acquire language in a perceptual context.",
"It stimulates the learning of representations that are compositional in nature and are jointly influenced by the structure of language and the corresponding structure of objects in visuals.",
"Probing studies Several studies have been performed that aim at analyzing BERT and multimodal-BERTs.",
"For BERT, probes are designed that explore gender bias (Bhardwaj et al., 2021), relational knowledge (Wallat et al., 2020), linguistic knowledge for downstream tasks (Liu et al., 2019a), part-of-speech knowledge (Hewitt and Liang, 2019; Hewitt et al., 2021), and for sentence and dependency structures (Tenney et al., 2019; Hewitt and Manning, 2019).",
"These studies have shown that BERT latently learns to encode linguistic structures in its textual embeddings.",
"Basaj et al. (2021) made a first attempt at converting the probes to the visual modality and evaluated the information stored in the features created by visual models trained with self-supervision.",
"For multimodal-BERTs, one study by Parcalabescu et al. (2021) investigates how well these models learn to count objects in images and how well they generalize to new quantities.",
"They found that the multimodal-BERTs overfit the dataset bias and fail to generalize to out-of-distribution quantities.",
"Frank et al. (2021) found that visual information is much more used for textual tasks than textual information is used for visual tasks when using multimodal models.",
"These findings suggest more needed research into other capabilities of and knowledge in multimodal-BERT embeddings.",
"We build on this line of work but aim to discover structures encoded in the textual and visual embeddings learned with multimodal-BERTs.",
"This is a first step towards finding an aligned structure between text and images.",
"Future work could exploit this to make textual information more useful for visual tasks.",
"Structures in visual data There is large research interest in identifying structural properties of images e.g., scene graph annotation of the visual genome dataset (Krishna et al., 2016).",
"In the field of psychology, research towards scene grammars (Draschkow and Võ, 2017) evidences that humans assign certain grammatical structures to the visual world.",
"Furthermore, some studies investigate the grounding of textual structures in images, such as syntax learners (Shi et al., 2019) and visually grounded grammar inducers (Zhao and Titov, 2020).",
"Here the complete image is used, without considering object regions and their composing structure, to aid in predicting linguistic structures.",
"Closer to our work, Elliott and Keller (2013) introduced visual dependency relations (VDR), where spatial relations are created between objects in the image.",
"The VDR can also be created by locating the object and subject in a caption and matching it with object annotations in the image (Elliott and de Vries, 2015).",
"Our scene tree differs, since it makes use of the entire dependency tree of the caption to create the visual structure.",
"Multimodal-BERT Many variations of the BERT model implement a transformer architecture to process both visual and linguistic data, e.g., images and sentences.",
"These Multimodal-5659 BERTs can be categorized into two groups: single-stream and dual-stream encoders.",
"In the former, a regular BERT architecture processes the concatenated input of the textual description and the image through a transformer stack.",
"This allows for an \"unconstrained fusion of cross-modal features\" (Bugliarello et al., 2021).",
"Some examples of these models are ViL-BERT (Su et al., 2019), Visual-BERT (Li et al., 2019), and UNITER (Chen et al., 2020).",
"In the dual-stream models, the visual and linguistic features are first processed separately by different transformer stacks, followed by several transformer layers with alternating intra-modal and inter-modal interactions.",
"For the inter-modal interactions, the query-key-value matrices modeling the multi-head self-attention are computed, and then the key-value matrices are exchanged between the modalities.",
"This limits the interactions between the modalities but increases the expressive power with separate parameters.",
"Examples of such dual-stream models are ViLBERT (Lu et al., 2019), LXMERT (Tan and Bansal, 2019), and ERNIE-ViL (Yu et al., 2021).",
"2 4 Method 4.1 Tree Structures In the probing experiments we assume that the structural knowledge of a sentence is made explicit by its dependency tree structure and that likewise the structural knowledge of an image is represented by a tree featuring the dependencies between object regions.",
"Further, we assume that the nodes of a tree (words in the dependency tree of the sentence, phrase labels in the region dependency tree of the image) are represented as embeddings obtained from a layer in BERT or in a multimodal-BERT.",
"To generate the depths and distances values from the tree, we use properties of the embedding representation space (Mikolov et al., 2013).",
"For example, similar types of relations between embeddings have a similar distance between them, such as counties and their capital city.",
"The properties we use are that the length (the norm) of a vector which describes the depth in a tree and the distance between nodes that can be translated as the distance between vectors.",
"2 The ERNIE-ViL model is trained with scene graphs of the visual genome dataset.",
"We do not probe this model as there is an overlap between the training data of ERNIE-ViL and our evaluation data.",
"Algorithm 1 ConstructSceneT ree ( T t , P, I ) Input: Language dependency tree T t = { E t , V t } , with V t the set of T extIDs for words in a sentence and E t the set of edges such that each e t = ( v t,j , v t,k ) , where v t,k is a child node of v t,j Input: Set of phrases P , each p i describes one or more regions and covers multiple words Input: Image I Output: Scene tree T s 1: V s = {} , set of Nodes in Scene Tree T s 2: E s = {} , set of Edges in Scene Tree T s 3: v s, 0 = I , set Image as root node 4: D 0 = 0 , set root node depth as 0 5: add ( V s , v s, 0 ) 6: v t, 0 = F indRootNode ( T t ) 7: P hraseID 2 T extID (0) = v t, 0 8: for p i P do 9: v t,k = F indHighestNode ( p i ) 10: P hraseID 2 T extID ( p i ) = v t,k 11: D i = DepthInT ree ( T t , v t,k ) 12: for p i P ordered by D do 13: v t,k = P hraseID 2 T extID ( p i ) 14: while True do 15: e t = EdgeW ithChildNode ( E, v t,k ) 16: v t,j = SelectP arentNode ( e t ) 17: p p = T extID 2 P hraseID ( v t,j ) 18: if p p V s then 19: add ( V s , p i ) , add ( E s , ( p p , p i )) 20: D i = D p + 1 21: break while loop 22: else 23: v t,k = v t,j 24: return T s Generating distance values For the distance labels, a matrix D N n n is required, with each D ij describing the distance between nodes i and j .",
"To fill the matrix, we iterate over all possible pairs of nodes.",
"For nodes i and j , it is computed by starting at node i in the tree and traverse it until node j is reached while ensuring a minimum distance.",
"This is achieved by using the breadth-first search algorithm.",
"Generating depth values For the depth labels, we generate a vector d N n , with n the number of nodes in the tree.",
"There is a single node that is the root of the tree, to which we assign a depth of zero.",
"The depth increases at every level below.",
"Language dependency tree We use the dependency tree as linguistic structure.",
"The tree annotations are according to the Stanford dependency guidelines (De Marneffe and Manning, 2008).",
"They can either be provided as gold-standard in the dataset, or generated using the spacy dependency parser (Honnibal et al., 2020).",
"Scene tree Draschkow and V (2017) found that there are commonalities between words in language and objects in scenes, allowing to construct a scene grammar.",
"Furthermore, Zhao and Titov (2020) have shown that an image provides clues that improve grammar induction.",
"In line with these works, we want a visual structure that aligns with a linguistic representation like the dependency tree.",
"As visual structure, a scene graph could be used for the relations between regions (Krishna et al., 2016).",
"However, the unconstrained graph is difficult to align with the dependency tree.",
"Therefore, we propose a novel visual structure, the scene tree , that is created by mapping a textual dependency tree to the object regions of an image.",
"An example of such a mapping for an image-sentence pair is given in Figure 1.",
"This process requires a tree for the sentence and paired data for images and sentences.",
"Each node in the scene tree directly matches one or more visual regions.",
"The node description is a phrase that covers multiple words in the sentence (or nodes in the dependency tree).",
"The output of this method is a tree that contains the phrase trees that directly correspond to the regions.",
"The algorithm is completely described as pseudo-code in Algorithm 1.",
"The algorithm starts by initializing the scene tree.",
"We set the full image as the root node.",
"For each phrase that describes an image region, we select the dependency tree node (or word with a T extID ) that is closest to the root and assign this a phrase ID.",
"This creates a mapping between the phrases (Phrase IDs) and dependency tree nodes (Text IDs) P hraseID 2 T extID , and its reverse T extID 2 P hraseID .",
"We assign each phrase an initial depth, based on the word it maps to in P hraseID 2 T extID .",
"On line 12, the loop over the phrases that describe the object regions starts, to find the direct parent for each phrase so it can be added to the new scene tree.",
"For each phrase p i , we select the matching dependency tree node the v t,k from P hraseID 2 T extID .",
"From v t,k we follow the chain of parent nodes, until an ancestor v t,l is found that points back to a phrase p j (using T extID 2 P hraseID ) that is already a member of the scene tree.",
"Phrase p i is added to the tree as child of p j .",
"The completed tree of phrases is our scene tree .",
"Textual embeddings For each sentence l , every word becomes a node n i in the tree, such that we have a sequence of s nodes n l 1: s .",
"To obtain the textual embeddings h l 1: s R m , we do a wordpiece tokenization (Wu et al., 2016) and pass the sentence into BERT.",
"Depending on the requested layer, we take the output of that BERT layer as the embeddings.",
"For nodes with multiple embeddings because of the wordpiece tokenization, we take the average of those embeddings.",
"To obtain the textual embeddings h l 1: s for a multimodal-BERT, we use the same process but also provide visual features.",
"When an image is present, we enter the visual features (as described in the next paragraph), otherwise, a single masked all-zero feature is entered.",
"Visual embeddings For sentence with image l , the sequence of s nodes n l 1: s consists of the number of regions plus the full image.",
"The visual embeddings h l 1: s R m are obtained by passing the raw Faster R-CNN features (Ren et al., 2015) into the multimodal-BERT.",
"Depending on the requested layer, we take the output of that multimodal-BERT layer as the embeddings.",
"Here we shortly describe the structural probes as defined by Hewitt and Manning (2019).",
"Originally designed for text, we use these probes to map from an embedding space (either textual embeddings or visual embeddings) to depth or distance values as defined in Section 4.1.",
"Distance probe Given a sequence of s nodes n l 1: s (words or objects) and their embeddings h l 1: s R m , where l identifies the sequence and m the embedding size, we predict a matrix of s s distances.",
"First, we define a linear transformation B R k m with k the probe rank, such that BTB is a positive semi-definite, symmetric matrix.",
"By first transforming a vector h with matrix B , we get its norm like this: ( Bh ) T ( Bh ) .",
"To get the squared distance between two nodes i and j in sequence l , we compute the difference between node 5661 embeddings h i and h j and take the norm following equation 1: D ij = ( B ( h li h lj )) T ( B ( h li h lj )) (1) The only parameters of the distance probe are now the transformation matrix B , which can easily be implemented as a fully connected linear layer.",
"Identical to the work by Hewitt and Manning (2019), the probe is trained through stochastic gradient descent.",
"Depth probe For the depth probe, we transform the embedding of each node n i to their norm, so we can construct the vector d .",
"This imposes a total order on the elements and results in the depths.",
"We compute the squared vector norm (cid:107) h i (cid:107) 2 B with the following equation: d i = (cid:107) h i (cid:107) 2 B = ( Bh li ) T ( Bh li ) (2) 5 Experimental Setup 5.1 Data By using a text-only dataset, we can test how the textual embeddings of the multimodal-BERTs perform compared to the BERT model, without the interference from the visual embeddings.",
"This allows us to see how much information the multimodal-BERTs encode in the visual embeddings.",
"Therefore, we use the Penn Treebank (PTB3) (Marcus et al., 1999).",
"It is commonly used for dependency parsing (also by Hewitt and Manning (2019) from whom we borrow the probes) and consists of gold-standard dependency tree annotations according to the Stanford dependency guidelines (De Marneffe and Manning, 2008).",
"We use the default training/validation/testing split, that is, the subsets 2-21 for training, 22 for validation and 23 for testing of the Wall Street Journal sentences.",
"This provides us with 39.8k/1.7k/2.4k sentences for the splits, respectively.",
"The second dataset is the Flickr30k dataset (Young et al., 2014), which consists of multimodal image captioning data.",
"It has five caption annotations for each of the 30k images.",
"An additional benefit of this dataset are the existing extensions, specifically the Flickr30k-Entities (F30E) (Plum-mer et al., 2015).",
"In F30E all the phrases in the captions are annotated and match with region annotations in the image.",
"This paired dataset is used to create the scene trees proposed in Section 4.2.",
"The Flickr30k dataset does not provide gold-standard dependency trees.",
"Therefore, the transformer based Spacy dependency parser (Honnibal et al., 2020) is used to generate silver-standard dependency trees according to the Stanford dependency guidelines (De Marneffe and Manning, 2008).",
"The dataset consists of 30k images, with (mostly) 5 captions each, resulting in 148.9k/5k/5k sentences for the training/validation/testing splits, respectively.",
"We use two different multimodal-BERTs, one single-stream and one dual-stream model.",
"As implementation for the multimodal-BERTs, we make use of the VOLTA library (Bugliarello et al., 2021).",
"Here, all the models are implemented and trained under a controlled and unified setup with regard to hyperparameters and training data.",
"Based on the performance under this unified setup on the Flickr30k image-sentence matching task, we have chosen the best performing models: ViLBERT (Lu et al., 2019) as single-stream model and UNITER (Chen et al., 2020) as dual-stream model.",
"When probing the textual embeddings, we also use a text-only BERT-base model (from here on referred to as BERT) (Devlin et al., 2019).",
"Hewitt and Manning (2019) use the same model, allowing for easy comparability.",
"The implementation used is from the HuggingFace Transformer library (Wolf et al., 2020).",
"Hyperparameters For our setup and metrics, we follow the setup from Hewitt and Manning (2019).",
"The batch size is set to 32 and we train for a maximum of 40 epochs.",
"Early stopping is used to terminate training after no improvement on the validation L1-loss for 5 epochs.",
"The main metric used for both the distance and the depth probes is the Spearman rank coefficient correlation.",
"This indicates if the predicted depth vector of the nodes, or the predicted distance matrix of the nodes, correlate with the gold-standard (or silver) depths and distances generated according to the method in Section 4.4.",
"The Spearman correlation is computed for each length sequence separately.",
"We take the average over the scores of the lengths between 5 and 50 and call this the Distance Spearman (DSpr.) for the distance probe and 5662 DSpr.",
"For the depth probes, we also use the root accuracy (root_acc).",
"This computes the accuracy of predicting the root of the sequence.",
"This metric is only applicable for the textual embeddings, due to our method of generating the visual tree, where the root is always the full image at the start of the sequence.",
"For the distance probe, we make use of the undirected unlabelled attachment score (UUAS).",
"This directly tests how accurate the predicted tree is compared to the ground-truth (or silver) tree by computing the accuracy of predicted connections between nodes in the tree.",
"It does not consider the label for the connection or the direction of the connection (Jurafsky and Martin, 2021).",
"Baseline comparisons We design one baseline for the textual data and two for the visual data.",
"For the textual baseline, we use the initial word piece textual embeddings (from either BERT or a multimodal-BERT) before inserting them into the transformer stack.",
"We simply refer to it as baseline .",
"The first visual baseline implements the raw Faster R-CNN features (Ren et al., 2015) of each object region.",
"However, they have a larger dimen-3 Just as done by Hewitt and Manning (2019).",
"sion than the BERT embeddings.",
"We refer to it as R-CNN baseline .",
"The second baseline uses the visual embeddings before they are fed to the transformer stack.",
"This is a mapping from the Faster R-CNN features to the BERT embedding size.",
"We refer to it as baseline .",
"First, we want to determine the probe rank of the linear transformation used on the textual or the visual embeddings.",
"Based on results by Hewitt and Manning (2019), we set the probe rank for BERT to 128.",
"We run a comparison with several probe ranks on UNITER and ViLBERT to find the optimal setting for the textual and visual embeddings.",
"The results are shown and discussed in Appendix A. We use a rank of 128 for all our following experiments.",
"RQ 1 The multimodal-BERT models are pretrained on language data.",
"We assume that the resulting embeddings integrate structural grammatical knowledge and hypothesize that this knowledge will not be forgotten during multimodal training.",
"To determine if training on multimodal data affects the quality of predicting the dependency tree when trained solely with textual data, we train the probes with BERT and both multimodal-BERTs and evaluate on the PTB3 dataset (Marcus et al., 5663 DSpr.",
"Sub-RQ 1.1 We expect that more interaction between the regions and the text will have a stronger impact.",
"Some dependency attachments that are hard to predict might require visual knowledge.",
"Next to the effect on the linguistic knowledge, we also want to discover if the multimodal data helps the multimodal-BERTs in learning structural knowledge.",
"We run the probes on Flickr30k dataset (Young et al., 2014) with the textual embeddings for all our models.",
"Furthermore, we compare these to the difference in scores on the PTB3 dataset (Marcus et al., 1999).",
"RQ 2 The Multimodal-BERTs learn highly contextualized embeddings.",
"Therefore, we hypothesize that a model should be able to discover important interactions between object regions in the image.",
"To see if the model has learned to encode the scene tree in the visual region embeddings, we run the probes on the Flickr30k dataset (Young et al., 2014) with the visual embeddings.",
"Furthermore, to see if the scene tree is learned mainly through joint interaction with the textual embeddings, we compare the scores between the single-stream model UNITER (with many cross-modal interactions) and the dual-stream model ViLBERT (with limited cross-modal interactions).",
"This discussion is based on the results from the test split.",
"The results on the validation split (see Appendix B), lead to the same observations.",
"RQ 1: Do the textual embeddings trained with a multimodal-BERT retain their structural knowledge?",
"To answer RQ 1, we report the results for both structural probes on the PTB3 dataset.",
"Here we only use the textual embeddings, since no visual features are available.",
"The results for the depth probe are in Figure 2, and for the distance probe in Figure 3.",
"The results of both multimodal-BERTs (Fig-ures 2c and 3c for ViLBERT and Figures 2b and 3b for UNITER) in terms of NSpr.",
"and Root Acc are very comparable showing similar curves and scores.",
"For both, the seventh layer is the best performing one.",
"The shape of the curves across the layers is similar to those for the BERT model in Figures 2a and 3a.",
"However, the scores of the multimodal-BERTs drop significantly.",
"While the multimodal-BERTs were initialized with weights from BERT, they were trained longer on additional multimodal data with a different multimodal objective.",
"This shows that the multimodal training hampers the storing of grammatical structural knowledge in the resulting embeddings.",
"Sub-RQ 1.1: To what extent does the joint training in a multimodal-BERT influence the structures learned in the textual embeddings?",
"For this experiment, we compare the effect of having visual features present when using the structural probes on the textual embeddings.",
"We run the probes on Flickr30k.",
"The results for the depth probe are in Figure 4, and for the distance probe in Figure 5.",
"First, we see that for all models (BERT and multimodal-BERTs) the scores increase compared to the results on the PTB3 dataset (see discussion of RQ 1), but still follow a similar trend across the layers.",
"The latter is most likely due to the complexity of the sentences and language of the PTB3 dataset, which is simpler for the captions.",
"For ViLBERT, there is a drop in performance for the earlier layers.",
"We believe this is caused by the early stopping method firing early with these settings.",
"Another explanation is that it is more difficult for the dual-stream model to use the additional parameters.",
"BERT outperforms the multimodal-BERTs on PTB3, however, this is not the case on Flickr30k.",
"For the depth probe (Figure 4) and the UUAS metric on the distance probe (Figure 5), the results obtained on these two datasets are almost equal.",
"This can be due to the additional pretraining of the multimodal-BERTs on similar captioning sentences.",
"Another explanation is that, during such pretraining, the models learned to store relevant information in the visual embeddings.",
"We run an additional experiment where we use the pretrained multimodal-BERT, but while probing we only provide the sentence to the model, and mask out the image.",
"The results for the depth probe are in Figure 6, and for the distance probe in Figure 7.",
"Here we can see that the results are almost identical to when we provide the model with the visual embeddings.",
"This indicates that the model does not have any benefit from the visual data when predicting the structures for textual embeddings, and it seems that the model uses the extra parameters of the vision layers to store knowledge about the text.",
"RQ 2: Do the visual embeddings trained with a multimodal-BERT learn to encode a scene tree?",
"We aim to find the layer with the most structural knowledge learned when applied to multimodal data.",
"See the results in Figures 8 and 9.",
"Regarding the results for the depth probe (Fig-ure 8), the scores between layers fluctuate inconsistently.",
"The scores do improve slightly over the 5665 DSpr.",
"baselines, indicating that the multimodal-BERT encodes some knowledge of depth in the layers.",
"With regard to the distance probe (Figure 9), the trend in the curves across the layers indicate that this is a type of knowledge that can be learned for the regions.",
"The multimodal-BERTs seem to disregard scene trees.",
"There is a strong downward trend across the layers.",
"Furthermore, all the scores are much lower than the baseline and the R-CNN baseline scores.",
"This lack of learning of the scene tree can be caused by the chosen training objective of the multimodal-BERTs.",
"These objectives require an abstract type of information, where only basic features are needed to predict the masked items.",
"For the distance probe, there is a noticeable difference between the single-stream (Figure 13a) and the dual-stream (Figure 13b) models, where single stream models benefit from the multimodal interactions to retain structural knowledge.",
"For UNITER, the scores in the first layers are very close to the baseline, showing that the single stream interaction benefits the memorizing of the scene tree structure.",
"We made a first attempt at investigating whether the current Multimodal-BERT models encode structural grammatical knowledge in their textual embeddings, in a similar way as text-only BERT models encode this knowledge.",
"Furthermore, we were the first to investigate the existence of encoded structural compositional knowledge of the object regions in image embeddings.",
"For this purpose, we created a novel scene tree structure that is mapped from the textual dependency tree of the paired caption.",
"We discovered that the multimodal-BERTs encode less structural grammatical knowledge than BERT.",
"However, with image features present, it is still possible to achieve similar results.",
"While tree depths from the scene tree are not natively present in the features, we found that this could be a potential method of finding connections and distances between regions, already decently predicted with the Faster R-CNN features.",
"The Multimodal-BERT models are currently trained with an objective that does not enforce the learning or storing of these types of structural information.",
"Hence we assume that the models learn to encode more abstract knowledge in their features.",
"Our work opens possibilities to further research on scene trees as a joint representation of object compositions in an image and the grammatical structure of its caption.",
"Furthermore, we recommend investigating the training of multimodal-BERTs with objectives that enforce the encoding of structural knowledge.",
"We would like to thank Desmond Elliott, Djam Seddah, and Liesbeth Allein for feedback on the paper.",
"Victor Milewski and Marie-Francine Moens were funded by the European Research Council (ERC) Advanced Grant CALCULUS (grant agreement No. 788506).",
"Miryam de Lhoneux was funded by the Swedish Research Council (grant 2020-00437)."
] | [
"objective",
"abstain",
"objective",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"method",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"objective",
"objective",
"objective",
"objective",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"abstain",
"method",
"other",
"method",
"other",
"abstain",
"other",
"abstain",
"other",
"other",
"method",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"method",
"abstain",
"result",
"abstain",
"method",
"abstain",
"objective",
"other",
"other",
"other"
] |
[
"Abductive reasoning aims at inferring the most plausible explanation for observed events, which would play critical roles in various NLP applications, such as reading comprehension and question answering.",
"To facilitate this task, a narrative text based abductive reasoning task NLI is proposed, together with explorations about building reasoning framework using pretrained language models.",
"However, abundant event commonsense knowledge is not well exploited for this task.",
"To fill this gap, we propose a variational autoencoder based model ege-RoBERTa, which employs a latent variable to capture the necessary commonsense knowledge from event graph for guiding the abductive reasoning task.",
"Experimental results show that through learning the external event graph knowledge, our approach outperforms the baseline methods on the NLI task.",
"Abductive reasoning aims at seeking for the best explanations for incomplete observations (Bhagavat-ula et al., 2019).",
"For example, given observations Forgot to close window when leaving home and The room was in a mess , human beings can generate a reasonable hypothesis for explaining the observations, such as A thief entered the room based on commonsense knowledge in their mind.",
"However, due to the lack of commonsense knowledge and effective reasoning mechanism, this is still a challenging problem for today's cognitive intelligent systems (Charniak and Shimony, 1990; Oh et al., 2013; Kruengkrai et al., 2017).",
"Most previous works focus on conducting abductive reasoning based on formal logic (Eshghi et al., 1988; Levesque, 1989; Ng et al., 1990; Paul, 1993).",
"However, the rigidity of formal logic limits the application of abductive reasoning in NLP Corresponding author Figure 1:",
"tasks, as it is hard to express the complex semantics of natural language in a formal logic system.",
"To facilitate this, Bhagavatula et al. (2019) proposed a natural language based abductive reasoning task NLI.",
"As shown in Figure 1",
"(a), given two observed events O 1 and O 2 , the NLI task requires the prediction model to choose a more reasonable explanation from two candidate hypothesis events H 1 and H 2 .",
"Both observed events and hypothesis events are daily-life events, and are described in natural language.",
"Together with the NLI task, Bhagavatula et al. (2019) also explored conducting such reasoning using pretrained language models such as BERT (Devlin et al., 2019) and RoBERTa (Liu et al., 2019).",
"However, despite pretrained language models could capture rich linguistic knowledge benefit for understanding the semantics of events, additional commonsense knowledge is still necessary for the abductive reasoning.",
"For example, as illustrated in Figure 1",
"(b), given observations O 1 and O 2 , to choose the more likely explanation H 1 : A thief entered the room and exclude H 2 : A breeze blew in the window , prediction model should have the commonsense knowledge that it is hardly possible for a breeze to mess up the room, whereas a thief may enter the room from the open window ( I 1 ), then rummage through the room ( I 2 ) and lead to a mess.",
"These intermediary events ( I 1 and I 2 ) can serve as necessary commonsense knowledge for understanding the relationship between observed events and hypothesis events.",
"We notice that the observed events, hypothesis events, intermediary events and their relationships could be described using an event graph, which can be constructed based on an auxiliary dataset.",
"The challenge is how to learn such commonsense knowledge from the constructed event graph.",
"To address this issue, we propose an Event Graph Enhanced RoBERTa (ege-RoBERTa) model, and a two-stage training procedure.",
"Specifically, as shown in Figure 1",
"(c), on the basis of the RoBERTa framework, we additionally introduce a latent variable z to model the information about the intermediary events.",
"In the pretraining stage, ege-RoBERTa is trained upon an event-graph-based pseudo instance set to capture the commonsense knowledge using the latent variable z .",
"In the finetuning stage, model adapts the commonsense knowledge captured by z to conduct the abductive reasoning.",
"Experimental results show that ege-RoBERTa could effectively learn the commonsense knowledge from a well-designed event graph, and improve the model performance on the NLI task compared to the baseline methods.",
"The code is released at https://github.com/sjcfr/ege-RoBERTa.",
"As shown in Figure 1",
"(a), NLI can be defined as a multiple-choice task.",
"Given two observed events O 1 and O 2 happened in a sequential order, one needs to choose a more reasonable hypothesis event from two candidates H 1 and H 2 for explaining the observations.",
"Therefore, we formalize the abductive reasoning task as a conditional distribution p ( Y | O 1 , H i , O 2 ) , where H i { H 1 , H 2 } , and Y [0 , 1] is a relatedness score measuring the reasonableness of H i .",
"In the NLI dataset, H i is set to be an explanation event happens intermediate to O 1 and O 2 (Bhagavatula et al., 2019).",
"Hence, O 1 , O 2 and H i form an event temporal sequence O 1 , H i , O 2 .",
"For brevity, we denote the event sequence as X = ( O 1 , H i , O 2 ) .",
"Therefore, taking the event order into consideration, we further characterize the abductive reasoning task as p ( Y | X ) .",
"Formally, an event graph could be denoted as G = { V, R } , where V is the node set, and R is the edge set.",
"Each node V i V corresponds to an event, while R ij R is a directed edge V i V j along with a weight W ij , which denotes the probability that V j is the subsequent event of V i .",
"Given observed events and a certain hypothesis event, from the event graph we could acquire additional commonsense knowledge about: (1) the intermediary events, (2) the relationships between events .",
"As Figure 1",
"(b) shows, the observed events, hypothesis event and intermediary events compose another event sequence ( O 1 , I 1 , H i , I 2 , O 2 ) .",
"For clarity, we define such event sequence as posterior event sequence X (cid:48) , where X (cid:48) = ( O 1 , I 1 , H i , I 2 , O 2 ) .",
"The relationship between events within X (cid:48) could be described by an adjacency matrix A R 5 5 , with each element initialized using the edge weights of the event graph: A jk = (cid:26) W jk , if V j V k R, 0 , others .",
"The matrix A could describe the adjacency relationship between arbitrary two events in X (cid:48) .",
"3 Ege-RoBERTa as a Conditional Variational Autoencoder Based Reasoning Framework In this paper, rather than directly predicts the relatedness score Y based on the event sequence X , we propose to predict Y based on both X and additional commonsense knowledge (i.e. posterior event sequence X (cid:48) and adjacency matrix A ).",
"To this end, we introduce a latent variable z to learn such knowledge from an event graph through a two stage training procedure.",
"To effectively capture the event graph knowledge through z and conduct the abductive reasoning task based on z , we frame the ege-RoBERTa model as a conditional variational autoencoder (CVAE) (Sohn et al., 2015).",
"Specifically, with regard to the latent variable z , ege-RoBERTa characterizes the conditional distribution P ( Y | X ) using three neural networks: a prior network p ( z | X ) , a recognition network q ( z | X (cid:48) , A ) and a neural likelihood p ( Y | X, z ) , Figure 2: Illustration of the pretraining, finetuning and prediction process of ege-RoBERTa.",
"where and denote the parameters of networks.",
"Moreover, instead of directly maximize P ( Y | X ) , following CVAE (Sohn et al., 2015), ege-RoBERTa proposes to maximize the evidence lower bound (ELBO) of P ( Y | X ) : LELBO ( , ) = E q ( z | X (cid:48) ,A ) log( p ( Y | X, z )) KL( q ( z | X (cid:48) , A ) || p ( z | X )) log p ( Y | X ) (2) Note that, in the recognition network, the latent variable z is directly conditioned on X (cid:48) and A , where X (cid:48) = { O 1 , I 1 , H i , I 2 , O 2 } is the posterior event sequence, A is an adjacency matrix describing the relationship between events within X (cid:48) .",
"This enables z to capture the event graph knowledge from X (cid:48) and A .",
"Through minimizing the KL term of ELBO, we can teach the prior network p ( z | X ) to learn the event graph knowledge from the recognition network as much as possible.",
"Then in the neural likelihood p ( Y | X, z ) the relatedness score Y could be predicted based on X and z , which captures the event graph knowledge.",
"However, the event graph knowledge is absent in the NLI dataset.",
"To learn such knowledge, we design the following two-stage training procedure: Pre-training Stage: Learning Event Graph Knowledge from a Pseudo Instance Set In this stage, ege-RoBERTa is pretrained on a prebuilt event-graph-based pseudo instance set, which contains rich information about the intermediary events and the events relationships.",
"As shown in Figure 2",
"(a), the latent variable z is directly conditioned on X (cid:48) and A .",
"Therefore, z could be employed to learn the event graph knowledge.",
"Finetuning Stage: Adapt Event Graph Knowledge to the Abductive Reasoning Task As Figure 2",
"(b) shows, at the finetuning stage, ege-RoBERTa is trained on the NLI dataset Figure 3: Architecture of ege-RoBERTa.",
"without the additional information X (cid:48) and A .",
"In this stage model learns to adapt the captured event graph knowledge to the abductive reasoning task.",
"Then as Figure 2",
"(c) shows, after the two-stage training process, ege-RoBERTa could predict the relatedness score Y based on the latent variable z .",
"We introduce the specific implementation of ege-RoBERTa.",
"As illustrated in Figure 3, ege-RoBERTa introduces four modules in addition to the RoBERTa framework: (1) an aggregator providing representation for any event within X and X (cid:48) ; (2) an attention-based prior network for modeling p ( z | X ) ; (3) a graph neural network based recognition network for modeling q ( z | X (cid:48) , A ) ; (4) a merger to merge the latent variable z into RoBERTa frame for downstream abductive reasoning task.",
"The event representation aggregator provides distributed representation for events in both the event sequence X and the posterior event sequence X (cid:48) .",
"To this end, the aggregator employs attention mechanism to aggregate token representations of the event sequence from hidden states of RoBERTa.",
"Given an event sequence X composed of tokens [ [CLS], ( x 11 ,..., x 1 l 1 ),...,( x 31 ,..., x 3 l 3 ) ] (where [CLS] is the special classification token (Devlin et al., 2019), and x jk is the k th token within the j th event), the M th transformer layer of RoBERTa encodes these tokens into contextualized distributed representations H ( M ) = [ h [ CLS ] , ( h 11 ,..., h 1 l 1 ),...,( h 31 ,..., h 3 l 3 ) ] , where h jk R 1 d is the distributed representation of the k th token within the j th event.",
"Then for the j th event, the distributed representation is initialized as q j = 1 l j (cid:80) h jl j .",
"Multi-head attention mechanism ( MultiAttn ) (Vaswani et al., 2017) is employed to softly select information from H ( M ) and get the representation of each event: e j = MultiAttn( q j , H ( M ) ) .",
"For brevity, we denote the vector representation of all events in X using a matrix EX , where EX = { e 1 , e 2 , e 3 } R 3 d .",
"Note that, through the embedding layer of RoBERTa, position information has been injected into the token representations.",
"Therefore, EX derived from token representations carries event order information.",
"In addition, since EX is obtained from the hidden states of RoBERTa, rich linguistic knowledge within RoBERTa could be utilized to enhance the comprehension of event semantics.",
"By the same way, the representation of events within X (cid:48) could be calculated, which we denote as EX (cid:48) .",
"The recognition network models q ( z | X (cid:48) , A ) based on EX (cid:48) and A , where EX (cid:48) is the representations of events within X (cid:48) .",
"Following traditional VAE, q ( z | X (cid:48) , A ) is assumed to be a multivariate Gaussian distribution: q ( z | X (cid:48) , A ) N ( (cid:48) ( X (cid:48) , A ) , D ) , (4) where D denotes the identity matrix.",
"To obtain ( X (cid:48) , A ) , we first combine EX (cid:48) and adjacency matrix A using a GNN (Kipf et al., 2016): E ( U ) (cid:48) = ( AEX (cid:48) W ( u ) ) .",
"(5) where ( ) is the sigmoid function; W ( u ) R d d is a weight matrix and E ( U ) (cid:48) are relational information updated event representations.",
"Then a multi-head self-attention operation is performed to promote the fusion of event semantic information and relational information: E ( U ) (cid:48) = MultiAttn( E ( U ) (cid:48) , E ( U ) (cid:48) ) .",
"(6) Finally, to estimate ( X (cid:48) , A ) , we aggregate information within E ( U ) (cid:48) using a readout function g ( ) : (cid:48) = g ( E ( U ) (cid:48) ) .",
"(7) Following Zhou et al. (2019) and Zhong et al. (2019), we set g ( ) to be a mean-pooling operation.",
"Hence, by estimating (cid:48) based on the relational information updated event representation E ( U ) (cid:48) , event graph knowledge about X (cid:48) and A is involved into the latent variable z .",
"The prior network models p ( z | X ) based on EX , where EX is the representation matrix of events in X .",
"The same as the recognition network, p ( z | X ) also follows multivariate normal distribution, while the parameters are different: p ( z | X ) N ( ( X ) , D ) , (8) where D denotes the identity matrix.",
"To obtain ( X ) , different from the recognition network, the prior network starts from updating EX using a multi-head self-attention: E ( U ) = MultiAttn( EX , EX ) .",
"(9) Then an additional multi-head self-attention operation is performed to get deeper representations: E ( U ) = MultiAttn( E ( U ) , E ( U ) ) .",
"(10)",
"Finally, ( X ) is estimated through aggregating information from E ( U ) : = g ( E ( U ) ) , (11) where g ( ) is a mean-pooling operation.",
"The merger module merges the latent variable z as well as updated (deep) representation of events into the N th transformer layer of RoBERTa frame for predicting the relatedness score.",
"To this end, we employ multi-head attention mechanism to softly select relevant information from z and E ( U ) , and then update the hidden state of the N th transformer layer of RoBERTa.",
"Note that, given X , p ( | X ) achieves its maximum when z = .",
"Hence, making predictions based on could be regarded as finding the best explanation based on the most likely commonsense situation.",
"Through integrating latent variable z , H ( N ) contains the event graph knowledge.",
"By taking H ( N ) as the input of the subsequent ( N + 1) th transformer layers of RoBERTa for predicting the relatedness score, the abductive reasoning task is conducted based on the additional event graph knowledge.",
"The NLI task requires model to choose a more likely hypothesis event from two candidates.",
"However, in the pre-training stage, the negative examples are absent in the pseudo instances.",
"To address this issue, following the method of Liu et al. (2019), in the pre-training stage ege-RoBERTa is trained to predict the masked tokens in the event sequence X rather than the relatedness score.",
"In addition, in order to balance the masked token prediction loss with the KL term, we introduce an additional hy-perparameter .",
"Hence, the objective function in the pretraining stage is defined as follows: LELBO ( , ) = E q ( z | X (cid:48) ,A ) log LMLM ( X, z ; ) KL( q ( z | X (cid:48) , A ) || p ( z | X )) , (14) where log LMLM ( X, z ; ) is the masked token prediction loss.",
"Intuitively, through minimizing the KL term, we aim to transmit the event graph knowledge from the recognition network to the prior network.",
"In the finetuning stage, ege-RoBERTa is trained to adapt the learned event graph knowledge to the abductive reasoning task.",
"Without the recogniton network, we formulate the objective function as: L ( ) = p ( Y | z, X ) = p ( Y | z, X ) p ( z | X ) .",
"We implement two different sizes of ege-RoBERTa model (i.e. ege-RoBERTa-base and ege-RoBERTa-large) based on RoBERTa-base framework and RoBERTa-large framework, respectively.",
"For the ege-RoBERTa-base model, in the aggregator, the prior network, the recognition network and the merger, the dimension of the attention mechanism d is set as 768, and all multi-head attention layers contain 12 heads.",
"While for the ege-RoBERTa-large model, d is equal to 1024 and all multi-head attention layers contain 16 heads.",
"In the ege-RoBERTa-base model, token representations are aggregated from the 7th transformer layer of RoBERTa, and the latent variable is merged to the 10th transformer layer of RoBERTa.",
"While for the ege-RoBERTa-large model, the aggregator and merger layer are set as the 14th and 20th layer, respectively.",
"The balance coefficient equals 0.01.",
"More details are provided in the Appendix.",
"quadruples in training, development and test set, respectively.",
"The observation events are collected from a short story corpus ROCstory (Mostafazadeh et al., 2016), while all of hypothesis events are independently generated through crowdsourcing.",
"The event graph serves as an external knowledge base to provide information about the relationship between observation events and intermediary events.",
"To this end, we build the event graph based on an auxiliary dataset, which are composed of two short story corpora independent to NLI, i.e., VIST (Huang et al., 2016), and TimeTravel (Qin et al., 2019).",
"Both VIST and TimeTravel are composed of five-sentences short stories.",
"Totally there are 121,326 stories in the auxiliary dataset.",
"To construct the event graph, we define each sentence in the auxiliary dataset as a node in the event graph.",
"To get the edge weight W ij between two nodes V i and V j (i.e., the probability that V j is the subsequent event of V i ), we finetune a RoBERTa-large model through a next sentence prediction task.",
"Specifically, we define adjacent sentence pairs in the story text (for example, [1 st , 2 nd ] sentence, [4 th , 5 th ] sentence of a story) as positive instances, define nonadjacent sentence pairs or sentences pairs in reverse order (such as [1 st , 3 rd ] sentence, [5 th , 4 th ] sentence of a story) as negative instances.",
"After that we sample 300,000 positive and 300,000 negative instances from the auxiliary dataset.",
"Then given an event pair ( V i , V j ), the finetuned RoBERTa-large model would be able to predict the probability that V j is the subsequent event of V i .",
"Event Graph Based Pseudo Instance Set for Pretraining ege-RoBERTa To effectively utilize the event graph knowledge, we induce a set of pseudo instances for pretraining the ege-RoBERTa model.",
"Specifically, given a five-sentence-story within the auxiliary dataset, as Table 1 shows, we define the 1 st and 5 th sentence of the story as two observed events, the 3 rd sentence as the hypothesis event, the 2 nd and 4 th sentence as intermediary events, respectively.",
"In this way, the posterior event sequence X (cid:48) and the event sequence X of a pseudo instance could be obtained.",
"In addition, given X (cid:48) , we initialize the elements of the adjacency matrix A using the edge weights of the event graph, and scale A so that its row sums equal to 1.",
"After the above operations, each pseudo instance is composed of an event sequence X , a posterior event sequence X (cid:48) which contains intermediary event information, and an adjacency matrix A which describes relationships between events within X (cid:48) .",
"We compare ege-RoBERTa with:",
"GPT (Radford et al., 2018) is a multilayer-transformer based unidirectional pretrained language model.",
"BERT (Devlin et al., 2019) is a multilayer-transformer based bi-directional pretrained language model.",
"RoBERTa (Liu et al., 2019) refers robustly optimized BERT.",
"SVM uses features about length, overlap and sentiment to predict the more likely hypothesis event.",
"Infersent (Conneau et al., 2017) represents sentences using a Bi-LSTM, and predicts the relatedness score using MLP.",
"ege-RoBERTa u(npretrained) refers to the ege-RoBERTa model without the pretraining stage.",
"ege-RoBERTa =0 refers to setting the balance coefficient to 0 in the pretraining stage.",
"Note that all pretrained-language-model-based baselines (i.e., GPT, BERT and RoBERTa) are finetuned on the NLI dataset as the method of Bhagavatula et al. (2019) to adapt to the abductive reasoning task.",
"In addition, we also list two concurrent works:",
"(i) L 2 R (Zhu et al., 2020) learns to rank the candidate hypotheses with a novel scoring function.",
"(ii) RoBERTa-GPT-MHKA (Paul et al., 2020) enhances pretrained language model with social and causal commonsense knowledge for NLI task.",
"We list the prediction accuracy (%) in Table 2, and observe that:",
"(1) Compared with SVM and Infersent, pretrained language model based methods: GPT, BERT, RoBERTa and ege-RoBERTa show sig-nificant better performances in abductive reasoning task.",
"This is because through the pre-training Methods Accu.",
"stage language models could capture rich linguistic knowledge that is helpful for understanding the semantics of events.",
"(2) Comparison between ege-RoBERTa-large u with ege-RoBERTa-large shows that the pretraining process can increase the accuracy of abductive reasoning.",
"In addition, comparison between ege-RoBERTa-large =0 with ege-RoBERTa-large indicates that in the pre-training process, ege-RoBERTa could capture the event graph knowledge through the latent variable to enhance the abductive reasoning.",
"Furthermore, the relative close performance between ege-RoBERTa-large u and ege-RoBERTa-large =0 suggest that the main improvements of the performance is brought by the event graph knowledge.",
"(3) Compared to RoBERTa, ege-RoBERTa achieves higher prediction accuracy for both the base and large sized model.",
"This result confirms our motivation that learning event graph knowledge could be helpful for the abductive reasoning task.",
"(4) According to Bhagavatula et al. (2019), human performance on the test set of NLI is 91.4%.",
"While the RoBERTa-large model has achieved an accuracy of 83.9%.",
"Therefore, further improvements over RoBERTa-large could be challenging.",
"Through learning the event graph knowledge, our proposed method ege-RoBERTa further improves the relative accuracy.",
"(5) Our approach has comparable performance with the SOTA concurrent work, which combines RoBERTa with GPT, and incorporates social and causal commonsense into model.",
"The combination of both methods would further increase the model performance.",
"All studies are conducted on the development set of the NLI using the ege-RoBERTa-base model.",
"Influence of the Balance Coefficient In the pretraining stage, the balance coefficient controls the trade off between event graph knowledge learning and abductive reasoning.",
"To investigate the specific influence of the balance coefficient, we compare the performance of ege-RoBERTa model pretrained with different .",
"As shown in Figure 4, the prediction accuracy continues to increase as increases from 0 to 0.01.",
"This is because adequate event graph knowledge can offer guidance for the abductive reasoning task.",
"While when exceeds 0.05, the accuracy start to decrease, as the over-emphasis of event graph knowledge learning would in turn undermine the model performance.",
"Influence of the External Commonsense Knowledge We study the specific effect of the event relational information and the intermediary event information by controlling the generation of pseudo instances.",
"In specific, we eliminate the influence of the adjacency matrix A by replacing A with a randomly initialized matrix A .",
"Similarly, the influence of the intermediary events I 1 and I 2 is eliminated through substituting them by two randomly sampled events I 1 and I 2 .",
"As Table 3 shows, both the replacement of A and { I 1 , I 2 } lead to obvious decrease of model performance.",
"This demonstrates that ege-RoBERTa can use both two kinds of event graph knowledge for enhancing the abductive reasoning task.",
"To find out if the improvement of Ege-RoBERTa is brought by a certain dataset, and the specific",
"relationship between the model performance with the number of pseudo instances, we conduct following experiments: (1) excluding a certain dataset when inducing pseudo instances; (2) pretraining the ege-RoBERTa-base model with different number of pseudo instances.",
"The corresponding results on the dev set of NLI is shown in Table 4.",
"We can find that, the elimination of both dataset leads to decrease of model performances.",
"This suggests that the ege-RoBERTa model could capture relevant event graph knowledge from both dataset.",
"While the prediction accuracy continues to increase along with the number of pseudo instances used for pretraining the ege-RoBERTa model.",
"This is because the accumulation of commonsense knowledge is helpful for the abductive reasoning task.",
"In addition, it also indicates that the model performance could be further improved if the auxiliary dataset is even more enlarged.",
"Table 5 provides an example of model prediction results.",
"Given two observed events O 1 hates Fall and O 2 didn't have to experience Fall in Guam , the hypothesis event H 1 moved to Guam is more likely to explain the two motivations of observed events.",
"However, H 1 implicitly relies on a precondition that in Guam, Fall could be eluded.",
"Correspondingly, in the auxiliary dataset, there is information supporting the hypothesis event H 1 that there is no Fall in Guam.",
"In this case, ege-RoBERTa chooses the hypothesis event H 1 , whereas RoBERTa chooses the wrong hypothesis event H 2 .",
"This indicates that ege-RoBERTa could learn the event graph knowledge in the pretraining process for improving the reasoning performance.",
"In this paper, to involve the event graph knowledge, we formalize the posterior event sequence as X (cid:48) = { O 1 , I 1 , H i , I 2 , O 2 } .",
"While our approach also allows other forms of posterior event sequences, such as X (cid:48) = { O 1 , H i , I 1 , O 2 } , X (cid:48) = { O 1 , I 1 , H i , O 2 } , or X (cid:48) = { O 1 , I 1 , I 2 , H i , O 2 } , etc.",
"We also pretrained ege-RoBERTa on pseudo-instance sets derived by these manners.",
"The results are shown in Table 6.",
"We find that whatever forms of posterior event sequences involved in ege-RoBERTa, our approach can achieve consistently better performance than the baseline method.",
"This confirms that our approach is sufficiently generalizable to deal with various forms of external event-sequence knowledge.",
"Furthermore, ege-RoBERTa can also be equipped with more types of event graph knowledge, such as background knowledge by: formalizing the posterior event sequence as X (cid:48) = { B 1 , . . . , B m , E 1 , . . . , E n } , where { B 1 , . . . , B m } is a set of background events for a given prior event sequence { E 1 , . . . , E n } .",
"This demonstrates the potential of ege-RoBERTa in learning different kinds of event graph knowledge for different event inference tasks.",
"Most previous studies focus on formal logic based abductive reasoning (Eshghi et al., 1988; Levesque, 1989; Konolige, 1990; Paul, 1993).",
"To infer the most reasonable hypothesis, the abductive reasoning process could be divided into two steps: (1) proposing reasonable hypotheses; (2) finding the best explanation from the hypotheses (Levesque, 1989; Konolige, 1990; Paul, 1993).",
"However, the rigidity of formal logic limits its application in NLP domain.",
"To facilitate this, Bhagavatula et al. (2019) proposed a text based abductive reasoning task NLI.",
"To solve the this task, Zhu et al. (2020) formalize NLI as a rank learning task, and propose a novel ranking function.",
"While Paul et al. (2020) enhances the reasoning model with social commonsense and causal commonsense knowledge.",
"Compared to their works, for enhancing the abductive reasoning process, we propose to incorporate event graph knowledge by a CVAE based model ege-RoBERTa.",
"In addition, we argue that our approach can be easily extended to other event inference tasks.",
"Understanding events and their relationships are crucial for various natural language inference (NLI) tasks (Kruengkrai et al., 2017).",
"Hence, a number of previous studies explore conducting NLI tasks based on event graphs.",
"For example, to predict the subsequent event for a given event context, Li et al. (2018) build an event evolutionary graph (EEG), and make prediction using a scaled graph neural network.",
"While Wu et al. (2019) predict the propagation of news event through combining an historical event propagation graph with temporal point process.",
"In addition to the event prediction related tasks, Liu et al. (2017) propose to enhance the news recommendation by incorporating additional event graph information.",
"Liu et al. (2016) detect the textual contradiction by using event graphs as additional evidence.",
"In this paper, we employ event graph knowledge for guiding the abductive reasoning.",
"To this end, we propose a variational autoencoder based framework ege-RoBERTa, which employs a latent variable z to implicitly capture the necessary event graph knowledge and enhance the pretrained language model RoBERTa.",
"In this paper, we propose a variational autoencoder based framework ege-RoBERTa with a two-stage training procedure for the abductive reasoning task.",
"In the pretraining stage, ege-RoBERTa is able to learn commonsense knowledge from an event graph through the latent variable, then in the following stage the learned event graph knowledge can be adapted to the abductive reasoning task.",
"Experimental results show improvement over the baselines on the NLI task.",
"We thank the anonymous reviewers for their constructive comments, and gratefully acknowledge the support of the National Key Research and Development Program of China (2020AAA0106501), and the National Natural Science Foundation of China (61976073)."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"abstain",
"method",
"other",
"other",
"abstain",
"other",
"other",
"method",
"other",
"other",
"objective",
"abstain",
"method",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"method",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"objective",
"objective",
"abstain",
"abstain",
"other"
] |
[
"Conversational semantic parsers map user utterances to executable programs given dialogue histories composed of previous utterances, programs, and system responses.",
"Existing parsers typically condition on rich representations of history that include the complete set of values and computations previously discussed.",
"We propose a model that abstracts over values to focus prediction on typeand function-level context.",
"This approach provides a compact encoding of dialogue histories and predicted programs, improving generalization and computational efficiency.",
"Our model incorporates several other components, including an atomic span copy operation and structural enforcement of well-formedness constraints on predicted programs, that are particularly advantageous in the low-data regime.",
"Trained on the SMCALFLOW and TREEDST datasets, our model outperforms prior work by 7.3% and 10.6% respectively in terms of absolute accuracy.",
"Trained on only a thousand examples from each dataset, it outperforms strong baselines by 12.4% and 6.4%.",
"These results indicate that simple representations are key to effective generalization in conversational semantic parsing.",
"Conversational semantic parsers, which translate natural language utterances into executable programs while incorporating conversational context, play an increasingly central role in systems for interactive data analysis (Yu et al., 2019), instruction following (Guu et al., 2017), and task-oriented dialogue (Zettlemoyer and Collins, 2009).",
"An example of this task is shown in Figure 1. Typical models are based on an autoregressive sequence prediction approach, in which a detailed representation of the dialogue history is concatenated to the input sequence, and predictors condition on this sequence and all previously generated components of the output (Suhr et al., 2018).",
"While this approach can capture arbitrary dependencies between inputs and outputs, it comes at the cost of sampleand computational inefficiency.",
"We propose a new value-agnostic approach to contextual semantic parsing driven by type-based representations of the dialogue history and function-based representations of the generated programs.",
"Types and functions have long served as a foundation for formal reasoning about programs, but their use in neural semantic parsing has been limited, e.g., to constraining the hypothesis space (Krishna-murthy et al., 2017), guiding data augmentation (Jia and Liang, 2016), and coarsening in coarse-to-fine models (Dong and Lapata, 2018).",
"We show that representing conversation histories and partial programs via the types and functions they contain enables fast, accurate, and sample-efficient contextual semantic parsing.",
"We propose a neural encoder decoder contextual semantic parsing model which, in contrast to prior work: 1. uses a compact yet informative representation of discourse context in the encoder that considers only the types of salient entities that were predicted by the model in previous turns or that appeared in the execution results of the predicted programs, and 2. conditions the decoder state on the sequence of function invocations so far, without conditioning on any concrete values passed as arguments to the functions.",
"Our model substantially improves upon the best published results on the SMCALFLOW (Semantic Machines et al., 2020) and TREEDST (Cheng et al., 2020) conversational semantic parsing datasets, improving model performance by 7.3% and 10.6%, respectively, in terms of absolute accuracy.",
"In further experiments aimed at quantifying sample efficiency, ENTITYPROPOSERS Number(2) Month.May Proposeentitiesfromthecurrentuserutterance.",
"it improves accuracy by 12.4% and 6.4% respectively when trained on only a thousand examples from each dataset.",
"Our model is also effective at non-contextual semantic parsing, matching state-of-the-art results on the JOBS , GEOQUERY , and ATIS datasets (Dong and Lapata, 2016).",
"This is achieved while also reducing the test time computational cost by a factor of 10 (from 80ms per utterance down to 8ms when running on the same machine; more details are provided in Appendix H), when compared to our fastest baseline, which makes it usable as part of a real-time conversational system.",
"One conclusion from these experiments is that most semantic parses have structures that depend only weakly on the values that appear in the dialogue history or in the programs themselves.",
"Our experiments find that hiding values alone results in a 2.6% accuracy improvement in the low-data regime.",
"By treating types and functions, rather than values, as the main ingredients in learned representations for semantic parsing, we improve model accuracy and sample efficiency across a diverse set of language understanding problems, while also significantly reducing computational costs.",
"Our goal is to map natural language utterances to programs while incorporating context from dialogue histories (i.e., past utterances and their associated",
"associated programs and execution results).",
"We model a program as a sequenceo of function invocations , each consisting of a function and zero or more argument values , as illustrated at the lower right of Figure 1. The argument values can be either literal values or references to results of previous function invocations.",
"The ability to reference previous elements of the sequence, sometimes called a target-side copy , allows us to construct programs that involve re-entrancies.",
"Owing to this referential structure, a program can be equivalently represented as a directed acyclic graph (see e.g., Jones et al., 2012; Zhang et al., 2019).",
"We propose a Transformer-based (Vaswani et al., 2017) encoderdecoder model that predicts programs by generating function invocations sequentially, where each invocation can draw its arguments from an inventory of values (2.5)possibly copied from the utteranceand the results of previous function invocations in the current program.",
"The encoder (2.2) transforms a natural language utterance and a dialogue history to a continuous representation.",
"Subsequently, the decoder (2.3) uses this representation to define an autoregressive distribution over function invocation sequences and chooses a high-probability sequence by performing beam search.",
"As our experiments (3) will show, a nave encoding of the complete dialogue history and program results in poor model accuracy.",
"Our approach assumes that programs have type annotations on all values and function calls, similar to the setting of Krishnamurthy et al. (2017).",
"1 Furthermore, we assume that program prediction is local in that it does not require program fragments to be copied from the dialogue history (but may still depend on history in other ways).",
"Several formalisms, including the typed references of Zettlemoyer and Collins (2009) and the meta-computation operators of Semantic Machines et al. (2020), make it possible to produce local program annotations even for dialogues like the one depicted in Figure 2, which reuse past computations.",
"We transformed the datasets in our experiments to use such meta-computation operators (see Appendix C).",
"We also optionally make use of entity proposers , similar to Krishnamurthy et al. (2017), which annotate spans from the current utterance with typed values.",
"For example, the span one in Change it to one might be annotated with the value 1 of type Number .",
"These values are scored by the decoder along with other values that it considers (2.5) when predicting argument values for function invocations.",
"Using entity proposers aims to 1 This requirement can be trivially satisfied by assigning all expressions the same type, but in practice defining a set of type declarations for the datasets in our experiments was not difficult (refer to Appendix C for details).",
"help the model generalize better to previously unseen values that can be recognized in the utterance using hard-coded heuristics (e.g., regular expres-sions), auxiliary training data, or other runtime information (e.g., a contact list).",
"In our experiments we make use of simple proposers that recognize numbers, months, holidays, and days of the week, but one could define proposers for arbitrary values (e.g., song titles).",
"As described in 2.5, certain values can also be predicted directly without the use of an entity proposer.",
"The encoder, shown in Figure 3, maps a natural language utterance to a continuous representation.",
"Like many neural sequence-to-sequence models, we produce a contextualized token representation of the utterance, H utt RU h enc , where U is the number of tokens and h enc is the dimensionality of their embeddings.",
"We use a Transformer encoder (Vaswani et al., 2017), optionally initialized using the BERT pretraining scheme (Devlin et al., 2019).",
"Next, we need to encode the dialogue history and combine its representation with H utt to produce history-contextualized utterance token embeddings.",
"Prior work has incorporated history information by linearizing it and treating it as part of the input utterance (Cheng et al., 2018; Semantic Machines et al., 2020; Aghajanyan et al., 2020).",
"While flexi-ble and easy to implement, this approach presents a number of challenges.",
"In complex dialogues, history encodings can grow extremely long relative to the user utterance, which:",
"(i) increases the risk of overfitting,",
"(ii) increases computational costs (because attentions have to be computed over long sequences), and",
"(iii) necessitates using small batch sizes during training, making optimization difficult.",
"Thanks to the predictive locality of our representations (2.1), our decoder (2.3) never needs to retrieve values or program fragments from the dialogue history.",
"Instead, context enters into programs primarily when programs use referring expressions that point to past computations, or revision expressions that modify them.",
"Even though this allows us to dramatically simplify the dialogue history representation, effective generation of referring expressions still requires knowing something about the past.",
"For example, for the utterance What's next? the model needs to determine what What refers to.",
"Perhaps more interestingly, the presence of dates in recent DIALOGUE HISTORY TYPES Unit Constraint[String] Constraint[Event] Event String EventNotFoundError embed decoder UTTERANCE ENCODER DIALOGUE HISTORY ENCODER USER UTTERANCE \"Oh, it's just called shopping. It may be at 2.\" attention K V Q Figure 3: Illustration of our encoder (2.2), using the example of Figure 1. The utterance is processed by a Transformer-based (Vaswani et al., 2017) encoder and combined with information extracted from the set of dialogue history types using multi-head attention.",
"turns (or values that have dates, such as meetings) should make the decoder more eager to generate referring calls that retrieve dates from the dialogue history; especially so if other words in the current utterance hint that dates may be useful and yet date values cannot be constructed directly from the current utterance.",
"Subsequent steps of the decoder which are triggered by these other words can produce functions that consume the referred dates.",
"We thus hypothesize that it suffices to strip the dialogue history down to its constituent types, hiding all other information.",
"2 Specifically, we extract a set T of types that appear in the dialogue history up to m turns back, where m = 1 in our experiments.",
"3 Our encoder then transforms H utt into a sequence of history-contextualized embeddings H enc by allowing each token to attend over T .",
"This is motivated by the fact that, in many cases, dialogue history is important for determining the meaning of specific tokens in the utterance, rather than the whole utterance.",
"Specifically, we learn embeddings T R |T | h type for the extracted types, where h type is the embedding size, and use the attention mechanism of Vaswani et al. (2017) to contextualize H utt : H enc (cid:44) H utt + MHA ( H utt (cid:124)(cid:123)(cid:122)(cid:125) Queries , T (cid:124)(cid:123)(cid:122)(cid:125) Keys , T (cid:124)(cid:123)(cid:122)(cid:125) Values ) , (1) where MHA stands for multi-head attention, and each head applies a separate linear transformation to the queries, keys, and values.",
"Intuitively, 2 For the previous example, if the type List[Event] appeared in the history then we may infer that What probably refers to an Event .",
"3 We experimented with different values of m and found that increasing it results in worse performance, presumably due to overfitting.",
"[0] +( 1, 2) [1] +([0], 3) [2] +([1], 4) [3] +([2], 5) [0] +(Number, Number) [1] +(Number, Number) [2] +(Number, Number) Consider the following program representing the expression 1 + 2 + 3 + 4 + 5: While generating this invocation, the decoder only gets to condition on the following program prefix: Argument values are masked out!",
"each utterance-contextualized token is further contextualized in (1) by adding to it a mixture of embeddings of elements in T , where the mixture coefficients depends only on that utterance-contextualized token.",
"This encoder is illustrated in Figure 3.",
"As we show in 3.1, using this mechanism performs better than the nave approach of appending a set-of-types vector to H utt .",
"The decoder uses the history-contextualized representation H enc of the current utterance to predict a distribution over the program that corresponds to that utterance.",
"Each successive line i of invokes a function f i on an argument value tuple ( v i 1 , v i 2 , . . . , v iA i ) , where A i is the number of (for-mal) arguments of f i .",
"Applying f i to this ordered tuple results in the invocation f i ( a i 1 = v i 1 , a i 2 = v i 2 , . . . ) , where ( a i 1 , a i 2 , . . . , a iA i ) name the formal arguments of f i .",
"Each predicted value v ij can be the result of a previous function invocation, a constant value, a value copied from the current utterance, or a proposed entity (2.1), as illustrated in the lower right corner of Figure 1. These different argument sources are described in 2.5.",
"Formally, the decoder defines a distribution of programs : p ( | H enc ) = P (cid:89) i =1 p ( i | f <i , H enc ) , (2) where P is the number of function invocations in the program, and f <i (cid:44) { f 1 , . . . , f i 1 } .",
"Additionally, we assume that argument values are conditionally independent given f i and f <i , resulting in: p ( i | f <i ) = p ( f i | f <i ) (cid:124) (cid:123)(cid:122) (cid:125) function scoring A i (cid:89) j =1 p ( v ij | f <i , f i ) (cid:124) (cid:123)(cid:122) (cid:125) argument value scoring , (3) where we have elided the conditioning on H enc .",
"Here, functions depend only on previous functions FUNCTIONEMBEDDER from: City NAME TYPE argumentembedding FUNCTIONSIGNATURE NAME TYPE TYPE ARGUMENT ARGUMENT Book[Flight](from: City, to: City): Booking[Flight] POOLING functionembedding ARGUMENTEMBEDDER Figure 5: Illustration of our function encoder (2.4), using a simplified example function signature.",
"(not their argument values or results) and argument values depend only on their calling function (not on one another or any of the previous argument values).",
"4 This is illustrated in Figure 4. In addition to providing an important inductive bias, these independence assumptions allow our inference procedure to efficiently score all possible function invocations at step i , given the ones at previous steps, at once (i.e., function and argument value assignments together), resulting in an efficient search algorithm (2.6).",
"Note that there is also a corresponding disadvantage (as in many machine translation models) that a meaningful phrase in the utterance could be independently selected for multiple arguments, or not selected at all, but we did not encounter this issue in our experiments; we rely on the model training to evade this problem through the dependence on H enc .",
"In Equation 3, the sequence of functions f 1 , f 2 , . . . in the current program is modeled by (cid:81) i p ( f i | f <i , H enc ) .",
"We use a standard autoregressive Transformer decoder that can also attend to the utterance encoding H enc (2.2), as done by Vaswani et al. (2017).",
"Our decoder generates sequences over the vocabulary of functions .",
"This means that each function f i needs an embedding f i (used as both an input to the decoder and an output), which we construct compositionally .",
"We assume that each unique function f has a type signature that specifies a name n , a list of type parameters { 1 , . . . , T } (to support poly-morphism), 5 a list of argument names and types (( a 1 , t 1 ) , . . . , ( a A , t A )) , and a result type r .",
"An 4 We also tried defining a jointly normalized distribution over entire function invocations (Appendix A), but found that it results in a higher training cost for no accuracy benefits.",
"example is shown in Figure 5. We encode the function and argument names using the utterance encoder of 2.2 and learn embeddings for the types, to obtain ( n , r ) , { 1 , . . . , T } , and { ( a 1 , t 1 ) , . . . , ( a A , t A ) } .",
"Then, we construct an embedding for each function as follows: a = Pool ( a 1 + t 1 , . . . , a A + t A ) , (4) f = n + Pool ( 1 , . . . , T ) + a + r , (5) where Pool is the max-pooling operation which is invariant to the arguments' order.",
"Our main motivation for this function embedding mechanism is the ability to take cues from the user utterance (e.g., due to a function being named similarly to a word appearing in the utterance).",
"If the functions and their arguments have names that are semantically similar to corresponding utterance parts, then this approach enables zero-shot generalization.",
"6 However, there is an additional potential benefit from parameter sharing due to the compositional structure of the embeddings (see e.g., Baroni, 2020).",
"This section describes the implementation of the argument predictor p ( v ij | f <i , f i ) .",
"There are four different kinds of sources that can be used to fill each available argument slot: references to previous function invocations, constants from a static vocabulary, copies that copy string values from the utterance, and entities that come from entity proposers (2.1).",
"Many sources might propose the same value, including multiple sources of the same kind.",
"For example, there may be multiple spans in the utterance that produce the same string value in a program, or an entity may be proposed that is also available as a constant.",
"To address this, we marginalize over the sources of each value: p ( v ij | f <i , f i )= (cid:88) s S ( v ij ) p ( v ij , s | f <i , f i ) , (6) where v ij represents a possible value for the argument named a ij , and s S ( v ij ) ranges over the possible sources for that value.",
"For example, given the utterance Change that one to 1:30pm and the value 1 , the set S ( 1 ) may contain entities that correspond to both one and 1 from the utterance.",
"6 The data may contain overloaded functions that have the same name but different type signatures (e.g., due to optional arguments).",
"The overloads are given distinct identifiers f , but they often share argument names, resulting in at least partially shared embeddings.",
"The argument scoring mechanism considers the last-layer decoder state h i dec that was used to predict f i via p ( f i | f <i ) exp( f (cid:62) i h i dec ) .",
"We specialize this decoder state to argument a ij as follows: h i,a ij dec (cid:44) h i dec (cid:12) tanh ( f i + a ij ) , (7) where (cid:12) represents elementwise multiplication, f i is the embedding of the current function f i , a ij is the encoding of argument a ij as defined in 2.4, and h dec is a projection of h dec to the necessary dimensionality.",
"Intuitively, tanh( f i + a ij ) acts as a gating function over the decoder state, deciding what is relevant when scoring values for argument a ij .",
"This argument-specific decoder state is then combined with a value embedding to produce a probability for each (sourced) value assignment: p ( v, s | f <i , f i ) exp (cid:110) v (cid:62) ( h i,a dec + w kind ( s ) a ) + b kind ( s ) a (cid:111) , (8) where a is the argument name a ij , kind( s ) { REFERENCE , CONSTANT , COPY , ENTITY } , v is the embedding of ( v, s ) which is described next, and w ka and b ka are model parameters that are specific to a and the kind of the source s .",
"References.",
"References are pointers to the return values of previous function invocations.",
"If the source s for the proposed value v is the result of the k th invocation (where k < i ), we take its embedding v to be a projection of h k dec that was used to predict that invocation's function and arguments.",
"Constants.",
"Constants are values that are always proposed, so the decoder always has the option of generating them.",
"If the source s for the proposed value v is a constant, we embed it by applying the utterance encoder on a string rendering of the value.",
"The set of constants is automatically extracted from the training data (see Appendix B).",
"Copies.",
"Copies are string values that correspond to substrings of the user utterance (e.g., person names).",
"String values can only enter the program through copying, as they are not in the set of constants (i.e., they cannot be hallucinated by the model; see Pasupat and Liang, 2015; Nie et al., 2019).",
"One might try to construct an approach based on a standard token-based copy mechanism (e.g., Gu et al., 2016).",
"However, this would allow copying non-contiguous spans and would also require marginalizing over identical tokens as opposed to spans, resulting in more ambiguity.",
"Instead, we propose a mechanism that enables the decoder to copy contiguous spans directly from the utterance.",
"Its goal is to produce a score for each of the U ( U + 1) / 2 possible utterance spans.",
"Navely, this would result in a computational cost that is quadratic in the utterance length U , and so we instead chose a simple scoring model that avoids it.",
"Similar to Stern et al. (2017) and Kurib-ayashi et al. (2019), we assume that the score for a span factorizes, and define the embedding of each span value as the concatenation of the contextual embeddings of the first and last tokens of the span, v = [ h k start utt ; h k end utt ] .",
"To compute the copy scores we also concatenate h i,a dec with itself in Equation 8.",
"Entities.",
"Entities are treated the same way as copies, except that instead of scoring all spans of the input, we only score spans proposed by the external entity proposers discussed in 2.1.",
"Specifically, the proposers provide the model with a list of candidate entities that are each described by an utterance span and an associated value.",
"The candidates are scored using an identical mechanism to the one used for scoring copies.",
"This means that, for example, the string sept could be linked to the value Month.September even though the string representations do not match perfectly.",
"Type Checking.",
"When scoring argument values for function f i , we know the argument types, as they are specified in the function's signature.",
"This enables us to use a type checking mechanism that allows the decoder to directly exclude values with mismatching types.",
"For references, the value types can be obtained by looking up the result types of the corresponding function signatures.",
"Additionally, the types are always pre-specified for constants and entities, and copies are only supported for a subset of types (e.g., String , PersonName ; see Appendix B).",
"The type checking mechanism sets p ( v ij | f <i , f i ) = 0 whenever v ij has a different type than the expected type for a ij .",
"Finally, because copies can correspond to multiple types, we also add a type matching term to the copy score.",
"This term is defined as the inner product of the argument type embedding and a (learnable) linear projection of h k start utt and h k end utt concatenated, where k start and k end denote the span start and end indices.",
"Similar to other sequence-to-sequence models, we employ beam search over the sequence of function invocations when decoding.",
"However, in contrast to other models, our assumptions (2.3) allow us to Dataset SMCALFLOWTREEDSTV 1.1 V 2.0 Best Reported Result 66.5 68.2 62.2 Our Model 73.8 75.3 72.8 Table 1: Test set exact match accuracy comparing our model to the best reported results for SMCALFLOW (Seq2Seq model from the public leaderboard; Semantic Machines et al., 2020) and TREEDST (TED-PP model; Cheng et al., 2020).",
"This computation is parallelizable and it also allows the decoder to avoid choosing a function if there are no high scoring assignments for its arguments (i.e., we are performing a kind of lookahead ).",
"This also means that the paths explored during the search are shorter for our model than for models where each step corresponds to a single decision, allowing for smaller beams and more efficient decoding.",
"We first report results on SMCALFLOW (Semantic Machines et al., 2020) and TREEDST (Cheng et al., 2020), two recently released large-scale conversational semantic parsing datasets.",
"Our model makes use of type information in the programs, so we manually constructed a set of type declarations for each dataset and then used a variant of the Hindley-Milner type inference algorithm (Damas and Milner, 1982) to annotate programs with types.",
"As mentioned in 2.1, we also transformed TREEDST to introduce meta-computation operators for references and revisions (more details can be found in Appendix C).",
"7 We also report results on nonconversational semantic parsing datasets in 3.2.",
"We use the same hyperparameters across all experiments (see Appendix E), and we use BERT-medium (Turc et al., 2019) to initialize our encoder.",
"Test set results for SMCALFLOW and TREEDST are shown in Table 1. Our model significantly outperforms the best published numbers in each case.",
"7 The transformed datasets are available at https: //github.com/microsoft/task_oriented_dialogue_as_dataflow_synthesis/tree/master/datasets .",
"In order to further understand the performance characteristics of our model and quantify the impact of each modeling contribution, we also compare to a variety of other models and ablated versions of our model.",
"We implemented the following baselines: Seq2Seq: The OpenNMT (Klein et al., 2017) implementation of a pointer-generator network (See et al., 2017) that predicts linearized plans represented as S-expressions and is able to copy tokens from the utterance while decoding.",
"This model is very similar to the model used by Semantic Machines et al. (2020) and represents the current state-of-the-art for SMCALFLOW .",
"8 Seq2Tree: The same as Seq2Seq, except that it generates invocations in a top-down, pre-order program traversal.",
"Each invocation is embedded as a unique item in the output vocabulary.",
"Note that SMCALFLOW contains re-entrant programs represented with LISP-style let bindings.",
"Both the Seq2Tree and Seq2Seq are unaware of the special meaning of let and predict calls to let as any other function, and references to bound 8 Semantic Machines et al. (2020) used linearized plans to represent the dialogue history, but our implementation uses previous user and agent utterances.",
"We found no difference in performance.",
"variables as any other literal.",
"Seq2Tree++: An enhanced version of the model by Krishnamurthy et al. (2017) that predicts typed programs in a top-down fashion.",
"Unlike Seq2Seq and Seq2Tree, this model can only produce well-formed and well-typed programs.",
"It also makes use of the same entity proposers (2.1) similar to our model, and it can atomically copy spans of up to 15 tokens by treating them as additional proposed entities.",
"Furthermore, it uses the linear history encoder that is described in the next paragraph.",
"Like our model, re-entrancies are represented as references to previous outputs in the predicted sequence.",
"We also implemented variants of Seq2Seq and Seq2Tree that use BERT-base 9 (Devlin et al., 2019) as the encoder.",
"Our results are shown in Table 2a.",
"Our model outperforms all baselines on both datasets, showing particularly large gains in the low data regime, even when using BERT.",
"Finally, we implemented the following ablations, with more details provided in Appendix G: Value Dependence: Introduces a unique function for each value in the training data (except for copies) and transforms the data so that values are always produced by calls to these functions, allowing the model to condition on them.",
"No Name Embedder: Embeds functions and constants atomically instead of using the approach of 2.4 and the utterance encoder.",
"No Types: Collapses all types to a single type, which effectively disables type checking (2.5).",
"No Span Copy: Breaks up span-level copies into token-level copies which are put together using a special concatenate function.",
"Note that our model is value-agnostic and so this ablated model cannot condition on previously copied tokens when copying a span token-by-token.",
"No Entity Proposers: Removes the entity proposers, meaning that previously entity-linked values have to be generated as constants.",
"No History: Sets H enc = H utt (2.2).",
"Previous Turn: Replaces the type-based history encoding with the previous turn user and system utterances or linearized system actions.",
"Linear Encoder: Replaces the history attention 9 We found that BERT-base worked best for these baselines, but was no better than the smaller BERT-medium when used with our model.",
"Also, unfortunately, incorporating BERT in Seq2Tree++ turned out to be challenging due to the way that model was originally implemented.",
"The results, shown in Table 2b, indicate that all of our features play a role in improving accuracy.",
"Perhaps most importantly though, the value dependence ablation shows that our function-based program representations are indeed important, and the previous turn ablation shows that our type-based program representations are also important.",
"Furthermore, the impact of both these modeling decisions grows larger in the low data regime, as does the impact of the span copy mechanism.",
"Our main focus is on conversational semantic parsing, but we also ran experiments on nonconversational semantic parsing benchmarks to show that our model is a strong parser irrespective of context.",
"Specifically, we manually annotated the JOBS , GEOQUERY , and ATIS datasets with typed declarations (Appendix C) and ran experiments comparing with multiple baseline and state-of-the-art methods.",
"The results, shown in Table 3, indicate that our model meets or exceeds state-of-the-art performance in each case.",
"Our approach builds on top of a significant amount of prior work in neural semantic parsing and also context-dependent semantic parsing.",
"was a brief period of interest in using unstructured sequence models for semantic parsing (e.g., Andreas",
"et al., 2013; Dong and Lapata, 2016), most research on semantic parsing has used treeor graph-shaped decoders that exploit program structure.",
"Most such approaches use this structure as a constraint while decoding, filling in function arguments one-at-a-time, in either a top-down fashion (e.g., Dong and Lapata, 2016; Krishnamurthy et al., 2017) or a bottom-up fashion (e.g., Misra and Artzi, 2016; Cheng et al., 2018).",
"Both directions can suffer from exposure bias and search errors during decoding: in top-down when there's no way to realize an argument of a given type in the current context, and in bottom-up when there are no functions in the programming language that combine the predicted arguments.",
"To this end, there has been some work on global search with guarantees for neural semantic parsers (e.g., Lee et al., 2016) but it is expensive and makes certain strong assumptions.",
"In contrast to this prior work, we use program structure not just as a decoder constraint but as a source of independence assumptions: the decoder explicitly decouples some decisions from others, resulting in good inductive biases and fast decoding algorithms.",
"Perhaps closest to our work is that of Dong and Lapata (2018), which is also about decoupling decisions, but uses a dataset-specific notion of an abstracted program sketch along with different independence assumptions, and underperforms our model in comparable settings (3.2).",
"Also close are the models of Cheng et al. (2020) and Zhang et al. (2019).",
"Our method differs in that our beam search uses larger steps that predict functions together with their arguments, rather than predicting the argument values serially in separate dependent steps.",
"Similar to Zhang et al. (2019), we use a target-side copy mechanism for generating references to function invocation results.",
"However, we extend this mechanism to also predict constants, copy spans from the user utterance, and link externally proposed entities.",
"While our span copy mechanism is novel, it is inspired by prior attempts to copy spans instead of tokens (e.g., Singh et al., 2020).",
"Finally, bottom-up models with similarities to ours include SMBOP (Rubin and Berant, 2020) and BUSTLE (Odena et al., 2020).",
"Context-Dependent Semantic Parsing.",
"Prior work on conversational semantic parsing mainly focuses on the decoder, with few efforts on incorporating the dialogue history information in the encoder.",
"Recent work on context-dependent semantic parsing (e.g., Suhr et al., 2018; Yu et al., 2019) conditions on explicit representations of user utterances and programs with a neural encoder.",
"While this results in highly expressive models, it also increases the risk of overfitting.",
"Contrary to this, Zettlemoyer and Collins (2009), Lee et al. (2014) and Semantic Machines et al. (2020) do not use context to resolve references at all.",
"They instead predict context-independent logical forms that are resolved in a separate step.",
"Our approach occupies a middle ground: when combined with local program representations, types, even without any value information, provide enough information to resolve context-dependent meanings that cannot be derived from isolated sentences.",
"The specific mechanism we use to do this infuses contextual type information into input sentence representations, in a manner reminiscent of attention flow models from the QA literature (e.g., Seo et al., 2016).",
"We showed that abstracting away values while encoding the dialogue history and decoding programs significantly improves conversational semantic parsing accuracy.",
"In summary, our goal in this work is to think about types in a new way.",
"Similar to previous neural and non-neural methods, types are an important source of constraints on the behavior of the decoder.",
"Here, for the first time, they are also the primary ingredient in the representation of both the parser actions and the dialogue history.",
"Our approach, which is based on type-centric encodings of dialogue states and function-centric encodings of programs (2), outperforms prior work by 7.3% and 10.6%, on SMCALFLOW and TREEDST, respectively (3), while also being more computationally efficient than competing methods.",
"Perhaps more importantly, it results in even more significant gains in the low-data regime.",
"This indicates that choosing our representations carefully and making appropriate independence assumptions can result in increased accuracy and computational efficiency.",
"We thank the anonymous reviewers for their helpful comments, Jason Eisner for his detailed feedback and suggestions on an early draft of the paper, Abulhair Saparov for helpful conversations and pointers about semantic parsing baselines and prior work, and Theo Lanman for his help in scaling up some of our experiments."
] | [
"abstain",
"abstain",
"objective",
"abstain",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"result",
"other",
"result",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"other",
"method",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"result",
"other",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"result",
"objective",
"abstain",
"abstain",
"result",
"abstain",
"result",
"other"
] |
[
"We use the multilingual OSCAR corpus, extracted from Common Crawl via language classification, filtering and cleaning, to train monolingual contextualized word embeddings (ELMo) for five mid-resource languages.",
"We then compare the performance of OSCAR-based and Wikipedia-based ELMo embeddings for these languages on the part-of-speech tagging and parsing tasks.",
"We show that, despite the noise in the Common-Crawl-based OSCAR data, embeddings trained on OSCAR perform much better than monolingual embeddings trained on Wikipedia.",
"They actually equal or improve the current state of the art in tagging and parsing for all five languages.",
"In particular, they also improve over multilingual Wikipedia-based contextual embeddings (multilingual BERT), which almost always constitutes the previous state of the art, thereby showing that the benefit of a larger, more diverse corpus surpasses the cross-lingual benefit of multilingual embedding architectures.",
"One of the key elements that has pushed the state of the art considerably in neural NLP in recent years has been the introduction and spread of transfer learning methods to the field.",
"These methods can normally be classified into two categories according to how they are used: feature-based methods, which involve pretraining real-valued vectors (embeddings) at the word, sentence, or paragraph level, and then using them in conjunction with a specific architecture for each individual downstream task.",
"Fine-tuning methods, which introduce a minimal number of task-specific parameters, instead copy the weights from a pre-trained network and then tune them to a particular downstream task.",
"Embeddings or language models can be divided into fixed, meaning that they generate a single representation for each word in the vocabulary; and contextualized, meaning that a representation is generated based on both the word and its surrounding context, so that a single word can have multiple representations, each one depending on how it is used.",
"In practice, most fixed embeddings are used as feature-based models.",
"The most notable examples are word2vec (Mikolov et al., 2013), GloVe (Pennington et al., 2014) and fastText (Mikolov et al., 2018).",
"All of them are extensively used in a variety of applications nowadays.",
"On the other hand, contextualized word representations and language models have been developed using both feature-based architectures, the most notable examples being ELMo and Flair (Peters et al., 2018; Akbik et al., 2018), and transformer based architectures, that are commonly used in a fine-tune setting, as is the case of GPT-1, GPT-2 (Radford et al., 2018, 2019), BERT and its derivatives (Devlin et al., 2018; Liu et al., 2019; Lan et al., 2019) and more recently T5 (Raffel et al., 2019).",
"All of them have repeatedly improved the state of the art in many downstream NLP tasks over the last year.",
"In general, the main advantage of using language models is that they are mostly built in an unsupervised manner and they can be trained with raw, unannotated plain text.",
"Their main drawback is that enormous quantities of data seem to be required to properly train them, especially in the case of contextualized models, for which larger corpora are thought to be needed to properly address polysemy and cover the wide range of uses that commonly exist within languages.",
"For gathering data in a wide range of languages, Wikipedia is a commonly used option.",
"It has been used to train fixed embeddings (Al-Rfou et al., 2013; Bojanowski et al., 2017) and more recently the multilingual BERT (Devlin et al., 2018), hereafter mBERT.",
"However, for some languages, Wikipedia might not be large enough to train good quality contextualized word embeddings.",
"Moreover, Wikipedia data all belong to the same specific genre and style.",
"To address this problem, one can resort to crawled text from the internet; the largest and most widespread dataset of crawled text being Common Crawl.",
"Such an approach generally solves the quantity and genre/style coverage problems but might introduce noise in the data, an issue which has earned the corpus some criticism, most notably by Trinh and Le (2018) and Radford et al. (2019).",
"Using Common Crawl also leads to data management challenges, as the corpus is distributed in the form of a large set of plain-text files, each containing a large quantity of unclassified multilingual documents from different websites.",
"In this paper we study the trade-off between quantity and quality of data for training contextualized representations.",
"To this end, we use the OSCAR corpus (Ortiz Suárez et al., 2019), a freely available multilingual dataset obtained by performing language classification, filtering and cleaning of the whole Common Crawl corpus.",
"OSCAR was created following the approach of Grave et al. (2018), but proposing a simple improvement on their filtering method.",
"We then train OSCAR-based and Wikipedia-based ELMo contextualized word embeddings (Peters et al., 2018) for 5 languages: Bulgarian, Catalan, Danish, Finnish and Indonesian.",
"We evaluate the models by attaching them to the UDPipe 2.0 architecture (Straka, 2018; Straka et al., 2019) for dependency parsing and part-of-speech (POS) tagging.",
"We show that the models using the OSCAR-based ELMo embeddings consistently outperform the Wikipedia-based ones, suggesting that big high-coverage noisy corpora might be better than small high-quality narrow-coverage corpora for training contextualized language representations.",
"We also establish a new state of the art for both POS tagging and dependency parsing in 6 different treebanks covering our 5 languages. (Footnote 1: https://commoncrawl.org. Footnote 2: https://oscar-corpus.com. Footnote 3: Snapshot from November 2018. Footnote 4: Both the Wikipedia- and the OSCAR-based embeddings for these 5 languages are available at https://oscar-corpus.com/#models.)",
"The structure of the paper is as follows.",
"In Section 2 we describe the recent related work.",
"In Section 3 we present, compare and analyze the corpora used to train our contextualized embeddings, and the treebanks used to train our POS tagging and parsing models.",
"In Section 4 we examine and describe in detail the model used for our contextualized word representations, as well as the parser and the tagger we chose to evaluate the impact of corpora in the embeddings' performance in downstream tasks.",
"Finally we provide an analysis of our results in Section 5 and in Section 6 we present our conclusions.",
"Since the introduction of word2vec (Mikolov et al., 2013), many attempts have been made to create multilingual language representations. For fixed word embeddings, the most remarkable works are those of Al-Rfou et al. (2013) and Bojanowski et al. (2017), who created word embeddings for a large number of languages using Wikipedia, and later Grave et al. (2018), who trained the fastText word embeddings for 157 languages using Common Crawl and who in fact showed that using crawled data significantly increases the performance of the embeddings, especially for mid- to low-resource languages.",
"Regarding contextualized models, the most notable non-English contribution has been that of the mBERT (Devlin et al., 2018), which is distributed as",
"(i) a single multilingual model for 100 different languages trained on Wikipedia data, and as",
"(ii) a single multilingual model for both Simpli-fied and Traditional Chinese.",
"Four monolingual fully trained ELMo models have been distributed for Japanese, Portuguese, German and Basque; 44 monolingual ELMo models were also released by the HIT-SCIR team (Che et al., 2018) during the CoNLL 2018 Shared Task (Zeman et al., 2018), but their training sets were capped at 20 million words.",
"A German BERT (Chan et al., 2019) as well as a French BERT model (called CamemBERT) (Martin et al., 2019) have also been released.",
"In general, no particular effort in creating a set of high-quality monolingual contextualized representations has been shown yet, or at least not on a scale that is comparable with what was done for fixed word embeddings. (Footnote 5: https://allennlp.org/elmo. Footnote 6: https://github.com/HIT-SCIR/ELMoForManyLangs.)",
"For dependency parsing and POS tagging, the most notable non-English specific contribution is that of the CoNLL 2018 Shared Task (Zeman et al., 2018), where the 1st place (LAS ranking) was awarded to the HIT-SCIR team (Che et al., 2018), who used Dozat and Manning (2017)'s Deep Biaffine parser and its extension described in (Dozat et al., 2017), coupled with deep contextualized ELMo embeddings (Peters et al., 2018) (capping the training set at 20 million words).",
"The 1st place in universal POS tagging was awarded to Smith et al. (2018), who used two separate instances of Bohnet et al. (2018)'s tagger.",
"More recent developments in POS tagging and parsing include those of Straka et al. (2019), which couples another CoNLL 2018 shared task participant, UDPipe 2.0 (Straka, 2018), with mBERT, greatly improving the scores of the original model, and UDify (Kondratyuk and Straka, 2019), which adds an extra attention layer on top of mBERT, plus a Deep Biaffine attention layer for dependency parsing and a Softmax layer for POS tagging.",
"UDify is actually trained by concatenating the training sets of 124 different UD treebanks, creating a single POS tagging and dependency parsing model that works across 75 different languages.",
"We train ELMo contextualized word embeddings for 5 languages: Bulgarian, Catalan, Danish, Finnish and Indonesian.",
"We train one set of embeddings using only Wikipedia data, and another set using only Common-Crawl-based OSCAR data.",
"We chose these languages primarily because they are morphologically and typologically different from one another, but also because all of the OSCAR datasets for these languages were of a sufficiently manageable size such that the ELMo pre-training was doable in less than one month.",
"Contrary to the HIT-SCIR team (Che et al., 2018), we do not impose any cap on the amount of data, and instead use the entirety of Wikipedia or OSCAR for each of our 5 chosen languages.",
"Wikipedia is the biggest online multilingual open encyclopedia, comprising more than 40 million articles in 301 different languages.",
"Because articles are curated by language and written in an open collaboration model, its text tends to be of very high quality in comparison to other free online resources.",
"Table 1: Size of Wikipedia corpora, measured in bytes, thousands of tokens, words and sentences. Bulgarian: 609M, 64,190 Ktokens, 54,748 Kwords, 3,685 Ksentences; Catalan: 1.1G, 211,627, 179,108, 8,293; Danish: 338M, 60,644, 52,538, 3,226; Finnish: 669M, 89,580, 76,035, 6,847; Indonesian: 488M, 80,809, 68,955, 4,298.",
"This is why Wikipedia has been extensively used in various NLP applications (Wu and Weld, 2010; Mihalcea, 2007; Al-Rfou et al., 2013; Bojanowski et al., 2017).",
"We downloaded the XML Wikipedia dumps (from April 4, 2019) and extracted the plaintext from them using the wikiextractor.py script from Giuseppe Attardi.",
"We present the number of words and tokens available for each of our 5 languages in Table 1.",
"We decided against deduplicating the Wikipedia data, as the corpora are already quite small.",
"We tokenize the 5 corpora using UDPipe (Straka and Straková, 2017).",
"Common Crawl is a non-profit organization that produces and maintains an open, freely available repository of crawled data from the web.",
"Common Crawl's complete archive consists of petabytes of monthly snapshots collected since 2011.",
"Common Crawl snapshots are not classified by language, and contain a certain level of noise (e.g. one-word sentences such as OK and Cancel are unsurprisingly very frequent).",
"This is what motivated the creation of the freely available multilingual OSCAR corpus (Ortiz Suárez et al., 2019), extracted from the November 2018 snapshot, which amounts to more than 20 terabytes of plain text.",
"In order to create OSCAR from this Common Crawl snapshot, Ortiz Suárez et al. (2019) reproduced the pipeline proposed by Grave et al. (2018) to process, filter and classify Common Crawl.",
"More precisely, language classification was performed using the fastText linear classifier (Joulin et al., 2016, 2017), which was trained by Grave et al. (2018) to recognize 176 languages and was shown to have an extremely good trade-off between accuracy and processing time.",
"The filtering step as performed by Grave et al. (2018) consisted in only keeping the lines exceeding 100 bytes in length.",
"However, considering that Common Crawl is a multilingual UTF-8-encoded corpus, this 100-byte threshold creates a huge disparity between ASCII- and non-ASCII-encoded languages.",
"The filtering step used to create OSCAR therefore consisted in only keeping the lines containing at least 100 UTF-8-encoded characters.",
"Finally, as in Grave et al. (2018), the OSCAR corpus is deduplicated, i.e. for each language, only one occurrence of a given line is included.",
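The character-based filtering and line-level deduplication described above can be sketched as follows; this is a minimal illustrative stand-in, not the actual OSCAR pipeline, which operates at Common Crawl scale.

```python
# Minimal sketch (assumed helper, not the OSCAR code): keep only lines with at
# least 100 UTF-8 characters, then keep a single occurrence of each line.
def filter_and_dedupe(lines, min_chars=100):
    seen = set()
    kept = []
    for line in lines:
        line = line.strip()
        # len() on a Python str counts Unicode characters, not bytes, so
        # non-ASCII scripts are not treated differently, unlike with a
        # 100-byte threshold.
        if len(line) < min_chars:
            continue
        if line in seen:  # deduplication: one occurrence per line
            continue
        seen.add(line)
        kept.append(line)
    return kept
```

For example, a line of 60 Cyrillic characters occupies about 120 bytes in UTF-8, so it passes Grave et al. (2018)'s 100-byte filter while a 60-character ASCII line does not; the character-based threshold treats both scripts alike.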
"As we did for Wikipedia, we tokenize OSCAR corpora for the 5 languages we chose for our study using UDPipe.",
"Table 2 provides quantitative information about the 5 resulting tokenized corpora.",
"We note that the original Common-Crawl-based corpus created by Grave et al. (2018) to train fastText is not freely available.",
"Since we ran the experiments described in this paper, a new pipeline for creating a Common-Crawl-based corpus, named CCNet (Wenzek et al., 2019), has been published; although it includes specialized filtering that might result in a cleaner corpus than OSCAR, the resulting CCNet corpus itself was not published.",
"Thus we chose to keep OSCAR as it remains the only very large scale, Common-Crawl-based corpus currently available and easily downloadable.",
"We wanted to address Trinh and Le (2018) and Radford et al. (2019)'s criticisms of Common Crawl, so we devised a simple method to measure how noisy the OSCAR corpora were for our 5 languages.",
"We randomly extract a number of lines from each corpus, such that the resulting random sample contains one million words.",
"We test whether the words are in the corresponding GNU Aspell dictionary (http://aspell.net/).",
"We repeat this task for each of the 5 languages, for both the OSCAR and the Wikipedia corpora, removing tokens that are capitalized or that contain fewer than 4 UTF-8-encoded characters, which avoids a bias against Wikipedia, a resource that traditionally contains a large quantity of proper nouns and acronyms.",
"We compile in Table 3 the number of out-of-vocabulary tokens for each corpus.",
"Table 3: Number of out-of-vocabulary words in random samples of 1M words (Wikipedia vs OSCAR). Bulgarian: 60,879 vs 66,558; Catalan: 34,919 vs 79,678; Danish: 134,677 vs 123,299; Finnish: 266,450 vs 267,525; Indonesian: 116,714 vs 124,607.",
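The sampling and dictionary lookup described above can be sketched as follows. This is an illustrative stand-in: `dictionary` is any set of known words standing in for the GNU Aspell dictionaries, and `count_oov` is a hypothetical helper, not the script actually used.

```python
import random

def count_oov(corpus_words, dictionary, sample_size=1_000_000, seed=0):
    # Mimic the paper's filtering: drop capitalized tokens and tokens shorter
    # than 4 characters, reducing the bias from proper nouns and acronyms.
    candidates = [w for w in corpus_words
                  if w and not w[0].isupper() and len(w) >= 4]
    rng = random.Random(seed)
    sample = rng.sample(candidates, min(sample_size, len(candidates)))
    # Count sampled words missing from the dictionary.
    return sum(1 for w in sample if w not in dictionary)
```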
"As expected, this simple metric shows that in general the OSCAR samples contain more out-of-vocabulary words than the Wikipedia ones.",
"However, the difference in magnitude between the two is strikingly lower than one would have expected in view of the criticisms by Trinh and Le (2018) and Radford et al. (2019), thereby validating the usability of Common Crawl data when it is properly filtered, as was achieved by the OSCAR creators.",
"We even observe that, for Danish, the number of out-of-vocabulary words in OSCAR is lower than that in Wikipedia.",
"The main goal of this paper is to show the impact of training data on contextualized word representations when applied in particular downstream tasks.",
"To this end, we train different versions of the Embeddings from Language Models (ELMo) (Peters et al., 2018) for both the Wikipedia and OSCAR corpora, for each of our selected 5 languages.",
"We save the models' weights at different numbers of epochs for each language, in order to test how corpus size affects the embeddings and to see whether and when overfitting happens when training ELMo on smaller corpora.",
"We take each of the trained ELMo models and use them in conjunction with the UDPipe 2.0 (Straka, 2018; Straka et al., 2019) architecture for dependency parsing and POS-tagging to test our models.",
"We train UDPipe 2.0 using gold tokenization and segmentation for each of our ELMo models; the only thing that changes from training to training is the ELMo model, as hyperparameters always remain at the default values (except for the number of training tokens) (Peters et al., 2018).",
"More precisely, ELMo uses a bidirectional language model, which combines a forward and a backward LSTM-based language model.",
"ELMo also computes a context-independent token representation via a CNN over characters.",
"We train ELMo models for Bulgarian, Catalan, Danish, Finnish and Indonesian using the OSCAR corpora on the one hand and the Wikipedia corpora on the other.",
"We train each model for 10 epochs, as was done for the original English ELMo (Peters et al., 2018).",
"We save checkpoints at the 1st, 3rd and 5th epochs in order to investigate some concerns about possible overfitting for smaller corpora (Wikipedia in this case) raised by the original ELMo authors.",
"4.2 UDPipe 2.0. For our POS tagging and dependency parsing evaluation, we use UDPipe 2.0, which has a freely available and ready-to-use implementation.",
"This architecture was submitted as a participant to the 2018 CoNLL Shared Task (Zeman et al., 2018), obtaining the 3rd place in LAS ranking.",
"UDPipe 2.0 is a multi-task model that predicts POS tags, lemmas and dependency trees jointly.",
"Pre-trained word embeddings: In the original implementation, the Wikipedia version of fastText embeddings (Bojanowski et al., 2017) is used; we replace them with the newer Common-Crawl-based fastText embeddings trained by Grave et al. (2018).",
"Trained word embeddings: Randomly initialized word representations that are trained with the rest of the network.",
"Character-level word embeddings: Computed using bi-directional GRUs of dimension 256.",
"They represent every UTF-8-encoded character with two 256-dimensional vectors, one for the forward and one for the backward layer.",
"These two vector representations are concatenated and trained along with the whole network.",
"After the CoNLL 2018 Shared Task, the UDPipe 2.0 authors added the option to concatenate contextualized representations to the embedding section of the network (Straka et al., 2019); we use this new implementation and we concatenate our pretrained deep contextualized ELMo embeddings to the three embeddings mentioned above. (Footnote 12: https://github.com/allenai/bilm-tf/issues/135. Footnote 13: https://github.com/CoNLL-UD-2018/UDPipe-Future.)",
"Table 4: Size of treebanks, measured in thousands of tokens and sentences. Bulgarian-BTB: 156 Ktokens, 11 Ksentences; Catalan-AnCora: 530, 17; Danish-DDT: 100, 6; Finnish-FTB: 159, 19; Finnish-TDT: 202, 15; Indonesian-GSD: 121, 6.",
"Once the embedding step is completed, the concatenation of all vector representations for a word is fed to two shared bidirectional LSTM (Hochreiter and Schmidhuber, 1997) layers.",
"The output of these two BiLSTMs is then fed to two separate specific LSTMs: the tagger- and lemmatizer-specific bidirectional LSTMs, with Softmax classifiers on top, which process their output and generate UPOS, XPOS, UFeats and Lemmas.",
"The lemma classifier also takes the character-level word embeddings as input.",
"The parser-specific bidirectional LSTM layer, whose output is then passed to a bi-affine attention layer (Dozat and Manning, 2017) producing labeled dependency trees.",
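At the shape level, the embedding section described above simply concatenates the per-word vectors before the shared BiLSTM layers. The sketch below is illustrative: only the two 256-dimensional character-level GRU states come from the text; the other dimensions are assumptions (1024 is the usual ELMo output size, and the fastText and trained-embedding sizes are placeholders).

```python
# Illustrative shape-level sketch of the embedding concatenation. Dimensions
# are assumptions except the 256-per-direction character GRU states.
WORD_DIM = 300      # pretrained fastText vector (placeholder size)
TRAINED_DIM = 512   # randomly initialized trained embedding (placeholder size)
CHAR_DIM = 2 * 256  # forward + backward character-level GRU states
ELMO_DIM = 1024     # ELMo contextual vector (usual ELMo output size)

def embed_word(word_vec, trained_vec, char_fwd, char_bwd, elmo_vec):
    """Concatenate all per-word representations into the single vector that
    is fed to the two shared bidirectional LSTM layers."""
    assert len(char_fwd) == len(char_bwd) == 256
    return word_vec + trained_vec + char_fwd + char_bwd + elmo_vec
```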
"To train the selected parser and tagger (cf. Section 4.2) and evaluate the pre-trained language models in our 5 languages, we run our experiments using the Universal Dependencies (UD) paradigm and its corresponding UD POS tag set (Petrov et al., 2012).",
"We use all the treebanks available for our five languages in the UD treebank collection version 2.2 (Nivre et al., 2018), which was used for the CoNLL 2018 shared task; we thus perform our evaluation tasks on 6 different treebanks (see Table 4 for treebank size information).",
"Bulgarian-BTB: Created at the Institute of Information and Communication Technologies, Bulgarian Academy of Sciences, it consists of legal documents, news articles and fiction pieces. (Footnote 14: https://universaldependencies.org.)",
"Catalan-AnCora: Built on top of the Spanish-Catalan AnCora corpus (Taulé et al., 2008), it contains mainly news articles.",
"Danish-DDT: Converted from the Danish Dependency Treebank (Buch-Kromann, 2003).",
"It includes news articles, fiction and non-fiction texts, and oral transcriptions.",
"Finnish-FTB: Consists of manually annotated grammatical examples from VISK (The Web Version of the Large Grammar of Finnish).",
"Finnish-TDT: Based on the Turku Dependency Treebank (TDT).",
"It contains texts from Wikipedia, Wikinews, news articles, blog entries, magazine articles, grammar examples, Europarl speeches, legal texts and fiction.",
"Indonesian-GSD: Includes mainly blog entries and news articles.",
"We use UDPipe 2.0 without contextualized embeddings as our baseline for POS tagging and dependency parsing.",
"However, we did not train the model without contextualized word embeddings ourselves.",
"We instead take the scores as they are reported by Kondratyuk and Straka (2019).",
"We also compare our UDPipe 2.0 + ELMo models against the state-of-the-art results (assuming gold tokenization) for these languages, which are either UDify (Kondratyuk and Straka, 2019) or UDPipe 2.0 + mBERT (Straka et al., 2019).",
"Results for UPOS, UAS and LAS are shown in Table 5.",
"We obtain the state of the art for the three metrics in each of the languages with the UDPipe 2.0 + ELMo OSCAR models.",
"We also see that in every single case the UDPipe 2.0 + ELMo OSCAR result surpasses the UDPipe 2.0 + ELMo Wikipedia one, suggesting that the size of the pre-training data plays an important role in downstream task results.",
"This also supports our hypothesis that the OSCAR corpora, being multi-domain, exhibit better coverage of the different styles, genres and uses present in at least these 5 languages.",
"We see no sign of overfitting, as the UDPipe 2.0 + ELMo Wikipedia results considerably improve over the UDPipe 2.0 baseline.",
"This is the case for all of our ELMo Wikipedia models as we never see any evidence of a negative impact when we add them to the baseline model.",
"In fact, the results of UDPipe 2.0 + ELMo Wikipedia are better than the previous state-of-the-art results in all metrics for Finnish-FTB and in UPOS for Finnish-TDT.",
"The results for Finnish are actually quite interesting, as mBERT was pre-trained on Wikipedia and here we see that the multilingual setting in which UDify was fine-tuned exhibits sub-baseline results for all metrics, and that the UDPipe + mBERT scores are often lower than those of our UDPipe 2.0 + ELMo Wikipedia .",
"Table 6: UPOS, UAS and LAS scores for the UDPipe 2.0 baseline reported by Kondratyuk and Straka (2019), plus the scores for checkpoints at 1, 3, 5 and 10 epochs for all the ELMo OSCAR and ELMo Wikipedia models. Bulgarian-BTB: UDPipe 2.0 98.98/93.38/90.35; +ELMo Wikipedia(1) 98.81/93.60/90.21; +ELMo Wikipedia(3) 99.01/94.32/91.36; +ELMo Wikipedia(5) 99.03/94.32/91.38; +ELMo Wikipedia(10) 99.17/94.93/92.05; +ELMo OSCAR(1) 99.28/95.45/92.98; +ELMo OSCAR(3) 99.34/95.58/93.12; +ELMo OSCAR(5) 99.34/95.63/93.25; +ELMo OSCAR(10) 99.40/96.01/93.56. Catalan-AnCora: UDPipe 2.0 98.88/93.22/91.06; +ELMo Wikipedia(1) 98.93/93.24/91.21; +ELMo Wikipedia(3) 99.02/93.75/91.93; +ELMo Wikipedia(5) 99.04/93.86/92.05; +ELMo Wikipedia(10) 99.05/93.99/92.24; +ELMo OSCAR(1) 99.07/93.92/92.29; +ELMo OSCAR(3) 99.10/94.29/92.69; +ELMo OSCAR(5) 99.07/94.38/92.75; +ELMo OSCAR(10) 99.06/94.49/92.88. Danish-DDT: UDPipe 2.0 97.78/86.88/84.31; +ELMo Wikipedia(1) 97.47/86.98/84.15; +ELMo Wikipedia(3) 98.03/88.16/85.81; +ELMo Wikipedia(5) 98.15/88.24/85.96; +ELMo Wikipedia(10) 98.45/89.05/86.92; +ELMo OSCAR(1) 98.50/89.47/87.43; +ELMo OSCAR(3) 98.59/89.68/87.77; +ELMo OSCAR(5) 98.59/89.46/87.64; +ELMo OSCAR(10) 98.62/89.84/87.95. Finnish-FTB: UDPipe 2.0 96.65/90.68/87.89; +ELMo Wikipedia(1) 95.86/89.63/86.39; +ELMo Wikipedia(3) 96.76/91.02/88.27; +ELMo Wikipedia(5) 96.97/91.66/89.04; +ELMo Wikipedia(10) 97.27/92.05/89.62; +ELMo OSCAR(1) 97.91/93.41/91.43; +ELMo OSCAR(3) 98.00/93.99/91.98; +ELMo OSCAR(5) 98.15/93.98/92.24; +ELMo OSCAR(10) 98.13/93.81/92.02. Finnish-TDT: UDPipe 2.0 97.45/89.88/87.46; +ELMo Wikipedia(1) 96.73/89.11/86.33; +ELMo Wikipedia(3) 97.55/90.84/88.50; +ELMo Wikipedia(5) 97.55/91.11/88.88; +ELMo Wikipedia(10) 97.65/91.60/89.34; +ELMo OSCAR(1) 98.27/93.03/91.29; +ELMo OSCAR(3) 98.38/93.60/91.83; +ELMo OSCAR(5) 98.39/93.57/91.80; +ELMo OSCAR(10) 98.36/93.54/91.77. Indonesian-GSD: UDPipe 2.0 93.69/85.31/78.99; +ELMo Wikipedia(1) 93.70/85.81/79.46; +ELMo Wikipedia(3) 93.90/86.04/79.72; +ELMo Wikipedia(5) 94.04/85.93/79.97; +ELMo Wikipedia(10) 93.94/86.16/80.10; +ELMo OSCAR(1) 93.95/86.25/80.23; +ELMo OSCAR(3) 94.00/86.21/80.14; +ELMo OSCAR(5) 94.23/86.37/80.40; +ELMo OSCAR(10) 94.12/86.49/80.59.",
"This actually suggests that even though the multilingual approach of mBERT (in pre-training) or UDify (in pre-training and fine-tuning) leads to better performance for high-resource languages or languages",
"that are closely related to high-resource languages, it might also significantly degrade the representations for more isolated or even simply more morphologically rich languages like Finnish.",
"In contrast, our monolingual approach with UDPipe 2.0 + ELMo OSCAR improves the previous SOTA considerably, by more than 2 points for some metrics.",
"Note however that Indonesian, which might also be seen as a relatively isolated language, does not behave in the same way as Finnish.",
"An important topic we wanted to address with our experiments was that of overfitting and the number of epochs one should train the contextualized embeddings for.",
"The ELMo authors have expressed that increasing the number of training epochs is generally better, as they argue that training the ELMo model for longer reduces held-out perplexity and further improves downstream task performance.",
"This is why we intentionally fully pre-trained the ELMo Wikipedia models to the 10 epochs of the original ELMo paper, as its authors also expressed concern over the possibility of overfitting for smaller corpora.",
"We thus save checkpoints for each of our ELMo models at the 1, 3, 5 and 10 epoch marks, so that we can properly probe for overfitting.",
"The scores of all checkpoints are reported in Table 6.",
"Here again, we do not train the UDPipe 2.0 baselines without embeddings; we just report the scores published in Kondratyuk and Straka (2019).",
"The first striking finding is that even though all our Wikipedia data sets are smaller than 1GB in size (except for Catalan), none of the ELMo Wikipedia models show any sign of overfitting, as the results continue to improve for all metrics the more we train the ELMo models, with the best results consistently being those of the fully trained 10 epoch ELMos.",
"For all of our Wikipedia models except those of Catalan and Indonesian, we see sub-baseline results at 1 epoch; training the model for longer is better, even if the corpora are small in size.",
"The ELMo OSCAR models exhibit exactly the same behavior as the ELMo Wikipedia models, with scores that continue to improve the longer they are pre-trained, except in the case of Finnish.",
"Here we actually see an unexpected behavior, where model performance plateaus around the 3rd to 5th epoch.",
"This is surprising because the Finnish OSCAR corpus is more than 20 times bigger than our smallest Wikipedia corpus, the Danish Wikipedia, that did not exhibit this behavior.",
"As previously mentioned, Finnish is morphologically richer than the other languages for which we trained ELMo; we hypothesize that the representation space given by the ELMo embeddings might not be big enough to extract more features from the Finnish OSCAR corpus beyond the 5th epoch mark. However, testing this would require training a larger language model such as BERT, which is sadly beyond our computing infrastructure limits (cf. Subsection 5.3).",
"However we do note that pre-training our current language model architectures in a morphologically rich language like Finnish might actually better expose the limits of our existing approaches to language modeling.",
"One last thing that is important to note with respect to the number of training epochs is that even though we fully pre-trained our ELMo Wikipedia and ELMo OSCAR models to the recommended 10 epoch mark, and then compared them against one another, the number of training steps between the two pre-trained models differs drastically due to the big difference in corpus size (for Indonesian, for instance, 10 epochs correspond to 78K steps for ELMo Wikipedia and to 2.6M steps for OSCAR; the complete picture is provided in the Appendix, in Table 8).",
"In fact, we can see in Table 6 that all the UDPipe 2.0 + ELMo OSCAR(1) perform better than the UDPipe 2.0 + ELMo Wikipedia(1) models across all metrics.",
"Thus we believe that talking in terms of training steps as opposed to training epochs might be a more transparent way of comparing two pretrained models.",
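The epoch-versus-step distinction above can be made concrete with a small sketch; `tokens_per_step` here is a hypothetical batch size in tokens, not a value from the paper.

```python
# Illustrative sketch: at a fixed batch size, the number of optimizer steps per
# epoch grows linearly with corpus size, so two models trained for the same
# number of epochs on corpora of very different sizes receive very different
# numbers of parameter updates.
def training_steps(corpus_tokens, epochs, tokens_per_step):
    return epochs * corpus_tokens // tokens_per_step
```

For Indonesian, the quoted 2.6M OSCAR steps versus 78K Wikipedia steps over the same 10 epochs is roughly a 33-fold difference, mirroring the ratio of corpus sizes.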
"Considering the discussion above, we believe an interesting follow-up to our experiments would be training the ELMo models for more of the languages included in the OSCAR corpus.",
"However training ELMo is computationally costly, and one way to estimate this cost, as pointed out by Strubell et al. (2019), is by using the training times of each model to compute both power consumption and CO 2 emissions.",
"In our set-up we used two different machines, each one having 4 NVIDIA GeForce GTX 1080 Ti graphic cards and 128GB of RAM, the difference between the machines being that one uses a single Intel Xeon Gold 5118 processor, while the other uses two Intel Xeon E5-2630 v4 processors.",
"Table 7: Average power draw (Watts), training times (in both hours and days), mean power consumption (KWh) and CO2 emissions (kg) for each ELMo model trained. OSCAR-based ELMos: Bulgarian 1183 W, 515.00 h (21.45 d), 962.61 KWh, 49.09 kg; Catalan 1118 W, 199.98 h (8.33 d), 353.25 KWh, 18.02 kg; Danish 1183 W, 200.89 h (8.58 d), 375.49 KWh, 19.15 kg; Finnish 1118 W, 591.25 h (24.63 d), 1044.40 KWh, 53.26 kg; Indonesian 1183 W, 694.26 h (28.93 d), 1297.67 KWh, 66.18 kg. Wikipedia-based ELMos: Bulgarian 1118 W, 15.45 h (0.64 d), 27.29 KWh, 1.39 kg; Catalan 1118 W, 51.08 h (2.13 d), 90.22 KWh, 4.60 kg; Danish 1118 W, 14.56 h (0.61 d), 25.72 KWh, 1.31 kg; Finnish 1118 W, 21.79 h (0.91 d), 38.49 KWh, 1.96 kg; Indonesian 1118 W, 20.28 h (0.84 d), 35.82 KWh, 1.82 kg. Total emissions: 216.78 kg.",
"One GeForce GTX 1080 Ti card is rated at around 250 W, the Xeon Gold 5118 processor is rated at 105 W, while one Xeon E5-2630 v4 is rated at 85 W. For the DRAM, we can use the work of Desrochers et al. (2016) to estimate the total power draw of 128GB of RAM at around 13 W.",
"Having this information, we can now use the formula proposed by Strubell et al. (2019) in order to compute the total power required to train one ELMo model: p_t = 1.58 t (c p_c + p_r + g p_g) / 1000, where t is the total training time in hours.",
"Where c and g are the number of CPUs and GPUs respectively, p c is the average power draw (in Watts) from all CPU sockets, p r the average power draw from all DRAM sockets, and p g the average power draw of a single GPU.",
"We estimate the total power consumption by adding GPU, CPU and DRAM consumptions, and then multiplying by the Power Usage Effectiveness (PUE), which accounts for the additional energy required to support the compute infrastructure.",
"We use a PUE coefficient of 1.58, the 2018 global average for data centers (Strubell et al., 2019).",
"In table 7 we report the training times in both hours and days, as well as the total power draw (in Watts) of the system used to train each individual ELMo model.",
"We use this in-17 https://www.geforce.com/hardware/ desktop-gpus/geforce-gtx-1080-ti/specifications 18 https://ark.intel.com/content/www/ us/en/ark/products/120473/intel-xeon-gold-5118-processor-16-5m-cache-2-30-ghz.html 19 https://ark.intel.com/content/www/ us/en/ark/products/92981/intel-xeon-processor-e5-2630-v4-25m-cache-2-20-ghz.html formation to compute the total power consumption of each ELMo, also reported in table",
"7. We can further estimate the CO 2 emissions in kilograms of each single model by multiplying the total power consumption by the average CO 2 emissions per kWh in France (where the models were trained).",
"According to the RTE (Rseau de transport d'lectricit / Electricity Transmission Network) the average emission per kWh were around 51g/kWh in November 2019, 20 when the models were trained.",
"Thus the total CO 2 emissions in kg for one single model can be computed as: CO 2 e = 0 .",
"All emissions for the ELMo models are also reported in table",
"7. We do not report the power consumption or the carbon footprint of training the UDPipe 2.0 architecture, as each model took less than 4 hours to train on a machine using a single NVIDIA Tesla V100 card.",
"Also, this machine was shared during training time, so it would be extremely difficult to accurately estimate the power consumption of these models.",
"Even though it would have been interesting to replicate all our experiments and computational cost estimations with state-of-the-art fine-tuning models such as BERT, XLNet, RoBERTa or AL-BERT, we recall that these transformer-based architectures are extremely costly to train, as noted by the BERT authors on the official BERT GitHub repository, 21 and are currently beyond the scope of our computational infrastructure.",
"However we believe that ELMo contextualized word embeddings remain a useful model that still provide an extremely good trade-off between performance to training cost, even setting new state-of-the-art scores in parsing and POS tagging for our five chosen languages, performing even better than the multilingual mBERT model.",
"In this paper, we have explored the use of the Common-Crawl-based OSCAR corpora to train ELMo contextualized embeddings for five typologically diverse mid-resource languages.",
"We have compared them with Wikipedia-based ELMo embeddings on two classical NLP tasks, POS tagging 20 https://www.rte-france.com/fr/ eco2mix/eco2mix-co2 21 https://github.com/google-research/ bert and parsing, using state-of-the-art neural architectures.",
"Our goal was to explore whether the noisiness level of Common Crawl data, often invoked to criticize the use of such data, could be compensated by its larger size; for some languages, the OSCAR corpus is several orders of magnitude larger than the corresponding Wikipedia.",
"Firstly, we found that when properly filtered, Common Crawl data is not massively noisier than Wikipedia.",
"Secondly, we show that embeddings trained using OSCAR data consistently outperform Wikipedia-based embeddings, to the extent that they allow us to improve the state of the art in POS tagging and dependency parsing for all the 6 chosen treebanks.",
"Thirdly, we observe that more training epochs generally results in better embeddings even when the training data is relatively small, as is the case for Wikipedia.",
"Our experiments show that Common-Crawl-based data such as the OSCAR corpus can be used to train high-quality contextualized embeddings, even for languages for which more standard textual resources lack volume or genre variety.",
"This could result in better performances in a number of NLP tasks for many non highly resourced languages.",
"We want to thank Ganesh Jawahar for his insightful comments and suggestions during the early stages of this project.",
"This work was partly funded by the French national ANR grant BASNUM (ANR-18-CE38-0003), as well as by the last au-thor's chair in the PRAIRIE institute, 22 funded by the French national ANR as part of the Investisse-ments d'avenir programme under the reference ANR-19-P3IA-0001.",
"The authors are grateful to Inria Sophia Antipolis Mditerrane Nef 23 computation cluster for providing resources and support."
] | [
"method",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"abstain",
"method",
"method",
"result",
"other",
"abstain",
"abstain",
"method",
"method",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"result",
"abstain",
"result",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"other",
"abstain",
"other",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"other",
"objective",
"result",
"result",
"result",
"result",
"abstain",
"other",
"other",
"other"
] |
[
"We propose a novel method to sparsify attention in the Transformer model by learning to select the most-informative token representations during the training process, thus focusing on the task-specific parts of an input.",
"A reduction of quadratic time and memory complexity to sublinear was achieved due to a robust trainable topk operator.",
"Our experiments on a challenging long document summarization task show that even our simple baseline performs comparably to the current SOTA, and with trainable pooling we can retain its top quality, while being 1 .",
"8 faster during training, 4 .",
"5 faster during inference and up to 13 more computationally efficient in the decoder.",
"1 1 Introduction The introduction of Transformer architecture led to an immense improvement in the performance of Natural Language Processing systems (Vaswani et al., 2017; Radford et al., 2018; Devlin et al., 2019).",
"Nevertheless, the underlying attention mechanism is marked by the original sin of quadratic memory complexity w.r.t. the input sequence length.",
"It results from the attention matrix reflecting inter-connections between every two representations in the input sequence.",
"Previous approaches either reduce the full connectivity of its elements to its non-empty subset or approximate the self-attention matrix (Dai et al., 2019; Beltagy et al., 2020; Kitaev et al., 2020; Tay et al., 2020; Zaheer et al., 2020a; Wang et al., 2020; Shen et al., 2021; Choromanski et al., 2021; Roy et al., 2021).",
"In particular, in these models, each word at every layer attends to at least one other word.",
"In contrast, we disregard attention for a given representation completely in the case of non-informative ones (Figure 1 and 2).",
"In particular, we optimize the attention complexity by learning to select encoded representations for the given task and promoting only the chosen ones to the next layer of the model.",
"This mechanism will be referred to as representation pooling .",
"Consequently, a significantly 1 Code publicly available at https://github.com/ applicaai/pyramidions along with trained models.",
"lower memory consumption and an improved processing time are achieved.",
"As the selection operation has to be trainable, we provide a suitable high-performance continuous relaxation of topk , robust for every k value and input sequence length.",
"We demonstrate this idea's applicability by performing on par to state-of-the-art on the challenging problem of long document summarization.",
"Simultaneously, the proposed end-to-end model is a significant theoretical improvement over the previous systems, which are based on independently trained extractive and abstractive models.",
"Contribution.",
"The specific contributions of this paper are the following: (1) We propose a method to sparsify Transformer architecture in a novel, previously unrecognized way, achieving sublinear time and memory complexity.",
"Our model learns to select the subset of best representations depending on the advantage they give on a downstream task.",
"(2) Additionally, we demonstrate an improvement of the decoder's cross-attention complexity.",
"It is beneficial for both train/inference time and memory consumption.",
"(3) We demonstrate an elegant way to train extractive-abstractive models in an end-to-end manner with only a cross-entropy loss function.",
"(4) We present a Successive Halving Topk operator that outperforms previous approaches in terms of approximation quality and speed.",
"We provide a detailed analysis of its differential properties and prove that it is trainable in an end-to-end manner, making it applicable within our neural networks.",
"(5) We achieve state-of-the-art performance level in long document's summarization and show that previous models can be outperformed by a straightforward baseline.",
"Word-vector elimination.",
"It has been previously shown that the progressive elimination of word vectors occurring layer after layer can improve inference time of transformer-based language models used in a text classification scenario (Goyal et al., 2020).",
"We extend this notion to tasks demanding text generation in a way that, contrary to previous work, is trainable and optimized concerning a downstream task.",
"A similar approach has been taken in the Funnel Transformer proposed concurrently to our work (Dai et al., 2020).",
"We directly compare to both methods' adaptations (see Section 5), and consider our work to surpass it in two aspects: 1) results were improved due to a better pooling mechanism than mean/max; 2) training was accelerated, which we attribute to the significant reduction of the decoder's complexity.",
"Sparse attention.",
"Several authors proposed to limit attention connectivity, e.g., by dividing input into smaller 'blocks' (Child et al., 2019; Beltagy et al., 2020; Rae and Razavi, 2020).",
"Blockwise attention is an optional element of our architectures, used in addition to trainable pooling.",
"Summarization.",
"In terms of the type of summarization task we target, our representation pooling mechanism can be considered an end-to-end extractive-abstractive model.",
"This is a conceptual breakthrough compared to recently proposed two-stage hybrids that extract and paraphrase in two independent steps, using separately trained modules (Pilault et al., 2020; Hsu et al., 2018; Gehrmann et al., 2018; Chen and Bansal, 2018).",
"It is suspected that when humans engage in information search, they use various cognitive processes depending on the relevance level of constituent text fragments (Gwizdka et al., 2017).",
"The method we propose is inspired by this search for relevant fragments, which is an important aspect of human cognition when engaged in reading to do actions (Mosenthal, 1996; Mosenthal and Kirsch, 1992).",
"We intend to mimic relevance judgments and hypothesize that it is possible to answer problems involving natural language with only selected passages of the input text.",
"These passages may be of substantially shorter length than the original text.",
"One may compare this to a person reading the paper and highlighting in such a way that it is possible to provide a summary using only the highlighted parts.",
"The end-to-end mechanism we introduce performs such highlighting by scoring the representations and passes only the selected ones to the next layer of the neural network (Figure 3).",
"The role of the selection is to reduce data resolution in a roughly similar way to how pooling works in CNNs, where the feature map is downsampled and only the most informative activations are retained.",
"When pooling in a trainable manner at the bottleneck of the encoder-decoder, it impacts the encoding process because the additional, orthogonal, informational bottleneck forces the model to compress more context into one representation vector of constant-length, leveraging the already provided capacity.",
"Let n denote the number of input tokens that are projected onto d dimensions, resulting in a matrix of embedding representations E R n d .",
"We want to assign scores v i to embedding vectors E i , in such a way that v i measures the usefulness of E i for further layers and the training objective.",
"Typically, this can be achieved by defining a scoring function S : R d R (which we allow to depend on additional parameters, thus making it trainable) that assigns a usefulness score to every embedding vector, and putting v i = S ( E i ) .",
"(1) Next, we use our soft topk operator : R n d R n R k d to reduce the number of embeddings from 8617 encoding Tok 1 Tok N Tok N+1 Tok M Tok M+1 Tok L ... ... ... ... ... ...",
"n to k , based on their usefulness scores.",
"The k vectors produced by form the input for the next network layer.",
"The path of residual connections starts on a reduced number of tokens.",
"Flavors.",
"We consider two architectures in this work: with single or multiple pooling layers (Figure 1).",
"Specifically, the latter is a generalization of the former to any given number of pooling layers.",
"We use the term Transpooler when a single pooling layer is placed after the encoder.",
"This setup directly limits the amount of information passed to the decoder through the network's bottleneck.",
"However, pooling can be applied between any subsequent layers, such that multiple operations of this type will be used in the network and gradually introduce the bottleneck along the encoding process.",
"As a result, the same model bottleneck size can be achieved as when using Transpooler.",
"Moreover, the decision to pool earlier has the advantage of attaining more substantial memory complexity reduction.",
"This model will be referred to as the Pyramidion.",
"Blockwise attention.",
"When propagating through layers, we use blockwise attention and split input into nonoverlapping chunks in such a way that the full quadratic attention is computed for each chunk.",
"The score is then determined for each representation vector, and after selecting with the topk operator, chosen representations are passed to the next layer.",
"We assure our topk operator selects representations without permuting their order, keeping them in line with their original position.",
"Scoring functions.",
"Multiple scoring methods can be proposed.",
"The most straightforward is to use a linear scoring function as used in conventional token classification, S ( e ) = e T w + b , where w R d and b R are trainable parameters.",
"We found it to work best with our pooling method.",
"In the Appendix A we perform ablations on different scoring functions.",
"Table 1 presents the complexity of attention in our models, and compares it to different architectures.",
"The vanilla encoder depends on the number of layers l , the number of tokens in the input n and the number of tokens each attends to n .",
"Likewise, the decoder's cross-attention depends on l , n and the target length t .",
"The m denotes the effective number of tokens one can attend to, resulting from the attention's block size, allowed window size or the clustering of key-values.",
"The number of parallel LSH hashes is denoted by h .",
"The rank of the factorization matrix is r , which can be a constant that is independent of n .",
"Similarly, the number of best task-specific representations k , selected after encoding, is independent of n .",
"c is an effective number of layers in a hierarchically decreasing encoder of the Pyramidion.",
"The Pyramidion's c can be as low as .",
"Blockwise sparse attention improved 8618 the vanilla Transformer's complexity by limiting the number of tokens each attends to from n (input length) to m (block size) as seen in Table 1.",
"As we keep the encoding of blockwise attention, the m improvement also applies to our self-attention.",
"For the Pyramidion model, we narrow down the size of the representation on the output of each chosen layer, leading to the exponential reduction of memory consumption as the encoding proceeds.",
"For example, when pooling after every layer is considered, the total memory complexity across l layers would be (cid:80) pi =0 2 i mnd = (2 k/n ) mnd where p denotes the number of passes p = log 2 ( n/k ) , assuming k n and n, k { 2 i | i Z + } .",
"Hence, the effective complexity of all layers is lower than mnd , which means it is lower than times the complexity of the full-size first layer.",
"For the decoder cross-attention, the number of input representations that t target tokens can attend to is limited by k , thus decreasing the memory complexity of cross attention from O ( tn ) to O ( tk ) .",
"Optimization over quadratic sentence-length complexity is even more powerful and needed on the decoder side, as O ( tn ) complexity hurts performance of real-world applications based on auto-regressive decoding.",
"The blockwise attention itself reduces encoder complexity proportionally to the number of chunks.",
"We further reduce the decoder layer's complexity in Transpooler models by a factor of n/k , thanks to representation pooling.",
"The Pyramidion we propose offers an additional improvement on the encoder side, where time and memory consumption are reduced in each of the consecutive layers compared to the Transformer featuring blockwise attention.",
"In other words, when b denotes the number of blocks, l stands for the number of layers, and the sequence length is halved in each layer, we reduce memory from b + b + ... + b = lb to b + b/ 2+ b/ 4+ ... + b/ (2 l ) 2 b .",
"Because the beneficial impact of pooling accumulates, we are able to improve complexity from one that is linearly dependent on l to one that is constant, independent of l .",
"In the further DeepPyramidion's experiments, we will proceed with a higher reduction factor, where the length of a sequence is cut in four.",
"As a result, the Pyramidion achieves an effective self-attention time and space complexity linear of n and logarithmic of l .",
"For comparison, other sparse models such as, e.g., Linformer depend linearly on n and linearly on l .",
"The analysis of Figure 4 found evidence that our method scales well with an increasing number of layers.",
"In the evaluation (see Section 5), we demonstrate that our model achieves a 2 .",
"5 computation reduction in the encoder's self-attention and a 16 reduction in the decoder's cross-attention comparing to blockwise baseline, while both models are close to SOTA results on the task of long-document summarization.",
"All things considered, we introduce Pyramidion with sublinear complexity that achieves remarkable results.",
"The advantage of our approach is that it complements all other proposed sparsification techniques, thus paving a new interesting avenue of potential research.",
"It can be effortlessly applied in-between layers and simultaneously with other improvements since representation pooling addresses a different aspect of the attention's complexity problem.",
"The choice of the selection operator is challenging, as it has to be trainable to instantiate a pooler.",
"In case of the hard topk operator, back-propagation through the scores is impossible and prevents training the scoring function.",
"It could be seen as an extreme case of the vanishing gradient problem.",
"In this section we introduce a mechanism not prone to this issue, while the Appendix B is dedicated to a theoretical analysis of its differential properties, from a geometrical point of view.",
"The crux of our approach is the Successive Halving Topk selection mechanism that finds k convex combinations of vector representations E i , dominated by those achieving the highest scores v i (pseudocode available in the Appendix B.1).",
"2 The general idea is to perform a tournament soft selection, where candidate vectors are compared in pairs ( i, j ) , until only k remained.",
"After each tournament's round new E (cid:48) and v (cid:48) are computed as convex combinations of these pairs with weights based on their respective scores.",
"Each new vector is calculated as: E (cid:48) i = w i E i + w j E j , where the w i , w j are the result of a peaked softmax over the scores v i , v j .",
"Analogously, we use v (cid:48) i = w i v i + w j v j as the new-round's scores.",
"2 Preliminary work regarding this method was previously presented in the form of a Student Abstract, see Pietruszka et al. (2020).",
"Weights are calculated using a PeakedSoftmax function (Goyal et al., 2017), increasing the pairwise difference in scores between v i and v j .",
"One round halves the number of elements in E and v .",
"We perform it iteratively unless the size of E and v matches the chosen value of k .",
"To improve convergence towards selecting the real topk , it is desired to permute v and E first.",
"In our algorithm, we sort the vectors E i in descending order of their scores v i and then put them into the tournament in pairs of the form ( i, n + 1 i ) .",
"This method of pairing guarantees that the weights w i depend monotonically on the scores v i , which is the main motivation for using it.",
"Extended benchmarks for time and accuracy are covered in details in Appendix B.5.",
"The main focus of the experiments was to understand how to employ the Successive Halving Topk operator within neural networks to build models that have better training and inference time and are expressive enough to achieve results comparable to state-of-the-art models.",
"The first experiment was specifically designed to compare to other sparse Transformers and Vanilla baselines.",
"Choice of tasks.",
"We demonstrate the benefit of pooling on the arXiv and PubMed summarization datasets (Cohan et al., 2018) available under Apache License 2.0 license.",
"Both tasks demand text generation and have the highest average input sequence length ( 6 k and 3 k words on average for arXiv and PubMed respectively).",
"Assuming an embedding of dimensionality 768 , it is important to note that for inputs shorter than approx.",
"4 k tokens, more multiplications happen in the Transformer's FFN layers and projection layers than in the attention layers.",
"Hence, the validation of the sparsification mechanism should be proved by showing that it works for longer inputs.",
"Time benchmarks.",
"The average time of processing a batch of documents is reported to evaluate the computational improvements experimentally.",
"Decoding experiments were synthetic with a forced fixed length of 512 output tokens to discount for the lower processing time of models predicting an earlier sequence end.",
"We recorded time in seconds on batches of size 64 and 8 for training and generation, respectively.",
"Details regarding the hyperparameters and test environment are reported in Appendix C. 8620 Ablations on input and decoder lengths.",
"Table 2 presents evaluation metrics and time benchmarks depending on encoder and decoder lengths, as well as used sparsification mechanisms.",
"At this stage, we use shallow 4 -layer models to perform ablation studies and estimate each approach's strengths and weaknesses.",
"We observe that all sparse models deliver on the promise of accelerating training time over Vanilla Transformers for longer sequences in this setup.",
"Methods requiring the elimination of word vectors scale well with the sequence length but incur additional pooling costs, which may be notable for shorter sequences.",
"Nevertheless, inference time was significantly reduced only when methods eliminating word vectors were employed.",
"The introduction of blockwise attention and pooling does not decrease scores while lowering the computational cost.",
"The detailed training procedure for all models is provided in Appendix C. Scaling deeper.",
"In preliminary experiments it was estimated that the fastest-to-train model that performs comparably to the Vanilla Transformer is the Blockwise Transformer.",
"Here, we scale it to 6 -layers in each encoder and decoder and provide an interesting baseline for our model, since Transpooler's backbone is blockwise attention.",
"We undertook the empirical analysis of scaling Transpooler to many layers in Appendix C.2 and found that in order to balance performance and speed, it is crucial to delay the first pooling and not to perform it directly on the first layer's output.",
"It was also revealed that appending more layers at the end of the encoder (after pooling) results in a negligible increase in time while considerably improving scores.",
"Both changes to the block size and reduction of the bottleneck harmed the performance.",
"Thus, the data supports the premise that the 6 -layers encoder should consume 8 k tokens on the input and output representations of lengths 8 k, 8 k, 2 k, 512 , 512 , 512 after each successive layer.",
"We refer to this model as DeepPyramidion (note that pooling happens twice in the encoder).",
"The decoder also has six layers, making our model directly comparable to the deeper Blockwise Transformer.",
"We confront DeepPyramidion with the Blockwise baseline by training models from scratch on arXiv and PubMed datasets separately and report results in comparison to the state-of-the-art summarization models (Table 3).",
"Results.",
"The evaluation of the data presented in Table 3 leads to the unexpected conclusion that our Blockwise Transformer baseline, despite its simplicity, is sufficient to outperform deeper, denser, and additionally pretrained models that were recently reported as state-of-the-art.",
"We demonstrate that DeepPyramidion retains or improves the performance of the competitive baseline we produced.",
"The training time speedup by 1 .",
"8 supports the notion that our model scales better to long sequences, assuming deeper models.",
"This result stands in line with evidence in Figure 4.",
"While our baseline Blockwise model reduces the computational demand of self-attention in encoder by a factor of 16 when comparing to Vanilla Transformer, it does not improve the decoder's computational complexity.",
"It is interesting to highlight that DeepPyramidion further lowers the cost of self-attention by 2 .",
"5 and improves 16 over Blockwise's cross-attention in the decoder, and leads to overall 13 improvement in the number of multiplication operations in the decoder.",
"Time benchmarks show a 4 .",
"5 improvement in the generation times for our method, proving how vital the improvement in the decoder's cross-attention complexity is for inference time.",
"DeepPyramidion achieves a ROUGE-2 score indistinguishable from SOTA on arXiv and performs competitively on PubMeb.",
"At the same time, an entire DeepPyramidion costs five times less than a single Transformer layer consuming 8 k tokens.",
"However, when comparing our results to those of older studies, it must be pointed out that our models were trained from scratch only on the targeted dataset, whereas prior works often base on already pretrained models such as BART or RoBERTa and leverage unsupervised training on additional datasets.",
"On the contrary, a longer input sequence was consumed by both Blockwise and DeepPyramidion, which we speculate, is the reason for their strong performance.",
"3 Impact of longer inputs.",
"The results achieved in our paper are comparable to other, much heavier, and more costly models due to two main reasons, that will be briefly discussed below.",
"Firstly, to perform well on a long document summarization task, there is a need to strike the right balance not only between the depth and width of the network but also it is required for design optimization to take into account the length of the input.",
"All previous work seem to underperform when considering all three factors, as they were designed and optimized for shorter tasks and generally have more parameters, denser computations, or even a hard limit on the range of positional encoding.",
"The authors were thus bounded by the maximal sequence length of 512 or 1024 tokens.",
"One can argue that within this prefix (corresponding to the first 2 3 pages), any data point from the arXiv/PubMed datasets (a scientific paper) usually provides enough information to write a meaningful summary, but also, important details will be missing to some degree.",
"Hence, increasing the length of the input that can be consumed on GPUs, at the price of using a shallower network, with sparser computation, may be considered a better fit for the task.",
"3 This view is supported by results of PoolingFormer that are concurrent to our work (Zhang et al., 2021).",
"Despite that, at first sight, the methods seem similar and the authors present an interesting use of pooling in the attention, we argue that the mentioned model suffers from several weaknesses that are not present in our work.",
"First of all, in the PoolingFormer model vectors are not removed from computations in further layers.",
"Hence logarithmic complexity of the number of layers does not apply.",
"PoolingFormer's approach suffers from having three orders of magnitude more calculations than when a global pooling based on scores of individual tokens is considered.",
"Secondly, we think that pretraining in the Pyramidion's case may be disregarded due to an interesting length exploiting hypothesis.",
"That is, while we consume longer sequences on the input, the network learns more efficiently, as more information is available, and thus, the training signal is stronger.",
"This can be convincingly portrayed in the case of embedding layers, as during training they see many more words and sentences from the chosen dataset, and hence, can provide more meaningful representations to the further layers.",
"One can think that making the most of already available domain texts and consuming longer inputs is an advantageous approach to masked pretraining on out-of-domain datasets.",
"While the latter approach may aid general' language understanding, it has insufficient transferability potential to domain-specific document understanding (e.g., scientific or medical texts).",
"To sum up, the Pyramidion has improvements that allow consuming longer inputs cheaply, which turns out to be a more cost-effective strategy compared to other models.",
"This aspect is crucial for achieving strong results on the presented datasets.",
"At this stage of understanding, we believe that sparsification based on trainable pooling is unlikely to improve processing time for short sequences specific to some NLP tasks, e.g., sentence-level Neural Machine Translation.",
"In addition, the score improvement may be attainable for tasks characterized by at least an order of magnitude shorter outputs than inputs, as it was previously shown on classification, or, as in the case of this work, on summarization.",
"However, the extent to which the full attention in the Transformer can be replaced with the sparse attention we propose is unknown.",
"Still, we argue that the benefits are visible starting from inputs of length 4k.",
"As discussed earlier, 4k is the break-even point where attention requires more calculations than the FFNs and projection layers.",
"As such, we recommend applying sparsification methods on datasets featuring sequences of length over that value.",
"While we focus on the long end of the possible inputs, one can continue our analysis to find improvements that work for the shortest sequences, e.g., by employing lighter projection layers and FFNs or stacking more attention blocks.",
"Although our method is a hybrid extractive-abstractive one, it does not provide interpretable explanations of which specific representations were selected, as the pooling operates in the latent space.",
"How to match the selected vectors to the vocabulary tokens remains an open question.",
"Moreover, framing the trainable pooling for language modeling remains a challenge to address in future works, especially as in this task the Markov assumption may serve as a basis for competitive pooling heuristics.",
"We did not consider Relative Positional Encoding in our work, as the pooling mechanism is not trivially applicable with it and some generalization of our method may be needed.",
"As this demands more experiments and proofs, we leave the generalization of the pooling method for future work.",
"Regarding the social impact and environmental sustainability, we actively considered the Earth's wellbeing by contributing a technique for reducing the computational demand of recent Deep Learning models.",
"Our near-state-of-the-art DeepPyramidion model costs us 3 days of training on 8 NVIDIA A100 GPUs.",
"Shallow models featuring trainable pooling were finished in about 2 days each, given the same hardware.",
"Blockwise baselines cost us about 3.5x the price of the respective pooling methods.",
"The most prolonged training of the 8 k Vanilla Transformer lasted for about 2 weeks.",
"The total cost of training the models covered in this paper is about 2 months on the mentioned hardware, plus an additional month for models and ablations described in the appendices.",
"We roughly estimate that it is between half and one-fourth of the total computation spent, including false runs, unpublished work, and initial experiments.",
"The dataset preparation took less than 10 hours on 1 CPU.",
"We propose representation pooling as a method to reduce the complexity of Transformer encoder-decoder models.",
"Specifically, we optimize self-attention complexity and address the decoder's cross-attention complexity optimization, which has so far not been widely acknowledged by the research community.",
"Moreover, the DeepPyramidion we introduced establishes results comparable to state-of-the-art, outperforming not only other systems relying on progressive word-vector elimination but also deeper, denser, and additionally pretrained models.",
"We tackle the problem by introducing a novel method of applying successive halving to a model's input in a tournament style.",
"It is a theoretical improvement over existing approaches in terms of both computational complexity and approximation quality.",
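The tournament-style successive halving of the input described above can be sketched in a few lines. This is a hypothetical stand-alone sketch, not the authors' implementation; `vectors` and `scores` are illustrative names for token representations and their learned scorer outputs.

```python
def successive_halving(vectors, scores, rounds):
    """Sketch of tournament-style successive halving: each round keeps
    the better-scored item of every adjacent pair, halving the sequence."""
    items = list(zip(vectors, scores))
    for _ in range(rounds):
        nxt = []
        # compare adjacent pairs; the higher-scored representation advances
        for a, b in zip(items[0::2], items[1::2]):
            nxt.append(a if a[1] >= b[1] else b)
        if len(items) % 2:  # an odd leftover advances automatically
            nxt.append(items[-1])
        items = nxt
    return [v for v, _ in items]
```

After `rounds` halvings, a sequence of length n shrinks to roughly n / 2^rounds, which is where the claimed complexity reduction comes from.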
"Trainable Top-k selection allows training the scorer for a task and outperforms other pooling methods.",
"From the summarization task's point of view, the proposed end-to-end model is a significant theoretical improvement over the previous systems, where the extractive model was trained independently of the abstractive one.",
"In contrast, our mechanism does not require the introduction of an additional training objective or training stage.",
"Our approach can be easily applied to other problems from Natural Language Processing and Computer Vision.",
"E.g., in work more recent than ours, Multiscale Vision Transformers were proposed (Fan et al., 2021).",
"These, similarly to our Pyramidion model, introduce the bottleneck gradually along the encoding process of videos and images, leading to better results and lower complexity.",
"When it comes to Natural Language Processing, possible applications include Key Information Extraction, Machine Reading Comprehension, and Question Answering in scenarios where encoder-decoder models struggle or would struggle with input sequence length (see, e.g., Choi et al. (2017); Townsend et al. (2021); Kočiský et al. (2018)).",
"We are looking forward to seeing these opportunities exploited.",
"For easy reproduction of the results, we release our utilities, code and pretrained models on the MIT license for all researchers not affiliated or working for Russian state-controlled institutions and public companies.",
"The reason to ostracize scientists under those affiliations is their armed forces' violent invasion of Ukraine, recklessly intended to inflict pain and threaten world peace and civilians' lives with inhuman aggression against a sovereign nation.",
"The authors would like to thank Zofia Prochoroff and Paweł Morawiecki for the helpful discussions on the draft of the paper.",
"Moreover, we thank the reviewers for their comments and suggestions that helped improve the paper.",
"The Smart Growth Operational Programme supported this research under project no. POIR.01.01.01-00-0877/19-00 (A universal platform for robotic automation of processes requiring text comprehension, with a unique level of implementation and service automation)."
] | [
"objective",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"objective",
"method",
"abstain",
"other",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"abstain",
"objective",
"result",
"method",
"result",
"other",
"other",
"objective",
"abstain",
"abstain",
"other",
"other",
"method",
"other",
"method",
"other",
"other",
"objective",
"objective",
"other",
"other",
"method",
"other",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other"
] |
[
"Transfer learning that adapts a model trained on data-rich sources to low-resource targets has been widely applied in natural language processing (NLP).",
"However, when training a transfer model over multiple sources, not every source is equally useful for the target.",
"To better transfer a model, it is essential to understand the values of the sources.",
"In this paper, we develop SEAL-Shap, an efficient source valuation framework for quantifying the usefulness of the sources (e.g., domains/languages) in transfer learning based on the Shapley value method.",
"Experiments and comprehensive analyses on both cross-domain and cross-lingual transfers demonstrate that our framework is not only effective in choosing useful transfer sources but also the source values match the intuitive source-target similarity.",
"Transfer learning has been widely used in learning models for low-resource scenarios by leveraging the supervision provided in data-rich source corpora.",
"It has been applied to NLP tasks in various settings including domain adaptation (Blitzer et al., 2007; Ruder and Plank, 2017), cross-lingual transfer (Täckström et al., 2013; Wu and Dredze, 2019), and task transfer (Liu et al., 2019b; Vu et al., 2020).",
"A common transfer learning setting is to train a model on a set of sources and then evaluate it on the corresponding target (Yao and Doretto, 2010; Yang et al., 2020).",
"However, not every source corpus contributes equally to the transfer model.",
"Some of them may even cause a performance drop (Ghorbani and Zou, 2019; Lin et al., 2019).",
"Therefore, it is essential to understand the value of each source in transfer learning.",
"In this paper, we focus on two transfer learning scenarios: 1) cross-lingual and 2) cross-domain.",
"We train a model on a set of source corpora and evaluate on a target corpus, where each corpus refers to the corresponding domain or language.",
"Nonetheless, determining the value of a source corpus is challenging as it is affected by many factors, including the quality of the source data, the amount of the source data, and the difference between source and target at lexical, syntax and semantics levels (Ahmad et al., 2019; Lin et al., 2019).",
"The current source valuation or ranking methods are often based on single source transfer performance (McDonald et al., 2011; Lin et al., 2019; Vu et al., 2020) or leave-one-out approaches (Tommasi and Caputo, 2009; Li et al., 2016; Feng et al., 2018; Rahimi et al., 2019).",
"They do not consider the combinations of the sources.",
"Consequently, they may identify the best single source corpus effectively, but their top-k ranked source corpora may achieve limited gains in transfer results.",
"We propose SEAL-Shap (Source sElection for trAnsfer Learning via Shapley value), a source valuation framework (see Fig 1) based on the Shapley value (Shapley, 1952; Roth, 1988) in cooperative game theory.",
"SEAL-Shap adopts the notion of the Shapley value to understand the contribution of each source by computing the approximate average marginal contribution of that particular source to every possible subset of the sources.",
"Shapley value is a unique contribution distribution scheme that satisfies the necessary conditions for data valuation like fairness and additivity (Dubey, 1975; Jia et al., 2019a,b).",
"As many model explanation methods, including the Shapley value, are computationally costly (Van den Broeck et al., 2021), Ghorbani and Zou (2019), in the different context of feature and data valuation in machine learning, propose an approximate Shapley value to estimate feature or data values.",
"However, the existing approximation methods for estimating Shapley values are not scalable for NLP applications.",
"NLP models are often large (e.g., BERT (Devlin et al., 2019)) and NLP transfer learning usually assumes a large amount of source data.",
"To deal with the scalability issue, we propose a new sampling scheme, a truncation method, and a caching mechanism to efficiently approximate the source Shapley values.",
"We evaluate the effectiveness of SEAL-Shap under various applications in quantifying the usefulness of the source corpora and in selecting potential transfer sources.",
"We consider two settings of source valuation or selection: (1) where a small target corpus is available; and (2) where we only have access to the linguistic or statistical features of the target, such as language distance to the sources, typological properties, lexical overlap etc.",
"For the first setting, we use the small target data as the validation set to measure the values of the sources w.r.t the target.",
"For the second setting, we follow Lin et al. (2019) to train a source ranker based on SEAL-Shap and the available features.",
"We conduct extensive experiments in both (zero-shot) cross-lingual and cross-domain transfer settings on three NLP tasks, including POS tagging, sentiment analysis, and natural language inference (NLI) with different model architectures (BERT and BiLSTM).",
"In a case study on cross-lingual transfer learning, we exhibit that the source language values are correlated with the language family and language distance, indicating that our source values are meaningful and follow the intuitive source-target relationships.",
"Our source codes are available at https://github.",
"Lastly, we analyze the approximation correctness and the run-time improvement of our source valuation framework SEAL-Shap.",
"We propose SEAL-Shap, a source valuation framework.",
"We start with the setting where we have only one target and multiple sources.",
"We denote the target corpus by V and the corresponding set of source corpora by D = { D_1, ..., D_m }.",
"Our goal is to quantify the value φ_j of each source corpus D_j to the transfer performance on V and to explain model behaviors.",
"Once the source values are measured, we can then develop a method to select either all the sources or a subset of sources (i.e., D ) that realizes a good transfer accuracy on V .",
"Below, we first review the data Shapley value and its adaptation for transfer learning.",
"Then, we describe how SEAL-Shap efficiently quantifies φ_j and how to use it to select a subset of sources for model transfer.",
"Shapley value is designed to measure individual contributions in collaborative game theory and has been adapted for data valuation in machine learning (Ghorbani and Zou, 2019; Jia et al., 2019a,b).",
"In the transfer learning setting, on a target corpus V, let Score(C_π, V) represent the transfer performance of a model C_π trained on a set of source corpora π.",
"The Shapley value φ_j is defined as the average marginal contribution of a source corpus D_j to every possible subset π of the corpora D: φ_j = (1/m) Σ_{π ⊆ D∖{D_j}} [Score(C_{π∪{D_j}}, V) − Score(C_π, V)] / C(m−1, |π|), where C(m−1, |π|) is the binomial coefficient.",
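For small m, the definition above can be computed exactly by brute force. The sketch below is only an illustration of the formula (exponential in the number of sources, so infeasible at scale); `score` is an assumed stand-in for evaluating Score(C_π, V).

```python
from itertools import combinations
from math import comb

def exact_shapley(sources, score):
    """Exact source Shapley values; `score` maps a frozenset of sources to
    transfer performance. Exponential in len(sources) -- illustration only."""
    m = len(sources)
    phi = {}
    for j in sources:
        others = [s for s in sources if s != j]
        total = 0.0
        for k in range(m):
            # every subset of the remaining sources of size k
            for pi in combinations(others, k):
                subset = frozenset(pi)
                marginal = score(subset | {j}) - score(subset)
                # weight by the inverse binomial coefficient C(m-1, k)
                total += marginal / comb(m - 1, k)
        phi[j] = total / m
    return phi
```

For an additive game (where the score of a subset is the sum of per-source values), the Shapley value of each source recovers exactly its individual value, which is a handy sanity check.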
"TMC-Shap for Transfer Learning: Computing the exact source-corpus Shapley value, described above, is computationally difficult as it involves evaluating the performances of the transfer models trained on all the possible combinations of the source corpora.",
"Hence, Ghorbani and Zou (2019) propose to approximate the evaluation by a truncated Monte Carlo method.",
"In this paper, we consider a model trained on the union of the source data; the loss function for training the model is aggregated from the loss functions defined on each source.",
"However, our approach is agnostic to how the model is trained and can be integrated with other training strategies.",
"Given the target corpus V and a set of source corpora D, in each epoch a source training set π is maintained and a random permutation of D is performed (corresponding to line 6 in Algorithm 1, which is discussed in Sec 2.2).",
"Then it loops over every source corpus π_j in the ordered list and computes its marginal contribution by evaluating how much the performance improves by adding π_j to π: Score(C_{π∪{π_j}}, V) − Score(C_π, V).",
"These processes are repeated multiple rounds and the average of all marginal contributions associated with a particular source corpus is taken as its approximate Shapley value (line 18 in Algorithm 1).",
"When the size of π increases, the marginal contribution of adding a new source corpus becomes smaller.",
"Therefore, to reduce the computation, Ghorbani and Zou (2019) propose to truncate the computations at each epoch when the marginal contribution of adding a new source π_j is smaller than a user-defined threshold Tolerance (lines 10-11 and 18 in Algorithm 1).",
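The truncated Monte Carlo procedure can be sketched as follows. This is a paraphrase of the cited scheme with illustrative names, not the paper's Algorithm 1; `score` again stands in for training and evaluating a model on a source subset, and setting `tolerance=0.0` disables truncation.

```python
import random

def tmc_shapley(sources, score, epochs=200, tolerance=1e-3, seed=0):
    """Truncated Monte Carlo Shapley sketch: average marginal gains over
    random permutations; truncate a pass once the remaining gain is tiny."""
    rng = random.Random(seed)
    phi = {s: 0.0 for s in sources}
    counts = {s: 0 for s in sources}
    full = score(frozenset(sources))  # performance with all sources
    for _ in range(epochs):
        perm = sources[:]
        rng.shuffle(perm)
        pi, prev = frozenset(), score(frozenset())
        for s in perm:
            if abs(full - prev) < tolerance:
                marginal = 0.0  # truncation: remaining gains are negligible
            else:
                pi = pi | {s}
                cur = score(pi)
                marginal, prev = cur - prev, cur
            phi[s] += marginal
            counts[s] += 1
    return {s: phi[s] / counts[s] for s in sources}
```

Each epoch costs at most m model evaluations, and truncation typically skips the tail of every permutation, which is where the speed-up comes from.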
"2.2 SEAL-Shap: Despite the fact that TMC-Shap improves the running time, it is still unrealistic to use in our setting, where both the source data and the model are large.",
"For example, in cross-lingual POS tagging on Universal Dependencies Treebanks, on average, it takes more than 200 hours to estimate the values of 30 source languages with multi-lingual BERT (See Sec 4.4).",
"Therefore, in the following, we propose three techniques to further speed-up the evaluation process.",
"Stratified Sampling: When computing the marginal contributions, training a model C_π on the entire training set is computationally expensive.",
"Based on extensive experiments, we find that when computing these marginal contributions, we do not need the performance difference of models trained on the entire training sets.",
"For a reasonably large source corpus, sampling 20-30% of each source generally yields a lower but representative performance difference.",
"Therefore, we sample a subset of instances to evaluate the marginal contributions.",
"To address computational limitations and scale to large data, sampling techniques have been widely discussed (L'heureux et al., 2017).",
"In particular, we employ stratified sampling (Neyman, 1992) to generate a subset T from π by sampling training instances from each source corpus with a user-defined sample rate.",
"Setting Tolerance to 0 turns off the truncation.",
"A higher sampling rate typically leads to a better approximation but is more expensive at run-time.",
"Then, we train the model on T (lines 14-15 in Algorithm 1).",
"The quantitative effectiveness of this technique is discussed in Sec 4.4 and the impact of different sampling rates are presented in Fig 5.",
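The per-source stratified sampling can be sketched as below. This is a minimal sketch assuming each corpus is a list of instances; the function name and signature are ours, not the paper's.

```python
import random

def stratified_sample(corpora, rate, seed=0):
    """Sample a `rate` fraction from each source corpus independently,
    so every source stays represented in the training subset T."""
    rng = random.Random(seed)
    T = {}
    for name, instances in corpora.items():
        k = max(1, int(len(instances) * rate))  # keep at least one instance
        T[name] = rng.sample(instances, k)
    return T
```

Sampling per corpus (rather than from the pooled union) is what makes the scheme stratified: a small source cannot be crowded out by a large one.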
"Truncation: As discussed in Sec 2.1, at each epoch, Ghorbani and Zou (2019) truncate the computations once a marginal contribution becomes small when looping over the ordered list of that epoch, typically for the last few sources in the permutation.",
"On the other hand, at the beginning of each epoch, when computing the marginal contribution of adding the first source corpus π_1 to an empty π, the contribution is computed as the performance gap between a model trained on π_1 and a random baseline model without any training.",
"Usually, the performance of a random model (v_0) is low, and hence the marginal contribution at the first step is generally high.",
"As the scale of the marginal contribution at the first step is drastically different from that of later steps, it causes TMC-Shap to converge slowly.",
"Hence, to restrict the variance of the marginal contributions, we down-weight the marginal contribution of the first step by setting v_0 = β, where β is a hyper-parameter indicating the baseline performance of a model (lines 7 and 18 in Algorithm 1).",
"β is typically a factor of the performance achieved when using only one source, or all the sources together.",
"Caching: When computing the source Shapley values, we have to repeatedly evaluate the performance of the model on different subsets of source corpora.",
"Sometimes, we may encounter subsets that we have evaluated before.",
"For example, consider a set of source corpora D = { D 1 , D 2 , D 3 } and we evaluate their Shapley values through two permutations: 1 = [ D 3 , D 1 , D 2 ] , and 2 = [ D 1 , D 3 , D 2 ] .",
"When we compute the marginal contribution of the last source corpus D 2 , in both cases the training set = { D 1 , D 3 } .",
"That is, if we cache the result of Score(C_{D_1 ∪ D_3}, V), then we can reuse the score.",
"We implement this cache mechanism in lines 1, 13, 16, and 17 of Algorithm 1. With these optimization techniques, we improve the computation time by about 2x (see Sec 4.4).",
"This enables us to apply these techniques to NLP transfer learning.",
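The cache can be as simple as memoizing on the (order-insensitive) set of sources. This is a sketch; `train_and_eval` is a hypothetical stand-in for training C_π and scoring it on V.

```python
def make_cached_score(train_and_eval):
    """Cache Score(C_pi, V) by the frozenset of sources, so permutations
    like [D3, D1] and [D1, D3] reuse a single evaluation."""
    cache = {}
    calls = {"n": 0}  # counts actual (expensive) train+eval invocations
    def score(pi):
        key = frozenset(pi)
        if key not in cache:
            calls["n"] += 1
            cache[key] = train_and_eval(key)
        return cache[key]
    score.calls = calls
    return score
```

Keying on a frozenset rather than the permutation order is exactly why the two example permutations above hit the same cache entry.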
"Note that whenever a subset π causes a cache miss, for each source in π, as discussed above in this section, we sample a new set of instances (lines 13-14 in Algorithm 1).",
"Thus, given a reasonably large number of epochs, our approach performs sampling many times and, in aggregate, evaluates a wide range of samples from each source.",
"Many applications require evaluating the values of a set of sources with respect to a set of targets.",
"For example, under the zero-shot transfer learning setting, we assume a model is purely trained on the source corpora without using any target data.",
"Consequently, the same trained model can be evaluated on multiple target corpora.",
"With this intuition, whenever the model is trained on a new training set π, SEAL-Shap evaluates it on all the target corpora and caches all of the scores accordingly.",
"In the discussion above, we assumed a small annotated target corpus is available and can be used to evaluate transfer performance.",
"However, in some scenarios, only some linguistic or statistical features of the sources and targets, such as language distance and word overlap, are available.",
"Lin et al. (2019) show that by using these features, we can train a ranker to sort the sources for unknown targets by predicting their values.",
"In the following, we extend their ranker by incorporating it with SEAL-Shap.",
"Given the set of training corpora D and the actual target corpus V, we iteratively consider each training corpus D_j as the target and the remaining m−1 corpora as the sources.",
"We compute the corresponding source values Y^{D_j}_D for D = { D_1, ..., D_{j-1}, D_{j+1}, ..., D_m }.",
"Now, w.r.t. the target D_j, the linguistic or statistical features of the source corpora (e.g., language distance from the target, lexical overlap between the corresponding source and the target) are X^{D_j}_D = { F_j(D_1), ..., F_j(D_{j-1}), F_j(D_{j+1}), ..., F_j(D_m) }, where F_j denotes the source feature generator function for the corresponding target D_j.",
"This feature vector of the source corpora (X^{D_j}_D) is a training input, and their value vector (Y^{D_j}_D) is the corresponding training output for the ranker.",
"We repeat this for each training corpus and generate the respective training inputs and outputs for the ranker.",
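The leave-one-corpus-out construction of the ranker's training data can be sketched as follows. The helper names `seal_shap` and `features` are hypothetical stand-ins for the source-value computation and the feature generator F_j; they are not names from the paper.

```python
def ranker_training_data(corpora, seal_shap, features):
    """Build ranker training pairs by treating each training corpus as a
    pseudo-target: X holds the source features w.r.t. that target, and Y the
    source values of the remaining corpora (hypothetical helper names)."""
    X, Y = [], []
    for target in corpora:
        sources = [c for c in corpora if c != target]
        # feature vector F_target(source) for every remaining source
        X.append([features(target, s) for s in sources])
        # corresponding source values computed against the pseudo-target
        Y.append(seal_shap(sources, target))
    return X, Y
```

Each pseudo-target contributes one (features, values) pair, so m training corpora yield m ranker training examples of m−1 sources each.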
"Once trained, for the actual target V and the source corpora D, the ranker can predict the values of the source corpora Y^V_D based only on the linguistic source features X^V_D.",
"The source values computed in Sec 2.2-2.4 estimate the usefulness of the corresponding transfer sources and can be used to identify the potential sources that lead to good transfer performance.",
"We select the potential source corpora in two ways.",
"(i) Top-k: We simply sort the sources based on their values and select the user-defined top-k sources.",
"(ii) Threshold: When an annotated evaluation dataset in the target corpus V is available, after computing the source values, we empirically set a threshold and select each source whose value is higher than it.",
"On that evaluation target corpus, we tune the threshold and set it to the value for which the corresponding transfer model achieves the best performance.",
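Both selection strategies amount to a few lines. This is an illustrative sketch with names of our own choosing, assuming `values` maps each source to its estimated value.

```python
def select_sources(values, k=None, threshold=None):
    """Select sources either as the top-k by value, or as all sources
    whose value exceeds a (separately tuned) threshold."""
    if k is not None:
        ranked = sorted(values, key=values.get, reverse=True)
        return ranked[:k]
    return [s for s, v in values.items() if v > threshold]
```

The threshold variant is what allows SEAL-Shap to select either all sources or a strictly smaller subset, depending on what the evaluation corpus favors.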
"We conduct experiments on zero-shot cross-lingual and cross-domain transfer settings.",
"Models are trained only on the source languages/domains and directly applied in target languages/domains.",
"Cross-lingual Datasets: We conduct experiments on two popular cross-lingual transfer problems:",
"(i) universal POS tagging on the Universal Dependencies Treebanks (Nivre et al., 2018).",
"Following Ahmad et al. (2019), we select 31 languages of 13 different language families (details in Appendix A).",
"(ii) natural language inference on the XNLI",
"dataset (Conneau et al., 2018), that covers 15 different languages.",
"XNLI task is a 3-way classification task (entailment, neutral, and contradiction).",
"Data statistics are in Appendix R.",
"Cross-domain Datasets: We consider three domain transfer tasks:",
"(i) POS tagging: we use the SANCL 2012 shared task datasets (Petrov and McDonald, 2012), which have six different domains (details in Appendix B).",
"(ii) Sentiment analysis: we use the multi-domain sentiment datasets (Liu et al., 2017), which have several additional domains compared to the popular Blitzer et al. (2007) dataset (see Appendix D).",
"(iii) NLI: we consider a (modified) binary classification (e.g., entailed or not) dataset used in Ma et al. (2019).",
"It is built upon modifications of the GLUE tasks (Wang et al., 2018) and has four domains (details in Appendix C).",
"As GLUE test sets are unavailable, for each target domain, we use the original dev set as the pseudo test set and randomly select 2,000 instances from its training set as the pseudo dev set.",
"Classifier and Preprocessing: For all domain transfer tasks we use BERT, and for all language transfer tasks we use multilingual BERT (Devlin et al., 2019), except for cross-domain POS tagging, where we consider the state-of-the-art BiLSTM-based Flair framework (Akbik et al., 2018).",
"For BERT models, we use the Transformers implementations in the Huggingface library (Wolf et al., 2019).",
"For significance tests, we use an open-sourced library.",
"By default, no preprocessing is performed except tokenization (see Appendix J).",
"Hyper-parameter Tuning: For all BERT models, we tune the learning rate, batch size, and number of epochs.",
"We also tune the number of epochs n_epoch in Algorithm 1, the SEAL-Shap value threshold, and the initial score.",
"Details are in Appendix K.",
"The open-sourced significance-test library is at github.com/neubig/util-scripts/blob/master/paired-bootstrap.py.",
"4 Results and Discussion: In the following, we first verify that SEAL-Shap is an effective tool for source valuation.",
"Then, we evaluate the source values when an evaluation target corpus is unavailable.",
"In Sec 4.3, we interpret the relations between sources and targets based on the SEAL-Shap values.",
"Finally, we analyze our method with comprehensive ablation studies.",
"We assess our source valuation approach in comparison to the following baselines:",
"(i) Baseline-s : source values are based on the single source transfer performance.",
"(ii) Leave-one-out (LOO): source values are based on how much transfer performance we lose if we train the model on all the sources except the corresponding one.",
"(iii) Baseline-r : a random baseline that assigns random values to sources.",
"(iv) Greedy DFS: the top-1 ranked source is the same as that of Baseline-s.",
"Next, it selects one of the remaining sources as top-2 that gives the best transfer result along with the top-1 and so on.",
"(v) Lang-Dist: (if available) sources ranked in reverse order of target-source language distance (Ahmad et al., 2019).",
"Balancing Source Corpora: In the experiments, our focus is to understand the values of the sources.",
"For some datasets, the sizes of source corpora are very different.",
"For example, in UD Treebank, the numbers of instances in Czech and Turkish are 69k and 3.5k, respectively.",
"Since data-size is an obvious factor, we conduct experiments on balanced data to reduce the influence of data-size in the analysis.",
"We sub-sample the source corpora to ensure their sizes are similar.",
"Specifically, for the cross-domain NLI task, we sample 20k instances for each source.",
"For others, we sub-sample each source such that the size of the corpus is the same as the smallest one in the dataset.",
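The balancing step can be sketched as below, again assuming each corpus is a list of instances; the function name is ours.

```python
import random

def balance_corpora(corpora, seed=0):
    """Sub-sample every source corpus down to the size of the smallest one,
    removing corpus size as a confound when comparing source values."""
    rng = random.Random(seed)
    n = min(len(v) for v in corpora.values())
    return {name: rng.sample(v, n) for name, v in corpora.items()}
```

With all corpora of equal size, any remaining difference in source value must come from the data itself rather than from how much of it there is.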
"However, our approach can handle both balanced and unbalanced data, and the source values lead to similar conclusions (e.g., see Fig 5).",
"Result: We first compare these methods by selecting topk sources ranked by each of the approach and reporting the corresponding transfer performance.",
"With k = 3, we plot the corresponding transfer results and the running time for valuation in Fig 2.",
"As mentioned in Sec 1, the relatively strong Baseline-s can select the best-performing top-1 source, but with top-2 and top-3 sources the performance drops on cross-domain sentiment analysis and cross-lingual POS tagging (see Fig 2(c) and 2(a)), while our approach shows a consistent gain in all of these tasks and achieves the best performance with top-3 sources.",
"Appendix I plots the results with higher k .",
"Next, as in Sec 2.5, we tune a threshold and either select all the sources as useful or a smaller subset of m′ sources (i.e., m′ < |D|) whose SEAL-Shap values are higher than the threshold.",
"In the following, we compare the model performance of these m′ sources selected by SEAL-Shap with the same top-m′ sources ranked by the aforementioned baseline methods.",
"Being relatively weak or slow, we do not further report performances for LOO, Lang-Dist, and Greedy DFS.",
"Table 2: POS tagging results (% accuracy) on the SANCL 2012 Shared Task. Columns: WSJ, EM, N, A, R, WB, Avg. MMD: 96.12, 96.23, 96.40, 95.75, 95.51, 96.95, 96.16; RENYI: 96.35, 96.31, 96.62, 95.52, 95.97, 96.75, 96.25; All Sources: 95.95, 95.39, 96.94, 95.15, 96.08, 97.10, 96.10; Baseline-r: 95.98, 93.41, 93.78, 93.14, 95.25, 97.10, 94.78; SEAL-Shap: 96.14$, 95.47$, 97.02$, 95.30$, 96.17$, 97.10, 96.20.",
"Rather, we consider another strong baseline, All Sources, which uses all the source corpora D.",
"This is a strong baseline as it is trained on more source-corpus instances in general.",
"Cross-Lingual POS Tagging We evaluate the source selection results on zero-shot cross-lingual POS tagging in Table 1. Among the 31 target languages, in 21 of them, SEAL-Shap selects a small subset of source corpora.",
"From the table, overall, SEAL-Shap selects source corpora with high usefulness for training the model, and except for a few cases the model consistently outperforms all the baselines by more than 0.5% in average token accuracy.",
"In 13 of them, the gain is statistically significant by a paired bootstrap test.",
"The gap is especially high for English, Czech, and Hindi.",
"These results demonstrate that SEAL-Shap is capable of both quantifying the source values and selecting sources.",
"We report the full results on the dev and test set of all target languages in Appendix M, N respectively.",
"For each row in Table 1, the number of selected sources is reported in Appendix S.",
"Cross-Domain POS Tagging: Table 2 presents the POS tagging results for zero-shot domain transfer on the SANCL 2012 shared task.",
"In 5 out of 6 targets, SEAL-Shap outperforms all baselines except Baseline-s .",
"Table 4: Cross-domain transfer results on the multi-domain sentiment analysis task. Columns: books, kitchen, dvd, baby, MR, Avg. Cai and Wan (2019): 87.3, 88.3, 88.8, 90.3, 76.3, 86.2; All Sources: 87.3, 90.3, 88.3, 92.3, 79.3, 87.5; Baseline-r: 87.0, 90.5, 87.3, 91.8, 78.8, 87.1; Baseline-s: 86.8, 89.8, 87.0, 92.5, 77.5, 86.7; SEAL-Shap: 87.3, 90.8, 88.8, 92.5, 79.5, 87.8.",
"For each target domain with only 5 sources, the Baseline-s source values generally match ours.",
"However, SEAL-Shap significantly outperforms Baseline-r on all 5 cases and All-Sources twice.",
"It even outperforms MMD and RENYI (Liu et al., 2019a) on Newsgroups (N), Reviews (R), and Weblogs (WB), even though they select source data at the instance level and use additional resources.",
"Cross-Lingual NLI In Table 3, we show the XNLI results in 8 target languages where SEAL-Shap selects a small subset of source corpora.",
"Among them, in 3 languages, Baseline-r marginally surpasses ours.",
"However, in the 5 other languages SEAL-Shap outperforms all the baselines by a clear margin, especially on Bulgarian and Vietnamese with about 1% better accuracy (full results in Appendix E).",
"Cross-Domain NLI: Next, we evaluate SEAL-Shap on the modified GLUE dataset in Table 5.",
"SEAL-Shap outperforms Baseline-s once and other baselines in all cases.",
"Its highest performance improvement is gained on QNLI, where it outperforms others by 4% .",
"Cross-Domain Sentiment Analysis: Among the 13 target domains in the multi-domain sentiment analysis dataset, SEAL-Shap selects a small subset in 5 domains (full results in Appendix O).",
"As shown in Table 4, SEAL-Shap achieves higher accuracy than all other baselines by a large margin and, in 4 cases, even outperforms Cai and Wan (2019), which uses unlabeled target data.",
"Our experimental evidence shows that SEAL-Shap is an effective tool for choosing useful transfer sources and achieves higher transfer performance than other source-valuation approaches.",
"We evaluate the effectiveness of SEAL-Shap in building a straightforward ranker that directly computes source values without any evaluation target corpus (see Sec 2.4).",
"We use the ranker in Lin et al. (2019) as the underlying ranking model.",
"First, we show that the source values estimated by the ranker are as good as those of SEAL-Shap computed with an annotated target dataset.",
"We compare the transfer performance of the top-k sources based on the source values computed with and without the evaluation corpus.",
"Then, we show that the ranker trained with SEAL-Shap is more effective than training it with the existing single source based Baseline-s .",
"In cross-lingual POS tagging on UD Treebank, for each of the 31 target languages, we set aside that language and consider the remaining 30 languages as the training corpora.",
"We then train the ranker as described in Sec 2.4 and compute the source values using it.",
"For reference, we pass the evaluation target dataset and the 30 source languages to SEAL-Shap to compute their values on the evaluation dataset.",
"[Figure 5: Source values by TMC-Shap and ours; panels: (a) XNLI, target es, R < 10%; (b) mGLUE, target MNLI-mm, R = 10-20%; (c) SANCL'12, target wsj, R = 50%. TMC-Shap uses the unbalanced full source corpora, whereas SEAL-Shap achieves similar source values using balanced, sampled source corpora. Even with a small sampling rate (R), the source order is almost the same; a higher sampling rate typically gives a better approximation but a more expensive runtime. In general, for a reasonably large corpus, 20-30% samples (more than a few thousand examples) suffice for a reasonable approximation.]",
"With k = 3, we compare the transfer results of the top-k sources of these two methods in Fig 3. We also plot the results of the baseline ranker (Lin et al., 2019) trained with Baseline-s.",
"Results show that the ranker's source values are similar to those estimated by SEAL-Shap with an annotated evaluation dataset, and that it also outperforms the baseline.",
"In this section, we show that SEAL-Shap values provide a means to understand the usefulness of transfer sources in cross-lingual and cross-domain transfer.",
"We first analyze cross-lingual POS tagging.",
"Following Ahmad et al. (2019), we consider using language family and word-order distance as a reference distance metric.",
"We anticipate that languages in the same language family with smaller word-order distance from the target language are more valuable in multi-lingual transfer.",
"We plot the SEAL-Shap values of source languages evaluated on two target languages, English (en) and Hindi (hi), in Fig 4. On the x-axis, a common set of twenty source languages are grouped into ten language families and sorted by word-order distance from English.",
"As the figure illustrates, Germanic and Romance languages have higher Shapley values when using English as the target language.",
"The values gradually decrease for languages of other families as the word-order distance increases.",
"As for the target language Hindi, the trend is opposite, in general.",
"Analogously, in cross-domain NLI, we find that correlation between QNLI, and QQP is high whereas between MNLI-mm and QQP, it is lower (see Appendix Q).",
"SEAL-Shap on Similar Targets: Intuitively, if two target corpora are similar, the corresponding Shapley values of the source corpora when transferring to these two targets should be similar as well.",
"To verify, in Fig 6 we plot the Shapley values of twenty-nine source languages for the targets Russian and Serbian on cross-lingual POS tagging.",
"We also plot the source values when transferring an NLI model to English and French in Fig 7.",
"We observe that the corresponding curves are almost identical, and SEAL-Shap in fact selects the same set of potential source corpora.",
"These results suggest that if there is not sufficient data in the target corpus, it is possible to use a neighboring corpus as a proxy to compute SEAL-Shap values.",
"Source Values Influenced by Data Processing: Typically, the sources with the lowest or negative values are from domains/languages that differ from the targets (e.g., Fig 4).",
"However, in some cases, source usefulness (i.e., values) is affected by the data preparation process.",
"For example, in XNLI, the source corpora are prepared by machine translation from en (Conneau et al., 2018), and the quality of the translation into zh is generally better than into other languages.",
"Consequently, in Fig 7, zh has higher source value for both targets en and fr.",
"How good is the approximation?",
"In Fig 5, we compare SEAL-Shap with TMC-Shap (Ghorbani and Zou, 2019) on three datasets (details in Appendix F).",
"Overall, the Shapley values obtained by SEAL-Shap and TMC-Shap are highly correlated and their relative orders are matched, while SEAL-Shap is much more efficient.",
"Note that since the rankings are the same or similar, the model performance using the top-k sources is also the same or similar; therefore, we do not list the transfer performance separately.",
"Ablation Study: We examine the effectiveness of each proposed component in SEAL-Shap.",
"Results are shown in Table 6 and details are in Appendix F-H.",
"Results show that without the proposed approximations, TMC-Shap is computationally costly and impractical for analyzing the value of source corpora in the NLP transfer setting.",
"All the proposed components contribute to significantly speeding up the computation.",
"To further examine the approximation, we study whether SEAL-Shap is sensitive to the random seed, using the cross-lingual POS tagging task.",
"To analyze this, we first compute reference Shapley values by running SEAL-Shap until empirical convergence (blue line).",
"Then, we report the Shapley values produced by another random seed.",
"Fig 8 shows that with enough epochs, the values computed by different random seeds are highly correlated (more in Appendix H).",
"As discussed in Section 1, transfer learning has been extensively studied in NLP to improve model performance in low-resource domains and languages.",
"In the literature, various approaches have been proposed for various tasks, including text classification (Zhou et al., 2016; Kim et al., 2017), natural language inference (Lample et al., 2018; Artetxe and Schwenk, 2019), sequence tagging (Täckström et al., 2013; Agic et al., 2016; Kim et al., 2017; Ruder and Plank, 2017), and dependency parsing (Guo et al., 2015; Meng et al., 2019).",
"These prior studies mostly focus on bridging the domain gap between sources and targets.",
"In different contexts, methods including influence functions and Shapley values have been applied to value the contribution of training data (Koh and Liang, 2017; Lundberg et al., 2018; Jia et al., 2019a).",
"Specifically, Monte Carlo approximation of Shapley values has been used in various applications (Maleki, 2015; Jia et al., 2019a; Ghorbani and Zou, 2020; Tripathi et al., 2020; Tang et al., 2020; Sundararajan and Najmi, 2019).",
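The Monte Carlo approximation mentioned above can be illustrated with a minimal permutation-sampling sketch. This is a generic illustration, not the paper's SEAL-Shap implementation (which adds further approximations such as balancing and sampling); `utility` is a hypothetical black-box that trains a model on a subset of source corpora and returns target performance.

```python
import random

def monte_carlo_shapley(sources, utility, num_permutations=200, seed=0):
    """Estimate each source's Shapley value by averaging its marginal
    utility gain over randomly sampled permutations of the sources."""
    rng = random.Random(seed)
    values = {s: 0.0 for s in sources}
    for _ in range(num_permutations):
        perm = list(sources)
        rng.shuffle(perm)
        prev = utility(frozenset())      # utility of the empty coalition
        coalition = set()
        for s in perm:
            coalition.add(s)
            cur = utility(frozenset(coalition))
            values[s] += cur - prev      # marginal contribution of s
            prev = cur
    return {s: v / num_permutations for s, v in values.items()}
```

For an additive utility, the estimate recovers each source's individual contribution exactly; in practice, the repeated utility (model-training) calls dominate the runtime, which is what efficient approximations must target.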
"However, they are either task- or model-specific, or not scalable to NLP applications.",
"In a different vein, Kumar et al. (2020) discuss the problems of using Shapley values for model explanation.",
"In contrast, we apply efficient Shapley value approximation in NLP transfer learning and analyze the source-target relationships.",
"We propose SEAL-Shap to quantify the value of the source corpora in transfer learning for NLP by computing an approximate Shapley value for each corpus.",
"We show that SEAL-Shap can be used to select source corpora for transfer and provides insight into the value of source corpora.",
"In the future, we plan to further improve the runtime of our source valuation approach by limiting the repetition of model training.",
"We thank the anonymous reviewers for their insightful feedback.",
"We also thank UCLA-NLP for discussion and feedback.",
"This work was supported in part by NSF 1927554 and DARPA MCS program under Cooperative Agreement N66001-19-2-4032.",
"The views expressed are those of the authors and do not reflect the official policy or position of the Department of Defense or the U.S. Government."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"method",
"objective",
"method",
"method",
"other",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"other",
"other",
"other",
"abstain",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"objective",
"other",
"objective",
"other",
"objective",
"other",
"method",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"objective",
"method",
"method",
"method",
"objective",
"abstain",
"abstain",
"method",
"objective",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"objective",
"result",
"result",
"other",
"other",
"other",
"other"
] |
[
"Abstract The impression section of a radiology report summarizes the most prominent observation from the findings section and is the most important section for radiologists to communicate to physicians.",
"Summarizing findings is time-consuming and can be prone to error for inexperienced radiologists, and thus automatic impression generation has attracted substantial attention.",
"With the encoder-decoder framework, most previous studies explore incorporating extra knowledge (e.g., static pre-defined clinical ontologies or extra background information).",
"Yet, they encode such knowledge by a separate encoder to treat it as an extra input to their models, which is limited in leveraging their relations with the original findings.",
"To address the limitation, we propose a unified framework for exploiting both extra knowledge and the original findings in an integrated way so that the critical information (i.e., key words and their relations) can be extracted in an appropriate way to facilitate impression generation.",
"In detail, each input findings section is encoded by a text encoder, and a graph is constructed from its entities and dependency tree.",
"Then, a graph encoder (e.g., graph neural networks (GNNs)) is adopted to model relation information in the constructed graph.",
"Finally, to emphasize the key words in the findings, contrastive learning is introduced to map positive samples (constructed by masking non-key words) closer and push apart negative ones (constructed by masking key words).",
"The experimental results on OpenI and MIMIC-CXR confirm the effectiveness of our proposed method.",
"1 Introduction: Radiology reports document critical observations in a radiology study and play a vital role in communication between radiologists and physicians.",
"(* Equal Contribution.)",
"A radiology report usually consists of a findings section describing the details of medical observations and an impression section summarizing the most prominent observations.",
"The impression is the most critical part of a radiology report, but the process of summarizing findings is normally time-consuming and could be prone to errors for inexperienced radiologists.",
"Therefore, automatic impression generation (AIG) has drawn substantial attention in recent years, and there are many methods proposed in this area (Zhang et al., 2018; Gharebagh et al., 2020; MacAvaney et al., 2019; Shieh et al., 2019).",
"Most existing studies focus on incorporating extra knowledge on the general encoder-decoder framework.",
"For example, Zhang et al. (2018) utilized the background section in the radiology report through a separate encoder and then used it to guide the decoding process to enhance impression generation.",
"Similarly, MacAvaney et al. (2019) and Gharebagh et al. (2020) proposed to extract the ontology information from findings and used an encoder to encode such information to promote the decoding process.",
"Although these approaches have brought significant improvements, they only leverage extra knowledge and findings separately (i.e., through an extra encoder).",
"[Figure 2: The overall architecture of our proposed method with graph and contrastive learning. Example inputs and outputs at steps t-1 and t are shown; the top is the backbone sequence-to-sequence paradigm with a graph storing relation information between critical words, and the bottom is the contrastive learning module with specific positive and negative examples. m refers to a mask vector.]",
"Thus, their performance relies heavily on the quality of extra knowledge, and the further relationships between extra knowledge and findings are not explored.",
"In this paper, we propose a unified framework to exploit both findings and extra knowledge in an integrated way so that the critical information (i.e., key words and their relations in our paper) can be leveraged in an appropriate way.",
"In detail, for each input findings, we construct a word graph from the automatically extracted entities and the dependency tree, with node embeddings obtained from a text encoder.",
"Then, we model the relation information among key words through a graph encoder (e.g., graph neural networks (GNNs)).",
"Finally, contrastive learning is introduced to emphasize key words in findings, mapping positive samples (constructed by masking non-key words) closer and pushing apart negative ones (constructed by masking key words), as shown in Figure 1.",
"In such a way, key words and their relations are leveraged in an integrated way through the above two modules (i.e., contrastive learning and the graph encoder) to promote AIG.",
"Experimental results on two prevailing datasets (i.e., OpenI and MIMIC-CXR) show that our proposed approach achieves state-of-the-art results compared to existing studies.",
"We follow the standard sequence-to-sequence paradigm for AIG.",
"First, we utilize WordPiece (Wu et al., 2016) to tokenize the original findings and obtain the source input sequence X = x_1, x_2, ..., x_N, where N is the number of tokens in X.",
"The goal is to find a sequence Y = {y_1, ..., y_i, ..., y_L} that summarizes the most critical observations in the findings, where L is the length of the impression, y_i ∈ V are the generated tokens, and V is the vocabulary of all possible tokens.",
"The generation process can be formalized as p(Y|X) = ∏_{t=1}^{L} p(y_t | y_1, ..., y_{t-1}, X) (1). The model is then trained to maximize the conditional log-likelihood of Y given X: θ* = arg max_θ Σ_{t=1}^{L} log p(y_t | y_1, ..., y_{t-1}, X, A; θ) (2), where θ denotes the model parameters and A represents the edges in the relation graph.",
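As a numerical illustration of Eqs. (1)-(2), the sequence log-likelihood that training maximizes can be computed from the per-step next-token distributions; a toy sketch (not the actual training code), assuming the distributions are already available:

```python
import numpy as np

def sequence_log_likelihood(step_probs, target_ids):
    """sum_t log p(y_t | y_<t, X): step_probs is an (L, |V|) array whose
    row t is the model's distribution over the vocabulary at step t, and
    target_ids are the gold token indices y_1..y_L."""
    return float(sum(np.log(step_probs[t, y]) for t, y in enumerate(target_ids)))
```

Training maximizes this quantity over θ (equivalently, minimizes its negation as the cross-entropy loss).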
"An overview of our proposed method is presented in Figure 2.",
"Our model contains three main components, i.e., the graph-enhanced encoder, the contrastive learning module, and the decoder.",
"The details are described in the following sub-sections.",
"The impression usually describes critical abnormalities with more concise descriptions summarized from the corresponding findings and sometimes uses key phrases to express observations.",
"For example, a sentence in the findings, 'There is a left pleural effusion which is small in size.', is simplified to the key phrase 'Small left pleural effusion' in the impression, where the relation between small and effusion is vital for describing the corresponding observation.",
"Thus, the relation information in findings plays an essential role in accurate key phrase generation.",
"Four types of medical entities, anatomy, observation, anatomy modifier, and observation modifier, are recognized from the findings; they compose the majority of important medical knowledge in the impression (Hassanpour and Langlotz, 2016).",
"With WordPiece tokenization, we represent each entity by frequent subwords and connect any two subwords that are adjacent in the same entity, enhancing internal relations to keep the entity complete.",
"For example, the entity opacity is represented as op ##acity, and these two subwords are connected in both directions, from op to ##acity and from ##acity to op.",
"Besides, we need to consider the semantic relation between entities and other words, such as words used to describe the location and degree of symptoms, which is necessary for accurately recording abnormalities.",
"For example, in a text span bilateral small pleural effusions , relations in < bilateral , effusions >, < small , effusions > are also important to describe the observation effusions and they can be extracted from the dependency tree.",
"Therefore, we construct a dependency tree to extract the semantic relations between entities and other words, with the direction from their head words to themselves.",
"We also employ the WordPiece to split these words as subwords and connect all the source subwords to the corresponding target words with the original direction.",
"The constructed subword graph is then used to extract relation information, with edges represented by A .",
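The edge set A described above can be sketched as follows. `entity_subwords` and `dependency_pairs` are hypothetical pre-extracted inputs (the paper obtains entities and the dependency tree with stanza); the sketch assumes adjacent subwords of one entity are linked in both directions, while dependency edges run from head subwords to dependent subwords.

```python
def build_subword_graph(entity_subwords, dependency_pairs):
    """Build the directed edge set A over subwords.
    entity_subwords: list of subword lists, e.g. [["op", "##acity"]].
    dependency_pairs: (head_subwords, dependent_subwords) tuples."""
    edges = set()
    for subwords in entity_subwords:
        # bidirectional links between adjacent subwords of the same entity
        for a, b in zip(subwords, subwords[1:]):
            edges.add((a, b))
            edges.add((b, a))
    for head, dep in dependency_pairs:
        # directed links from every head subword to every dependent subword
        for h in head:
            for d in dep:
                edges.add((h, d))
    return edges
```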
"In recent years, pre-trained models have dominated not only general summarization tasks but also multi-modal tasks because of their strong ability in feature representation (Wu et al., 2021; Zhang et al., 2020a; Yuan et al., 2021, 2022).",
"Thus, in our method, we utilize the pre-trained model BioBERT (Lee et al., 2020) trained on a large biomedical corpus as our text encoder.",
"The hidden state h_i for each token x_i is generated by the text encoder: [h_1, h_2, ..., h_n] = f_te(x_1, x_2, ..., x_n) (3). Herein, f_te(·) refers to the pre-trained Transformer-based text encoder (i.e., BioBERT (Lee et al., 2020)), and h_i is a d-dimensional feature vector representing the corresponding token x_i.",
"[Algorithm 1: Generation of Examples. Input: s (graph-enhanced token representations), A (edges in the relation graph). Output: s^p (positive example), s^n (negative example). Initialization: s^p ← s, s^n ← s, m = [1e-6] ∈ R^d. 1: N, d = size(s); 2: V_key = Extract_subword_index(A); 3: for j = 0 to N: if j ∈ V_key then s^n_j ← m, else s^p_j ← m.]",
"Since GNNs are well known for extracting features from graph structure and have been shown promising in text generation tasks (Jia et al., 2020; Hu et al., 2021), we employ a GNN-based encoder to capture relation information from the corresponding subword graph.",
"This process can be formulated as z = f_ge(h, A) (4), where f_ge(·) is the graph encoder and z is the feature vector extracted from the graph.",
"Next, to incorporate the relation information into the token representations, we concatenate z and h and utilize a fully connected layer to reduce the result to the same dimension as z and h: s = MLP([h_1 ⊕ z_1, h_2 ⊕ z_2, ..., h_n ⊕ z_n]) (5), where s is the final token representation and ⊕ denotes concatenation.",
"Only relying on a GNN encoder to capture relation information still lacks the capability to fully grasp important word information from findings since the graph is pre-defined before training or testing.",
"Recently, contrastive learning has shown strong power in learning and distinguishing significant knowledge by concentrating positive samples and contrasting them with negative samples, and it has brought significant improvements in many tasks, such as improving the faithfulness of summarization and discriminating vital information to enhance representation (Cao and Wang, 2021; Zeng et al., 2021).",
"We expect our model to be more sensitive to critical words contained in findings.",
"For this purpose, we apply a contrastive learning module to concentrate positive pairs and push negative ones apart, which aims to help the model differentiate essential information from secondary information.",
"We regard tokens with edges in the relation graph as critical tokens since they contain important information for describing key observations, as discussed in Section 2.1.",
"To construct a positive example, we mask each non-key token representation in s with the constant vector m ∈ R^d, whose elements are all 1e-6, so that this instance consolidates the critical information and removes unimportant words.",
"Meanwhile, we utilize a similar way to mask important token representations in s as m to obtain a negative example s n .",
"The details of generating positive and negative examples are shown in Algorithm 1.",
"Note that in our model, we do not consider the other instances in the same mini-batch as negative examples, which differs from many existing approaches (Kim et al., 2021; Giorgi et al., 2020), since we aim to identify the critical content in X instead of expanding differences between various findings in one mini-batch.",
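Algorithm 1 amounts to applying two complementary masks to the token representations; a minimal NumPy sketch, assuming `key_indices` are the subword positions that carry edges in the relation graph:

```python
import numpy as np

def make_contrastive_examples(s, key_indices, mask_value=1e-6):
    """From token representations s (N x d), build the positive example
    by masking non-key tokens and the negative example by masking key
    tokens with a constant vector of mask_value."""
    s_pos, s_neg = s.copy(), s.copy()
    key = set(key_indices)
    for j in range(s.shape[0]):
        if j in key:
            s_neg[j] = mask_value   # negative: hide the critical tokens
        else:
            s_pos[j] = mask_value   # positive: keep only the critical tokens
    return s_pos, s_neg
```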
"In addition, radiology reports are not as diverse as ordinary texts, and they are mainly composed of fixed medical terms and some attributive words, where the former is used to record critical information and the latter is to keep sentences fluent and grammatically correct.",
"Afterward, we employ a randomly initialized Transformer-based encoder to model s, s^p, and s^n, respectively: b = f_ce(s) (6), b^p = f_ce(s^p) (7), b^n = f_ce(s^n) (8), where f_ce(·) represents the contrastive encoder.",
"b , b p and b n are intermediate states extracted from the encoder, which are also d -dimensional vectors.",
"Then, we calculate the cosine similarity sim(b_1, b_2) = (b_1 · b_2) / (||b_1|| ||b_2||) for the positive and negative pairs, denoted as sim(b, b^p) and sim(b, b^n).",
"We follow Robinson et al. (2020) to formulate the training objective of the contrastive module: l_con = -log [ e^{sim(b_i, b^p)/τ} / Σ_{j=1}^{N} ( e^{sim(b_i, b^p)/τ} + e^{sim(b_i, b^n)/τ} ) ] (9), where τ is a temperature hyperparameter.",
"[Table 1 (fragment): OpenI reports: 2400 train / 292 dev / 576 test.]",
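For illustration, the contrastive objective of Eq. (9) for a single anchor with one positive and one negative can be sketched as below; this collapses the sum over j to a single pair and is a simplified sketch, not the training code:

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def contrastive_loss(b, b_pos, b_neg, tau=1.0):
    """Pull the anchor b toward b_pos and away from b_neg; tau is the
    temperature. The loss is small when sim(b, b_pos) >> sim(b, b_neg)."""
    pos = np.exp(cosine(b, b_pos) / tau)
    neg = np.exp(cosine(b, b_neg) / tau)
    return float(-np.log(pos / (pos + neg)))
```

When the positive and negative are equally similar to the anchor, the loss is log 2; it decreases toward 0 as the positive dominates.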
"The decoder in our model is built upon a standard Transformer (Vaswani et al., 2017), where the representation s is used as the input of the decoder so as to improve the generation process.",
"In detail, s is sent to the decoder at each decoding step, jointly with the tokens generated in previous steps, and the current output y_t is computed as y_t = f_e(s_1, s_2, ..., s_n, y_1, ..., y_{t-1}) (10), where f_e(·) refers to the Transformer-based decoder; this process is repeated until the complete impression is obtained.",
"Besides, to effectively incorporate the critical word information into the decoding process, we sum the losses of the impression generation and contrastive objectives: L = l_ge + λ · l_con (11), where l_ge is the basic sequence-to-sequence loss and λ is the weight controlling the contrastive loss.",
"Our experiments are conducted on the following two datasets: OPENI (Demner-Fushman et al., 2016) and MIMIC-CXR (Johnson et al., 2019), where the former contains 3,268 reports collected by Indiana University and the latter is a larger dataset containing 124,577 reports.",
"Note that the number of reports we introduced is counted after pre-processing.",
"We follow (Hu et al., 2021; Zhang et al., 2018) to filter the reports, deleting reports in the following cases: (1) there is no findings or no impression section; (2) the findings have fewer than ten words, or the impression has fewer than two words.",
"[Table 2: Comparisons of baselines and our method on the OPENI and MIMIC-CXR datasets; R-1, R-2, and R-L refer to ROUGE-1, ROUGE-2, and ROUGE-L, and P, R, and F-1 represent precision, recall, and F1 score for FC. OPENI (R-1/R-2/R-L): BASE 62.74/53.32/62.86; BASE+CL 63.53/54.58/63.13; BASE+GRAPH 63.29/54.12/63.03; BASE+GRAPH+CL 64.97/55.59/64.45. MIMIC-CXR (R-1/R-2/R-L; FC P/R/F-1): BASE 47.92/32.43/45.83; 58.05/50.90/53.01. BASE+CL 48.15/33.25/46.24; 58.34/51.58/53.70. BASE+GRAPH 48.29/33.30/46.36; 57.80/51.70/53.50. BASE+GRAPH+CL 49.13/33.76/47.12; 58.85/52.33/54.52.]",
"For OPENI, we follow (Hu et al., 2021) to randomly divide it into train/validation/test set by 2400:292:576 in our experiments.",
"For MIMIC-CXR, we apply two types of splits, including an official split and a random split with a ratio of 8:1:1 similar to (Gharebagh et al., 2020).",
"We report the statistics of these two datasets in Table 1.",
"3.2 Baseline and Evaluation Metrics: To explore the performance of our method, we use the following as our main baselines: BASE (Liu and Lapata, 2019), a backbone sequence-to-sequence model, i.e., a pre-trained encoder and a randomly initialized Transformer-based decoder.",
"BASE + GRAPH and BASE +CL : these have the same architecture as BASE , where the former incorporates an extra graph encoder to enhance relation information, and the latter introduces a contrastive learning module to help the model distinguish critical words.",
"Besides, we also compare our method with existing studies, including extractive summarization methods, e.g., LEXRANK (Erkan and Radev, 2004) and TRANSFORMEREXT (Liu and Lapata, 2019), and abstractive models,",
"e.g., TRANSFORMERABS (Liu and Lapata, 2019), ONTOLOGYABS (Gharebagh et al., 2020), WGSUM (TRANS+GAT), and WGSUM (LSTM+GAT) (Hu et al., 2021).",
"Factual consistency (FC) is critical in radiology report generation (Liu et al., 2019; Chen et al., 2020).",
"Following Zhang et al. (2020c); Hu et al. (2021), we evaluate our model and three baselines by two types of metrics: summarization and FC metrics.",
"For summarization metrics, we report F 1 scores of ROUGE-1 (R-1), ROUGE-2 (R-2), and ROUGE-L (R-L).",
"Besides, for FC metrics, we utilize CheXbert (Smit et al., 2020) 2 to detect 14 observations related to diseases from reference impressions and generated impressions and then calculate the precision, recall, and F1 score between these two identified results.",
"In our experiments, we utilize biobert-base-cased-v1.1 3 as our text encoder and follow its default model settings: we use 12 layers of self-attention with 768-dimensional embeddings.",
"Besides, we employ stanza (Zhang et al., 2020d) to extract medical entities and the dependence tree, which is used to construct the graph and generate positive and negative examples.",
"Our method is implemented based on the code of BertSum (Liu and Lapata, 2019) 4 .",
"In addition, we use a 2-layer graph attention networks (GAT) (Velickovic et al., 2017) 5 with the hidden size of 768 as our graph encoder and a 6-layer Transformer with 768 hidden sizes and 2048 feed-forward filter sizes for the contrastive encoder.",
"The decoder is also a 6-layer Transformer with 768 dimensions, 8 attention heads, and 2048 feed-forward filter sizes.",
"Note that λ is set to 1 in all experiments, and more detailed hyperparameters are reported in Appendix A.1.",
"During the training, we use Adam (Kingma and Ba, 2014) to optimize the trainable parameters in our model.",
"[Footnotes: 2. FC is only applied to MIMIC-CXR since CheXbert is designed for MIMIC-CXR; we obtain it from https://github.com/stanfordmlgroup/CheXbert. 3. We obtain BioBERT from https://github.com/dmis-lab/biobert. 4. We obtain the code of BertSum from https://github.com/nlpyang/PreSumm. 5. Since a previous study (Hu et al., 2021) showed that GAT (Velickovic et al., 2017) is more effective for impression generation, we select GAT as our graph encoder.]",
"To explore the effectiveness of our proposed method, we conduct experiments on two benchmark datasets, with the results reported in Table 2, where BASE + GRAPH +CL represents our complete model.",
"We can obtain several observations from the results.",
"First, both BASE + GRAPH and BASE +CL achieve better results than BASE with respect to R-1, R-2, and R-L, which indicates that graph and contrastive learning can respectively promote impression generation.",
"Second, BASE + GRAPH +CL outperforms all baselines with significant improvement on two datasets, confirming the effectiveness of our proposed method in combining graph and contrastive learning.",
"This might be because graphs and contrastive learning provide valuable information from different aspects, the former mainly recording relation information and the latter bringing knowledge of critical words, so that an elaborate combination of the two brings further improvements.",
"Third, when comparing these two datasets, the performance gains from our full model over three baselines on OpenI are more prominent than that on MIMIC-CXR.",
"This is perhaps because, compared to MIMIC-CXR, the OpenI dataset is relatively small and has a shorter average word-based length, so it is easier for the graph to record relations and more accessible for contrastive learning to recognize key words by comparing positive and negative examples.",
"Fourth, we can find a similar trend on the FC metric on the MIMIC-CXR dataset, where a higher F1 score means that our complete model can generate more accurate impressions thanks to its more substantial power in key words discrimination and relationship information extraction.",
"In this subsection, we further compare our models with existing models on the aforementioned datasets, and the results are reported in Table 3.",
"There are several observations.",
"First, the comparison between our model and ONTOLOGYABS shows the effectiveness of our design in this task, where our model achieves better performance, though both of them enhance impression generation by incorporating crucial medical information.",
"This might be because, by comparing positive and negative examples for each findings section, our model is more sensitive to critical information and better at distinguishing essential from secondary information, contributing to more accurate and valuable information being embedded in the model.",
"Second, we can observe that our model outperforms all existing models in terms of R-1, R-2, and R-L.",
"On the one hand, effectively combining contrastive learning and the graph within the sequence-to-sequence model is a better way to improve feature extraction and thus robustly promote the decoding process.",
"On the other hand, the pre-trained model (i.e., BioBERT) used in our model is a more powerful feature extractor in modeling biomedical text than those existing studies, e.g., TRANSFORMERABS, ONTOLOGYABS, and PGN, which utilize randomly initialized encoders.",
"Third, compared to more complicated models, e.g., WGSUM, which utilizes Stanza to extract entities and constructs two extra graph encoders to extract features from a word graph, which are then regarded as background information and dynamic guiding information to enhance the decoding process, our model achieves better performance through a considerably more straightforward method.",
"We further conduct a human evaluation to understand the quality of the generated impression better and alleviate the limitation of the ROUGE metric.",
"One hundred generated impressions on MIMIC-CXR from BASE and BASE + GRAPH +CL, along with their corresponding reference impressions, are randomly selected for expert evaluation (Gharebagh et al., 2020).",
"We follow Hu et al. (2021) in utilizing four metrics: Key, Readability, Accuracy, and Completeness.",
"We invite three medical experts to score these generated impressions based on these four metrics, with the results shown in Figure 3.",
"On the one hand, compared to BASE, our model outperforms it on all four metrics, where 16%, 25%, 18%, and 8% of impressions from our model obtain higher quality than BASE.",
"On the other hand, comparing our model against reference impressions, our model obtains close results on key, accuracy, and completeness, with 86%, 78%, and 92% of our model outputs being at least as good as radiologists, while our model is less preferred for readability with a 10% gap.",
"The main reason might be that many of the words removed in positive examples serve to keep the sequence fluent, and our model tends to identify them as secondary information, leading to relatively worse results on the readability metric.",
"We conduct further analyses on Findings Length and Case Study.",
"Findings Length To test the effect of the word-based length of findings, we categorize the findings in the MIMIC-CXR test set into seven groups and present the R-1 score for each group in Figure 4.",
"We have the following observations.",
"First, as the findings become longer, the performance of BASE and our model tends to decrease, except for the second group, i.e., [25, 45], since shorter texts are easier for the encoder to capture valid features from, which is consistent with previous studies (Dai et al., 2019).",
"Second, our model outperforms BASE in all the groups, further illustrating the effectiveness of our model regardless of the findings length.",
"Third, we can observe a grey line with a downward trend from the incremental chart in the upper right corner of Figure 4, indicating that our model (i.e., BASE + GRAPH +CL) tends to gain better improvements over BASE on shorter findings than that on longer ones.",
"This is because longer findings usually contain relatively more secondary information such that it is more challenging for contrastive learning to distinguish critical knowledge.",
"Case study To further demonstrate how our approach with graph and contrastive learning helps the generation of findings, we perform qualitative analysis on two cases, and the results are shown in Figure 5, where different colors on the texts indicate different critical information.",
"Compared to BASE model, our model can generate more complete impressions which cover almost all the crucial abnormalities.",
"In contrast, the BASE model fails to identify all the key information, e.g., moderate cardiomegaly in the left example and small left pleural effusion in the right case. [Figure 5: Examples of the generated impressions from BASE and BASE + GRAPH +CL as well as reference impressions. The yellow nodes in the graph indicate that these words are contained in entities.]",
"Besides, our model can generate more accurate impressions with an appropriate word to represent possibility and a better modifier to describe the observation.",
"On the one hand, in Figure 5, suggestive of in the left example and may in the right example imply a type of uncertainty, meaning that doctors are unsure whether the abnormal observation exists when writing findings, so the corresponding word (i.e., likely) is used to convey this sensitive information.",
"On the other hand, in the left case, according to the phrase Frontal and lateral in its original findings, our model can generate the synonym bilateral to depict the symptom pleural effusions more specifically.",
"Recently, NLP technology has been broadly applied in the medical domain, including medical entity recognition (Liu et al., 2021b; Zhao et al., 2019), radiology report generation (Chen et al., 2021; Zhang et al., 2020b; Liu et al., 2021a), and AIG.",
"Impression generation can be regarded as a type of summarization task that has drawn substantial attention in recent years, and there are many studies for addressing general abstractive summarization (See et al., 2017; Li et al., 2020; You et al., 2019; Huang et al., 2020).",
"You et al. (2019) designed a novel focus-attention mechanism and saliency-selection network, equipped in the encoder and decoder to enhance summary generation.",
"Li et al. (2020) proposed an abstractive sentence summarization method guided by the key words, which utilized a dual-attention and a dual-copy mechanism to integrate the semantics of both original sequence and key words.",
"Many methods propose to introduce specific designs on the general summarization model to address radiology impression generation (Zhang et al., 2018; Gharebagh et al., 2020; MacAvaney et al., 2019; Hu et al., 2021; Abacha et al., 2021).",
"MacAvaney et al. (2019); Gharebagh et al. (2020) extracted the salient clinical ontology terms from findings and then incorporated them into the summarizer through a separate encoder for enhancing AIG.",
"Hu et al. (2021) further introduced pre-defined word graphs to record salient words as well as their internal relation and then employed two separate graph encoders to leverage graphs for guiding the decoding process.",
"Most of these approaches exploit separate encoders to encode predefined knowledge (e.g., ontology terms and word graph), which are then utilized to enhance impression generation.",
"However, they tend to over-rely on the quality of pre-extracted ontologies and word graphs and lack sensitivity to vital information of findings themselves.",
"Compared to these models, our method offers an alternative solution to robustly improve key information extraction with the help of both graphs and contrastive learning.",
"In this paper, we propose to combine graphs and contrastive learning to better incorporate valuable features for promoting impression generation.",
"Specifically, we utilize the graph encoder to extract relation information from the graph, constructed by medical entities and the dependence tree, for enhancing the representation from the pre-trained text encoder.",
"In addition, we employ contrastive learning to assist the model in distinguishing between critical and secondary information, simultaneously improving sensitivity to important word representation by comparing positive and negative examples.",
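The contrastive objective described above can be sketched generically. The paper's exact loss is not given in this excerpt, so the InfoNCE-style formulation, the temperature `tau`, and the name `info_nce` below are assumptions for illustration: the findings representation is pulled toward its positive example and pushed away from negatives.

```python
import numpy as np

def info_nce(anchor, positive, negatives, tau=0.1):
    """Generic InfoNCE-style contrastive loss (illustrative, not the paper's exact loss).

    anchor/positive: 1-D representation vectors; negatives: list of 1-D vectors.
    """
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    # Similarity of the anchor to the positive and to each negative, scaled by tau.
    logits = np.array([cos(anchor, positive)] + [cos(anchor, n) for n in negatives]) / tau
    logits -= logits.max()  # numerical stability
    # Cross-entropy with the positive in the first slot.
    return -np.log(np.exp(logits[0]) / np.exp(logits).sum())
```

A well-separated positive drives the loss toward zero, matching the intuition of rewarding sensitivity to critical content.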
"Furthermore, we conduct experiments on two benchmark datasets, and the results illustrate the effectiveness of our proposed method, where new state-of-the-art results are achieved.",
"This work is supported by Chinese Key-Area Research and Development Program of Guangdong Province (2020B0101350001), NSFC under the project The Essential Algorithms and Technologies for Standardized Analytics of Clinical Texts (12026610) and the Guangdong Provincial Key Laboratory of Big Data Computing, The Chinese University of Hong Kong, Shenzhen."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"other",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"other",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"other",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"objective",
"method",
"result",
"objective",
"other"
] |
[
"Although neural models have achieved competitive results in dialogue systems, they have shown limited ability in representing core semantics, such as ignoring important entities.",
"To this end, we exploit Abstract Meaning Representation (AMR) to help dialogue modeling.",
"Compared with the textual input, AMR explicitly provides core semantic knowledge and reduces data sparsity.",
"We develop an algorithm to construct dialogue-level AMR graphs from sentence-level AMRs and explore two ways to incorporate AMRs into dialogue systems.",
"Experimental results on both dialogue understanding and response generation tasks show the superiority of our model.",
"To our knowledge, we are the first to leverage a formal semantic representation into neural dialogue modeling.",
"Dialogue systems have received increasing research attention (Wen et al., 2015; Serban et al., 2017; Bao et al., 2020), with much recent work focusing on social chats (Ritter et al., 2011; Li et al., 2017) and task-oriented dialogues (Wen et al., 2017; Dinan et al., 2019).",
"There are two salient subtasks in dialogue modeling, namely dialogue understanding (Choi et al., 2018; Reddy et al., 2019; Yu et al., 2020) and response generation (Li et al., 2017; Budzianowski et al., 2018).",
"The former refers to understanding of semantic and discourse details in a dialogue history, and the latter concerns making a fluent, novel and coherent utterance.",
"The current state-of-the-art methods employ neural networks and end-to-end training (Sutskever et al., 2014; Bahdanau et al., 2015) for dialogue modeling.",
"For instance, sequence-to-sequence models have been used to encode a dialogue history before directly synthesizing the next utterance (Vinyals and Le, 2015; Wen et al., 2017; Bao et al., 2020). [Figure 1: A conversation from DailyDialog; some important contents are marked with squares. Dialogue History: SPEAKER-1: Recently, I've been obsessed with horror films. SPEAKER-2: Oh, how can you be infatuated with horror films? They're so scary. SPEAKER-1: Yeah, you are right. I used to not watch horror films, but after seeing Silence of the Lamb with Mike last month, I fell in love with them. SPEAKER-2: It's amazing. But if I were you, I wouldn't have the courage to watch the first one. SPEAKER-1: But it's really exciting. Ground-Truth: Maybe, but I would rather watch romance, science fiction, crime or even disaster movies instead of a horror picture. Transformer: Great. I'm looking forward to it. I just can't keep away from the food that I saw.]",
"Despite giving strong empirical results, neural models can suffer from spurious feature associations in their neural semantic representation (Poliak et al., 2018; Kaushik et al., 2020), which can lead to weak robustness, inducing irrelevant dialogue states (Xu and Sarikaya, 2014; Sharma et al., 2019; Rastogi et al., 2019) and generating unfaithful or irrelevant text (Maynez et al., 2020; Niu and Bansal, 2020).",
"As shown in Figure 1, the baseline Transformer model pays attention to the word lamb but ignores its surrounding context, which has important contents (marked with squares) that indicate its true meaning, thereby giving an irrelevant response that is related to food.",
"Intuitively, such issues can be alleviated by having a structural representation of semantic information, which treats entities as nodes and builds structural relations between nodes, making it easy to find the most salient context.",
"Explicit structures are also more interpretable compared to neural representation and have been shown useful for information extraction (Strubell et al., 2018; Sun et al., 2019; Li et al., 2020; Bai et al., 2021; Sachan et al., 2021), summarization (Liu et al., 2015; Hardy and Vlachos, 2018; Liao et al., 2018) and machine translation (Marcheggiani et al., 2018; Song et al., 2019a).",
"We explore AMR (Banarescu et al., 2013) as a semantic representation for dialogue histories in order to better represent conversations.",
"As shown in the central block of Figure 2, AMR is one type of sentential semantic representation, which models a sentence as a rooted directed acyclic graph, highlighting its main concepts (e.g., mistake) and semantic relations (e.g., ARG0), while abstracting away function words.",
"It can thus potentially offer core concepts and explicit structures needed for aggregating the main content in dialogue.",
"In addition, AMR can also be useful for reducing the negative influence of variances in surface forms with the same meaning, which adds to data sparsity.",
"Existing work on AMR parsing focuses on the sentence level.",
"However, as the left block of Figure 2 shows, the semantic structure of a dialogue history can consist of rich cross-utterance coreference links (marked with squares) and multiple speaker interactions.",
"To this end, we propose an algorithm to automatically derive dialogue-level AMRs from utterance-level AMRs, by adding cross-utterance links that indicate speakers, identical mentions and co-reference links.",
"One example is shown in the right block of Figure 2, where newly added edges are in color.",
"We consider two main approaches of making use of such dialogue-level AMR structures.",
"For the first method, we merge an AMR with tokens in its corresponding sentence via AMR-to-text alignments, before encoding the resulting structure using a graph Transformer (Zhu et al., 2019).",
"For the second method, we separately encode an AMR and its corresponding sentence, before leveraging both representations via feature fusion (Mangai et al., 2010) or dual attention (Cal-ixto et al., 2017).",
"We verify the effectiveness of the proposed framework on a dialogue relation extraction task (Yu et al., 2020) and a response generation task (Li et al., 2017).",
"Experimental results show that the proposed framework outperforms previous methods (Vaswani et al., 2017; Bao et al., 2020; Yu et al., 2020), achieving new state-of-the-art results on both benchmarks.",
"(For semantic roles such as ARG0, please refer to PropBank (Kingsbury and Palmer, 2002; Palmer et al., 2005) for more details.)",
"Deep analysis and human evaluation suggest that semantic information introduced by AMR can help our model to better understand long dialogues and improve the coherence of dialogue generation.",
"One more advantage is that AMR is helpful to enhance the robustness and has a potential to improve the interpretability of neural models.",
"To our knowledge, this is the first attempt to leverage the AMR semantic representation into neural networks for dialogue understanding and generation.",
"Our code is available at https://github.com/muyeby/AMR-Dialogue .",
"Figure 2 illustrates our method for constructing a dialogue-level AMR graph from multiple utterance-level AMRs.",
"Given a dialogue consisting of multiple utterances, we adopt a pretrained AMR parser (Cai and Lam, 2020) to obtain an AMR graph for each utterance.",
"For utterances containing multiple sentences, we parse them into multiple AMR graphs, and mark them belonging to the same utterance.",
"We construct each dialogue AMR graph by making connections between utterance AMRs.",
"In particular, we take three strategies according to speaker, identical concept and co-reference information.",
"Speaker We add a dummy node and connect it to all root nodes of utterance AMRs.",
"We add speaker tags ( e.g. , SPEAKER 1 and SPEAKER 2) to the edges to distinguish different speakers.",
"The dummy node ensures that all utterance AMRs are connected so that information can be exchanged during graph encoding.",
"Besides, it serves as the global root node to represent the whole dialogue.",
"Identical Concept There can be identical mentions in different utterances (e.g., possible in the first and the fourth utterances in Figure 2), resulting in repeated concept nodes across utterance AMRs.",
"We connect nodes corresponding to the same non-pronoun concepts by edges labeled with SAME.",
"This type of connection can further enhance cross-sentence information exchange.",
"(Compared with co-reference, identical concept relations can connect different words which share the same meaning, e.g., ⟨could, might⟩ and ⟨fear, afraid⟩.)",
"Co-reference Pronouns and other co-referring mentions are frequent in conversations (Grosz et al., 1995; Newman et al., 2008; Quan et al., 2019).",
"We conduct co-reference resolution on the dialogue text using an off-the-shelf model in order to identify concept nodes in utterance AMRs that refer to the same entity.",
"For example, in Figure 2, I in the first utterance and sir in the second utterance refer to the same entity, SPEAKER 1.",
"We add edges labeled with COREF between them, pointing from later nodes to earlier nodes (later and earlier refer to the temporal order of the ongoing conversation), to indicate their relation.",
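The three connection strategies above (speaker, identical concept, co-reference) can be sketched as follows. The input format `(speaker, root, nodes, edges)`, the helper name `build_dialogue_amr`, and the omission of pronoun filtering are simplifying assumptions, not the authors' implementation.

```python
def build_dialogue_amr(utterance_amrs, coref_pairs=()):
    """Merge utterance-level AMRs into one dialogue-level graph (sketch).

    utterance_amrs: list of (speaker, root, nodes, edges) per utterance, where
        nodes maps node-id -> concept and edges are (src, label, dst) triples.
    coref_pairs: (later_node, earlier_node) pairs from a coreference resolver.
    """
    nodes = {"dummy": "<root>"}   # global root representing the whole dialogue
    edges = []
    concept_seen = {}             # concept string -> first node-id carrying it

    for speaker, root, utt_nodes, utt_edges in utterance_amrs:
        nodes.update(utt_nodes)
        edges.extend(utt_edges)
        # Speaker: connect the dummy global root to every utterance root,
        # labeling the edge with the speaker tag.
        edges.append(("dummy", speaker, root))
        # Identical concept: link repeated concepts with SAME edges
        # (in practice pronoun concepts would be filtered out).
        for nid, concept in utt_nodes.items():
            if concept in concept_seen and concept_seen[concept] != nid:
                edges.append((nid, "SAME", concept_seen[concept]))
            else:
                concept_seen.setdefault(concept, nid)
    # Co-reference: edges from later mentions to earlier mentions.
    for later, earlier in coref_pairs:
        edges.append((later, "COREF", earlier))
    return nodes, edges
```

The dummy node keeps the merged graph connected, so information can flow between utterances during graph encoding.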
"We adopt a standard Transformer (Vaswani et al., 2017) for dialogue history encoding.",
"Typically, a Transformer encoder consists of L layers, taking as input a sequence of tokens (i.e., a dialogue history) S = {w_1, w_2, ..., w_N}, where w_i is the i-th token and N is the sequence length, and iteratively producing vectorized word representations {h^l_1, h^l_2, ..., h^l_N}, l ∈ [1, ..., L].",
"Overall, a Transformer encoder can be written as: H = SeqEncoder(emb(S)), (1) where H = {h^L_1, h^L_2, ..., h^L_N}, and emb denotes a function that maps a sequence of tokens into the corresponding embeddings.",
"Each Transformer layer consists of two sub-layers: a self-attention sub-layer and a position-wise feed forward network.",
"The former calculates a set of attention scores: α_ij = Attn(h_i, h_j), (2)",
"which are used to update the hidden state of w_i: h^l_i = Σ_{j=1}^N α_ij (W_V h^{l-1}_j), (3) where W_V is a parameter matrix.",
"The position-wise feed-forward (FFN) layer consists of two linear transformations: FFN(h) = W_2 ReLU(W_1 h + b_1) + b_2, (4) where W_1, W_2, b_1, b_2 are model parameters.",
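Equations 2-4 can be sketched in a few lines of numpy. The text leaves Attn abstract, so the standard scaled dot-product form is assumed here; the sketch is single-head and omits residual connections and layer normalization.

```python
import numpy as np

def self_attention_layer(H, WQ, WK, WV):
    """One single-head self-attention update (Equations 2-3, sketch).

    H: (N, d) hidden states h^{l-1}; returns (N, d) updated states h^l.
    """
    d = H.shape[1]
    # Equation 2, with scaled dot-product assumed for Attn:
    # alpha_ij = softmax_j((WQ h_i)^T (WK h_j) / sqrt(d))
    scores = (H @ WQ.T) @ (H @ WK.T).T / np.sqrt(d)
    alpha = np.exp(scores - scores.max(axis=1, keepdims=True))
    alpha /= alpha.sum(axis=1, keepdims=True)
    # Equation 3: h^l_i = sum_j alpha_ij (WV h^{l-1}_j)
    return alpha @ (H @ WV.T)

def ffn(H, W1, b1, W2, b2):
    # Equation 4: FFN(h) = W2 ReLU(W1 h + b1) + b2
    return np.maximum(H @ W1.T + b1, 0.0) @ W2.T + b2
```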
"We take the dialogue relation extraction task (Yu et al., 2020) as an example.",
"Given a dialogue history S and an argument (or entity) pair (a_1, a_2), the goal is to predict the corresponding relation type r ∈ R between a_1 and a_2.",
"We follow a previous dialogue relation extraction model (Chen et al., 2020) to feed the hidden states of a_1 and a_2 (denoted as h_{a_1}, h_{a_2}) into a classifier to obtain the probability of each relation type: P_rel = softmax(W_3 [h_{a_1}; h_{a_2}] + b_3), (5) where W_3 and b_3 are model parameters.",
"The model is trained by minimizing a cross-entropy loss over relation types, where θ denotes the set of model parameters.",
"In practice, we use BERT (Devlin et al., 2019) for calculating h a 1 and h a 2 , which can be regarded as pre-trained initialization of the Transformer encoder.",
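Equation 5 amounts to a softmax over a linear projection of the concatenated argument states. A minimal sketch, with the BERT encoder stubbed out and `relation_probs` an illustrative name:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())  # subtract max for numerical stability
    return e / e.sum()

def relation_probs(h_a1, h_a2, W3, b3):
    # Equation 5: P_rel = softmax(W3 [h_a1; h_a2] + b3)
    # [h_a1; h_a2] is the concatenation of the two argument hidden states.
    return softmax(W3 @ np.concatenate([h_a1, h_a2]) + b3)
```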
"Given a dialogue history S, we use a standard autoregressive Transformer decoder (Vaswani et al., 2017) to generate a response Y = {y_1, y_2, ..., y_|Y|}.",
"At time step t, the previous output word y_{t-1} is first transformed into a hidden state s_t by a self-attention layer, as in Equations 2 and 3.",
"Then an encoder-decoder attention mechanism is applied to obtain a context vector from the encoder output hidden states {h^L_1, h^L_2, ..., h^L_N}: α_ti = Attn(s_t, h^L_i), c_t = Σ_{i=1}^N α_ti h^L_i. (7) The obtained context vector c_t is then used to calculate the output probability distribution for the next word y_t over the target vocabulary: P_voc = softmax(W_4 c_t + b_4), (8) where W_4, b_4 are trainable model parameters.",
"The k-th value of P_voc is the conditional probability of the k-th word in the vocabulary given the dialogue.",
"Given a dialogue history-response pair {S, Y}, the model minimizes a cross-entropy loss: ℓ = -Σ_{t=1}^{|Y|} log P_voc(y_t | y_{t-1}, ..., y_1, S; θ), (9) where θ denotes all model parameters.",
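Equation 9 can be sketched directly, assuming the per-step distributions P_voc have already been produced by the decoder (the helper name `response_nll` is ours):

```python
import numpy as np

def response_nll(p_voc_steps, target_ids):
    """Equation 9: negative log-likelihood of the reference response.

    p_voc_steps: per-step vocabulary distributions P_voc (each sums to 1);
    target_ids: indices of the reference words y_1 ... y_|Y|.
    """
    # l = -sum_t log P_voc(y_t | y_<t, S)
    return -sum(np.log(p[t]) for p, t in zip(p_voc_steps, target_ids))
```

For example, two steps of a uniform distribution over a 4-word vocabulary give a loss of 2·log 4 regardless of the targets.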
"Our model takes a dialogue history S and the corresponding dialogue AMR as input.",
"(Similar to the encoder, the decoder also has multi-head attention, a position-wise feed-forward layer and residual connections, which we omit in the equations.)",
"Formally, an AMR is a directed acyclic graph G = ⟨V, E⟩, where V denotes a set of nodes (i.e., AMR concepts) and E denotes a set of labeled edges (i.e., AMR relations).",
"An edge can be further represented by a triple ⟨n_i, r_ij, n_j⟩, meaning that the edge goes from node n_i to n_j with label r_ij.",
"We consider two main ways of making use of dialogue-level AMRs.",
"The first method (Figure 3(a)) uses AMR semantic relations to enrich a textual representation of the dialogue history.",
"We project AMR nodes onto the corresponding tokens, extending Transformer by encoding semantic relations between words.",
"For the second approach, we separately encode an AMR and its sentence, and use either feature fusion (Figure 3(b)) or dual attention (Figure 3(c)) to incorporate their embeddings.",
"We adopt a Graph Transformer (Zhu et al., 2019) to encode an AMR graph, which extends the standard Transformer (Vaswani et al., 2017) for modeling structural input.",
"An L-layer graph Transformer takes a set of node embeddings {n_1, n_2, ..., n_M} and a set of edge embeddings {r_ij | i ∈ [1, ..., M], j ∈ [1, ..., M]} as input and iteratively produces more abstract node features {h^l_1, h^l_2, ..., h^l_M}, where l ∈ [1, ..., L].",
"The key difference between a graph Transformer and a standard Transformer is the graph attention layer.",
"Compared with self-attention layer (Equation 2), the graph attention layer explicitly considers graph edges when updating node hidden states.",
"For example, given an edge ⟨n_i, r_ij, n_j⟩, the attention score α_ij is calculated as: α_ij = exp(e_ij) / Σ_{m=1}^M exp(e_im), e_ij = (W_Q h^{l-1}_i)^T (W_K h^{l-1}_j + W_R r_ij) / √d, (10) where W_R is a transformation matrix, r_ij is the embedding of relation r_ij, d is the hidden state size, and {h^0_1, h^0_2, ..., h^0_M} = {n_1, n_2, ..., n_M}. (If there is no relation between n_i and n_j, r_ij = None.)",
"The hidden state of n_i is then updated as: h^l_i = Σ_{j=1}^M α_ij (W_V h^{l-1}_j + W_R r_ij), (11) where W_V is a parameter matrix.",
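Equations 10-11 can be sketched in numpy as one relation-aware graph attention layer. Storing the relation embeddings as a dense (M, M, dr) tensor and the chosen dimensions are illustrative simplifications.

```python
import numpy as np

def graph_attention_layer(H, R, WQ, WK, WV, WR):
    """Relation-aware graph attention (Equations 10-11, sketch).

    H: (M, d) node states h^{l-1}; R: (M, M, dr) relation embeddings r_ij;
    WQ, WK, WV: (d, d); WR: (d, dr). Returns (M, d) updated states h^l.
    """
    M, d = H.shape
    q = H @ WQ.T            # (M, d): WQ h_i
    k = H @ WK.T            # (M, d): WK h_j
    rel = R @ WR.T          # (M, M, d): WR r_ij for every pair (i, j)
    # Equation 10: e_ij = (WQ h_i)^T (WK h_j + WR r_ij) / sqrt(d)
    e = np.einsum('id,ijd->ij', q, k[None, :, :] + rel) / np.sqrt(d)
    alpha = np.exp(e - e.max(axis=1, keepdims=True))
    alpha /= alpha.sum(axis=1, keepdims=True)
    # Equation 11: h_i = sum_j alpha_ij (WV h_j + WR r_ij)
    v = H @ WV.T            # (M, d)
    return np.einsum('ij,ijd->id', alpha, v[None, :, :] + rel)
```

The relation embedding enters both the score (Equation 10) and the value update (Equation 11), which is the key difference from plain self-attention.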
"We first use the JAMR aligner (Flanigan et al., 2014) to obtain a node-to-word alignment, then use this alignment to project the AMR edges onto the text with the following rules:",
"where A is a one-to-K alignment (K ∈ [0, ..., N]).",
"In this way, we obtain a projected graph G′ = ⟨V′, E′⟩, where V′ represents the set of input words {w_1, w_2, ..., w_N} and E′ denotes a set of word-to-word semantic relations.",
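The projection rules themselves are elided in this excerpt. One plausible reading, in which each AMR edge ⟨n_i, r_ij, n_j⟩ is copied onto every pair of words aligned to its endpoints, can be sketched as follows (the helper name `project_edges` is ours):

```python
def project_edges(amr_edges, alignment):
    """Project AMR edges onto word-to-word edges via a one-to-K alignment (sketch).

    amr_edges: (src_node, label, dst_node) triples;
    alignment: node-id -> list of word indices (possibly empty, K may be 0).
    """
    word_edges = set()
    for src, label, dst in amr_edges:
        # Copy the relation onto every (source word, target word) pair.
        for wi in alignment.get(src, []):
            for wj in alignment.get(dst, []):
                word_edges.add((wi, label, wj))
    return word_edges
```

Unaligned nodes (K = 0) simply contribute no word-level edges under this reading.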
"Inspired by previous work on AMR graph modeling (Guo et al., 2019; Song et al., 2019b; Sun et al., 2019), we adopt a hierarchical encoder that stacks a sequence encoder and a graph encoder.",
"A sequence encoder (SeqEncoder) transforms a dialogue history into a set of hidden states: H_S = SeqEncoder(emb(S)).",
"H̃_S = GraphEncoder(H_S, emb(E′)). (15)",
"The word representations before and after refinement are then combined (as shown in Figure 3(b)):",
"H_F = LayerNorm(H_S + H̃_S).",
"where LayerNorm denotes the layer normalization (Ba et al., 2016).",
"We name the hierarchical encoder as Hier , which can be used for both dialogue understanding and dialogue response generation.",
"We consider integrating both text cues and AMR structure cues for dialogue understanding and response generation, using a dual-encoder network.",
"First, a sequence encoder is used to transform a dialogue history S into a text memory (denoted as H_S = {h^S_1, h^S_2, ..., h^S_N}) using Equation 1.",
"Second, the AMR graph G is encoded into a graph memory (denoted as H_G = {h^G_1, h^G_2, ..., h^G_M}) by a graph Transformer encoder using Equation 12.",
"For dialogue understanding (Figure 3(b)) and dialogue response generation (Figure 3(c)), slightly different methods of feature integration are used due to the different nature of their outputs.",
"Dialogue Understanding .",
"Similar to Section 4.2, we first use the JAMR aligner to obtain a node-to-word alignment A .",
"Then we fuse the word and AMR node representations as follows: h_i = f(h^S_i, h^G_j) if there exists j such that A(n_j) = w_i, and h_i = f(h^S_i, h̄) otherwise, (17) where h̄ is the vector representation of the dummy node (see Figure 2), and f is defined as f(h_1, h_2) = LayerNorm(h_1 + h_2).",
"The fused representations h_i are then fed into the classifier for relation prediction (Equation 5).",
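Equation 17 and the fusion function f can be sketched as follows. Representing the alignment as a word-to-node dictionary and reducing LayerNorm to a plain per-vector normalization (no learned gain/bias) are simplifications.

```python
import numpy as np

def layer_norm(h, eps=1e-5):
    # Parameter-free layer normalization over one vector (simplified).
    return (h - h.mean()) / np.sqrt(h.var() + eps)

def fuse(H_text, H_graph, alignment, dummy_idx=0):
    """Equation 17 (sketch): fuse word states with aligned AMR node states.

    H_text: (N, d) word states h^S; H_graph: (M, d) node states h^G;
    alignment: word index -> aligned node index, or absent if unaligned;
    dummy_idx: index of the dummy node's state, used for unaligned words.
    """
    fused = []
    for i, h_s in enumerate(H_text):
        j = alignment.get(i)
        h_g = H_graph[j] if j is not None else H_graph[dummy_idx]
        fused.append(layer_norm(h_s + h_g))   # f(h1, h2) = LayerNorm(h1 + h2)
    return np.stack(fused)
```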
"Dialogue Response Generation .",
"We replace the standard encoder-decoder attention (Equation 7) with a dual-attention mechanism (Song et al., 2019a).",
"In particular, given a decoder hidden state s_t at time step t, the dual-attention mechanism calculates a text context vector c^S_t and a graph context vector c^G_t simultaneously: α^S_ti = Attn(s_t, h^S_i), α^G_tj = Attn(s_t, h^G_j), c^S_t = Σ_{i=1}^N α^S_ti h^S_i, c^G_t = Σ_{j=1}^M α^G_tj h^G_j. (19)",
"[Table 1: Performance on DialogRE, reported as F1 (F1c) with standard deviations over 5 runs in parentheses; results for AGGCN, LSR and DHGAT are those reported by Chen et al. (2020). data-v1 dev/test and data-v2 dev/test: AGGCN 46.6, 40.5 / 46.2, 39.5 (v1 only); LSR 44.5, - / 44.4, - (v1 only); DHGAT 57.7, 52.7 / 56.1, 50.7 (v1 only); BERT 60.6(1.2), 55.4(0.9) / 58.5(2.0), 53.2(1.6) and 59.4(0.7), 54.7(0.8) / 57.9(1.0), 53.1(0.7); BERTs 63.0(1.5), 57.3(1.2) / 61.2(0.9), 55.4(0.9) and 62.2(1.3), 57.0(1.0) / 59.5(2.1), 54.2(1.4); BERTc 66.8(0.9), 60.9(1.0) / 66.1(1.1), 60.2(0.8) and 66.2(0.9), 60.5(1.1) / 65.1(0.8), 59.8(1.2); Hier 68.2(0.8), 62.2(0.7) / 67.0(0.9), 61.3(0.6) and 68.0(0.6), 62.2(0.4) / 66.7(0.3), 61.0(0.4); Dual 68.3(0.6), 62.2(0.2) / 67.3(0.4), 61.4(0.2) and 68.2(0.5), 62.3(0.4) / 67.1(0.4), 61.1(0.5).]",
"The final context vector c_t is calculated as: c_t = W_c [c^S_t; c^G_t] + b_c, (20) where W_c and b_c are model parameters.",
"We name the dual-encoder model as Dual .",
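Equations 19-20 can be sketched with an unnormalized dot-product standing in for Attn (an assumption; the text leaves Attn abstract): the decoder state attends over the text memory and the graph memory separately, and a learned linear layer merges the two context vectors.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def dual_attention(s_t, H_text, H_graph, Wc, bc):
    """Dual attention (Equations 19-20, sketch).

    s_t: (d,) decoder state; H_text: (N, d) text memory h^S;
    H_graph: (M, d) graph memory h^G; Wc: (d, 2d); bc: (d,).
    """
    a_text = softmax(H_text @ s_t)    # attention over the text memory
    a_graph = softmax(H_graph @ s_t)  # attention over the graph memory
    c_text = a_text @ H_text          # c^S_t
    c_graph = a_graph @ H_graph       # c^G_t
    # Equation 20: c_t = Wc [c^S_t; c^G_t] + bc
    return Wc @ np.concatenate([c_text, c_graph]) + bc
```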
"We evaluate our model on DialogRE (Yu et al., 2020), which contains 1,788 dialogues, 10,168 relational triples and 36 relation types in total.",
"On average, a dialogue in DialogRE contains 4.5 relational triples and 12.9 turns.",
"We report experimental results on both original (v1) and updated (v2) English version.",
"Settings We adopt the same input format and hyper-parameter settings as Yu et al. (2020) for the proposed model and baselines.",
"In particular, the input sequence is constructed as [CLS] d [SEP] a 1 [SEP] a 2 [SEP] , where d denotes the dialogue, and a 1 and a 2 are the two associated arguments.",
"In the BERT model of Yu et al. (2020), only the hidden state of the [CLS] token is fed into a classifier for prediction, while our baseline (BERT c ) additionally takes the hidden states of a 1 and a 2 .",
"All hyperparameters are selected by prediction accuracy on the validation set (see Table 6 for details).",
"Metrics Following previous work on DialogRE, we report macro F 1 score on relations in both the standard (F 1 ) and conversational settings (F 1 c ; Yu et al., 2020).",
"F 1 c is computed over the first few turns of a dialogue where two arguments are first mentioned.",
"The DialogRE dataset is available at https://dataset.org/dialogre/.",
"Main Results Table 1 shows the results of different systems on DialogRE.",
"We compare the proposed model with two BERT-based approches, BERT and BERT s .",
"Based on BERT, BERT s (Yu et al., 2020) highlights speaker information by replacing speaker arguments with special tokens.",
"For completeness, we also include recent methods, such as AGGCN (Guo et al., 2019), LSR (Nan et al., 2020) and DHGAT (Chen et al., 2020).",
"BERT c and Hier , Dual represent our baseline and the proposed models, respectively.",
"By incorporating speaker information, BERT s gives the best performance among the previous systems.",
"Our BERT c baseline outperforms BERT s by a large margin, as BERT c additionally considers argument representations for classification.",
"Hier significantly (p < 0.01, pair-wise t-test) outperforms BERT c in all settings, with 1.4 points of improvement in terms of F1 score on average.",
"A similar trend is observed under F1 c .",
"This shows that semantic information in AMR is beneficial to dialogue relation extraction, since AMR highlights core entities and semantic relations between them.",
"Dual obtains slightly better results than Hier, which shows the effect of separately encoding the semantic structure.",
"Finally, the standard deviation values of both Dual and Hier are lower than the baselines.",
"This indicates that our approaches are more robust regarding model initialization.",
"We split the dialogues of the DialogRE (v2) devset into five groups by the utterance-based distance between two arguments.",
"As shown in Figure 4, Dual gives better results than BERT c except when the argument distance is less than 5.",
"In particular, Dual surpasses BERT c by a large margin when the arguments distance is greater than 20.",
"The comparison indicates that AMR can help a model to better handle long-term dependencies by improving the entity recall.",
"In addition to utterance distance, we also consider word distance and observe a similar trend (as shown in Appendix 7).",
"Figure 5 shows a conversation between a manager and an employee who might have taken a leave.",
"The baseline model incorrectly predicts that the relation between two interlocutors is parent and child.",
"It might be influenced by the last sentence in the conversation, assuming that it is a dialogue between family members.",
"However, the proposed model successfully predicts the interlocutors' relation, suggesting that it can extract global semantic information from the dialogue from a comprehensive perspective.",
"We conduct experiments on the DailyDialog benchmark (Li et al., 2017), which contains 13,119 daily multi-turn conversations.",
"On average, the number of turns for each dialogue is 7.9, and each utterance has 14.6 tokens.",
"We take Transformer as a baseline.",
"Our hyperparameters are selected by word prediction accuracy on validation dataset.",
"The detailed hyperparameters are given in Appendix (See Table 6).",
"Metric We set the decoding beam size as 5 and adopt BLEU-1/2/3/4 (Papineni et al., 2002) and Distinct-1/2 (Li et al., 2016) as automatic evaluation metrics.",
"The former measures the n-gram overlap between generated response and Dialogue : SPEAKER-1 : A new place for a new Ross.",
"I'm gonna have you and all the guys from work over once it's y'know, furnished.",
"SPEAKER-2 : I must say it's nice to see you back on your feet.",
"SPEAKER-1 : Well I am that.",
"And that whole rage thing is definitely behind me.",
"SPEAKER-2 : I wonder if its time for you to rejoin our team at the museum?",
"SPEAKER-1 : Oh Donald that-that would be great.",
"I am totally ready to come back to work.",
"IWhat?",
"No!",
"Wh-What are you doing?!! GET OFF MY SISTER!!!!!!!!!!!!! Ground-Truth : per:boss(S1, S2) Baseline : per:parent(S1, S2) Ours : per:boss(S1, S2) Figure 5: Case study for dialogue relation extraction.",
"the target response while the latter assesses the generation diversity, which is defined as the number of distinct unior bi-grams divided by the total amount of generated words.",
"In addition, we also conduct human evaluation.",
"Following Bao et al. (2020), we ask annotators who study linguistics to evaluate model outputs from four aspects, which are fluency, coherence, informativeness and overall performance.",
"The scores are in a scale of { 0, 1, 2 } .",
"The higher, the better.",
"Table 2 reports the performances of the previous state-of-the-art methods and proposed models on the DailyDialog testset.",
"For the previous methods, PLATO and PLATO w/o L are both Transformer models pre-trained on large-scale conversational data (8.3 million samples) and finetuned on DailyDialog.",
"For completeness, we also report other systems including Seq2Seq (Vinyals and Le, 2015) and iVAE MI (Fang et al., 2019).",
"Among the previous systems, PLATO and PLATO w/o L report the best performances.",
"Our Transformer baseline is highly competitive in terms of BLEU and Distinct scores.",
"Compared with the Transformer baseline, both Dual and Hier show better numbers regarding BLEU and Distinct, and the gains of both models are significant ( p < 0 . 01 ).",
"This indicates that semantic information in AMR graphs is useful for dialogue response generation.",
"In particular, the gains come from better recall of the important entities and their relations in a dialogue history, which can leads to generating a more detailed response.",
"We conduct human evaluation on randomly selected 50 dialogues and corresponding generated responses of the baseline and our models.",
"As shown in Table 3, the Transformer baseline gives the lowest scores, while Dual sees the highest scores from all aspects.",
"Our main advantage is on the Coherence , meaning that AMRs are effective on recalling important concepts and relations.",
"As the result, it makes it easier for our models to generate coherent replies.",
"Examples are shown in Figure 8 in Appendix.",
"Comparatively, all systems achieve high scores regarding Fluency , suggesting that this aspect is not the current bottleneck for response generation.",
"This section contains analysis concerning the effects of graph features, dialogue length and model robustness.",
"We use Dual model for experiments since it gives slightly better results than Hier .",
"Table 4 shows the results of our best performing models on the two datasets regarding different con-figurations on the dialogue AMR graphs.",
"We report the average F1 score for DialogRE and the BLEU-1/Distinct-1 score for DailyDialog.",
"First, using utterance-level AMR improves the text baseline by 1.2 points and 1.5 points with regard to F1 and Setting DialogRE (v2) DailyDialog Dialog-AMR( Dual ) 68.2 38.2/5.9 -Speaker 67.5 37.7/5.7 -Ident.",
"BLEU-1 scores, respectively.",
"This indicates that the semantic knowledge in formal AMR is helpful for dialogue modeling.",
"Second, our manually added relations (in Section 2) also leads to improvements, ranging from 0.5 to 1.0 in BLEU-1 score.",
"The speaker relation is the most important for dialogue relation extraction, a possible reason is that DialogRE dataset mainly focus on person entities.",
"Also, co-reference relations help the most in dialogue response generation.",
"The identical concept relations give least improvements among three relations.",
"Finally, combining all relations to build a Dialog-AMR graph achieves best performance on both datasets.",
"We group the devset of DialogRE (v2) and DailyDialog into five groups according to the number of utterances in a dialogue.",
"Figure 6 summarizes the performance of the baseline and the proposed model on dialogue understanding (DU) and response generation (RG) tasks.",
"In dialogue understanding, our model gives slightly better F1 scores than the baseline when a dialogue has smaller than 12 utterance.",
"The performance improvement is more significant when modeling a long dialogue.",
"This confirms our motivation that AMR can help to understand long dialogues.",
"In dialogue response generation, our model consistently outperforms the Transformer baseline by a large margin on Model Original Paraphrased Baseline 100 94.50 Ours 100 98.50 Table 5: F1 on original and paraphrased testsets.",
"dialogues of different lengths, still with more improvements on larger dialogues.",
"Overall, these results are consistent with Table 1 and 2, showing that AMR can provide useful semantic information and alleviate the issue of long-range dependency.",
"Recent studies show that neural network-based dialog models lack robustness (Shalyminov and Lee, 2018; Einolghozati et al., 2019).",
"We select 100 instances from the testset of DialogRE (v2) where both baseline and our model gives true prediction, before paraphrasing the source dialogues manually (see appendix B.3 for paraphrasing guidelines.).",
"Results on the paraphrased dataset are given in Table 5.",
"The performance of baseline model drop from 100 to 94.5 on paraphrased dataset.",
"By contrast, the result of our model reaches 98.5, 4 points higher than baseline.",
"This confirms our assumption that AMR can reduce data sparsity, thus improve the robustness of neural models.",
"Semantic Parsing for Dialogue Some previous work builds domain-specified semantic schema for task-oriented dialogues.",
"For example, in the PEGASUS (Zue et al., 1994) system, a sentence is first transformed into a semantic frame and then used for travel planing.",
"Wirsching et al. (2012) use semantic features to help a dialogue system perform certain database operations.",
"Gupta et al. (2018) represent task-oriented conversations as semantic trees where intents and slots are tree nodes.",
"They solve intent classification and slot-filling task via semantic parsing.",
"Cheng et al. (2020) design a rooted semantic graph that integrates domains, verbs, operators and slots in order to perform dialogue state tracking.",
"All these structures are designed for specified task only.",
"In contrast, we investigate a general semantic representation for the modeling of everyday conversations.",
"Constructing AMRs beyond Sentence Level There are a few attempts to construct AMRs beyond the sentence level.",
"Liu et al. (2015) construct document-level AMRs by merging identical concepts of sentence-level AMRs for abstractive summerization, and Liao et al. (2018) further extend this approach to multi-document summer-ization.",
"O'Gorman et al. (2018) manually annotate co-reference information across sentence AMRs.",
"We focus on creating conversation-level AMRs to facilitate information exchange more effectively for dialogue modeling.",
"Bonial et al. (2020) adapt AMRs on dialogues by enriching the standard AMR schema with dialogue acts, tense and aspect, and they construct a dataset consisting of 340 dialogue AMRs.",
"However, they propose theoretical changes in the schema for annotating AMRs, while we explore empirical solutions that leverage existing AMRs of the standard schema on dialogues.",
"AMR Parsing and Encoding Our work is also related to AMR parsing (Flanigan et al., 2014; Konstas et al., 2017a; Lyu and Titov, 2018; Guo and Lu, 2018; Zhang et al., 2019; Cai and Lam, 2020) and AMR encoding (Konstas et al., 2017b; Song et al., 2018; Zhu et al., 2019; Song et al., 2020; Zhao et al., 2020; Bai et al., 2020).",
"The former task makes it possible to use automatically-generated AMRs for downstream applications, while the latter helps to effectively exploit structural information in AMRs.",
"In this work, we investigate AMRs for dialogue representation and combine AMRs with text for dialogue modeling.",
"We investigated the feasibility of using AMRs for dialogue modeling, describing an algorithm to construct dialogue-level AMRs automatically and exploiting two ways to incorporate AMRs into neural dialogue systems.",
"Experiments on two benchmarks show advantages of using AMR semantic representations model on both dialogue understanding and dialogue response generation.",
"Yue Zhang is the corresponding author.",
"We would like to thank the anonymous reviewers for their insightful comments and Jinhao Jiang for his help for data preparation.",
"This work has been supported by Tencent AI Lab Rhino-Bird Focused Research Program.",
"It also receives support from the Westlake University and Bright Dream Joint Institute for Intelligent Robotics, and a research grant from Rxhui Inc."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"objective",
"other",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"method",
"method",
"abstain",
"other",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"method",
"other",
"objective",
"method",
"other",
"objective",
"objective",
"abstain",
"other",
"other",
"other",
"other"
] |
[
"Summarization of clinical narratives is a longstanding research problem.",
"Here, we introduce the task of hospital-course summarization.",
"Given the documentation authored throughout a patient's hospitalization, generate a paragraph that tells the story of the patient admission.",
"We construct an English, text-to-text dataset of 109,000 hospitalizations (2M source notes) and their corresponding summary proxy: the clinician-authored Brief Hospital Course paragraph written as part of a discharge note.",
"Exploratory analyses reveal that the BHC paragraphs are highly abstractive with some long extracted fragments; are concise yet comprehensive; differ in style and content organization from the source notes; exhibit minimal lexical cohesion; and represent silver-standard references.",
"Our analysis iden-tifies multiple implications for modeling this complex, multi-document summarization task.",
"The electronic health record (EHR) contains critical information for clinicians to assess a patient's medical history (e.g., conditions, laboratory tests, procedures, treatments) and healthcare interactions (e.g., primary care and specialist visits, emergency department visits, and hospitalizations).",
"While medications, labs, and diagnoses are documented through structured data elements and flowsheets, clinical notes contain rich narratives describing the patient's medical condition and interventions.",
"A single hospital visit for a patient with a lengthy hospital stay, or complex illness, can consist of hundreds of notes.",
"At the point of care, clinicians already pressed for time, face a steep challenge of making sense of their patient's documentation and synthesizing it either for their own decision making process or to ensure coordination of care (Hall and Walton, 2004; Ash et al., 2004).",
"making sense of a patient's longitudinal record over long periods of time and multiple interactions with the healthcare system, to synthesizing a specific visit's documentation.",
"Here, we focus on hospital-course summarization : faithfully and concisely summarizing the EHR documentation for a patient's specific inpatient visit, from admission to discharge.",
"Crucial for continuity of care and patient safety after discharge (Kripalani et al., 2007; Van Walraven et al., 2002), hospital-course summarization also represents an incredibly challenging multi-document summarization task with diverse knowledge requirements.",
"To properly synthesize an admission, one must not only identify relevant problems, but link them to symptoms, procedures, medications, and observations while adhering to temporal, problem-specific constraints.",
"Our main contributions are as follows: (1) We introduce the task of hospital-course summarization; (2) we collect a dataset of inpatient documentation and corresponding \"Brief Hospital Course\" paragraphs extracted from discharge notes; and (3) we assess the characteristics of these summary paragraphs as a proxy for target summaries and discuss implications for the design and evaluation of a hospital-course summarization tool.",
"Summarization of clinical data and documentation has been explored in a variety of use cases (Pivovarov and Elhadad, 2015).",
"For longitudinal records, graphical representations of structured EHR data elements (i.e., diagnosis codes, laboratory test measurements, and medications) have been proposed (Powsner and Tufte, 1997; Plaisant et al., 1996).",
"Interactive visualizations of clinical problems' salience, whether extracted from notes (Hirsch et al., 2015) or inferred from clinical documentation (Levy-Fix et al., 2020) have shown promise (Pivovarov et al., 2016; Levy-Fix, 2020).",
"Most work in this area, however, has focused on clinical documentation of a fine temporal resolution.",
"Traditional text generation techniques have been proposed to synthesize structured data like ICU physiological data streams (Hunter et al., 2008; Goldstein and Shahar, 2016).",
"Liu (2018) use a transformer model to write EHR notes from the prior 24 hours, while Liang et al. (2019) perform disease-specific summarization from individual progress notes.",
"McInerney et al. (2020) develop a distant supervision approach to generate extractive summaries to aid radiologists when interpreting images.",
"Zhang et al. (2018, 2020); MacAvaney et al. (2019); Sotudeh Gharebagh et al. (2020) generate the Impression section of the Radiology report from the more detailed Findings section.",
"Finally, several recent works aim to generate EHR notes from doctor-patient conversations (Krishna et al., 2020; Joshi et al., 2020; Research, 2020).",
"Recent work on summarizing hospital admissions focuses on extractive methods (Moen et al., 2014, 2016; Liu et al., 2018b; Alsentzer and Kim, 2018).",
"Given the clinical documentation available for a patient hospitalization, our task of interest is to generate a text that synthesizes the hospital course in a faithful and concise fashion.",
"For our analysis, we rely on the Brief Hospital Course (BHC), a mandatory section of the discharge note, as a proxy reference.",
"The BHC tells the story of the patient's admission: what was done to the patient during the hospital admission and why , as well as the follow up steps needed to occur post discharge, whenever needed.",
"Nevertheless, it is recognized as a challenging and time consuming task for clinicians to write (Dodd, 2007; UC Irvine Residency, 2020).",
"To carry out our analysis, we construct a large-scale, multi-document summarization dataset, CLINSUM",
"Materials come from all hospitalizations between 2010 and 2014 at Columbia University Irving Medical Center.",
"Table 1 shows summary statistics for the corpus.",
"There are a wide range of reasons for hospitalizations, from life-threatening situations (e.g., heart attack) to when management of a specific problem cannot be carried out effectively outside of the hospital (e.g., uncontrolled diabetes).",
"This contributes to the high variance in documentation.",
"For reference, Table 7 provides a comparison of basic statistics to widely used summarization Variable Value STD Global # Patients 68,936 N/A # Admissions 109,726 # Source Notes 2,054,828 PerAdm.",
"datasets.",
"Relatively speaking, CLINSUM is remarkable for having a very high compression ratio despite having long reference summaries.",
"Additionally, it appears highly extractive with respect to fragment density (we qualify this in Section 4.1).",
"Based on advice from clinicians, we rely on the following subset of note types as source documents: Admission, Progress, and Consult notes.",
"The dataset does not contain any structured data, documentation from past encounters, or other note types (e.g., nursing notes, social work, radiology reports) (Reichert et al., 2010).",
"Please refer to Appendix A for more details and rationale.",
"Entity Extraction & Linking.",
"We use the Med-CAT toolkit (Kraljevic et al., 2020) to extract medical entity mentions and normalize to concepts from the UMLS (Unified Medical Language System) terminology (Bodenreider, 2004).",
"To exclude less relevant entities, we only keep entities from the Disorders, Chemicals & Drugs, and Procedures semantic groups, or the Lab Results semantic type.",
"Local Coherence.",
"We examine inter-sentential coherence in two ways.",
"Next-Sentence Prediction (NSP) .",
"Since we compare across a few datasets representing different domains, we use domain-specific pre-trained BERT models via HuggingFace (Wolf et al., 2019): bert-base-cased for CNN/DM and Arxiv, monologg/biobert_v1.1_pubmed for Pubmed, and emilyalsentzer/Bio_ClinicalBERT for CLINSUM .",
"Entity-grids.",
"Entity-grids model local coherence by considering the distribution of discourse entities (Barzilay and Lapata, 2005).",
"An entity grid is a 2-D representation of a text whose entries represent the presence or absence of a discourse entity in a sentence.",
"For our analyses, we treat UMLS concepts as entities and train a neural model, similar to Tien Nguyen and Joty (2017); Joty et al. (2018), which learns to rank the entity grid of a text more highly than the same entity grid whose rows (sentences) have been randomly shuffled.",
"Please see Appendix B for more details.",
"Lexical Overlap Metric.",
"We use ROUGE-1 (R1) & ROUGE-2 (R2) F-1 (Lin, 2004) to measure lexical overlap, while ignoring higher order variants based on analysis from other work (Kr-ishna et al., 2021).",
"We denote the average of R1 & R2 scores as R 12 .",
"Extractive Summarization Baselines.",
"We rely on a diverse set of sentence extraction methods, whose performance on a held-out portion of CLINSUM is reported in Table 2.",
"Oracle models have access to the ground-truth reference and represent upper bounds for extraction.",
"Here, we define the sentence selection criteria for each oracle variant, leaving more in-depth discussion to the subsequent analysis.",
"ORACLETOP-K : Take sentences with highest R 12 vis-a-vis the reference until a target token count is reached; ORACLEGAIN : Greedily take source sentence with highest relative R 12 gain conditioned on existing summary 1 .",
"Extract sentences until the change in R 12 is negative; ORACLESENT-ALIGN : For each sentence in reference, take source sentence with highest R 12 score; ORACLERETRIEVAL : For each sentence in reference, take reference sentence from train set with largest BM25 score (Robertson and Walker, 1994); and ORACLESENT-ALIGN + RETRIEVAL : For each sentence in reference, take sentence with highest R 12 between ORACLESENT-ALIGN and ORACLERETRIEVAL .",
"We provide two unsupervised methods as well.",
"RANDOM : extracts random sentences until summary reaches target word count (average summary length); LEXRANK : selects the top-k sentences with largest LexRank (Erkan and Radev, 2004) score until target word count is reached.",
"For a supervised baseline, we present CLINNEUSUM : a variant of the Neusum model adapted to the clinical genre (Zhou et al., 2018).",
"CLINNEUSUM is a hierarchical LSTM network trained on ground-truth labels derived from ORACLEGAIN , which we detail in Appendix C. 1 This is the Neusum model's objective (Zhou et al., 2018) 4 Dataset Analysis & Implications To motivate future research in multiple, self-contained directions, we distill task-specific characteristics to a few salient, standalone takeaways.",
"For each takeaway, we provide evidence in the data and/or literature, before proposing implications of findings on model development and evaluation.",
"tl;dr.",
"CLINSUM summaries appear extractive according to widely used metrics.",
"Yet, there is large variance within summaries.",
"This directly affects the performance of a supervised extractive model, whose selection capability degrades as summary content transitions from copy-paste to abstractive.",
"In turn, we need models which can handle abrupt transitions between extractive and abstractive text.",
"Background.",
"Clinicians copy forward information from previous notes to save time and ensure that each note includes sufficient evidence for billing and insurance purposes (Wrenn et al., 2010).",
"Copy-paste is both widely used (66-90% of clinicians according to a recent literature review (Tsou et al., 2017)) and widely applied (a recent study concluded that in a typical note, 18% of the text was manually entered; 46%, copied; and 36% imported 2 (Wang et al., 2017)).",
"Please see Appendix D for more information on the issue of copy-paste.",
"Analysis extractiveness.",
"CLINSUM appears very extractive: a high coverage (0.83 avg / 0.13 std) and a very high density (13.1 avg / 38.0 std) (See Grusky et al. (2018) for a description of the statistics).",
"However, we find that 64% of the extractive fragments are unigrams, and 25% are bigrams, which indicate a high level of re-writing.",
"The density measure is large because the remaining 11% of extractive fragments are very long.",
"Yet, there is a strong positional bias within summaries for long fragments.",
"Figure 1 , groups fragments according to their relative order within each summary.",
"The longest fragments are usually first.",
"Qualitative analysis confirms that the beginning of the BHC is typically copied from a previous note and conveys the one-liner (e.g., pt is a 50yo male with history of CHF who presents with edema. ) This abrupt shift in extractiveness should affect content selection.",
"In particular, when look-2 Imported refers to text typically pulled in from structured data, such as a medication or problem list.",
"ing at oracle extractive strategies, we should see clear-cut evidence of (1) 1-2 sentences which are easy to identify as salient (i.e., high lexical overlap with source due to copy-paste), (2) a murkier signal thereafter.",
"To confirm this, we analyze the sentences selected by the ORACLEGAIN method, which builds a summary by iteratively maximizing the R 12 score of the existing summary vis-a-vis the reference.",
"In Figure 2, two supporting trends emerge.",
"(1) On average, one sentence accounts for roughly 50% 3 of the overall R 12 score.",
"(2) Afterwards, the marginal contribution of the next shrinks, as well as the R 12 gap between the best sentence and the minimum / average, according to the oracle.",
"There should also be evidence of the copy-paste positional bias impacting content selection.",
"Table 3 reveals that the order in which the ORACLEGAIN summary is builtby maximal lexical overlap with the partially built summaryroughly corresponds to the true ordering of the summary.",
"More simply, the summary transitions from extractive to abstractive.",
"3 From Table 2, the average R 12 score is 0.39 for ORACLEGAIN .",
"To reconcile this number with respect to Figure 2, we note that the average oracle summary is far less than the 20 sentence upper bound shown in the chart.",
"Unsurprisingly, a model (CLINNEUSUM ) trained on ORACLEGAIN extractions gets progressively worse at mimicking it.",
"Specifically, for each extractive step, there exists a ground-truth ranking of candidate sentences by relative R 12 gain.",
"As the relevance gap between source sentences shrinks (from Figure 2), CLINNEUSUM 's predictions deviate further from the oracle rank (Table 4).",
"Analysis Redundancy.",
"Even though we prevent all baseline methods from generating duplicate sentences (23% of source sentences have exact match antecedents), there is still a great deal of redundancy in the source notes (i.e., modifications to copy-pasted text).",
"This causes two issues related to content selection.",
"The first is fairly intuitive that local sentence extraction propagates severe redundancy from the source notes into the summary and, as a result, produces summaries with low lexical coverage.",
"We confirm this by examining the performance between the ORACLETOP-K and ORACLEGAIN , which represent summary-unaware and summary-aware variants of the same selection Extractive Average Rank of Closest Step Reference Sentence 1 4.7 2 6.0 3 6.3 4 6.7 5 7.3 > 5 10.1 Table 3: ORACLEGAIN greedily builds summaries by repeatedly selecting the sentence which maximizes the R 12 score of the partially built summary.",
"method.",
"While both extract sentences with the highest R 12 score, ORACLEGAIN outperforms because it incorporates redundancy by considering the relative R 12 gain from an additional sentence.",
"The second side effect is perhaps more surprising, and divergent from findings in summarization literature.",
"For most corpora, repetition is indicative of salience.",
"In fact, methods based on lexical centrality, i.e., TextRank (Mihalcea and Tarau, 2004) and LexRank (Erkan and Radev, 2004), still perform very competitively for most datasets.",
"Yet, for CLINSUM , LexRank barely outperforms a random baseline.",
"Poor performance is not only due to redundance, but also a weak link between lexical centrality and salience.",
"The Pearson correlation co-efficient between a sentence's LexRank score and its R 12 overlap with the reference is statistically significant ( p = 0 ) yet weak ( r = 0 . 29 ).",
"Qualitative analysis reveals two principal reasons, both related to copy-paste and/or imported data.",
"The first relates to the propagation of frequently repeated text which may not be useful for summaries: administrative (names, dates), imported structured data, etc.",
"The second relates to sentence segmentation.",
"Even though we use a cus-Figure 3: Relationship between source entity mentions and probability of inclusion in the summary.",
"tom sentence splitter, our notes still contain some very long sentences due to imported lists and semi-structured texta well-documented issue in clinical NLP (Leaman et al., 2015).",
"LexRank summaries have a bias toward these long sentences (26.2 tokens versus source average of 10.9), which have a greater chance of containing lexical centroid(s).",
"To bypass some of these issues, however, one can examine the link between centrality and salience at the more granular level of entities.",
"Figure 3 shows a clear-cut positive correlation between source note mention frequency of UMLS concepts and the probability of being included in the summary.",
"Implications.",
"Regarding within-summary variation in extractiveness , we argue for a hybrid approach to balance extraction and abstraction.",
"One of the most widely-used hybrid approaches to generation is the Pointer-Generator (PG) model (See et al., 2017), an abstractive method which allows for copying (i.e., extraction) of source tokens.",
"Another research avenue explicitly decouples the two.",
"These extract-then-abstract approaches come in different flavors: sentence-level re-writing (Chen and Bansal, 2018; Bae et al., 2019), multi-sentence fusion (Lebanoff et al., 2019), and two-step disjoint extractive-abstracive steps (Mendes et al., 2019).",
"While highly effective in many domains, these approaches do not consider systematic differences in extractiveness within a single summary.",
"To incorporate this variance, one could extend the PG model to copy pre-selected long snippets of text.",
"This would mitigate the problem of copy mechanisms learning to copy very long pieces of text (Gehrmann et al., 2018) undesirable for the highly abstractive segments of CLINSUM .",
"Span-level extraction is not a new idea (Xu et al., 2020), but, to our knowledge, it has not been studied much in otherwise abstractive settings.",
"For instance, Joshi et al. (2020) explore patient-doctor conversation summarization and add a penalty to the PG network for over-use of the generator, yet this does not account for intra-summary extractiveness variance.",
"Regarding redundancy , it is clear that, in contrast to some summarization tasks (Kedzie et al., 2018), summary-aware content selection is essential for hospital course summarization.",
"Given so much noise, massive EHR and cite-specific preprocessing is necessary to better understand the signal between lexical centrality and salience.",
"tl;dr.",
"BHC summaries are packed with medical entities, which are well-distributed across the source notes.",
"As such, relations are often not explicit.",
"Collectively, this difficult task calls for a domain-specific approach to assessing faithfulness.",
"Analysis concise We find that summaries are extremely dense with medical entities: 20 .",
"9% of summary words are medical UMLS entities, compared to 14 .",
"1% in the source notes.",
"On average, summaries contain 26 unique entities whereas the source notes contain 265 an entity compression ratio of 10 (versus token-level compression of 43).",
"Analysis comprehensive.",
"Many summarization corpora exhibit systematic biases regarding where summary content can be found within source document(s) (Dey et al., 2020).",
"On CLINSUM , we examine the distribution of entities along two dimensions: macro considers the differences in entity share across notes, and micro considers the differences within each note (i.e., lead bias).",
"(1) Macro Ordering.",
"When looking at the source notes one by one, how much additional relevant information (as measured by entities present in the summary) do you get from each new note?",
"We explore three different orderings: (1) FORWARD orders the notes chronologically, (2) BACKWARD the reverse, and (3) GREEDYORACLE examines notes in order of decreasing entity overlap with the target.",
"Given the large variation in number of notes per admission, we normalize by binning notes into deciles.",
"Figure 4 shows that it is necessary to read the entire set of notes despite diminishing marginal returns.",
"One might expect the most recent notes to have the most information, considering present as well as copy-forwarded text.",
"Surprisingly, FORWARD and BACKWARD distributions are very similar.",
"GREEDYORACLE gets at the level of information concentration.",
"On average, the top 10% of most informative notes cover just over half of the entities found in the summary.",
"We include absolute and percentage counts in Table 5.",
"(2) Micro Ordering.",
"We plot a normalized histogram of summary entities by relative position within the source documents.",
"Figure 5 reveals a slight lead bias, followed by an uptick toward the end.",
"Clinical notes are organized by section: often starting with the past medical history and present illness, and typically ending with the plan for future care.",
"All are needed to write a complete BHC.",
"Implications.",
"The fact that entities are so densely packed in summaries makes models more susceptible to factual errors that misrepresent complex relations.",
"On the CNN/DailyMail dataset, Goel et al. (2021) reveal performance degradation as a function of the number of entities.",
"This is magnified for clinical text, where failure to identify which treatments were tolerated or discontinued, or to differentiate conditions of the patient or family member, could lead to serious treatment errors.",
"Proposed methods treat global evaluation as the independent sum of very local assessments.",
"In the case of QA-based methods, it is a quiz-like aggregation of individual scores to fairly narrow questions that usually seek to uncover the presence or absence of a single entity or relation.",
"Yet, factoid (Chen et al., 2018), cloze-style (Eyal et al., 2019; Scialom et al., 2019; Deutsch et al., 2020), or mask-conditioned question generation (Durmus et al., 2020) may not be able to directly assess very fine-grained temporal and knowledge-intensive dependencies within a summary.",
"This is a natural byproduct of the fact that many of the factuality assessments were developed for shorter summarization tasks (i.e., headline generation) in the news domain (Cao et al., 2018b; Kryscinski et al., 2019; Maynez et al., 2020).",
"Entailment-based measures to assess faithfulness (Pasunuru and Bansal, 2018; Welleck et al., 2019) can capture complex dependencies yet tend to rely heavily on lexical overlap without deep reasoning (Falke et al., 2019).",
"Taken together, we argue for the development of fact-based evaluation metrics which encode a deeper knowledge of clinical concepts and their complex semantic and temporal relations.",
"tl;dr.",
"Hospital course summarization involves not only massive compression, but a large style and organization transfer.",
"Source notes are written chronologically, yet the way clinicians digest the information, and write the discharge summary, is largely problem-oriented.",
"(Footnote 4: Zhang et al. (2020) directly address factuality of clinical text, yet the setting is very different.",
"They explore radiology report accuracy, which is not a temporal multi-document summarization task.",
"Additionally, they rely on a smaller IE system tailored specifically for radiology reports (Irvin et al., 2019).)",
"With simple oracle analysis, we argue that retrieve-edit frameworks are well-suited for hospital course generation.",
"Analysis Style.",
"Clinical texts contain many, often obscure, abbreviations (Finley et al., 2016; Adams et al., 2020), misspellings, and sentence fragments (Demner-Fushman et al., 2009).",
"Using a publicly available abbreviation inventory (Moon et al., 2014), we find that abbreviations are more common in the BHC.",
"Furthermore, summary sentences are actually longer on average than source sentences (15.8 versus 12.4 words).",
"Analysis Organization.",
"Qualitative analysis confirms that most BHCs are written in a problem-oriented fashion (Weed, 1968), i.e., organized around a patient's disorders.",
"To more robustly analyze content structure, we compare linked UMLS entities at the semantic group level: DRUGS , DISORDERS , and PROCEDURES (McCray et al., 2001).",
"In particular, we compare global proportions of semantic groups, transitions between entities, as well as positional proportions within summaries.",
"(1) Global.",
"Procedures are relatively more prevalent in summaries (31% versus 24%), maybe because of the emphasis on events happening during the hospitalization.",
"In both summary and source notes, DISORDERS are the most prevalent (54% and 46%, respectively).",
"Drugs make up 23% and 22% of entity mentions in summary and source notes, respectively.",
"(2) Transitions.",
"From both source and summary text, we extract sequences of entities and record adjacent transitions of their semantic groups in a 3 × 3 matrix.",
"Figure 7 indicates that summaries have fewer clusters of semantically similar entities (diagonal of the transition matrix).",
"This transition matrix suggests a problem-oriented approach in which disorders are interleaved with associated medications and lab results.",
"(3) Positional.",
"Finally, within summaries, we examine the positional relative distribution of semantic groups and connect it to findings from Section 4.1.",
"In Figure 6 , we first compute the start index of each clinical entity, normalized by the total length, and then group into ten equally sized bins.",
"The early prevalence of disorders and late prevalence of medications is expected, yet the difference is not dramatic.",
"This suggests an HPI-like statement up front, followed by a problem-oriented narrative.",
"We hypothesize that summaries constructed from sentences from other summaries in the dataset would have similar or better lexical coverage than summaries constructed from sentences in the source notes.",
"To assess this, we compare two oracle baselines, SENT-ALIGN and RETRIEVAL .",
"For each sentence in the summary, we find its closest corollary either in the source text ( SENT-ALIGN ) or in other summaries in the dataset ( RETRIEVAL ).",
"While the retrieval method is at a distinct disadvantage because it does not contain patient-specific information and retrieval is performed with BM25 scores, we find both methods yield similar results ( Table 2 ).",
"An ensemble of SENT-ALIGN and RETRIEVAL performs better than either alone, suggesting that the two types of sources may be complementary.",
"82% of this oracle's summary sentences are retrievals.",
"Summaries adapt the style and problem-oriented structure of other summaries, but contain patient-specific information from the source notes.",
"Implications.",
"Hospital-course summaries weave together disorders, medications, and procedures in a problem-oriented fashion.",
"It is clear that substantial re-writing and re-organization of source content is needed.",
"One suitable approach is to use the retrieve-rerank-rewrite ( R 3 ) framework proposed by Cao et al. (2018a).",
"To support this notion, more recent work demonstrates that retrieval augmented generation is effective for knowledge-intensive tasks (Lewis et al., 2020b), enhances system interpretability (Guu et al., 2020; Krishna et al., 2020), and can improve LM pre-training (Lewis et al., 2020a).",
"(Figure 8: NSP logit by relative position of the next sentence across summaries for several datasets.",
"An offset of 1 corresponds to the true next sentence.)",
"Also, efforts to bridge the gap between template-based and abstractive generation have been successful in the medical domain for image report generation (Li et al., 2018).",
"In this light, BHC generation could be truly problem-oriented.",
"The first step would involve selecting salient problems (i.e., disorders) from the source text, a well-defined problem with proven feasibility (Van Vleck and Elhadad, 2010).",
"The second step would involve separately using each problem to retrieve problem-specific sentences from other summaries.",
"These sentences would provide clues to the problem's relevant medications, procedures, and labs.",
"In turn, conceptual overlap could be used to re-rank and select key, problem-specific source sentences.",
"The extracted sentences would provide the patient-specific facts necessary to rewrite the problem-oriented retrieved sentences.",
"tl;dr.",
"Lexical cohesion is sub-optimal for evaluating hospital-course discourse because clinical summaries naturally exhibit frequent, abrupt topic shifts.",
"Also, low correlation exists between lexical overlap and local coherence metrics.",
"Analysis.",
"Entity-based coherence research posits that \"texts about the same discourse entity are perceived to be more coherent than texts fraught with abrupt switches from one topic to the next\" (Barzilay and Lapata, 2005).",
"Yet, for CLINSUM summaries, coherence and abrupt topic shifts are not mutually exclusive.",
"An analysis of entity grids shows that summaries, though presumably coherent, are sparse, with few lexical chains.",
"In fact, over 66% of the entities in the BHC appear only once.",
"(Footnote 5: The related idea of template-based generation has gained traction within the probabilistic community (Wiseman et al., 2018; Guu et al., 2018; Wu et al., 2019; He et al., 2020).)",
"Of those with multiple mentions, the percentage which appear in adjacent sentences is only 9.6%.",
"As in Prabhumoye et al. (2020), we also compare coherence with next-sentence prediction (NSP).",
"Figure 8 plots the NSP logit by positional offset, where an offset of 1 corresponds to the next sentence, and -1 to the previous.",
"NSP relies on word overlap and topic continuity (Bommasani and Cardie, 2020), so it makes sense that it is lowest for CLINSUM.",
"To confirm the hypothesis that ROUGE does not adequately capture content structure, we use the pairwise ranking approach to train and evaluate an entity-grid based neural coherence model (Barzilay and Lapata, 2005; Tien Nguyen and Joty, 2017).",
"Table 6 shows ROUGE and coherence metrics side-by-side for ORACLEGAIN , which naively orders sentences according to document timestamp, then within-document position, and ORACLESENTALIGN , which maintains the structure of the original summary.",
"The poor coherence of ORACLEGAIN is obscured by comparable ROUGE scores.",
"Implications.",
"Content organization is critical and should be explicitly evaluated.",
"A well-established framework for assessing organization and readability is coherence.",
"A large strand of work on modeling coherent discourse has focused on topical clusters of entities (Azzam et al., 1999; Barzilay and Elhadad, 2002; Barzilay and Lee, 2004; Okazaki et al., 2004).",
"Yet, as shown above, CLINSUM summaries exhibit abrupt topic shifts and contain very few repeated entities.",
"The presence and distribution of lexical (Morris and Hirst, 1991; Barzilay and Elhadad, 1997) or co-referential (Azzam et al., 1999) chains, then, might not be an appropriate proxy for clinical summary coherence.",
"Rather, we motivate the development of problem-oriented models of coherence, which are associative in nature, and reflect a deeper knowledge about the relationship between disorders, medications, and procedures.",
"The impetus for task-tailored evaluation metrics is supported by recent meta analyses (Fabbri et al., 2020; Bhandari et al., 2020).",
"tl;dr.",
"Discharge summaries and their associated BHC sections are frequently missing critical information or contain excessive or erroneous content.",
"Modeling efforts should address sample quality.",
"Analysis.",
"Kripalani et al. (2007) find that discharge summaries often lack important information including diagnostic test results (33-63% missing), treatment or hospital course (7-22%), discharge medications (2-40%), test results pending at discharge (65%), patient/family counseling (90-92%), and follow-up plans (2-43%).",
"The quality of the reporting decreases as the length of the discharge summary increases, likely due to copy-pasted information (van Walraven and Rokosh, 1999).",
"These quality issues occur for a number of reasons: (1) limited EHR search functionality makes it difficult for clinicians to navigate through abundant patient data (Christensen and Grimsmo, 2008); (2) multiple clinicians contribute to incrementally documenting care throughout the patient's stay; (3) despite existing guidance for residents, clinicians receive little to no formal instruction in summarizing patient information (Ming et al., 2019); and (4) clinicians have little time for documenting care.",
"Implications.",
"Noisy references can harm model performance, yet there is a rich body of literature to show that simple heuristics can identify good references (Bommasani and Cardie, 2020) and/or filter noisy training samples (Rush et al., 2015b; Akama et al., 2020; Matsumaru et al., 2020).",
"Similar strategies may be necessary for hospital-course generation with silver-standard data.",
"Another direction is scalable reference-free evaluations (ShafieiBavani et al., 2018; Hardy et al., 2019; Sellam et al., 2020; Gao et al., 2020; Vasilyev et al., 2020).",
"Based on a comprehensive analysis of clinical notes, we identify a set of implications of hospital-course summarization for future research.",
"For modeling, we motivate (1) the need for dynamic hybrid extraction-abstraction strategies (4.1); (2) retrieval-augmented generation (4.3); and (3) the development of heuristics to assess reference quality (4.5).",
"For evaluation, we argue for (1) methods to assess factuality and discourse which are associative in nature, i.e., incorporate the complex inter-dependence of problems, medications, and labs (4.2, 4.4); and (2) scalable reference-free metrics (4.5).",
"Dataset creation.",
"Our CLINSUM dataset contains protected health information about patients.",
"We have received IRB approval through our institution to access this data in a HIPAA-certified, secure environment.",
"To protect patient privacy, we cannot release our dataset, but instead describe generalizable insights that we believe can benefit the general summarization community as well as other groups working with EHR data.",
"Intended Use & Failure Modes.",
"The ultimate goal of this work is to produce a summarizer that can generate a summary of a hospital course, and thus support clinicians in this cognitively difficult and time-consuming task.",
"While this work is a preface to designing such a tool, and significant advances will be needed to achieve the robustness required for deployment in a clinical environment, it is important to consider the ramifications of this technology at this stage of development.",
"We can learn from existing deployed clinical summarization systems (Pivovarov et al., 2016) and other data-driven clinical decision support tools (Chen et al., 2020).",
"As with many NLP datasets, CLINSUM likely contains biases, which may be perpetuated by its use.",
"There are a number of experiments we plan to carry out to identify documentation biases and their impact on summarization according to a number of dimensions such as demographics (e.g., race and gender), social determinants of health (e.g., homeless individuals), and clinical biases (e.g., patients with rare diseases).",
"Furthermore, deployment of an automatic summarizer may lead to automation bias (Goddard et al., 2012), in which clinicians over-rely on the automated system, despite control measures or verification steps that might be built into a deployed system.",
"Finally, medical practices and EHR systems constantly change, and this distribution drift can cause models to fail if they are not updated.",
"As the NLP community continues to develop NLP applications in safety-critical domains, we must carefully study how we can build robustness, fairness, and trust into these systems.",
"We thank Alex Fabbri and the NAACL reviewers for their constructive, thoughtful feedback.",
"This work was supported by NIGMS award R01 GM114355 and NCATS award U01 TR002062."
] | [
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"other"
] |
[
"Word and morpheme segmentation are fundamental steps of language documentation, as they allow the discovery of lexical units in a language for which the lexicon is unknown.",
"However, in most language documentation scenarios, linguists do not start from a blank page: they may already have a pre-existing dictionary or have initiated manual segmentation of a small part of their data.",
"This paper studies how such a weak supervision can be taken advantage of in Bayesian non-parametric models of segmentation.",
"Our experiments on two very low resource languages (Mboshi and Japhug), whose documentation is still in progress, show that weak supervision can be beneficial to the segmentation quality.",
"In addition, we investigate an incremental learning scenario where manual segmentations are provided in a sequential manner.",
"This work opens the way for interactive annotation tools for documentary linguists.",
"Recent years have witnessed a blooming of research aimed at applying language technologies (LTs) to under-resourced languages.",
"Such studies have been mostly motivated on three main grounds (not necessarily mutually exclusive):",
"(a) to develop tools that could speed up the work of field linguists collecting and annotating recordings for these languages;",
"(b) to provide linguistic communities with LTs that are necessary in an increasingly digitalised world,",
"e.g.",
"to interact with smartphones or computers in their own language and communicate with speakers of other languages;",
"(c) to challenge existing machine-learning techniques in very low resource settings, where hardly any resource (dictionary, corpus, grammar) is available.",
"(Footnote 1: Acknowledged by workshop series such as Spoken Languages Technologies for Under-resourced languages (SLTU), Collaboration and Computing for Under-Resourced Languages (CCURL) and Computational Methods in the Study of Endangered Languages (ComputEL), inter alia.)",
"Those objectives are thoroughly discussed in a recent position paper (Bird, 2020), which notices, among other things, that objective",
"(c) (training language processing tools with zero resource) is questionable in the context of language documentation works which can often rely on some pre-existing knowledge, such as a word list, or information from related languages.",
"Accordingly, this paper explores ways to make the best of prior resources and improve the effectiveness of unsupervised language analysis techniques for the purpose of linguistic documentation.",
"Our main objective is to develop tools that will effectively assist field linguists in their documentary tasks (objective",
"(a)).",
"We focus on segmentation tasks, which aim to automatically identify meaningful units in an unsegmented phonetic or orthographic string (Johnson, 2008; Doyle and Levy, 2013; Eskander et al., 2016; Godard et al., 2018b; Eskander et al., 2019).",
"Following these authors, we experiment with Bayesian non-parametric segmentation models, derived in our case from Goldwater et al. (2009) and subsequent work, which we recap in Section 2.",
"Our first contribution is in Section 3 which studies multiple semi-supervised learning regimes aimed to take advantage of pre-existing linguistic material such as incomplete segmentations and word lists.",
"In Sections 4 and 5, we experimentally assess the pros and cons of these weakly supervised approaches in batch and online learning, for two extremely low-resource languages currently in the process of being documented: Mboshi, a Bantu language used in former studies (Godard et al., 2018a); and Japhug, a language from the Sino-Tibetan family spoken in the Western part of China thoroughly documented by Jacques (2021).",
"These two languages were selected because they illustrate actual documentation processes, for which high-quality linguistic resources have been derived from fieldwork, at the end of a long and difficult procedure (Aiton, 2021).",
"A complementary analysis follows, where we use the Japhug corpus to take a closer look at the units identified automatically, contrasting morpheme-based and word-based supervision.",
"Going from audio recordings to fully annotated transcripts implies two successive segmentation steps: the first segments words and happens during the production of phonemic or orthographic transcripts; the second further splits words into morphs, which are then annotated with syntactic information and glosses.",
"We mostly focus on the former task, assuming a two-step process: first, the computation of a phonemic transcript that we assume is given; then the segmentation into words for which we consider two settings: batch and online learning.",
"The word and morpheme segmentation tasks are closely related and rely on similar tools: using the Japhug corpus, which contains both levels of segmentations, we also study the implications of using lists of words vs morphemes as weak supervision.",
"In its baseline form, the word segmentation process is fully unsupervised, and the only training material is a set of transcribed sentences (see Fig. 1).",
"We rely on Bayesian non-parametric approaches to word segmentation (see (Cohen, 2016) for a thorough exposition), and our baselines are the unigram version of the dpseg model (Goldwater et al., 2009) and a variant where the underlying Dirichlet Process is replaced by a Pitman-Yor Process as in (Neubig, 2014).",
"We selected unigram models for their simplicity, which",
"(a) makes them amenable to the processing of very small sets of sentences;",
"(b) makes the online learning setting tractable.",
"While using higher-order models or more sophisticated models of the same family (Teh, 2006b; Mochihashi et al., 2009) may improve the performance (see (Godard et al., 2016) for an experimental comparison), we believe that in our low-resource conditions, these variations would be small and would not change our main conclusions.",
"Word segmentation models fundamentally rely on probabilistic models for word sequences defining P(w = w_1 ... w_T); word sequences can also be viewed as segmented sequences of characters y = y_1 ... y_L, so that the same model can be used for the joint probability of (y, b), with b = b_1 ... b_L representing the vector of boundary locations, where value b_t = 1 (resp. b_t = 0) denotes a boundary (resp. no boundary) after symbol y_t.",
"(Footnote 2: Godard et al. (2018a) report results with the bigram version of dpseg on the Mboshi corpus; the difference with our unigram version is about 4 points for the boundary F-score.)",
"In an unsupervised setting, these boundaries are hidden and are latent variables in the model.",
"Such models lend themselves well to Gibbs sampling, which repeatedly produces samples of each boundary given all the other boundaries in the corpus.",
"In dpseg, the underlying sequence model is a unigram model: P(w_1 ... w_T) = ∏_{t=1}^{T} P(w_t).",
"The probability of individual words corresponds to a Dirichlet Process with parameters α, the concentration parameter, and P_0, the base distribution, and yields the following formulation for the conditional probability of w_t given the past words w_{<t}: P(w_t = w | w_{<t}) = (n_w(w_{<t}) + α P_0(w)) / (t − 1 + α), (1) where n_w(w_{<t}) counts the number of times w has occurred in the past.",
"With lower values of α, the most frequent words tend to be generated more (hence, concentration), while with higher values, the words are more smoothly distributed.",
"P 0 , the base distribution, assigns scores to arbitrary character strings; Goldwater et al. (2009) use a length model and a uniform character model.",
"For word w made of characters y_1, ..., y_m, P_0 is computed as: P_0(w) = p_#(1 − p_#)^{m−1} × ∏_{j=1}^{m} P(y_j), (2) where the first factor is the length model, the product is the character model, and p_# is the probability to end the word.",
"For this model, Gibbs sampling compares at each position t two sequences of words w t =0 (no boundary at position t ) and w t =1 (a boundary is inserted).",
"As these sequences only differ minimally, terms such as P(b_t = 0 | y, b_{−t}) are readily derived (see",
"e.g.",
"(Goldwater et al., 2009)).",
"Gibbs sampling is performed for a number of iterations that is sufficient to reach convergence, and we use the last iteration to uncover the resulting segmentation.",
"To speed up mixing, Goldwater et al. (2009) also use annealing, so that a larger search space is explored.",
"An extension of dpseg, denoted pypseg, uses a Pitman-Yor Process (PYP) instead of the Dirichlet Process and generalises equation (1) with an additional discount parameter, which enables better control over the generation of new words.",
"PYPs are introduced in (Teh, 2006b; Mochihashi et al., 2009); a fast implementation is in (Neubig, 2014).",
"For our experiments, both models have [...] (Figure 1: The sentence segmentation task illustrated with a sentence from the Mboshi corpus, 'ba miknd poo y kala' ('they found the old village'), showing the character sequence y, the boundary vector b, and the segmentations obtained without and with a boundary at position 6).",
"In this section, we discuss realistic sources of weak supervision for segmentation tasks and how they can be included in Bayesian models.",
"Segmentation boundaries Segmentation data, corresponding to the location of boundary (and non-boundary) information, can be obtained in different ways.",
"For instance, when audio recordings are available, prosodic cues such as short silences or specific intonative patterns can serve to identify plausible locations for word endings.",
"Longer pauses generally denote the end of an utterance, which we assume are already given.",
"This would yield a sparse partial annotation , where supervision data is randomly scattered across the corpus.",
"Another realistic situation where we have access to a partial annotation is when a small subset is already segmented.",
"In this case, the partial annotation is dense and concentrated in a few sentences, a semi-supervised setting also studied in (Sirts and Goldwater, 2013).",
"We thus consider two questions:",
"(a) which is more effective between dense and sparse annotations?",
"(b) how effective is supervision in an incremental learning regime, where automatic (dense) annotations are progressively corrected and used to update the model?",
"Word lists Word lists constitute another valuable and common source of information.",
"They may contain morphs, morphemes, lexemes or fully inflected forms, with various levels of information (part-of-speech, gloss, translation, etc.).",
"In this study, we consider that lists of surface forms are available and evaluate their usefulness, depending on their size and on the way they were collected.",
"A related question is about the relative interest of word and morph lists, which we study in Section 5.3.",
"The use of more sophisticated forms of lexical information, such as word structure or PoS, is out of the scope of this paper and is left for future work.",
"Having a collection of fully segmented utterances, as discussed above, is another way to generate word lists.",
"So these two sources of information must be viewed as complementary ways to supervise the task at hand: boundary marks at the token level, word list at the type level.",
"Segmentation boundaries Observed segmentation boundaries can be used to facilitate the training process.",
"Two experimental conditions, both affecting the Gibbs sampler ( gs ), have been considered: gs.sparse : a fraction ( %) of the actual boundaries are observed, which corresponds to a sparse annotation scenario.",
"gs.dense : for % of sentences, all boundary and non-boundary variables are given.",
"In both cases, we modify the sampling process and make sure that the value of observed variables is not sampled, as in (Sirts and Goldwater, 2013).",
"Using a word list Assuming now that a word list D is available, we consider the following approaches to reinforce the likelihood of units in D in the output segmentation:",
"d.count : D is used to initialise the 'internal' model dictionary, and words in D are created with a fixed pseudo-count.",
"Formally, for all w ∈ D, the counting function n_w(·) of Equation (1) will add this pseudo-count to their actual count.",
"",
"d.mix : D is combined with the base distribution, resulting in the following mixture P′_0: P′_0(w) = (λ/|D|) 1{w ∈ D} + (1 − λ) P_0(w), (3) where λ ∈ [0, 1] is the mixture weight, |D| is the size of D, and 1{w ∈ D} is the indicator function testing membership in D.",
"As for",
"d.count , P (cid:48) 0 increases the probability of words in D , but in a looser way, due to the term P 0 in Equation (1).",
"",
"d.ngram : the baseline dpseg version uses a uniform character model for P 0 (Equa-tion (2)); here, we use D to train a character n-gram language model (LM), with n = 2 and addk smoothing in our experiments.",
"",
"d.mix+ngram : this method combines",
"d.mix and",
"d.ngram : P 0 is replaced with the mixture P (cid:48) 0 of Equation (3) and the character model is an n-gram LM.",
"This can be viewed as a proxy to the complete nested Dirichlet Process of Mochihashi et al. (2009), with D implementing a cache mechanism for known words.",
"We have also used weaker forms of supervision aimed at learning a better length model, with hardly any improvement with respect to the baseline; these results are not reported below.",
"In addition to the static use of supervision information described above, we also considered a more dynamic training regime, where dense annotations are provided in a sequential manner through interaction with an expert linguist, enabling incremental learning.",
"To measure the effectiveness of this approach, we contrast three scenarios in Section 5.2: the baseline is the post-edition of a fully unsupervised model without further training; the post-edition of a fully unsupervised model, with additional Gibbs sampling iterations every batch utterances for iter iterations.",
"This aims at propagating forward the supervision information obtained from past annotations.",
"This method is referred to as o.regular .",
"on top of this, we also used the past annotated sentences to reestimate the base distribution of the underlying process as in",
"d.ngram .",
"The corresponding results are labelled o.2level in Figure 2.",
"Two languages have been considered in this paper: Mboshi and Japhug.",
"Mboshi is a tonal Bantu language spoken in the Republic of Congo (Bantu C25).",
"The data has been collected as part of the BULB project (Adda et al., 2016).",
"It has seven vowels and 25 consonant phonemes with five prenasalised consonants (made of two to three consonants), a common feature in Bantu languages (Embanga Aborobongui, 2013; Kouarata, 2014).",
"Although the language is usually not written, linguists have transcribed it with graphemes in a way that approximates the phonetic content.",
"To mark the distinction between long and short vowels, they were either duplicated (VV) or not (V).",
"One challenge for Mboshi word segmentation is its complex phonological rules, notably, vowel elision patterns whereby a vowel disappears before another one (also a common Bantu feature) (Rialland et al., 2015).",
"This kind of phenomenon makes it harder to find the boundaries.",
"From a morphological point of view, words are composed of roots and affixes.",
"Another characteristic Bantu feature is its deletion rule for class-prefix consonants in nouns.",
"Templates for verb structure are also quite rigid, with affixes following a strict ordering (Godard et al., 2018a).",
"Our corpus is a manual alphabetic transcription of audio recordings.",
"3 It contains 5,312 sentences segmented in words, one sentence per line.",
"Japhug is a Sino-Tibetan language from the Gyal-rong family spoken in the Sichuan province in China.",
"Japhug has eight vowels and 50 consonant phonemes, which can combine to create a large number (more than 400) of consonant clusters.",
"The rich cluster feature is one important characteristic of Japhug, which actually has one of the largest inventory of consonant clusters in the Trans-Himalayan language family.",
"The structure of these clusters can be analysed by looking at patterns of partial reduplication of syllable initial consonants.",
"There are no tones in this language.",
"Japhug also has a rich morphology, both for verbs and nouns.",
"Remarkably, in verb forms, up to six or seven prefixes can be chained to express features such as tense, aspect, modality, while suf-fixation is used to express inflectional phenomena.",
"Even though these processes are quite regular, they contribute to generating a large number of possible word forms.",
"Recordings, annotated corpora, and dictionaries for Japhug are available from the Pangloss collection.",
"4 An extensive description of the language is given in (Jacques, 2021).",
"5 Our training material has been extracted from the LATEX source files of this book, by collecting all Japhug examples.",
"These can easily be retrieved by searching the \\gll command introducing Japhug sentences.",
"Not only are the resulting sentences well-curated, but they are also segmented at two levels: words and morphemes.",
"This will lead to a specific experiment presented in Section 5.3.",
"Table 1 displays the general statistics for the two languages.",
"N utt , N type , and N token represent the number of utterances, of word types, and of word tokens, respectively.",
"WL represents the average token length, while TL is the average type length.",
"The sentences used for semi-supervision correspond to the first 200 sentences of each dataset, which is a realistic amount of data.",
"Likewise, lexical supervision corresponds to the list of words observed in the same 200 sentences, and respectively contain 517 words for Mboshi, 664 words and 493 morphemes for Japhug.",
"In our experimental setting, we made sure to also resample the hyperparameter(s) after each iteration, following mostly (Teh, 2006a; Mochihashi et al., 2009): the concentration parameter has a Gamma posterior distribution, and the discount parameter d a Beta distribution.",
"The initial values of the hyperparameters were set as in Goldwater et",
"al.'s work on the unigram dpseg : concentration 4 http://pangloss.cnrs.fr/corpus/Japhug .",
"parameter: = 20 , p # = 0 .",
"5 , discount parameter for pypseg : d = 0 .",
"5 .",
"The Gibbs sampler always runs for 20,000 iterations and simulated annealing is implemented as in (Goldwater et al., 2009) with 10 increments of temperature.",
"All the results are obtained by collecting the predicted boundaries at the end of the last sampling iteration of one single run.",
"Following Goldwater et al. (2009), evaluation relies on PRF' metrics: precision, recall, and F-score, defined as follows: precision P = TP TP + FP , recall R = TP TP + FN , and F-score F = 2 precision recall precision + recall , where TP are the true positives (match in the reference and segmented texts), FP are the false positives, and FN are the false negatives.",
"These metrics are computed at three levels: 6 boundary level (BP, BR, BF): compare the reference boundary vectors with the predictions; token level (WP, WR, WF): compare word in the reference and segmented sentences: a correct match requires two correct boundaries; type level (LP, LR, LF): compare the set of unique words in the reference and segmented utterances.",
"To have an overall view of the output text, we also report the average type and token lengths (TL and WL) as well as their counts ( N type and N token ), as in Table 1.",
"Numbers are computed on the entire text (including the supervised part).",
"This section presents the results for the models presented above.",
"We also report the performance of SentencePiece, another word segmentation tool based on a unigram language model (Kudo, 2018): 7 To boost this baseline, the vocabulary size has been set to the reference number of N type (cf. Table 1).",
"Supplementary material additionally contains results for Morfessor baselines (Creutz and Lagus, 2002), with the corresponding weak supervision.",
"As a reminder, our supervision here consists of the first 200 sentences in the text, either directly given as observed boundaries or used to generate the initial word list.",
"Table 2 displays our experimental results for the 5K Mboshi corpus for SentencePiece (SP), dpseg and pypseg with various amounts of supervision.",
"First, the unsupervised dpseg model has better results than SP on all three levels by a significant margin.",
"SP, on the other hand, produces more types as it knows' the actual number of types to generate.",
"Regarding segmentation boundaries, the gs.sparse model has disappointing results, with scores lower than the baseline.",
"On the other hand, the dense supervision manages to improve the baseline scores by around 2.5 points for BF, 4.5 points for WF, and 7.5 points for LF.",
"This is an encouraging result, since, with less than 5% of the whole text, the model has improved in a noticeable way, especially at type level, which seems to be difficult for fully unsupervised learning.",
"When supervising with a word list, all models but",
"d.2gram outperform the baseline.",
"Yet, the",
"d.count and",
"d.mix methods have lower scores than the gs.dense : this was expected for BF and WFwhere directly supervising boundaries is likely to be more useful than an indirect one, but less so for LF.",
"Regarding the",
"d.2gram model, its poor BF and WF scores are more than compensated by an increase of around 12 points in LF, showing the impact of a better type model.",
"Finally, by combining the",
"d.mix and",
"d.2gram strategies,",
"d.mix+2gram obtains the overall best results.",
"Results are in the right part of Table 2, where the baseline is the fully unsupervised pypseg .",
"It slightly outperforms dpseg by less than 1 point in terms of F-scores.",
"In our setting, although PYP increases the number of discovered types, it does not improve the performance in any significant manner.",
"This trend is confirmed for weakly supervised models: 8 the gs.dense model is the only one benefiting from a small improvement in all F-scores.",
"d.count underperforms both the baseline and its dpseg version.",
"With worsened BF and WF scores compared to the baseline,",
"d.mix+2gram with pypseg is worse than with dpseg .",
"Overall, the former seems to benefit less from annotations than the latter.",
"The performance of the bigram character model is noteworthy both with dpseg and pypseg .",
"This improvement alone (i.e.",
"d.2gram ) is responsible not only for a large increase in LF, but also for an average type length that gets much closer to its true value (6.39 in the reference, 6.60 with dpseg and",
"d.mix+2gram ).",
"Table 3 displays a selection of results for Japhug (segmented in words).",
"As previously observed, supervision noticeably improves the results for both models, with pypseg outperforming dpseg by a small margin on all metrics.",
"9 Note also that SP is much worse than Bayesian models, only reaching the same F-score as dpseg for the LF metric.",
"The best results are obtained with lexical supervision and the",
"d.mix+2gram model for dpseg : it combines the type boost in P (cid:48) 0 from",
"d.mix and the improved base model from",
"d.2gram .",
"Figure 2 displays the evolution of the boundary error rate (number of errors over 100 sentences / length of the 100 sentences) as more annotated sentences are available, for three contrasts of 3.3 (baseline, o.regular , and o.2level ).",
"We use the dpseg model and 50 complementary Gibbs sampling iterations every 100 sentences.",
"The large drop at the beginning for the o.2level 9 Full results are in appendix A.1.",
"model (green) can be attributed to the use of the bigram character model.",
"It gives this model an initial edge over o.regular that remains significant for the first 3,000 sentences.",
"Here again, the benefits of improving the base distribution (character-based model) as much as possible in the early training iterations clearly appear.",
"This section addresses a recurring issue in word segmentation model related to the linguistic nature of the units learnt by the model and the consequences of choosing one or the other reference in training.",
"The Japhug corpus contains both annotation levels and is a perfect test bed for this study.",
"We have thus used a segmentation model ( dpseg ) with and without weak supervision (using the",
"d.mix+2gram variant) at the level of words or morphemes, and the results are also evaluated against the two references (a segmentation in words or in morphemes).",
"Results are in Table 4.",
"erences, especially for the LF metric.",
"This again shows the tendency of the unigram model to over-segment the training sentences.",
"With word supervision, we observe a shift in behaviour that is consistent with the provided annotations: better word-level metrics with word-based annotations, and accordingly, a decrease of performance for morpheme-based scores.",
"With morpheme supervision, results are more contrasted: an improvement for word segmentation (because some words are also morphemes) that is not matched for morpheme boundaries.",
"Looking at the detailed results (see appendix A.1, Table 7), one can see that this is due to an undersegmentation, which yields a poor recall at the boundary and token levels.",
"Here, the main remaining benefit of supervision is an increase in the LF score.",
"These preliminary results suggest that considering only one type of boundary is a too naive view of the segmentation process and does not allow us to fully benefit from annotated data.",
"They call for models that would carefully distinguish boundaries within words and between words, with appropriate supervision for each of these levels.",
"It is noteworthy that dictionary supervision almost deterministically ensures that the input word types will occur in the segmented output.",
"For instance, 96% of the words in the Mboshi supervision dictionary are found in the output of the",
"d.mix+2gram method, whereas we only find 44% with fully unsupervised learning.",
"Similar trends are observed for Japhug.",
"Some remaining errors are, however, observed: in the example of Figure 3, the word bana' belongs to the supervision dictionary but remains attached to the following word ba'.",
"Additional examples are in appendix A.2.",
"This may be because both words bana' and ba' often occur together, a cooccurrence that can not be captured by our unigram model (Goldwater et al., 2009).",
"Unsupervised segmentation is a generic NLP task that can be performed at multiple levels of analysis: a document segmented in sections, a speech segmented in utterances, an utterance segmented in words, a word segmented in morphemes, syllables or phonemes.",
"It has been studied in multiple ways, and we report here recent work related to word discovery for language documentation, noting that the same methods also apply to the unsupervised segmentation of continuous speech into words' (de Marcken, 1996) which has given rise to a vast literature on language acquisition.",
"Recently, this task has become central in preprocessing pipelines, with new implementations of simple models (Sen-nrich et al., 2016; Kudo and Richardson, 2018).",
"Linear segmentation models in the Bayesian realm can be traced back to (Goldwater et al., 2006, 2009).",
"They were extended with nesting in (Mochihashi et al., 2009), where the base distribution of the Dirichlet Process is a char-based nonparametric model; and in (Uchiumi et al., 2015; Lser and Allauzen, 2016), who consider hidden state variables in the word generation process.",
"This extension enables, for instance, to jointly learn segmentation and PoS tagging or to introduce some morphotactics in the model.",
"Other sources of weak supervisions along these lines concern the use of higher-order n-grams and of prosodic cues (Doyle and Levy, 2013).",
"Finally, (Brschinger and Johnson, 2012) (with particle filtering techniques) and (Neubig, 2014) (with block sampling) study ways to speed up inference.",
"The unsupervised techniques exposed in Section 2 only depend on the design of a probabilistic word generation process.",
"This means that they are also readily applicable when this process is conditioned to some input, for instance, when a translation is available as an additional information source.",
"This setup is notably studied in (Neubig et al., 2011; Stahlberg et al., 2012), and also considered, with radically different tools, in (Anastasopoulos and Chiang, 2017; Godard et al., 2018c).",
"A somewhat richer trend of works aimed at informing word segmentation relies on the model of adaptor grammars (AG) of Johnson et al. (2007), applied to the segmentation task as early as (John-son, 2008).",
"AGs generalise finite-state models such as dpseg and pypseg by modelling trees and subtrees, rather than mere strings.",
"Their use necessitates a context-free description of the language, which enables to integrate information regarding word and syllable structures.",
"Even generic descriptions can be useful, but finding the most appropriate and effective one is challenging (Johnson and Goldwater, 2009; Eskander et al., 2016).",
"This formalism has also been used to introduce syntactic information (Johnson et al., 2014), prosodic information (Brschinger and Johnson, 2014), and partial annotations (Sirts and Goldwater, 2013).",
"Recent software packages for AGs are presented in (Bernard et al., 2020) and (Eskander et al., 2020).",
"Using AGs comes, however, with a high computational price, as the Gibbs sampling process typically requires repeated parses of the corpus, even though cheaper estimation techniques may also be considered (Cohen et al., 2010).",
"As our goal is to integrate learning techniques in interactive annotation tools, AGs were not deemed appropriate, and we explored simpler alternatives.",
"Similar arguments apply to the use of neural networks, which have attracted a growing interest even for very low-resource languages, combining supervised segmentation methods (Moeng et al., 2021; Liu et al., 2021) with cross-lingual transfer or data augmentation techniques (Silfverberg et al., 2017; Kann et al., 2018; Lane and Bird, 2020).",
"In this work, we have studied various ways to use weak supervision for automatic word segmentation.",
"In language documentation scenarios, such supervision is often available, taking the form of a partial annotation or word lists.",
"Bayesian non-parametric models lend themselves well to this setting, and our experiments have shown that two variants of a simple unigram model were getting a substantial boost from weak supervision, a result that has been obtained with two languages currently being documented.",
"The most effective approach seems to start with a small set of fully segmented data, which helps learning in two ways: as a training signal for segmentation and as lexical prior for the base distribution.",
"Based on this observation, we have further evaluated the longer-term benefits of an incremental training regime and also contrasted the improvement obtained using a word-based vs a morpheme-based vocabulary list.",
"Our future work will continue to explore the interplay between word and morpheme segmentations, as both are required in actual documentation settings, possibly extending our analyses on additional languages.",
"We will also consider supervising the annotation process with lists of non-inflected forms , which requires to jointly learn inflectional patterns and segmentation.",
"Finally, our main objective remains to integrate these techniques into an annotation platform and evaluate how much they help speed up the annotation process, hence the need to control the run-time of our algorithms.",
"This work was partly funded by French ANR and German DFG under grant ANR-19-CE38-0015 (CLD 2025).",
"The authors wish to thank Alexis Michaud and Guillaume Jacques for their help in preparing the Japhug corpus."
] | [
"abstain",
"abstain",
"method",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"method",
"objective",
"method",
"abstain",
"objective",
"other",
"objective",
"method",
"other",
"abstain",
"abstain",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"method",
"abstain",
"result",
"abstain",
"objective",
"objective",
"method",
"objective",
"other",
"other"
] |
[
"We study the settings for which deep contextual embeddings (e.g., BERT) give large improvements in performance relative to classic pretrained embeddings (e.g., GloVe), and an even simpler baseline random word embeddingsfocusing on the impact of the training set size and the linguistic properties of the task.",
"Surprisingly, we find that both of these simpler baselines can match contextual embeddings on industry-scale data, and often perform within 5 to 10% accuracy (absolute) on benchmark tasks.",
"Furthermore, we identify properties of data for which contextual embeddings give particularly large gains: language containing complex structure, ambiguous word usage, and words unseen in training.",
"In recent years, rich contextual embeddings such as ELMo (Peters et al., 2018) and BERT (Devlin et al., 2018) have enabled rapid progress on benchmarks like GLUE (Wang et al., 2019a) and have seen widespread industrial use (Pandu Nayak, 2019).",
"However, these methods require significant computational resources (memory, time) during pretraining, and during downstream task training and inference.",
"Thus, an important research problem is to understand when these contextual embeddings add significant value vs. when it is possible to use more efficient representations without significant degradation in performance.",
"As a first step, we empirically compare the performance of contextual embeddings with classic embeddings like word2vec (Mikolov et al., 2013) and GloVe (Pennington et al., 2014).",
"To further understand what performance gains are attributable to improved embeddings vs. the powerful downstream models that leverage them, we also compare with a simple baseline fully random embedEqual contribution.",
"dings which encode no semantic or contextual information whatsoever.",
"Surprisingly, we find that in highly optimized production tasks at a major technology company, both classic and random embeddings have competitive (or even slightly better!) performance than the contextual embeddings.",
"1 , 2 To better understand these results, we study the properties of NLP tasks for which contextual embeddings give large gains relative to non-contextual embeddings.",
"In particular, we study how the amount of training data, and the linguistic properties of the data, impact the relative performance of the embedding methods, with the intuition that contextual embeddings should give limited gains on data-rich, linguistically simple tasks.",
"In our study on the impact of training set size, we find in experiments across a range of tasks that the performance of the non-contextual embeddings (GloVe, random) improves rapidly as we increase the amount of training data, often attaining within 5 to 10% accuracy of BERT embeddings when the full training set is used.",
"This suggests that for many tasks these embeddings could likely match BERT given sufficient data, which is precisely what we observe in our experiments with industry-scale data.",
"Given the computational overhead of contextual embeddings, this exposes important trade-offs between the computational resources required by the embeddings, the expense of labeling training data, and the accuracy of the downstream model.",
"To better understand when contextual embeddings give large boosts in performance, we identify three linguistic properties of NLP tasks which help explain when these embeddings will provide gains: Complexity of sentence structure : How interdependent are different words in a sentence?",
"1 This aligns with recent observations from experiments with classic word embeddings at Apple (R e et al., 2020).",
"2 These tasks are proprietary, so we share these results anecdotally as motivation for our study.",
"Ambiguity in word usage : Are words likely to appear with multiple labels during training?",
"Prevalence of unseen words : How likely is encountering a word never seen during training?",
"Intuitively, these properties distinguish between NLP tasks involving simple and formulaic text (e.g., assistant commands) vs. more unstructured and lexically diverse text (e.g., literary novels).",
"We show on both sentiment analysis and NER tasks that contextual embeddings perform significantly better on more complex, ambiguous, and unseen language, according to proxies for these properties.",
"Thus, contextual embeddings are likely to give large gains in performance on tasks with a high prevalence of this type of language.",
"We discuss the different types of word embeddings we compare in our study: contextual pretrained embeddings, non-contextual pretrained embeddings, and random embeddings; we also discuss the relative efficiency of these embedding methods, both in terms of computation time and memory (Sec. 2.1).",
"Pretrained contextual embeddings Recent contextual word embeddings, such as BERT (De-vlin et al., 2018) and XLNet (Yang et al., 2019), consist of multiple layers of transformers which use self-attention (Vaswani et al., 2017).",
"Given a sentence, these models encode each token into a feature vector which incorporates information from the token's context in the sentence.",
"Pretrained non-contextual embeddings Noncontextual word embeddings such as GloVe (Pen-nington et al., 2014), word2vec (Mikolov et al., 2013), and fastText (Mikolov et al., 2018) encode each word in a vocabulary as a vector; intuitively, this vector is meant to encode semantic information about a word, such that similar words (e.g., synonyms) have similar embedding vectors.",
"These embeddings are pretrained from large language corpora, typically using word co-occurrence statistics.",
"Random embeddings In our study, we consider random embeddings (e.g., as in Limsopatham and Collier (2016)) as a simple and efficient baseline that requires no pretraining.",
"Viewing word embeddings as n -byd matrices ( n : vocabulary size, d : embedding dimension), we consider embedding matrices composed entirely of random values.",
"To reduce the memory overhead of storing these n d random values to O ( n ) , we use circulant random matrices (Yu et al., 2017) as a simple and efficient approach (for more details, see Appendix A.1).",
"3 , 4 2.1 System Efficiency of Embeddings We discuss the computational and memory requirements of the different embedding methods, focusing on downstream task training and inference.",
"5 Computation time For deep contextual embeddings, extracting the word embeddings for tokens in a sentence requires running inference through the full network, which takes on the order of 10 ms on a GPU.",
"Non-contextual embeddings (e.g., GloVe, random) require negligible time ( O ( d ) ) to extract an embedding vector.",
"Memory Using contextual embeddings for downstream training and inference requires storing all the model parameters, as well as the model activations during training if the embeddings are being fine-tuned (e.g., 440 MB to store BERTBASE parameters, and on the order of 5-10 GB to store ac-tivations).",
"Pretrained non-contextual embeddings (e.g., GloVe) require O ( nd ) to store a n -byd embedding matrix (e.g., 480 MB to store a 400k by 300 GloVe embedding matrix).",
"Random embeddings take O (1) memory if only the random seed is stored, or O ( n ) if circulant random matrices are used (e.g., 1.6 MB if n = 400 k).",
"We provide an overview of our experimental protocols (Section 3.1), the results from our study on the impact of training set size (Section 3.2), and the results from our linguistic study (Section 3.3).",
"We show that the gap between contextual and noncontextual embeddings often shrinks as the amount of data increases, and is smaller on language that is simpler based on linguistic criteria we identify.",
"To study the settings in which contextual embeddings give large improvements, we compare",
"3 Note that one could also simply store the random seed, though this requires regenerating the embedding matrix every",
"time it is accessed.",
"4 We provide an efficient implementation of circulant random embedding matrices here: https://github.com/ HazyResearch/random_embedding .",
"5 Pretrained contextual and non-contextual embeddings also require significant computational resources during pretraining.",
"For example training BERTBASE takes 4 days on 16 TPU chips.",
"them to GloVe and random embeddings across a range of named entity recognition (NER) (Tjong Kim Sang and De Meulder, 2003), sentiment analysis (Kim, 2014), and natural language understanding (Wang et al., 2019a) tasks.",
"We choose these lexically diverse tasks as examples of word, sentence, and sentence-pair classification tasks, respectively.",
"For our embeddings, we consider 768-dimensional pretrained BERTBASE word embeddings, 300-dimensional publicly available GloVe embeddings, and 800-dimensional random circulant embeddings.",
"We keep the embedding parameters fixed during training for all embedding types (no fine-tuning), to isolate the benefits of pretraining from the benefits of task training.",
"We use a CNN model (Kim, 2014) for sentiment analysis and a BiLSTM (Akbik et al., 2018; Wang et al., 2019a) for the NER and General Language Understanding Evaluation (GLUE) tasks.",
"For more details on the tasks, models, and training protocols, please see Appendix A. 3.2 Impact of Training Data Volume We show that the amount of downstream training data is a critical factor in determining the relative performance of contextual vs. non-contextual embeddings.",
"In particular, we show in representative tasks in Figure 1 that the performance of the non-contextual embedding models improves quickly as the amount of training data is increased (plots for all tasks in Appendix B).",
"6 As a result of this improvement, we show in Table 1 that across tasks when the full training set is used, the non-contextual embeddings can often (1) perform within 10% absolute accuracy of the contextual 6 We provide theoretical support for why random embeddings perform strongly given sufficient data in Appendix B.3.",
"embeddings, and (2) match the performance of the contextual embeddings trained on 1x-16x less data, while also being orders of magnitude more computationally efficient.",
"In light of this, ML practitioners may find that for certain real-world tasks the large gains in efficiency are well worth the cost of labeling more data.",
"Specifically, in this table we show for each task the difference between the accuracies attained by BERT vs. GloVe and random (note that random sometimes beats GloVe!), as well as the largest integer n { 1 , 4 , 16 , 64 , 256 } such that BERT trained on 1 n of the training set still outperforms non-contextual embeddings trained on the full set.",
"In this section, we aim to identify properties of the language in a dataset for which contextual embeddings perform particularly well relative to noncontextual approaches.",
"Identifying such properties would allow us to determine whether a new task is likely to benefit from contextual embeddings.",
"As a first step in our analysis, we evaluate the different embedding types on the GLUE Diagnostic Dataset (Wang et al., 2019a).",
"This task defines four categories of linguistic properties; we observe that the contextual embeddings performed similarly to the non-contextual embeddings for three categories, and significantly better for the predicate-argument structure category (Matthews correlation coefficients of .33, .20, and .20 for BERT, GloVe, and random, respectively. See Appendix C.2.1 for more detailed results).",
"This category requires understanding how sentence subphrases are composed together (e.g., prepositional phrase attachment, and identifying a verb's subject and object).",
"Motivated by the observation that contextual embeddings are systematically better on specific types of linguistic phenomena, we work to identify simple and quantifiable properties of a downstream task's language which correlate with large boosts in performance from contextual embeddings.",
"In the context of both word-level (NER) and sentence-level (sentiment analysis) classification tasks, we define metrics that measure (1) the complexity of text structure, (2) the ambiguity in word usage, and (3) the prevalence of unseen words (Sec-tion 3.3.1), and then show that contextual embeddings attain significantly higher accuracy than noncontextual embeddings on inputs with high metric values (Section 3.3.2, Table 2).",
"We now present our metric definitions for NER and sentiment analysis, organized by the above three properties (detailed definitions in Appendix C).",
"Complexity of text structure We hypothesize that language with more complex internal structure will be harder for non-contextual embeddings.",
"We define the metrics as follows: NER : We consider the number of tokens spanned by an entity as its complexity metric (e.g., George Washington spans 2 tokens), as correctly labeling a longer entity requires understanding the relationships between the different tokens in the entity name.",
"Sentiment analysis : We consider the average distance between pairs of dependent tokens in a sentence's dependency parse as a measure of the sentence's complexity, as long-range dependencies are typically a challenge for NLP systems.",
"Ambiguity in word usage We hypothesize that non-contextual embeddings will perform poorly in disambiguating words that are used in multiple different ways in the training set.",
"We define the metrics as follows: NER : We consider the number of labels (person, location, organization, miscellaneous, other) a token appears with in the training set as a measure of its ambiguity (e.g., Washington appears as a person, location, and organization in CoNLL-2003).",
"Sentiment analysis : As a measure of a sentence's ambiguity, we take the average over the words in the sentence of the probability that the word is positive in the training set, and compute the entropy of a coin flip with this probability.",
"7 Prevalence of unseen words We hypothesize that contextual embeddings will perform significantly better than non-contextual embeddings on words which do not appear at all in the training set for the task.",
"We define the following metrics: NER : For a token in the NER input, we consider the inverse of the number of times it was seen in the training set (letting 1 / 0 := ).",
"Sentiment analysis : Given a sentence, we consider as our metric the fraction of words in the sentence that were never seen during training.",
"In Table 2 we show that for each of the metrics defined above, the accuracy gap between BERT and random embeddings is larger on inputs for which the metrics are large.",
"In particular, we split each of the task validation sets into two halves, with points with metric values below the median in one half, and above the median in the other.",
"We see that in 19 out of 21 cases, the accuracy gap between BERT and random embeddings is larger on the slice of the validation set corresponding to large metric values, validating our hypothesis that contextual embeddings provide important boosts in accuracy on these points.",
"In Appendix C.2.2, we present a similar table comparing the performance of BERT and GloVe embeddings.",
"We see that the gap between GloVe and BERT errors is larger above the median than below it in 11 out of 14 of the complexity and am-7 For sentiment tasks with C -labels ( C = 6 for the TREC dataset), we consider the entropy of the average label distribution 1 n (cid:80) ni =1 p ( y | w i ) RC over the sentence words w i .",
"biguity results, which is consistent with our hypothesis that context is helpful for structurally complex and ambiguous language.",
"However, we observe that GloVe and BERT embeddingswhich can both leverage pretrained knowledge about unseen wordsperform relatively similarly to one another above and below the median for the unseen metrics.",
"The original work on ELMo embeddings (Peters et al., 2018) showed that the gap between contextual and non-contextual embeddings narrowed as the amount of training data increased.",
"Our work builds on these results by additionally comparing with random embeddings, and by studying the linguistic properties of tasks for which the contextual embeddings give large gains.",
"Our work is not the first to study the downstream performance of embeddings which do not require any pretraining.",
"For example, in the context of neural machine translation (NMT) it is well-known that randomly-initialized embeddings can attain strong performance (Wu et al., 2016; Vaswani et al., 2017); the work of Qi et al. (2018) empirically compares the performance of pretrained and randomly-initialized embeddings across numerous languages and dataset sizes on NMT tasks, showing for example that the pretrained embeddings typically perform better on similar language pairs, and when the amount of training data is small (but not too small).",
"Furthermore, as mentioned in Section 2, random embeddings were considered as a baseline by Limsopatham and Collier (2016), to better understand the gains from using generic vs. domain-specific word embeddings for text classification tasks.",
"In contrast, our goal for using random embeddings in our study was to help clarify when and why pretraining gives gains, and to expose an additional operating point in the trade-off space between computational cost, data-labeling cost, and downstream model accuracy.",
"We compared the performance of contextual embeddings with non-contextual pretrained embeddings and with an even simpler baselinerandom embeddings.",
"We showed that these non-contextual embeddings perform surprisingly well relative to the contextual embeddings on tasks with plentiful labeled data and simple language.",
"While much recent and impressive effort in academia and industry has focused on improving state-of-the-art performance through more sophisticated, and thus increasingly expensive, embedding methods, this work offers an alternative perspective focused on realizing the trade-offs involved when choosing or designing embedding methods.",
"We hope this work inspires future research on better understanding the differences between embedding methods, and on designing simpler and more efficient models.",
"We gratefully acknowledge the support of DARPA under Nos.",
"FA87501720095 (D3M), FA86501827865 (SDH), and FA86501827882 (ASED); NIH under No.",
"U54EB020405 (Mo-bilize), NSF under Nos.",
"CCF1763315 (Be-yond Sparsity), CCF1563078 (Volume to Ve-locity), and 1937301 (RTML); ONR under No.",
"N000141712266 (Unifying Weak Supervision); the Moore Foundation, NXP, Xilinx, LETI-CEA, Intel, IBM, Microsoft, NEC, Toshiba, TSMC, ARM, Hitachi, BASF, Accenture, Ericsson, Qualcomm, Analog Devices, the Okawa Foundation, American Family Insurance, Google Cloud, Swiss Re, the Stanford Graduate Fellowship in Science and Engineering, and members of the Stanford DAWN project: Teradata, Facebook, Google, Ant Financial, NEC, VMWare, and Infosys.",
"The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon.",
"Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views, policies, or endorsements, either expressed or implied, of DARPA, NIH, ONR, or the U.S. Government."
] | [
"method",
"result",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"result",
"result",
"method",
"result",
"result",
"abstain",
"method",
"abstain",
"objective",
"objective",
"objective",
"abstain",
"result",
"abstain",
"method",
"other",
"other",
"other",
"other",
"method",
"method",
"abstain",
"method",
"other",
"other",
"other",
"other",
"other",
"result",
"result",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"result",
"abstain",
"result",
"result",
"result",
"other",
"abstain",
"objective",
"other",
"other",
"objective",
"method",
"result",
"result",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other"
] |
[
"Mamoru Komachi Tokyo Metropolitan University [email protected]",
"Abstract",
"An event-noun is a noun that has an argument structure similar to a predicate.",
"Recent works, including those considered state-of-the-art, ignore event-nouns or build a single model for solving both Japanese predicate argument structure analysis (PASA) and event-noun argument structure analysis (ENASA).",
"However, because there are interactions between predicates and event-nouns, it is not sufficient to target only predicates.",
"To address this problem, we present a multi-task learning method for PASA and ENASA.",
"Our multitask models improved the performance of both tasks compared to a single-task model by sharing knowledge from each task.",
"Moreover, in PASA, our models achieved state-of-the-art results in overall F1 scores on the NAIST Text Corpus.",
"In addition, this is the first work to employ neural networks in ENASA.",
"Japanese predicate argument structure analysis (PASA) examines semantic structures between the predicate and its arguments in a text.",
"The identifi-cation of the argument structure such as who did what to whom? is useful for natural language processing that requires deep analysis of complicated sentences such as machine translation and recognizing textual entailment.",
"PASA is a task targeted at predicates such as verbs and adjectives.",
"However, there are also many nouns that have event-related arguments in a sentence.",
"We call these nouns that refer to events event-nouns , for example, a verbal noun ( sahen nouns) such as houkoku report or a deverbal noun (nominalized forms of verbs) such as sukui rescue.",
"Figure 1 shows examples of PASA and event-noun argument structure analysis (ENASA).",
"In the NAIST Text Corpus (Iida et al., 2007), both predicates and event-nouns have one of three core",
"case roles, nominative (NOM), accusative (ACC), and dative (DAT) as an argument.",
"According to Iida et al. (2007), predicates have almost no argument in the same bunsetsu 1 phrase.",
"However, in the case of event-nouns, approximately half of the accusative and dative arguments appear in the same bunsetsu phrase.",
"Accordingly, although PASA and ENASA are semantically highly related, they are syntactically different tasks.",
"However, most previous studies focused on predicates only; hence, there are few studies that focus 1 Functional chunk in Japanese.",
"It consists of one or more content words (noun, verb, adjective, etc.) followed by zero or more function words (postposition, auxiliary verb, etc.).",
"A verb phrase in Japanese thus cannot bear noun arguments in the same bunsetsu.",
"on event-nouns (Komachi et al., 2007; Taira et al., 2008).",
"To identify the semantic units of a sentence and to correctly understand syntactic relations, it is not sufficient to target only PASA.",
"Thus, we propose a multi-task learning model that effectively leverages ENASA and improves PASA.",
"Our proposed model is based on an end-to-end multilayer bi-directional recurrent neural network (RNN) used in recent works, and the model has networks that distinguish task-independent information and task-specific information.",
"1. This is the first attempt to design a multi-task learning framework for PASA and ENASA, and we show that our models improve the performance of both tasks.",
"2. Although our model is a simple model that does not consider the interactions between multiple predicates, it achieves a state-of-the-art result on the NAIST Text Corpus (NTC) in PASA by combining syntactic information as one of the features.",
"3. For ENASA, this is the first work to employ neural networks to effectively incorporate PASA.",
"Many machine learning-based methods have been studied in Japanese PASA.",
"Traditional models take pointwise approaches that construct independent models for each core case role (NOM, ACC, DAT).",
"Taira et al. (2008) proposed a supervised model that learns features of each case using decision lists and support vector machines.",
"Imamura et al. (2009) proposed a model that combines a maximum entropy model with a language model trained from large-scale newspaper articles.",
"Hayashibe et al. (2011) designed three models exploiting argument position and type and determined the maximum likelihood output using pairwise comparison.",
"However, the joint approach that optimizes the scores of all predicate-argument pairs in a sentence simultaneously showed better results than the pointwise approach.",
"Yoshikawa et al. (2011) proposed a model that considers dependency between multiple predicate-argument relations using Markov logic networks.",
"Ouchi et al. (2015) jointly optimized the combinations among multiple predicates and arguments in a sentence using a bipartite graph.",
"Except for (Taira et al., 2008), these studies focused on the analysis of predicates while there are few studies that focus on event-nouns.",
"Komachi et al. (2007) decomposed ENASA into two tasks: event-hood determination and argument identification; they proposed a supervised method using lexico-syntactic patterns.",
"Event-hood determination is the most important characteristic that semantically differentiates ENASA from PASA.",
"It is a task to determine whether a noun refers to an event (e.g., houkoku can refer to either to report or the outcome of reporting action, a report).",
"Since the previous ENASA models adopted the pointwise approach with a single model, they did not explore the effective features in each task.",
"In contrast, our models simultaneously optimize three core case roles.",
"Moreover, the proposed models allow us to distinguish between task-shared and task-specific features using multi-task learning.",
"Some neural models have achieved higher performance than traditional machine learning models in Japanese PASA.",
"Shibata et al. (2016) replaced Ouchi et al. (2015)'s scoring function with feed forward neural networks.",
"Matsubayashi and Inui (2017) represented a dependency path between a predicate and its argument with path embeddings and showed that even the local model without multiple predicates can outperform a global model.",
"Moreover, some end-to-end models have been proposed in Japanese PASA.",
"Ouchi et al. (2017) proposed an end-to-end model based on the model using eight-layer bi-directional long short-term memory (LSTM) proposed by Zhou and Xu (2015) and considered the interaction of multiple predicates simultaneously using a Grid RNN.",
"Matsubayashi and Inui (2018) combined self-attention with Ouchi et al. (2017)'s model to directly capture interaction among multiple predicate-arguments.",
"In particular, the model improved the performance of arguments that have no syntactic dependency with predicates and achieved a state-of-the-art result on Japanese PASA.",
"Semantic role labeling (SRL) is a similar task to Japanese PASA.",
"Recently, several end-to-end models using neural networks showed high performance in English SRL (Zhou and Xu, 2015; He et al., 2017; Tan et al., 2018).",
"Strubell et al. (2018) proposed a multi-task learning model that jointly learned dependency parsing, part-of-speech tagging, predicate detection, and SRL based on multi-head self-attention.",
"Ouchi et al. (2018) proposed a span-based SRL model using bi-directional LSTMs and achieved state-of-the-art results.",
"The authors scored all possible spans for each label and selected correct spans satisfying constraints when decoding.",
"In terms of the event-noun research, Gerber and Chai (2010) used pointwise mutual information (PMI) as a feature for 10 event-nouns with high frequency and iden-tified semantic roles using a logistic regression model.",
"There were several LSTM models that also achieved high accuracy gains in Chinese SRL ( Wang et al., 2015; Roth and Lapata, 2016; Sha et al., 2016; Marcheggiani et al., 2017; Qian et al., 2017).",
"For event-nouns, Li et al. (2009) showed that combining effective features in verbal SRL with nominal SRL can improve results.",
"Although the authors did not demonstrate that verbal SRL also improves performance in combination with nominal SRL, we show that our model improves performance in both PASA and ENASA.",
"Japanese predicate (event-noun) argument structure analysis is a task to extract arguments for certain predicates (event-nouns) and assign three case labels, NOM, ACC and DAT (Iida et al., 2007).",
"Arguments are divided into four categories (Taira et al., 2008) according to the positions with their predicates (event-nouns).",
"Inter-zero Zero anaphoric arguments and their predicate (event-noun) are not in the same sentence.",
"A sentence w = w 1 ; w 2 ; (cid:1) (cid:1) (cid:1) ; w T and a predicate (event-noun) p = p 1 ; p 2 ; (cid:1) (cid:1) (cid:1) ; p q are given as input.",
"Iida et al. (2006), Imamura et al. (2009), and Sasano and Kurohashi (2011) also analyze Inter-zero, which is a difficult task because the whole document must be searched.",
"Following existing research (Ouchi et al., 2015, 2017; Matsubayashi and Inui, 2017, 2018; Taira et al., 2008), we only focus on three categories where arguments and their predicate (event-noun) are in the same sentence.",
"In addition, we exclude the Bunsetsu category from the PASA evaluation following Ouchi et al. (2017) and Matsubayashi and Inui (2018).",
"Our single model is based on an end-to-end approach (Zhou and Xu, 2015; Ouchi et al., 2017; Matsubayashi and Inui, 2018).",
"Additionally, we add new features.",
"Figure 2 shows the network architecture of our base model.",
"Each word w t 2 [ w 1 ; (cid:1) (cid:1) (cid:1) ; w T ] is converted to a feature representation x t 2 [ x 1 ; (cid:1) (cid:1) (cid:1) ; x T ] at the input layer.",
"We use six types of features.",
"The feature representation x t is defined as follows: x t = x as t (cid:8) x posi t (cid:8) x dep t (cid:8) x type t (cid:8) x task p (1) where ( (cid:8) ) indicates concatenation of vectors.",
"Argument Structure Predicate (event-noun) w p and argument candidates w t are converted to the vectors x as t 2 R 2 d w by the word embedding matrix.",
"Position This is a feature that represents the positional relation between w p and w t .",
"The feature is calculated by subtracting the word index of argument candidates from the word index of predicates (event-nouns).",
"We use two types of units to represent relative position: word unit p word t and bunsetsu unit p bunsetsu t , which are converted to the word positional vector p word t 2 R d p and the bunsetsu positional vector p bunsetsu t 2 R d p , respectively, by the word and bunsetsu positional embed-3407 Figure 2: End-to-end single model.",
"ding matrices.",
"We concatenate these two vectors and obtain the positional vectors x posi t 2 R 2 d p .",
"Dependency This is a feature that represents the dependency relation between w p and w t .",
"We set five types of dependency relations:",
"i).",
"Argument candidates depend on the predicate (event-noun).",
"ii).",
"The predicate (event-noun) depends on the argument candidates.",
"iii).",
"No dependency relations between the predicate (event-noun) and argument candidates.",
"iv).",
"The predicate and candidate arguments are in the same bunsetsu.",
"v).",
"The event-noun and candidate arguments are in the same bunsetsu.",
"The dependency relation d t is converted to the dependency vector x dep t 2 R d d by the dependency relation embedding matrix.",
"The dependency type in Figure 2 shows how to make dependency features in Figure 1b as an example.",
"We define the dependency type from the syntactic information annotated in the NTC.",
"In previous work, dependency features are used differently from our study.",
"Imamura et al. (2009) used a binary feature that represents whether or not there is a dependency relation between the predicate and its arguments.",
"We employ more fine-grained relation types to adapt to event-nouns.",
"Matsubayashi and Inui (2017) represented the interactions between a predicate and its arguments using path embedding.",
"In contrast, we define different types for a predicate and event-noun to distinguish event-nouns from predicates and learn embeddings to find the associated latent structures.",
"Event-hood Type This is a binary feature to flag all predicates (event-nouns) in a sentence inspired by Matsubayashi and Inui (2018).",
"The purpose of this feature is to prevent predicates from becoming arguments and to help some event-nouns become arguments.",
"The event-hood type vector x type t 2 R 2 of a candidate indicates [0,1] if the candidate is a predicate, [1,0] if the candidate is an event-noun, and [0,0] otherwise.",
"The predicate and event-noun are annotated in the NTC.",
"Task Label This is a binary feature vector x task p 2 R 1 that indicates 1 if the task is predicate argument structure analysis; otherwise, 0.",
"We use the gated recurrent unit (GRU) ( Cho et al., 2014) for RNN.",
"The RNN layers are made up of L layers of stacked bi-directional GRU.",
"Additionally, we apply the residual connections (He et al., 2016) following Ouchi et al. (2017); Matsubayashi and Inui (2018).",
"At each time step t , the hidden state h lt 2 R d h in the l 2 [1 ; (cid:1) (cid:1) (cid:1) ; L ] th layer is calculated as follows: h lt = { g l ( h l (cid:0) 1 t ; h lt (cid:0) 1 ) ( l = odd) g l ( h l (cid:0) 1 t ; h l t +1 ) ( l = even) (2) where g l ( (cid:1) ) denotes the l -th layer GRU function.",
"In the output layer, we input each hidden state h",
"Then, we obtain the output vector o t using the softmax function: o t = softmax( W o h Lt + b o ) (3) where W o 2 R 4 (cid:2) d h is the parameter matrix, and b o 2 R 4 is the bias term.",
"The output vector represents the probability for each argument candidate 3408",
"over four labels, [NOM, ACC, DAT, ELSE].",
"ELSE denotes that the candidate argument does not have a case label.",
"In testing, the maximum probability label is selected as the output label.",
"We train the model using the cross-entropy loss function.",
"Multi-task learning has been successfully applied to various natural language processing tasks ( Collobert et al., 2011; Sgaard and Goldberg, 2016; Luong et al., 2016; Hashimoto et al., 2017; Liu et al., 2017; Stoyanov et al., 2018; Marasovic and Frank, 2018; Strubell et al., 2018).",
"One of the advantages of multi-task learning is that it learns better representation, which is robust against task-dependent noise by increasing training data.",
"In this paper, we introduce multitask learning to PASA and ENASA for the first time.",
"We propose three methods to extend the end-to-end single model to the multi-task learning model in the input layer, RNN layer, and output layer.",
"Figure 3 shows the proposed models.",
"Our final model combines all three methods (Figure 3e).",
"Even if the surface form is the same, the contexts are different for predicates and event-nouns.",
"For example, the event-noun houkoku report in Figure 1b has an argument in the same bunsetsu unlike predicates.",
"Moreover, the event-noun also has a nominative argument role for the predicate mijikai short.",
"Therefore, given this, we prepare a task-specific word embedding matrix that addresses the task-specific distribution of words.",
"The predicate is converted to PASA-specific vectors x p t 2 R d w by the PASA-specific predicate embedding matrix.",
"Similarly, the event-noun is converted to ENASA-specific vectors x n t 2 R d w by the ENASA-specific event-noun embedding matrix.",
"These matrices are randomly initialized and can be learned during training.",
"The feature vector x t is defined as follows: x t = { x t (cid:8) x p t (PASA) x t (cid:8) x n t (ENASA) (4) 4.2 Multi RNN Layer Previous work (Sgaard and Goldberg, 2016; Hashimoto et al., 2017) proposed hierarchical multi-task learning models that exploited features obtained from easy tasks for difficult tasks.",
"These studies showed that performance improves when low-layer RNN representations are trained in easy tasks and high-layer RNN are leveraged for difficult tasks.",
"Therefore, we construct a network that hierarchically overlaps a task-specific RNN on a task-independent RNN.",
"Lower RNN layers learn task-independent knowledge representations.",
"Then, the task-specific RNN adjusts the representations for each task.",
"At each time step t , the 3409 hidden state m l t 2 R d h in the l 2 [1 ; (cid:1) (cid:1) (cid:1) ; L ] -th layer is calculated as follows: m l t = { g l ( m l (cid:0) 1 t ; m l t (cid:0) 1 ) ( l = odd) g l ( m l (cid:0) 1 t ; m l t +1 ) ( l = even) (5) g l ( (cid:1) ) = { g l p ( (cid:1) ) (PASA) g l n ( (cid:1) ) (ENASA) (6) where g l ( (cid:1) ) , g l p ( (cid:1) ) , and g l n ( (cid:1) ) denote the l -th layer GRU functions.",
"The position of arguments is different with respect to predicates and event-nouns.",
"For example, predicates seldom have arguments in the same bunsetsu.",
"In contrast, event-nouns often have arguments in the same bunsetsu, compound nouns, for example.",
"Therefore, it is intuitive and natural to divide the output layer into task-independent and task-specific layers.",
"The task-specific output vectors are calculated as follows: o p t = W po h t + b po (7) o n t = W no h t + b no (8) g t = (cid:27) ( W g h t + b g ) (9) where W po ; W no ; W g 2 R 4 (cid:2) d h are the parameter matrices, and b po ; b no ; b g 2 R 4 are the bias terms.",
"h t is the hidden state of the last layer.",
"We combine task-specific output vectors o p t ; o n t with task-independent output vector o t by the gate g t .",
"c t = { g t o t + (1 (cid:0) g t ) o p t (PASA) g t o t + (1 (cid:0) g t ) o n t (ENASA) (10) o t = softmax( c t ) (11) where ( ) denotes the element-wise product.",
"The output vector o t represents the probability of [NOM, ACC, DAT, ELSE].",
"We use NTC 1.5 for our experiments.",
"We divide the dataset into training, development, and test sets in the same way as Taira et al. (2008).",
"We use morphological and syntactic information, such as the word boundaries, the bunsetsu boundaries and the dependency relations provided in the NTC.",
"case label in a sentence, we set an argument that only has a dependency relation with a predicate as a correct answer and assign the ELSE label to other arguments.",
"If there is no dependency relation, we set an argument with the shortest distance j w p (cid:0) w t j as a correct answer.",
"If the distance is equal, an argument on the left side of a predicate is considered a correct answer.",
"In NTC 1.5, if there is a predicate phrase, such as verbal noun + suru , suru is annotated as a predicate word.",
"We consider the verbal noun as the predicate word at the preprocessing step to match the surface of a predicate with that of an event-noun.",
"Take the predicate houkoku-suru to report and an event-noun houkoku report as an example.",
"Although w p before preprocessing are suru and houkoku , w p are unified to houkoku after preprocessing.",
"We use pre-trained embeddings 2 for the initial values of the word embedding matrix.",
"The initial values of the other embedding matrices are sampled according to a uniform distribution of [-0.25,0.25].",
"We convert words appearing more than once in the training set into word vectors and the remaining words into the unknown word vector.",
"We adopt AdaDelta ( = 10 (cid:0) 6 (cid:26) = 0 : 95 ) as the optimization method.",
"We set the number of epochs to 20 and evaluate the model with the highest F1 scores on the development set.",
"Table 1 shows the hyperparameters.",
"We evaluate each model with the NTC 1.5 test.",
"The experimental results for the argument structure analysis of predicates and event-nouns are shown in Tables 2 and 3. 2 http://www.asahi.com/shimbun/medialab/word embedding 3410 Dep Zero Method ALL SD ALL NOM ACC DAT ALL NOM ACC DAT Ouchi+ 17 81.42 88.17 88.75 93.68 64.38 47.12 50.65 32.35 7.52 M&I 17 83.50 (cid:6) 0 : 17 89.89 91.19 95.18 61.90 51.79 54.69 41.8 17 M&I 18 83.94 (cid:6) 0 : 12 90.26 90.88 94.99 67.57 55.55 57.99 48.9 23 Single 83.62 (cid:6) 0 : 17 90.09 90.45 94.84 69.77 51.87 54.73 43.48 11.40 Multi-input 83.88 (cid:6) 0 : 11 90.27 90.65 95.12 69.86 53.01 55.82 44.68 10.77 Multi-RNN 83.91 (cid:6) 0 : 23 90.17 90.58 95.07 67.94 53.31 55.85 45.71 9.97 Multi-output 83.77 (cid:6) 0 : 20 90.13 90.68 94.89 68.16 53.93 56.73 43.79 9.45 Multi-ALL 83.82 (cid:6) 0 : 10 90.15 90.68 95.06 67.56 53.50 56.37 45.36 8.70 Multi-RNN+ DEP 84.55 (cid:6) 0 : 11 90.69 91.28 95.25 70.07 51.56 54.29 42.67 1.85 Multi-output+ DEP 84.73 (cid:6) 0 : 11 90.82 91.46 95.29 70.69 52.29 55.14 42.15 1.81 Multi-ALL+ DEP 84.75 (cid:6) 0 : 16 90.88 91.40 95.37 71.02 52.35 55.10 42.54 2.32 M&I 17 (ens. of 5) 84.07 90.24 91.59 95.29 62.61 53.66 56.47 44.7 16 M&I 18 (ens. of 10) 85.34 91.26 91.84 95.57 70.8 58.07 60.21 52.5 26 Multi-RNN+ DEP (ens. of 5) 85.85 91.61 92.11 95.87 72.63 53.41 55.96 46.10 0 Multi-output+ DEP (ens. of 5) 85.83 91.52 92.12 95.69 72.72 54.35 57.02 45.95 0 Multi-ALL+ DEP (ens. of 5) 86.01 91.63 92.15 95.80 72.95 54.99 57.84 45.20 0 Table 2: F1 scores on the PASA test set.",
"Predicate Argument Structure Analysis The first set of rows in Table 2 shows the results of previous models.",
"Ouchi+ 17 is the model from the Multi-Seq model in (Ouchi et al., 2017).",
"M&I 17 is the model in (Matsubayashi and Inui, 2017).",
"M&I 18 is the model from the MP-POOL-SELFATT model in (Matsubayashi and Inui, 2018).",
"The second set of rows in Table 2 shows the results of the proposed models.",
"These models do not use the dependency feature.",
"Compared with the single model, all multi-task learning models improved the overall F1 scores.",
"Among them, Multi-RNN improved the overall F1 score from the single model by 0.29 points.",
"In previous work, Ouchi et al. (2017); Matsubayashi and Inui (2018) see improvements of 0.27 and 0.55 F1 points in their baseline models by considering multiple predicate-argument interactions.",
"Therefore, we show that multi-task learning with ENASA achieved comparable effects as these studies in PASA.",
"The third set of rows shows the results of proposed models using all features including the dependency feature.",
"Multi-ALL+ DEP achieved the best F1 score among all the models including previous state-of-the-art models.",
"In particular, the dependency feature was effective for Dep arguments.",
"On the other hand, the performance for Zero arguments was poor.",
"This result suggests that the dependency feature causes the model to optimize mainly for Dep arguments since Dep arguments are more numerous than Zero arguments.",
"The fourth set of rows shows the results of ensemble models.",
"Overall, our proposed model outperformed the previous ensemble model by 0.67 points in the overall F1 score.",
"Moreover, our models are simple models that independently analyze each predicate in a sentence.",
"Nevertheless, our models achieved higher results than Ouchi et al. (2017); Matsubayashi and Inui (2018).",
"Although recent works have researched the method whereby multiple predicate-argument interactions are considered simultaneously, how to use syntactic in-3411",
"Event-noun Argument Structure Analysis The first set of rows in Table 3 shows the results of a previous model in event-noun argument structure analysis.",
"Taira+ 08 is the model from (Taira et al., 2008).",
"Since its scores are from NTC 1.4, the model cannot be directly compared to our models.",
"Compared with the single model, all multi-task models improved the overall F1 scores.",
"However, Multi-ALL+ DEP compared unfavorably with Multi-ALL even though it was the best PASA architecture.",
"Therefore, this implies that the dependency type feature between the predicate and its argument is not effective in ENASA.",
"In Figure 4, we compare the PASA results from test sets for each model.",
"In Examples",
"(a),",
"(b) and",
"(d), the single model failed to predict correct arguments but the Multi-RNN model correctly predicted arguments.",
"In Example",
"(a), the single model incorrectly predicted that arguments do not exist in this sentence.",
"Comparing the training set of each task, although the number of event-nouns is approximately one-third of the number of predicates, the number of kessei organize (event-nouns) is approximately twice the number of kessei organize (predicates).",
"Accordingly, we showed that the Multi-RNN model effectively leverages the information of event-nouns using multi-task learning.",
"In Example",
"(b), the single model incorrectly 3412 predicted that the NOM argument does not exist, but the multi-RNN predicted the correct arguments.",
"Comparing the training set, there is sayuu determine (predicate) but not sayuu determine (event-noun).",
"However, there are some kagi key (arguments of predicates) in the PASA training set, and there is one kagi key (argument of event-noun) in the ENASA training set (Example",
"(c)).",
"Moreover, in Example",
"(c), dakai break (event-noun) depends on kagi key like sayuu determine (predicate) in Example",
"(b); however, no predicate depends on kagi key in the training set.",
"Accordingly, the Multi-RNN model also leverages the arguments of event-nouns and the positional relations between event-nouns and their arguments.",
"Example",
"(d) is an interesting case in which a predicate kaihi avoid and its argument sekinin responsibility are located in the same bunsetsu.",
"Although this argument type (Bunsetsu) is excluded from the evaluation target in PASA, it is common as a compound noun in ENASA.",
"Therefore, the single model wrongly predicted that the ACC argument does not exist, but multi-RNN was able to predict the answer using the specific knowledge of event-nouns.",
"In contrast, in Example",
"(e), the single model correctly predicted the answer, but the multi-RNN model failed to predict the correct arguments.",
"Multi-RNN incorrectly predicted that the DAT argument does not exist in this sentence.",
"However, ni , a postpositional particle located after an argument, often indicates a dative case.",
"Nevertheless, multi-RNN often predicted a wrong DAT argument by ignoring ni .",
"Therefore, for DAT analysis, the information of event-nouns adversely affects PASA.",
"In Example",
"(f), the Multi-ALL+ DEP model correctly predicted the answer, but the Multi-ALL model failed.",
"Specifically, Multi-ALL+ DEP correctly predicted that the ACC argument is yakuwari role, which is dependent on hatasu play.",
"However, the Multi-ALL incorrectly predicted that the ACC argument is kaimei solution.",
"Similarly, Multi-ALL without syntactic information made many mistakes, including attributive modification, such as Figure 1c.",
"Table 4 shows the results of the two PASA models for attributive modification instances.",
"Multi-ALL+ DEP considerably outper-ALL NOM ACC DAT Multi-ALL 80.31 83.37 72.16 19.48 Multi-ALL+ DEP 81.83 84.67 74.41 28.31 Table 4: F1 scores on the PASA test set with respect to attributive modifications.",
"formed Multi-ALL for all cases using dependency features.",
"Therefore, these results suggest that the dependency type feature is effective for PASA with respect to attributive modifications.",
"We design a multi-task learning model for predicate and event-noun argument structure analysis.",
"The experiment results show that the multi-task models outperform the single-task model on the NAIST Test Corpus for both tasks.",
"Moreover, our model achieves a state-of-the-art result for PASA.",
"In addition, this is the first work to employ neural networks for ENASA.",
"In future work, we plan to consider multiple predicates and event-nouns."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"result",
"abstain",
"method"
] |
[
"We often talk about events that impact us positively or negatively.",
"For example I got a job is good news, but I lost my job is bad news.",
"When we discuss an event, we not only understand its affective polarity but also the reason why the event is beneficial or detrimental.",
"For example, getting or losing a job has affective polarity primarily because it impacts us finan-cially.",
"Our work aims to categorize affective events based upon human need categories that often explain people's motivations and desires: PHYSIOLOGICAL , HEALTH , LEISURE , SOCIAL , FINANCIAL , COGNITION , and FREEDOM .",
"We create classification models based on event expressions as well as models that use contexts surrounding event mentions.",
"We also design a co-training model that learns from unlabeled data by simultaneously training event expression and event context classifiers in an iterative learning process.",
"Our results show that co-training performs well, producing substantially better results than the individual classifiers.",
"Recent research has focused on identifying affective events in text, which are activities or states that positively or negatively affect the people who experience them.",
"Recognizing affective events in text is challenging because they appear as factual expressions and their affective polarity is often implicit.",
"For example, I broke my arm and I got fired are usually negative experiences, while I broke a record and I went to a concert are typically positive experiences.",
"Several NLP techniques have been developed to recognize affective events, including patient polarity verb bootstrapping (Goyal et al., 2010, 2013), implicature rules (Deng and Wiebe, 2014), label propagation (Ding and Riloff, 2016), pattern-based learning (Vu et al., 2014; Reed et al., 2017), and semantic consistency optimization (Ding and Riloff, 2018).",
"Our research aims to probe deeper and understand not just the polarity of affective events, but the reason for the polarity.",
"Events can impact people in many ways, and understanding why an event is beneficial or detrimental is a fundamental aspect of language understanding and narrative text comprehension.",
"Additionally, many applications could benefit from understanding the nature of affective events, including text summarization, conversational dialogue processing, and mental health therapy or counseling systems.",
"As an illustration, a mental health therapy system can benefit from understanding why someone is in a negative state.",
"If the triggering event for depression is I broke my leg then the reason is about the per-son's Health, but if the triggering event is I broke up with my girlfriend then the reason is based on Social relationships.",
"We hypothesize that the polarity of affective events can often be attributed to a relatively small set of human need categories.",
"Our work is motivated by theories in psychology that explain people's motivations, desires, and overall well-being in terms of categories associated with basic human needs, such as Maslow's Hierarchy of Needs (Maslow et al., 1970) and Fundamental Human Needs (Max-Neef et al., 1991).",
"Drawing upon these works, we propose that the polarity of affective events often arises from 7 types of human needs: PHYSIOLOGICAL , HEALTH , LEISURE , SOCIAL , FINANCIAL , COGNITION , and FREEDOM .",
"For example, I broke my arm has negative polarity because it negatively impacts one's Health, I got fired is negative because it negatively impacts one's Finances, and I am con-fused is negative because it reflects a problem related to Cognition.",
"We explore this hypothesis and tackle the chal-1919 lenge of categorizing affective events in text with respect to these 7 human need categories.",
"As our evaluation data, we use events extracted from personal blog posts and manually labeled with affective polarity in previous work (Ding and Riloff, 2018).",
"These affective events were then subsequently annotated for the human need categories.",
"In this paper, we design several types of classification models that learn from both labeled and unlabeled data.",
"First, we present supervised learning models that use lexical and embedding features for the words in event expressions, as well as models that learn from the sentence contexts surrounding mentions of event expressions.",
"Next, we explore self-training and co-training models that exploit both labeled and unlabeled data for training.",
"The most effective system is a co-training model that uses two classifiers with two different views in an iterative learning process: one classifier only uses the words in an event expression, and the other classifier only uses the contexts surrounding instances of an event expression.",
"Our results show that this co-training model effectively uses unlabeled data to substantially improve results compared to classifiers trained only with labeled data, yielding gains in both precision and recall.",
"Recently, there has been growing interest in recognizing the affective polarity of events.",
"For example, Goyal et al. (2013) developed a bootstrapped learning method to learn patient polarity verbs , which impart affective polarities to their patients.",
"Li et al. (2015) designed methods to extract verb expressions that imply negative opinions from reviews.",
"Rashkin et al. (2016) recently proposed connotation frames to incorporate the connotative polarities for a verb's arguments from the writer's and other event entities' perspectives.",
"Li et al. (2014) proposed a bootstrapping approach to extract major life events from tweets using congratulation and condolence speech acts.",
"Most of these major life events are affective although their work did not identify polarity.",
"Another group of researchers have studied +/effect events (Deng et al., 2013; Choi and Wiebe, 2014) which they previously called bene-factive/malefactive events.",
"Their work mainly focused on inferring implicit opinions through implicature rules (Deng and Wiebe, 2014, 2015).",
"context graph model to identify affective events using label propagation.",
"Reed et al. (2017) demonstrated that automatically acquired patterns could benefit the recognition of first-person related affective sentences.",
"Most recently, Ding and Riloff (2018) developed a semantic consistency model to induce a large set of affective events using three types of semantic relations in an optimization framework.",
"(We use their annotated affective event data set in our work.)",
"All of this previous work only identifies affective events and their polarities.",
"In contrast, our work aims to identify the reason for the affective polarity of an event.",
"The human need categories are inspired by two prior theories.",
"The first one is Maslow's Hierarchy of Needs (Maslow et al., 1970) which was developed to study people's motivations and personalities.",
"The second one is Fundamental Human Needs (Max-Neef et al., 1991) which was developed to help communities identify their strengths and weaknesses.",
"The human need categories are also related to the concept of goals, which has been proposed by (Schank and Abelson, 1977) to understand narrative stories.",
"Goals could be very specific to a character in a particular narrative story.",
"However, but many types of goals originate from universal needs and desires shared by most people (Max-Neef et al., 1991).",
"In addition, our work is also related to research on wish detection (Goldberg et al., 2009), desire fulfillment (Chaturvedi et al., 2016), and modelling protagonist goals and desires (Rahimtoroghi et al., 2017).",
"Self-training is a semi-supervised learning method to improve performance by exploiting unlabeled data.",
"Self-training has been successfully used in many NLP applications such as information extraction (Ding and Riloff, 2015) and syntactic parsing (McClosky et al., 2006).",
"Co-training (Blum and Mitchell, 1998) uses both labeled and unlabeled data to train models that have two different views of the data.",
"Co-training has been previously used for many NLP tasks including spectral clustering (Kumar and Daume, 2011), word sense disambiguation (Mihalcea, 2004), coreference resolution (Phillips and Riloff, 2002), and sentiment analysis (Wan, 2009; Xia et al., 2015).",
"The goal of our research is to categorize affective events based on 7 categories of human needs.",
"To facilitate this work, we build upon a large data set 1920 Physiological Health Leisure Social Finance Cognition Freedom Emotion None 19 (4%) 52 (10%) 75 (14%) 108 (20%) 29 (5%) 26 (5%) 7 (1%) 128 (24%) 98 (18%) Table 1: Distribution of Human Need Categories (each cell shows the frequency and percentage).",
"created for prior research (Ding and Riloff, 2018) which aims to identify affective events.",
"We will refer to this data as the AffectEvent dataset.",
"We will briefly describe this data and the human need category annotations that we added on top of it.",
"The AffectEvent dataset contains events extracted from a personal story corpus that was created by applying a personal story classifier (Gor-don and Swanson, 2009) to 177 million blog posts.",
"The personal story corpus contains 1,383,425 personal story blogs.",
"StanfordCoreNLP (Manning et al., 2014) was used for POS and NER tagging and SyntaxNet (Andor et al., 2016) for parsing.",
"Each event is represented using a frame-like structure to capture the meanings of different types of events.",
"Each event representation contains four components: h Agent , Predicate , Theme , PP i .",
"The Predicate is a simple verb phrase corresponding to an action or state.",
"The Agent is a named entity, nominal, or pronoun, and is extracted using syntactic heuristics rather than semantic role labeling.",
"We use Theme loosely to allow a NP or adjective to fill this role.",
"The PP component is composed of a preposition and a NP.",
"All words in the event are lemmatized, and active and passive voices are normalized to have the same representation.",
"See (Ding and Riloff, 2018) for more details of the event representation.",
"Table 2 shows some examples of extracted events.",
"Affective events impact people in a positive or negative way for a variety of reasons.",
"We hypothesized that the polarity of most affective events arises from the satisfaction or violation of basic human needs.",
"Psychologists have developed theories that explain people's motivations, desires, and overall well-being in terms of categories associated with basic human needs, such as Maslow's Hierarchy of Needs (Maslow et al., 1970) and Fundamental Human Needs (Max-Neef et al., 1991).",
"Based upon this work, we defined 7 human need categories, which are briefly described below.",
"Physiological Needs maintain our body's basic functions (e.g., air, food, water, sleep).",
"Health Needs are to be physically healthy and safe.",
"Leisure Needs are to have fun, to be relaxed, to have leisure time, to appreciate and enjoy beauty.",
"Social Needs are to have good social relations (e.g., family, friendship), to have good self-worth and self-esteem, and to be respected by others.",
"Financial Needs are to obtain and protect financial income, to acquire and maintain valuable possessions, to have a job and satisfying work.",
"Cognition Needs are to obtain skills, information, and knowledge, to receive education, to improve one's intelligence, and to mentally process information correctly.",
"Freedom Needs are the ability to move or change positions freely, and to access things or services in a timely manner.",
"We also defined two categories for event expressions that represent explicit emotions and opinions ( Emo-tions/Sentiments/Opinions ) and events that do not fall into any other categories ( None of the Above ).",
"We added manual annotations for human need categories on top of the manually annotated positive and negative affective events in the AffectEvent dataset.",
"Three people were asked to assign a human need category label to each of the 559 affective events in the AffectEvent test set.",
"Annotators achieved good pairwise inter-annotator agreement ( .65) on this task.",
"The Cohen's kappa scores were =.69, =.66 and =.65.",
"We assigned a single category to each event because most of the affective events fell into just one category in our preliminary study, even though some cases could legitimately be argued 1921 for multiple categories.",
"We discuss this issue further in Section 5.4 The distribution of human need categories is shown in Table",
"1. Since very few affective events were found to belong to the Freedom category, this category was merged into None.",
"Additionally, 17 events received three different labels from the annotators, so they were discarded.",
"The majority label was then assigned to the remaining events, yielding a gold standard data set of 542 affective events with human need category labels.",
"Some of the annotated examples are shown in Table",
"2. A more detailed description of the human need category definitions, data set, and manual annotation effort is described in (Ding et al., 2018).",
"This data set is freely available for other researchers to use.",
"In the next section, we present classification models designed to tackle this human needs categorization task.",
"Automatically categorizing affective events in text based on human needs is a new task, so we investigated several types of approaches.",
"First, we designed supervised classifiers to categorize affective events based upon the words in the event expressions, which we will refer to as Event Expression Classifiers .",
"We explored lexical features, word embedding features, and semantic category features, along with several types of machine learning algorithms.",
"Our task is to determine the human need category of an affective event based on the meaning of the event itself, independent of any specific context.",
"1 But we hypothesized that collecting the contexts around instances of the events could also provide valuable information to infer human need categories.",
"So we also designed Event Context Classifiers to use the sentence contexts around event mentions as features.",
"Our gold standard data set is relatively small, so supervised learning that relies entirely on manually labeled data may not have sufficient coverage to perform well across the human need categories.",
"However, the AffectEvent dataset contains a very large set of events that were extracted from the same blog corpus, but not manually labeled with 1 We view this as assuming the most common interpretation of an event, which would be the default in the absence of context.",
"affective polarity.",
"Consequently, we explored two weakly supervised learning methods to exploit this large set of unlabeled events.",
"First, we tried self-training to iteratively improve the event expression classifier.",
"Second, we designed a co-training model that takes advantage of both an event expression classifier and an event context classifier to learn from the unlabeled events.",
"These two types of classifiers provide complementary views of an event, so new instances labeled by one classifier can be used as valuable new data to benefit the other classifier, in an iterative learning cycle.",
"The most obvious approach is to use the words in event expressions as features for recognizing human need categories (e.g., { ear, be, better } for the event < ear, be, better > ).",
"We experimented with both lexical (string) features and pre-trained word embedding features.",
"For the latter, we used GloVe (Pennington et al., 2014) vectors (200d) pretrained on 27B tweets.",
"For each event expression, we compute its embedding as the average of its words' embeddings.",
"We also designed semantic features using the lexical categories in the LIWC lexicon (Pen-nebaker et al., 2007) to capture a more general meaning for each word.",
"LIWC is a dictionary of words associated with psychologically meaning-ful lexical categories, some of which are directly relevant to our task, such as AFFECTIVE , SOCIAL , COGNITIVE , and BIOLOGICAL PROCESS .",
"We identify the LIWC category of the head word of each phrase in the event representation and use them as Semantic Category features.",
"We experimented with three types of supervised classification models: logistic regression (LR), support vector machines (SVM), and recurrent neural network classifiers (RNN).",
"One advantage of the RNN is that it considers the word order in the event expression, which can be important.",
"In our experiments, we used the Scikit-learn implementation (Pedregosa et al., 2011) for the LR classifier, and LIBSVM (Chang and Lin, 2011) with a linear kernel for the SVM classifier.",
"For the RNN, we used the example LSTM implementation from Keras (Chollet et al., 2015) github, which was developed to build a sentiment classifier.",
"We used the default parameters in our experiments 2 .",
"2 LR and SVM use the one-vs-rest (ovr) scheme, while RNN is a single multi-class classifier.",
"The event dataset was originally extracted from a large collection of blog posts, which contain many instances of the events in different sentences.",
"We hypothesized that the contexts surrounding instances of an event can also provide strong clues about the human need category associated with the event.",
"Therefore, we also created Event Context Classifiers to exploit the sentence contexts around event mentions.",
"We explored several designs for event context classifiers, which are explained below.",
"Context SentBOW : For each event in the training set, we first collect all sentences mentioning this event and assign the event's human need category as the label for each sentence.",
"Each sentence is then used as a training instance for the event context classifier.",
"We use a bag-of-words representation for each sentence.",
"Context SentEmbed : This variation labels sentences exactly the same way as the previous model.",
"But each sentence is represented as a dense embedding vector, which is computed as the average of the embeddings for each word in the sentence.",
"We used GloVe (Pennington et al., 2014) vectors (200d) pretrained on 27B tweets.",
"Context AllBOW : Instead of treating each sentence as a training instance, for this model we aggregate all of the sentences that mention the same event to create one giant context for the event.",
"Each event corresponds to one training instance in this model, which is represented using bag-of-word features.",
"Context AllEmbed : This variation aggregates the sentences that mention an event exactly like the previous model.",
"But each sentence is represented as a dense embedding vector.",
"First, we compute an embedding vector for each sentence as the average of the embeddings of its words.",
"Then we compute a single context embedding by averaging all of the sentence embeddings.",
"In the data, some events appear in many sentences, while others appear in just a few sentences.",
"To maintain balance, we randomly sample 10 sentences for each event to use as its contexts.",
"To predict the human need category of an event, we first apply the event context classifier to contexts that mention the event, which produces a probability distribution over the human need categories.",
"For each category, we compute its mean probability.",
"Finally, we assign the event with the human need category that has the highest mean probability (i.e. argmax).",
"Our labeled data set is relatively small, but as mentioned previously, the AffectEvent dataset contains a large set of unlabeled events as well.",
"So we designed a self-training model to try to iteratively improve the event expression classifier by exploiting the unlabeled event data.",
"The self-training process works as follows.",
"Initially, the event expression classifier is trained using the manually labeled events.",
"Then the classifier is applied to the unlabeled events and assigns a human need category to each event with a confidence value.",
"For each human need category, we select the unlabeled event that has been assigned to that category with the highest confidence.",
"Therefore, each category will have one additional labeled event at each iteration.",
"The newly labeled events are added to the labeled data set, and the classifier is re-trained for the next iteration.",
"The sentence contexts in which an event appears contain complementary information to the event expression itself.",
"So we designed co-training models to exploit these complementary types of classifiers to iteratively learn from unlabeled data.",
"Figure 1 shows the architecture of our co-training model.",
"Initially, an event expression classifier and an event context classifier are independently trained on the manually labeled training data.",
"Each classifier is then applied to the large collection of unlabeled events EU .",
"For each hu-1923 man need category, we then select the event that has been assigned to the category with the highest confidence value as a new instance to label.",
"Consequently, each category will receive two additional labeled events at each iteration, one from the event expression classifier and another one from the event context classifier.",
"3 Both sets of newly labeled events are then added to the labeled set EL , and each of the classifiers is re-trained on the expanded set of labeled data.",
"Because the classifiers have different views of the events, the new instances labeled by one classifier serve as fresh training instances for the other, unlike self-training with a single classifier where it is learning entirely from its own predictions.",
"The following section describes the co-training algorithm in more detail.",
"Our co-training algorithm is shown in Algorithm",
"1. The input to the algorithm are the sets of labeled events EL and unlabeled events EU .",
"Each event is associated with both an event expression and the set of sentences in which it occurs in the blogs corpus.",
"For each iteration, the event expression classifier is first trained using the labeled events EL with the event expression view.",
"Then, we construct an event context view X con for each event in the labeled set EL .",
"The context sentences are used differently depending on the type of context model (described in Section 4.2).",
"An event context classifier is then trained using the context view X con .",
"Both classifiers are then independently applied to the unlabeled events EU .",
"For each human need category, each classifier selects one event to label based on its most confident prediction.",
"All of the newly labeled events are then added to the labeled training set EL , and the process repeats.",
"The co-training process simultaneously trains two classifiers, so here we explain how we use the resulting classifiers after the co-training process has finished.",
"For each event e in the test set, we apply both the event expression classifier and the event context classifier, which each produce a probability distribution over the human need categories.",
"Then we explore two different methods to combine the two probability distributions for each test 3 The event expression classifier first selects from unlabeled events, then the event context classifier does the selection.",
"This ensures that there are 16 new events in total at each iteration.",
"event: (1) sum , we compute the final probability vector p ( e ) by applying the element-wise summarization operation to the two predicted probability vectors; (2) product , we compute the final p ( e ) as the element-wise product of the two vectors.",
"Then, the final probability vector is normalized to make sure the sum of the probabilities over all classes is 1.",
"Finally, we predict an event's human need category as the one with the highest probability.",
"We conducted experiments to evaluate the methods described in Section 4.",
"For all of our experiments, the results are reported based on 3-fold cross-validation on the 542 affective events manually labeled with human need categories.",
"We show the average results over 3-folds in the following sections.",
"For development, we used a distinct set of events labeled during preliminary studies.",
"We did not tune any of the models, using only their default parameter settings.",
"We present experimental results in terms of precision, recall, and F1 score, macro-averaged over the human need categories.",
"Table 4 shows the results for the event expression classifiers.",
"We also evaluated the ability of the LIWC lexicon (Pennebaker et al., 2007) to label the event expressions.",
"We manually aligned the relevant LIWC categories with our human need categories, as shown in Table 3.",
"Then we labeled each event by identifying the human need category of each word in the event phrase and assigning the most frequent category to the event.",
"(Footnote 4: Since we report precision, recall, and F1 score averaged over the 3 folds, the F1 score can be smaller than both precision and recall in some cases.)",
"If no words were assigned to our categories, we labeled the event as None.",
"The top row of Table 4 shows that LIWC achieved 39% recall but only 47.7% precision.",
"The reason is that some categories in LIWC are more generalized compared with the definitions of our corresponding categories.",
"For example, the words abandon and damage belong to the Affect category (corresponding to our Emotion category) in LIWC.",
"However, based on our definition the event my house was damaged actually belongs to the Finance category.",
"In this way, the Emotion category is overly generalized which leads to low precision for this class.",
"The LR and SVM rows in Table 4 show the performance of the logistic regression (LR) and support vector machine (SVM) classifiers, respectively.",
"We evaluated classifiers with bag-of-words features (BOW) and classifiers with event embedding features (Embed), computed as the average of the embeddings for all words in the event expression.",
"We also tried adding semantic category features from LIWC to each feature set, denoted as +SemCat.",
"The results show that the Embed features performed best for both the LR and SVM classifiers.",
"Adding the SemCat features improved upon the bag-of-word representations, but not the embeddings.",
"The last two rows of Table 4 show the performance of two RNN classifiers, one using lexical words as input (RNN Words ) and one using pretrained word embeddings as input (RNN EmbedSeq ).",
"The RNN EmbedSeq system takes the sequence of word embeddings as input rather than the average embeddings.",
"As with the other classifiers, the word embedding feature representations performed best, achieving an F1 score of 54.4%, which is comparable to the F1 score of the LR Embed system.",
"However, the RNN's precision was only 58%, compared to 64.2% for the logistic regression model, and its 2% higher recall does not fully compensate for the lower precision.",
"(Footnote 5: For ties, we remove one component at a time, in the order Agent, PP, Theme, until we obtain a majority label.)",
"Neural net models often need large training sets, so the relatively small size of our training data may not be ideal for an RNN.",
"Overall, we concluded that the logistic regression classifier with event embedding features (LR Embed ) achieved the best performance because of its F1 score (54.8%) and higher precision (64.2%).",
"Table 5 shows the performance of the event context classifiers described in Section 4.2.",
"Since logistic regression worked best in the previous experiments, we only evaluated logistic regression classifiers in our remaining experiments.",
"The results show that using each context sentence as an individual training instance (Context SentBOW and Context SentEmbed ) substantially outperformed the classifiers that merged all the context sentences as a single training instance (Context AllBOW and Context AllEmbed ).",
"Overall, the best performing system Context SentEmbed achieved an F1 score of 44.3% with 59.1% Precision.",
"It is worth noting that the precision of the best contextual classifier was only 5% below that of the best event expression classifier, while there was a 10% difference in their recall.",
"Since they achieved (roughly) similar levels of precision and represent complementary views of events, a co-training framework seemed like a logical way to use them together to gain additional benefits from unlabeled event data.",
"We also created a classifier that combined event expression features and event context features together.",
"But combining them did not improve performance.",
"In this section, we evaluate the weakly supervised self-training and co-training methods that additionally use unlabeled data.",
"To keep the number of unlabeled events manageable, we only used events in the AffectEvent dataset that had frequency 100, which produced an unlabeled data set of 23,866 events.",
"We used the best performing event expression classifier (LR Embed ) in these models, and the co-training framework includes the best performing event context classifier (Context SentEmbed ) as well.",
"We also experimented with the sum and product variants for co-training (described in Section 4.4.2), which are denoted as CoTrain sum and CoTrain prod .",
"We ran both the self-training and co-training methods for 20 iterations.",
"Figure 2 tracks the performance of the self-training and co-training models after each iteration, in terms of F1 score.",
"The flat line shows the F1 score for the best classifier that uses only labeled data (LR Embed ).",
"Both types of models yield performance gains from iteratively learning with the unlabeled data, but the co-training models perform substantially better than the self-training model.",
"Even after just 5 iterations, co-training achieves an F1 score over 58%, and by 20 iterations performance improves to > 60%.",
"Table 6 shows the results for these models after 20 iterations, which was an arbitrary stopping criterion, and after 17 iterations, which happened to produce the best results for all three systems.",
"The first two rows show the results of the best performing event context classifier (Context SentEmbed ) and best performing event expression classifier (LR Embed ) from the previous experiments, for the sake of comparison.",
"Table 6 shows that after 20 iterations, the CoTrain prod model performed best, yielding an F1 score of 61% compared to 54.8% for the LR Embed model.",
"Furthermore, we see gains in both recall and precision.",
"All three systems performed best after 17 iterations, so we show those results as well to give an idea of additional gains that would be possible if we could find an optimal stopping criterion.",
"Our data set was small so we did not feel that we had enough data to fine-tune parameters, but we see the potential to further improve performance given additional tuning data.",
"Table 7 shows a breakdown of the performance across the individual human need categories for two models: the best event expression classifier and the best co-training model (CoTrain prod after 17 iterations).",
"We see that the co-training model outperformed the LR Embed model on every category.",
"Co-training improved performance the most for the Finance and Cognition categories, yielding F1 score gains of +12% and +16%, respectively, and notably improving both recall and precision.",
"We manually examined our system's predictions to better understand its behavior.",
"We found that most of the correctly classified Physiological events were related to food, while the correctly classified Cognition events were primarily about learning and understanding.",
"Table 7: Breakdown of results across Human Need categories (Pre/Rec/F1, LR Embed vs. CoTrain Prod): Physiological 82/57/67 vs. 81/68/74; Health 65/40/49 vs. 68/50/57; Leisure 62/59/60 vs. 69/63/66; Social 61/72/66 vs. 68/79/73; Finance 61/31/40 vs. 67/44/52; Cognition 75/31/42 vs. 92/46/58; Emotion 60/75/66 vs. 64/74/69; None 47/49/48 vs. 48/52/50.",
"Our method missed many events for the Health, Finance, and Cognition classes.",
"For Health, many medical symptoms were not recognized, such as my face looks pale and I puked .",
"For Finance, the system missed events related to possessions (e.g., engine stopped running and my clock is wrong ) and jobs (e.g., I went to resign ).",
"We also took a closer look at which categories were confused with other categories.",
"Figure 3 shows the confusion matrix between CoTrain Prod and the gold annotations.",
"Each cell shows the total number of confusions across the 3-folds of cross-validation.",
"The category names are abbreviated as Physiological (Phy), Health (Hlth), Leisure (Leis), Social (Socl), Finance (Fnc), Cognition (Cog), and Emotion (Emo).",
"#Tot denotes the total number of events in each row or column.",
"The co-training model had difficulty distinguishing the None category from other classes, presumably because None does not have its own semantics but is used for affective events that do not belong to any of the other categories.",
"We also see that the system often confuses Emotion with Leisure and Social.",
"This happens because many event expressions contain words that refer to emotions.",
"Our guidelines instructed annotators to focus on the event and assign the Emotion label only when no event is described beyond an emotion (e.g., I was thrilled ).",
"Consequently, the gold label of I love journey is Leisure and I'm worried about my mom is Social, but both were classified by the system as Emotion.",
"In future work, it may be advantageous to allow event expressions to be labeled as both an explicit Emotion and a Human Need category based on the target of the emotion.",
"In this work, we introduced a new challenge to recognize the reason for the affective polarity of events in terms of basic human needs.",
"We designed four types of classification methods to categorize affective events according to human need categories, exploiting both labeled and unlabeled data.",
"We first evaluated event expression and event context classifiers, trained using only labeled data.",
"Then we designed self-training and co-training methods to additionally exploit unlabeled data.",
"A co-training model that simultaneously trains event expression and event context classifiers produced substantial performance gains over the individual models.",
"However, performance on the human need categories still has substantial room for improvement.",
"In future work, obtaining more human annotations will be useful to build a better human needs categorization system.",
"In addition, applying and analyzing the human needs of affective events in narrative stories and conversations is a fruitful and interesting direction for future research.",
"This material is based in part upon work supported by the National Science Foundation under Grant Number IIS-1619394.",
"Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.",
"We are very grateful to Tianyu Jiang and Yuanyuan Gao for participating in the manual annotation effort."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"abstain",
"objective",
"method",
"abstain",
"method",
"objective",
"objective",
"abstain",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other"
] |
[
"In the Transformer model, self-attention combines information from attended embeddings into the representation of the focal embedding in the next layer.",
"Thus, across layers of the Transformer, information originating from different tokens gets increasingly mixed.",
"This makes attention weights unreliable as explanation probes.",
"In this paper, we consider the problem of quantifying this flow of information through self-attention.",
"We propose two methods for approximating the attention to input tokens given attention weights, attention rollout and attention flow , as post hoc methods when we use attention weights as the relative relevance of the input tokens.",
"We show that these methods give complementary views on the flow of information, and compared to raw attention, both yield higher correlations with importance scores of input tokens obtained using an ablation method and input gradients.",
"Attention (Bahdanau et al., 2015; Vaswani et al., 2017) has become the key building block of neural sequence processing models, and visualizing attention weights is the easiest and most popular approach to interpret a model's decisions and to gain insights about its internals (Vaswani et al., 2017; Xu et al., 2015; Wang et al., 2016; Lee et al., 2017; Dehghani et al., 2019; Rocktaschel et al., 2016; Chen and Ji, 2019; Coenen et al., 2019; Clark et al., 2019).",
"Although it is wrong to equate attention with explanation (Pruthi et al., 2019; Jain and Wallace, 2019), it can offer plausible and meaningful interpretations (Wiegreffe and Pinter, 2019; Vashishth et al., 2019; Vig, 2019).",
"In this paper, we focus on problems arising when we move to the higher layers of a model, due to the lack of token identifiability of the embeddings in higher layers (Brunner et al., 2020).",
"We propose two simple but effective methods to compute attention scores to input tokens (i.e., token attention ) at each layer, by taking raw attentions (i.e., embedding attention ) of that layer as well as those from the precedent layers.",
"These methods are based on modelling the information flow in the network with a DAG (Directed Acyclic Graph), in which the nodes are input tokens and hidden embeddings, edges are the attentions from the nodes in each layer to those in the previous layer, and the weights of the edges are the attention weights.",
"The first method, attention rollout , assumes that the identities of input tokens are linearly combined through the layers based on the attention weights.",
"To adjust attention weights, it rolls out the weights to capture the propagation of information from input tokens to intermediate hidden embeddings.",
"The second method, attention flow , considers the attention graph as a flow network.",
"Using a maximum flow algorithm, it computes maximum flow values, from hidden embeddings (sources) to input tokens (sinks).",
"In both methods, we take the residual connection in the network into account to better model the connections between input tokens and hidden embedding.",
"We show that compared to raw attention, the token attentions from attention rollout and attention flow have higher correlations with the importance scores obtained from input gradients as well as blank-out , an input ablation based attribution method.",
"Furthermore, we visualize the token attention weights and demonstrate that they are better approximations of how input tokens contribute to a predicted output, compared to raw attention.",
"It is noteworthy that the techniques we propose in this paper are not aimed at making hidden embeddings more identifiable, or at providing better attention weights for better performance, but at providing a new set of attention weights that takes the token identity problem into consideration and can serve as a better diagnostic tool for visualization and debugging.",
"In our analysis, we focus on the verb number prediction task, i.e., predicting singularity or plurality of a verb of a sentence, when the input is the sentence up to the verb position.",
"We use the subject-verb agreement dataset (Linzen et al., 2016).",
"This task and dataset are convenient choices, as they offer a clear hypothesis about what part of the input is essential to get the right solution.",
"For instance, given the key to the cabinets as the input, we know that attending to key helps the model predict singular as output while attending to cabinets",
"(an agreement attractor , with the opposite number)",
"is unhelpful.",
"We train a Transformer encoder with GPT-2 Transformer blocks, as described in (Radford et al., 2019; Wolf et al., 2019), without masking.",
"The model has 6 layers and 8 heads, with a hidden/embedding size of 128.",
"Similar to BERT (Devlin et al., 2019), we add a CLS token and use its embedding in the final layer as the input to the classifier.",
"The accuracy of the model on the subject-verb agreement task is 0.96.",
"To facilitate replication of our experiments we will make the implementations of the models we use and algorithms we introduce publicly available at https: //github.com/samiraabnar/attention_flow .",
"We start by visualizing raw attention in Figure 1a (as in Vig, 2019).",
"The example given here is correctly classified.",
"Crucially, only in the first couple of layers, there are some distinctions in the attention patterns for different positions, while in higher layers the attention weights are rather uniform.",
"Figure 2 (left) gives raw attention scores of the CLS token over input tokens (x-axis) at different layers (y-axis), which similarly lack an interpretable pattern.",
"These observations reflect the fact that as we go deeper into the model, the embeddings are more contextualized and may all carry similar information.",
"This underscores the need to track down attention weights all the way back to the input layer, and is in line with the findings of Serrano and Smith (2019), who show that attention weights do not necessarily correspond to the relative importance of input tokens.",
"To quantify the usefulness of raw attention weights, and the two alternatives that we consider in the next section, besides input gradients, we employ an input ablation method, blank-out , to estimate an importance score for each input token.",
"Blank-out replaces each token in the input, one by one, with UNK and measures how much it affects the predicted probability of the correct class.",
"We compute the Spearman's rank correlation coefficient between the attention weights of the CLS embedding in the final layer and the importance scores from blank-out.",
"As shown in the first row of Table 1, the correlation between raw attention weights of the CLS token and blank-out scores is rather low, except for the first layer.",
"As we can see in Table 2 this is also the case when we compute the correlations with input gradients.",
"Attention rollout and attention flow recursively compute the token attentions in each layer of a given model, given the embedding attentions as input.",
"They differ in the assumptions they make about how attention weights in lower layers affect the flow of information to the higher layers and whether to compute the token attentions relative to each other or independently.",
"To compute how information propagates from the input layer to the embeddings in higher layers, it is crucial to take the residual connections in the model into account as well as the attention weights.",
"In a Transformer block, both self-attention and feed-forward networks are wrapped by residual connections, i.e., the input to these modules is added to their output.",
"When we only use attention weights to approximate the flow of information in Transformers, we ignore the residual connections.",
"But these connections play a significant role in tying corresponding positions in different layers.",
"Hence, to compute attention rollout and attention flow, we augment the attention graph with extra weights to represent residual connections.",
"Given the attention module with a residual connection, we compute the values in layer l+1 as V_{l+1} = V_l + W_att V_l, where W_att is the attention matrix.",
"Thus, we have V_{l+1} = (W_att + I) V_l.",
"So, to account for residual connections, we add an identity matrix to the attention matrix and re-normalize the weights.",
"This results in A = 0.5 W_att + 0.5 I, where A is the raw attention updated by residual connections.",
"Furthermore, analyzing individual heads requires accounting for the mixing of information between heads through the position-wise feed-forward network in each Transformer block.",
"Using attention rollout and attention flow, it is also possible to analyze each head separately.",
"We explain in more details in Appendix A.1.",
"However, in our analysis in this paper, for simplicity, we average the attention at each layer over all heads.",
"Attention rollout Attention rollout is an intuitive way of tracking down the information propagated from the input layer to the embeddings in the higher layers.",
"Given a Transformer with L layers, we want to compute the attention from all positions in layer l i to all positions in layer l j , where j < i .",
"In the attention graph, a path from node v at position k in l i , to node u at position m in l j , is a series of edges that connect these two nodes.",
"If we look at the weight of each edge as the proportion of information transferred between two nodes, we can compute how much of the information at v is propagated to u through a particular path by multiplying the weights of all edges in that path.",
"Since there may be more than one path between two nodes in the attention graph, to compute the total amount of information propagated from v to u , we sum over all possible paths between these two nodes.",
"At the implementation level, to compute the attentions from l i to l j , we recursively multiply the attention weights matrices in all the layers below.",
"In this equation, Ã(l_i) = A(l_i) Ã(l_{i-1}) if i > j, and Ã(l_j) = A(l_j), where Ã is attention rollout, A is raw attention, and the multiplication operation is matrix multiplication.",
"With this formulation, to compute input attention we set j = 0 .",
"Attention flow In graph theory, a flow network is a directed graph with a capacity associated with each edge.",
"Formally, G = (V, E) is a graph, where V is the set of nodes and E is the set of edges in G; C = {c_uv ∈ R | e_uv ∈ E, u ≠ v} denotes the capacities of the edges, and s, t ∈ V are the source and target (sink) nodes, respectively.",
"A flow is a mapping of edges to real numbers, f : E → R, that satisfies two conditions: (a) capacity constraint: for each edge, the flow value must not exceed its capacity, |f_uv| ≤ c_uv; (b) flow conservation: for all nodes except s and t, the sum of the flows of incoming edges must equal the sum of the flows of outgoing edges.",
"Given a flow network, a maximum flow algorithm finds a flow that has the maximum possible value between s and t (Cormen et al., 2009).",
"Treating the attention graph as a flow network, where the capacities of the edges are attention weights, using any maximum flow algorithm, we can compute the maximum attention flow from any node in any of the layers to any of the input nodes.",
"We can use this maximum-flow-value as an approximation of the attention to input nodes.",
"In attention flow, the weight of a single path is the minimum value of the weights of the edges in the path, instead of the product of the weights.",
"Besides, we cannot compute the attention from node s to node t by adding up the weights of all paths between these two nodes, since there might be an overlap between the paths, and this might result in overflow in the overlapping edges.",
"Figure 3: Attention maps for the CLS token, for attention rollout and attention flow.",
"It is noteworthy that both of the proposed methods can be computed in polynomial time: O(d n^2) for attention rollout and O(d^2 n^4) for attention flow, where d is the depth of the model and n is the number of tokens.",
"Now, we take a closer look at these three views of attention.",
"Figure 1 depicts raw attention, attention rollout and attention flow for a correctly classified example across different layers.",
"It is noteworthy that the first layer of attention rollout and attention flow are the same, and their only difference with raw attention is the addition of residual connections.",
"As we move to the higher layers, we see that the residual connections fade away.",
"Moreover, in contrast to raw attention, the patterns of attention rollout and attention flow become more distinctive in the higher layers.",
"Figures 2 and 3 show the weights from raw attention, attention rollout, and attention flow for the CLS embedding over input tokens (x-axis) in all 6 layers (y-axis) for three examples.",
"The first example is the same as the one in Figure 1.",
"The second example is the article on NNP large systems <?> .",
"The model correctly classifies this example, and changing the subject of the missing verb from article to articles flips the decision of the model.",
"The third example is here the NNS differ in that the female <?> , which is a misclassified example; again, changing NNS (plural noun) to NNP (singular proper noun) flips the decision of the model.",
"For all cases, the raw attention weights are almost uniform above layer three (as discussed before).",
"In the case of the correctly classified example, we observe that both attention rollout and attention flow assign relatively high weights to both the subject of the verb, article, and the attractor, systems.",
"For the misclassified example, both attention rollout and attention flow assign relatively high scores to the NNS token, which is not the subject of the verb.",
"This can explain the wrong prediction of the model.",
"The main difference between attention rollout and attention flow is that attention flow weights are amortized among the set of most attended tokens, as expected.",
"Attention flow can indicate a set of input tokens that are important for the final decision.",
"Thus we do not get sharp distinctions among them.",
"On the other hand, attention rollout weights are more focused compared to attention flow weights, which is sensible for the third example but not as much for the second one.",
"Furthermore, as shown in Tables 1 and 2, both attention rollout and attention flow are better correlated with blank-out scores and input gradients than raw attention, but attention flow weights are more reliable than attention rollout.",
"The difference between these two methods is rooted in their different views of attention weights.",
"Attention flow views them as capacities, and at every step of the algorithm, it uses as much of the capacity as possible.",
"Hence, attention flow computes the maximum possibility of token identities to propagate to the higher layers.",
"Whereas attention rollout views them as proportion factors and, at every step, it allows token identities to be propagated to higher layers exactly according to these proportion factors.",
"This makes attention rollout stricter than attention flow, and so we see that attention rollout provides us with more focused attention patterns.",
"However, since we are making many simplifying assumptions, the strictness of attention rollout does not lead to more accurate results, and the relaxation of attention flow seems to be a useful property.",
"Finally, to illustrate the application of attention flow and attention rollout to different tasks and different models, we examine them on two pretrained BERT models.",
"We use the models available at https://github.com/huggingface/ transformers .",
"Table 3 shows the correlation of the importance scores obtained from raw attention, attention rollout, and attention flow for a DistilBERT (Sanh et al., 2019) model fine-tuned to solve SST-2 (Socher et al., 2013), the sentiment analysis task from the GLUE benchmark (Wang et al., 2018).",
"Even though for this model, all three methods have very low correlation with the input gradients, we can still see that attention rollout and attention flow are slightly better than raw attention.",
"Furthermore, in Figure 4, we show an example of applying these methods to a pre-trained BERT model to see how it resolves the pronouns in a sentence.",
"What we do here is to feed the model with a sentence, masking a pronoun.",
"Next, we look at the prediction of the model for the masked word and compare the probabilities assigned to her and his.",
"Then we look at raw attention, attention rollout and attention flow weights of the embeddings for the masked pronoun at all the layers.",
"In the first example, in Figure 4a, attention rollout and attention flow are consistent with each other and the prediction of the model.",
"Whereas, the final layer of raw attention does not seem to be consistent with the prediction of the models, and it varies a lot across different layers.",
"In the second example, in Figure 4b, only attention flow weights are consistent with the prediction of the model.",
"Translating embedding attentions to token attentions can provide us with better explanations about models' internals.",
"Yet, we should be cautious about our interpretation of these weights, because, we are making many simplifying assumptions when we approximate information flow in a model with the attention weights.",
"Our ideas are simple and task/architecture agnostic.",
"In this paper, we insisted on sticking with simple ideas that only require attention weights and can be easily employed in any task or architecture that uses self-attention.",
"We should note that all our analysis in this paper is for a Transformer encoder, with no casual masking.",
"Since in Transformer decoder, future tokens are masked, naturally there is more attention toward initial tokens in the input sequence, and both attention rollout and attention flow will be biased toward these tokens.",
"Hence, to apply these methods on a Transformer decoder, we should first normalize based on the receptive field of attention.",
"Following this work, we can build the attention graph with effective attention weights (Brunner et al., 2020) instead of raw attentions.",
"Furthermore, we can come up with a new method that adjusts the attention weights using gradient-based attribution methods (Ancona et al., 2019).",
"We thank Mostafa Dehghani, Wilker Aziz, and the anonymous reviewers for their valuable feedback and comments on this work.",
"The work presented here was funded by the Netherlands Organization for Scientific Research (NWO), through a Gravitation Grant 024.001.006 to the Language in Interaction Consortium."
] | [
"abstain",
"abstain",
"abstain",
"method",
"objective",
"result",
"other",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"objective",
"objective",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"other",
"abstain",
"result",
"result",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"other",
"other"
] |
[
"Existing goal-oriented dialogue datasets focus mainly on identifying slots and values.",
"However, customer support interactions in reality often involve agents following multi-step procedures derived from explicitly-defined company policies as well.",
"To study customer service dialogue systems in more realistic settings, we introduce the Action-Based Conversations Dataset (ABCD), a fully-labeled dataset with over 10K human-to-human dialogues containing 55 distinct user intents requiring unique sequences of actions constrained by policies to achieve task success.",
"We propose two additional dialog tasks, Action State Tracking and Cascading Dialogue Success, and establish a series of baselines involving large-scale, pre-trained language models on this dataset.",
"Empirical results demonstrate that while more sophisticated networks outperform simpler models, a considerable gap (50.8% absolute accuracy) still exists to reach human-level performance on ABCD.",
"1 1 Introduction The broad adoption of virtual assistants and customer service chatbots in recent years has been driven in no small part by the usefulness of these tools, whereby actions are taken on behalf of the user to accomplish their desired targets (Ama-zon, 2019; Google, 2019).",
"Research into task-oriented dialogue has concurrently made tremendous progress on natural language understanding of user needs (Wu et al., 2019; Rastogi et al., 2020b; Liang et al., 2020).",
"However, selecting actions in real life requires not only obeying user requests, but also following practical policy limitations which may be at odds with those requests.",
"For example, while a user may ask for a refund on their purchase, an agent should only honor such a request if it is valid with regards to the store's return policy.",
"Described in actions, before an agent 1 All code and data will be available at this location.",
"can [Oer Refund] , they must first [Validate Purchase] .",
"Furthermore, resolving customer issues often concerns multiple actions completed in succession with a specific order since prior steps may influence future decision states.",
"(See Figure 1)",
"To more closely model real customer service agents, we present the Action-Based Conversations Dataset (ABCD) consisting of 10,042 conversations containing numerous actions with precise procedural requirements.",
"These actions differ from typical dialogue acts because tracking them necessitates striking a balance between external user requests and internally-imposed guidelines.",
"Thus, the major difference between ABCD and other dialogue datasets, such as MultiWOZ (Budzianowski et al., 2018), is that it asks the agent to adhere to a set of policies while simultaneously dealing with customer requests.",
"containing asymmetric speakers compelled the design of a novel Expert Live Chat system.",
"Our dataset includes asymmetric speakers because, unlike customers, agents must undergo extensive training to be able to navigate the Agent Guidelines during real-time conversations.",
"This makes a naive pairing process untenable since arbitrary matching might lead to chats containing two users who share the same role.",
"Based on the unique aspects of ABCD, we propose two new tasks.",
"To start, Action State Tracking (AST) closely mirrors the format of Dialogue State Tracking where the user intent is inferred from the dialogue history.",
"AST then differs since the correct state must also be reconciled with the requirements outlined in the Agent Guidelines.",
"As a second task, Cascading Dialogue Success (CDS) extends this notion across the entire conversation.",
"At each turn, the agent decides to take an action, respond with an utterance or end the chat.",
"As needed, the agent should also predict the right action or select the best utterance.",
"For each task, we build various models to establish baseline performance and to highlight the importance of each constraint.",
"Experiments show that in addition to conversation history, conditioning on the Agent Guidelines further boosts performance, with top models relying on both aspects to reach 31.9% accuracy.",
"Additional results show removing action context hurts performance, implying the importance of taking into account the sequential nature of actions.",
"Lastly, human evaluation reaches 82.7%, demonstrating ample room for future improvement.",
"The contribution of this work is three-fold: (1) We provide a novel, large-scale dataset containing context-dependent, procedural actions along with corresponding Agent Guidelines.",
"(2) We establish a new technique called Expert Live Chat for capturing natural dialogue between two unequal interlocutors.",
"(3) We propose two metrics, Action State Tracking and Cascading Dialogue Success, for measuring dialogue comprehension with policy constraints.",
"Finally, we build on pretrained neural models to serve as baselines for these tasks.",
"Traditional Dialogue Datasets In recent years, dialogue datasets have grown in size from hundreds of conversations to the tens of thousands (Henderson et al., 2014; Budzianowski",
"et al., 2018; Peskov et al., 2019).",
"Unlike open-domain chatbots often built for entertainment, task-oriented dialogue systems trained on such datasets are intended for solving user issues.",
"The resolution of these issues implicitly requires taking actions, where an action is a non-utterance decision that depends on both user and system inputs.",
"Despite the tremendous number of dialogues, examples in previous benchmarks fixate on the single knowledge base (KB) lookup action where the agent searches for an item that matches the user's desires and is available in the KB.",
"By sticking to this sole interaction, conversations can be generated through rules (Weston et al., 2016), paraphrased from templates (Byrne et al., 2019) or taken from static text scenarios (Zhang et al., 2018), leading to dialogues that are predominantly homogeneous in nature.",
"Many datasets have scaled to more domains as well (Eric et al., 2017; Budzianowski et al., 2018; Peskov et al., 2019) Since each new domain introduces a KB lookup requiring different slot-values, the number of unique actions grows as a linear function of the number of domains covered.",
"Rather than expanding wider, ABCD instead focuses deeper by increasing the count and diversity of actions within a single domain.",
"Exploring Other Avenues Multiple aspects are explored by conversational datasets attempting to mimic reality.",
"Rashkin et al. (2019) studies the ability of a dialogue model to handle empathy, while Zhou et al. (2018) focuses on commonsense reasoning.",
"Another approach is to augment dialogues with multi-modality including audio (Castro et al., 2019) or visual (Das et al., 2017a) components.",
"Other researchers have explored grounding conversations with external data sources such as personas (Zhang et al., 2018), online reviews (Ghazvininejad et al., 2018) or large knowledge bases (Dinan et al., 2019).",
"Intricate dialogues can also appear when studying collaboration (He et al., 2017; Kim et al., 2019) or negotiation (Lewis et al., 2017; He et al., 2018) which strongly encourage interaction with the other participant.",
"In comparison, ABCD aims to make dialogue more realistic by considering distinct constraints from policies.",
"actions following strict guidelines naturally emerge in dialogue research geared towards real-world appli-Subflows",
"appli-Subflows recover-username, 1 recover-password, 1 reset-2fa, 1 status-service-added, 2 status-service-removed, 2 status-shipping-question, 2 status-credit-missing, 2 manage-change-address, 2 manage-change-name, 2 manage-change-phone, 2 manage-payment-method, 2 status-mystery-fee, 3 status-delivery-time, 3 status-payment-method, 3 status-quantity, 3 manage-upgrade, 3 manage-downgrade, 3 manage-create, 3 manage-cancel, 3 refund-initiate, 4 refund-update, 4 refund-status, 4 return-stain, 4 return-color, 4 return-size, 4 bad-price-competitor, 5 bad-price-yesterday, 5 out-of-stock-general, 5 out-of-stock-one-item, 5 promo-code-invalid, 5 promo-code-out-of-date, 5 mistimed-billing-already-returned, 5 mistimed-billing-never-bought, 5 status, 6 manage, 6 missing, 6 cost, 6 boots, 7 shirt, 7 jeans, 7 jacket, 7 pricing, 8 membership, 8 timing, 8 policy, 8 status-active, 9 status-due-amount, 9 status-due-date, 9 manage-pay-bill, 9 manage-extension, 9 manage-dispute-bill, 9 credit-card, 10 shopping-cart, 10 search-results, 10 slow-speed 10 Actions verify-identity, ask-the-oracle, validate-purchase, make-password, promo-code, subscription-status, offer-refund, make-purchase, record-reason, enter-details, shipping-status, update-order, pull-up-account, update-account, send-link, notify-team, membership, search-faq, try-again, log-out-in, instructions, search-jeans, search-shirt, searchboots, search-jacket, search-pricing, search-membership, search-timing, search-policy, select-faq",
"Hybrid Code Networks encode business logic through masking templates since various behaviors become nonsensical in certain situations (Williams et al., 2017).",
"Research from Moiseeva et al. (2020) studies multi-purpose virtual assistants that attempt to distinguish among thirteen explicit actions.",
"The closest prior work to ABCD is the Schema Guided Dialogue (SGD) dataset, which contains dozens of API calls that can be interpreted as individual actions sending commands to a SQL engine (Rastogi et al., 2020b).",
"The functionality of these actions is occasionally restricted to reflect constraints of real-life services.",
"The action restrictions within ABCD are made explicit by the Agent Guidelines manual.",
"In this section, we describe the task setting of ABCD by following along with the example dialog shown in Figure",
"1. 3.1 Customer During data collection, customers are given a simple prompt (such as You want to keep your subscription another year.) instead of step-by-step instructions, which reflects how real-world customers innately understand their own issue, but only have a rough idea of how to resolve said issue.",
"Accordingly, customers within ABCD remain oblivious towards what values apply to which actions, nor are they aware that actions exist in first place.",
"This ambiguity forces the agent and customer to collaboratively uncover the correct latent intent through back and forth communication, naturally leading to longer dialogues.",
"Following the standard dialog setup, the agent starts by parsing the dialogue history to capture the customer intent, which in Figure 1 is a subscription extension.",
"ABCD then diverges as the next step involves interpreting the Agent Guidelines, a document representing the internal policies of a company in the online retail domain (See Table 1).",
"Using the guidelines, the trained agent should find the one unique subflow corresponding to the customer intent.",
"Each subflow in turn is defined by exactly one unique sequence of actions.",
"While identifying a subflow may seem straightforward, information asymmetry prevents the customers from directly revealing the name of their intent.",
"For example, a customer might inquire about the status of their recent purchase, but an agent has over a dozen different subflows related to order statuses, so selecting the right one suddenly becomes highly non-trivial.",
"In our case, the agent eventually figures out the correct subflow and begins to execute actions, which consists of recording values given by the customer, namely the customer's full name or account ID in order to [Pull up Account] .",
"As the third action, the guidelines instruct the agent to ask for the customer's membership level.",
"After the customer supplies this information, the agent enters the guest value into the agent dashboard by clicking the [Membership] button.",
"Buttons have variable slots that may or may not need to be filled, depending on the context (See Table 1 for a full list).",
"Dialogue success demands that agents execute a chain of such actions in the right order with the right values, while simultaneously engaging the customer in natural language conversation.",
"There are three reasons that make carrying out a series of actions more difficult than the task lets on.",
"To start, the permitted actions in a given state are determined not only by Agent Guidelines, but also by the user's desire, which may be in conflict.",
"For example, the customer in Figure 1 wanted to extend their subscription, but the guidelines prevented the agent from doing so.",
"Secondly, actions must be completed in order.",
"This procedural requirement comes from the realization that completing actions out of order (or with missing steps) do not make sense in many real-world scenarios.",
"For example, it is critical to [Verify Identity] before resetting someone's password, not after.",
"Finally, actions themselves induce stochastic outcomes, preventing agents from memorizing patterns of subflow resolution.",
"As an example, [Ask the Oracle] often determines if a customer complaint was valid.",
"In the case of a company error, the agent is compelled to immediately resolve the issue, whereas a misunderstanding made by the customer warrants a different set of responses.",
"This section outlines how we collect and annotate our dataset with context-dependent actions.",
"Managing complex guidelines requires filtering for top agents, which we do by certifying Mechanical Turk (MTurk) workers through an extensive 20-question quiz touching on all aspects of task completion.",
"Keeping the bar high, we set a minimum threshold of 80% accuracy of the quiz which resulted in a low 20% pass rate.",
"After passing the exam, we offered the answer key to agents which further improved understanding.",
"We also created short, 10-minute tutorial videos to showcase how to handle the most difficult aspects of the task.",
"A group chat app was also deployed to offer live feedback for agents, simulating how supervisors coach customer service representatives in real life.",
"Finally, we carefully designed an incentive structure that rewards agents for correctly identifying the user intent to encourage clarification behavior.",
"(Appendix A covers more details.) 4.2 Expert Live Chat Rather than utilizing Wizard-of-Oz techniques (such as in MultiWOZ), we developed Expert Live Chat which contains three unique aspects: (1) Conversations are conducted continuously in real-time.",
"(2) Users involved are not interchangeable.",
"(3) Players are informed that all participants are human no wizard behind the scenes.",
"Normal human conversations occur in real-time, but coordinating multiple users in this manner is resource-intensive, so other datasets often employed workarounds to avoid this difficulty.",
"For example, other works have applied rules (Bordes et al., 2017), templates (Byrne et al., 2019) or paraphrasing (Shah et al., 2018) to produce conversations.",
"Wizard-of-Oz (WoZ) techniques incorporate humans into the mix by allowing one of them to play the system role as a wizard behind the scenes (Kelley, 1984).",
"In particular, (Budzianowski et al., 2018) decomposed dialogues into individual turns, where for each turn a new author is responsible for reading the context and generating the next plausible response.",
"Despite the time-consuming nature, some datasets have produced synchronous dialogues between two humans (Lewis et al., 2017).",
"However, the skill sets of ABCD workers are notably unequal, exacerbating the matching problem.",
"Expert Live Chat matches a highly trained agent with a knowledgeable, yet otherwise average customer in real-time.",
"Since the backgrounds are uneven, unlike other datasets with concurrent users (Lewis et al., 2017; Zhang et al., 2018; Das et al., 2017b), incoming Turkers cannot simply be randomly assigned a role.",
"In other words, having twenty participants does not necessarily equate to ten conversations since it's possible that only a quarter of them are qualified as agents.",
"When such an imbalance inevitably arises, one group must wait until someone from the other side becomes available.",
"However, leaving either side waiting for too long leads to serious consequences since idle time directly affects their pay rate.",
"To minimize the likelihood of such an outcome, we first ensure that a reasonable pool of agents are always available.",
"Then, we increase the number of active customers by methodically inviting a subset of customers one batch at a time.",
"To do so, we established a qualification exam for customers to ensure their availability during a specified time period.",
"Finally, we also redesigned the chat application to make the waiting room experience more Figure 2: The Agent Dashboard is split into three sections.",
"palatable.",
"(See Appendix B for full breakdown.)",
"With these changes, we successfully increased the pairing rate from 18 out of 80 active users up to 72 out of 83, an increase of nearly 400%, while maintaining wait times under 10 minutes.",
"Besides pairing, we increased the likelihood of collecting rich dialogues without the need for extensive instructions by optimizing the chat experience itself.",
"In particular, we observed the greatest gains by grounding the conversation to the relatable scenario of online shopping, which provided immediate context to participants without requiring any extra training.",
"For example, the Agent Dashboard was arranged to closely reflect actual agent workspaces (Figure 2).",
"On the customer side, scenarios in the Customer Panel included an image of the product being discussed, along with other meta-data such as the brand or price to match a true shopping experience as much as possible (Appendix H).",
"We also explicitly told customers the other speaker was human to encourage natural responses over confined commands meant for machines.",
"Most importantly, customers were given dynamically generated, natural-language prompts that did not include information about the values needed to resolve their issue.",
"As a general framework, Expert Live Chat can be applied in any real-world scenario involving an expert and novice.",
"Indeed, increasing the verisimilitude of the experience is precisely what allowed higher quality dialogues to be generated by the workers.",
"The flows and subflows are automatically annotated since we have the provenance of each intent when generating the customer prompt.",
"Additionally, given the ground truth subflow of each conversation, we can deterministically map them to the correct section within the Agent Guidelines outlining the correct actions.",
"Calculating accuracy then becomes a simple exercise to align the predicted actions with the ones required by the manual.",
"In this way, we capture a key benefit of machine-generated text (Shah et al., 2018) without sacrificing the benefit of engaging real users.",
"We validate all dialogues to pass quality thresholds such as including a minimum number of actions and avoiding copy/paste behavior.",
"After filtering, we end up with 10,042 total conversations with an average of 22.1 turns the highest turn count among all compared datasets.",
"Unsurprisingly, ABCD includes more actions per dialogue than other datasets, by at least a factor of two.",
"ABCD also contains a lower absolute number of tokens, but also has the highest variance in the number of tokens per turn.",
"(See Table",
"2.) Since each subflow represents a unique customer intent, ABCD contains 55 user intents evenly distributed through the dataset.",
"By interpreting buttons as domains, the dataset contains 30 domains and 231 associated slots, compared to 7 domains and 24 slots within MultiWOZ (Budzianowski et al., 2018).",
"By grounding to the relatable scenario of chatting with customer support of an online retail company, speakers often showcase various forms of natural dialogue, such as offering diverse reasons for shopping or asking detailed follow-up questions.",
"Furthermore, the unconstrained nature of Expert Live Chat allows users to chat with each other in a free-form style.",
"Dialogues exhibited normal texting behavior such as users speaking for many turns in a row or fixing typos with a star in the subsequent line.",
"Other examples of linguistic phenomenon can be observed in Table 5.",
"The novel features in ABCD brings two new dialog tasks, Action State Tracking and Cascading Dialogue Success.",
"We also build baseline systems that are variants of standard dialogue models and report their results on ABCD.",
"Action State Tracking (AST) aims at detecting the pertinent intent by interpreting customer utterances while taking into account constraints from the Agent Guidelines, an aspect not considered in traditional dialog state tracking (DST).",
"For example, a conceivable dialogue task might entail helping a customer [Reset Password] once this intent has been identified.",
"In contrast, the appropriate next step within AST is governed by the Agent Guidelines, which might require [Verify Identity] of the customer first, or any number of other actions, before executing the password reset.",
"Each series of actions is considered a unique subflow that belongs to a number of high-level conversational flows.",
"Each individual action includes the active button b to click and its corresponding slots s and values v .",
"The task consists of executing an action, which constitutes a single agent turn.",
"More specifically, given a context C t = [ x 1 , x 2 , . . . , x t ] where x t can be a customer utterance x ct , an agent utterance x at , or a prior action x bt , a model should predict the button of the current action as well as the relevant slots and values, if any exist { x bt +1 = ( b, s, v ) B S V} .",
"This structure is designed to mimic DST where each user intent is broken down into domains, slots and values ( d, s, v ) .",
"For both AST and DST, the higher level domain or button can have varying slots.",
"The reverse is also true a given slot can be associated with multiple domains or buttons.",
"Lastly, both contain values that can be enumerable (i.e. payment types or shipping statuses) or non-enumerable (phone numbers or email ad-dresses).",
"Following the pattern set by Rastogi et al. (2020b), enumerable values are given in the ontology to be accessible by a model, whereas the non-enumerable items are not.",
"Despite the similar structure, AST deviates from DST since predicting the right action requires not only parsing the customer utterance, but also adhering to Agent Guidelines.",
"Suppose a customer is entitled to a discount which will be offered by issuing a [Promo Code] .",
"The customer might request 30% off, but the guidelines stipulate only 15% is permitted, which would make 30 a reasonable, but ultimately flawed slot-value.",
"To measure a model's ability to comprehend such nuanced situations, we adopt overall accuracy as the evaluation metric for AST.",
"Since the appropriate action often depends on the situation, we propose the Cascading Dialogue Success (CDS) task to measure a model's ability to understand actions in context.",
"Whereas AST assumes an action occurs in the current turn, CDS gives an agent the additional options of responding with an utterance or ending the conversation.",
"Moreover, proficiency is no longer measured as success over isolated turns but rather as success over sequences of consecutive turns.",
"Formally, given C t = [ x 1 , x 2 , . . . , x t ] as a context composed of utterances x c , x a U and actions x b A , a model should predict all remaining steps x >t along with their realized forms.",
"Possible next steps are to take an action, respond with text or end the task.",
"When the next step is an action x bt +1 , the model should predict the button with its slots and values as in AST.",
"If the agent speaks in the next step x at +1 , the model should rank the true utterance highest, as measured by recall metrics.",
"1 Finally, the model should recognize when to end the conversation.",
"Rewarding the model only when it predicts every step correctly is counter-productive because minor variations in sentence order do not alter overall customer satisfaction.",
"Therefore, CDS is scored using a variation on Cascading Evaluation (Suhr et al., 2019).",
"Rather than receiving a single score for each conversation, cascaded evaluation allows the model to receive partial credit whenever it successfully predicts each successive step in the chat.",
"This score is calculated on every turn, and the model is evaluated based on the percent of remaining steps correctly predicted, averaged across all available turns.",
"(See Appendix C for more details.) 6.3 Baseline Models We also run several baselines on these new tasks.",
"The backbone of all our baseline systems is a pre-trained Transformer-based model acting as a context encoder.",
"More specifically, given the dialogue history as a series of utterances, we first join the utterances together with a [SEP] token and then tokenize the entire input using Word-Piece (Schuster and Nakajima, 2012).",
"Next, we feed the entire input into a BERT model and perform a learned pooling on the hidden states in the final layer, which results in a fixed-length latent vector h enc R 128 (Wolf et al., 2019).",
"Afterwards, we attach a variety of prediction heads conditioned on the h enc vector to generate the final output.",
"Details of the prediction heads for the two proposed tasks are described next.",
"We break down Action State Tracking (AST) into two sub-problems, button-slot prediction and value-filling.",
"Given the ontology, button prediction is a straightforward classification task over 231 known options, so the prediction head is just a linear classifier with a softmax activation for normalization: P b slot = Softmax ( W a h (cid:62) enc + b a ) .",
"the task into predicting enumerable and non-enumerable values.",
"The ontology lists out all | E | enumerable values, so the prediction head p enum simply maps the hidden state h enc into the appropriate dimensions.",
"To handle non-enumerable values, we follow the insight from (Ma et al., 2019) which notes that practically all such values are stated by the customer in conversation, so a model can copy these values from the tokenized context.",
"During pre-processing, we extract up to | N | unique tokens from the natural language customer utterances, where p copy then represents the distribution over these possible options.",
"2 We imitate the TRADE architecture from (Wu et al., 2019), where conditioned on the action, the model chooses to either copy from the context p copy or select from the enumerable entities p enum based on a gating mechanism.",
"The gate is conditioned on the hidden state h enc as well as a learned context vector c i .",
"Concretely, p enum = Softmax ( W e h (cid:62) enc + b e ) R | E | p copy = Softmax ( W c h (cid:62) enc + b c ) R | N | c i = W (cid:62) c p copy R hid p gate = ( W g [ h enc ; c i ]) R 1 P val = [ p gate p copy ; (1 p gate ) p enum ] R | E + N | where represents the Sigmoid function and [ ; ] is the concatenation operation.",
"The final value predictions are the argmax of P val which merge the probabilities of p enum and p copy together.",
"For Cascading Dialogue Success (CDS), we also tackle next step selection, utterance ranking, and intent classification.",
"Next step selection is a choice between retrieve utterance , take action and end conversation .",
"Intent classification consists of choosing from the 55 available subflows.",
"Given this basic setting, both tasks use the same setup of a linear layer followed by a softmax, albeit with their own respective weights WNS R 3 hid and WIC R 55 hid .",
"When the next step is to take action , the AST model is reused to determine the button-slot and value.",
"When end conversation is selected, all future predictions are ignored, much like an <EOS> symbol signifies stopping.",
"This leaves us with utterance ranking, which is only evaluated when retrieve utterance is chosen as the next step.",
"Our ranker reproduces the design 2 Choosing larger | N | leads to higher recall, but lower precision.",
"from (Guu et al., 2020), where the encoded context h ctx is compared against each encoded candidate response h cand to produce a ranking score.",
"To embed each j th candidate d j we first create its input d inputj .",
"Following standard practice, we prepend the candidate text d j with [CLS] , separate the individual utterances u i within the candidate response using a [SEP] token, and append a final [SEP] token afterwards.",
"(Devlin et al., 2019).",
"This input d inputj is then fed into a static pretrained BERT model to get an initial hidden state, which is finally projected using a learned weight W d j R 128 hid to produce h cand .",
"To obtain h ctx we start with the hidden state h enc from before and apply a projection matrix WUR R 128 hid to reach the desired dimensionality.",
"d input j = [ CLS ] u 1 [ SEP ] u 2 [ SEP ] ... [ SEP ] u n [ SEP ] h cand = W d j BERT base ( d input j ) (cid:62) R 128 h ctx = WUR h (cid:62) enc R 128 f ( x i , d j ) = h (cid:62) ctx h cand P rank j = exp ( f ( x i , d j )) d (cid:48) j exp f ( x i , d (cid:48) j ) The final rank is given by normalizing each j th score against all other candidate scores.",
"We use the training objective from Henderson et al. (2019) to calculate the loss: J = -Σ_{i=1}^{M} [ f(x_i, d_i) - log Σ_{j=1}^{M} exp f(x_i, d_j) ], where M = 100 is the size of the total candidate set.",
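A minimal sketch of the dot-product ranking and softmax-style loss, with random vectors standing in for the BERT encodings and the gold response fixed at index 0 (a simplification of the actual training setup):

```python
import numpy as np

def ranking_loss(h_ctx, H_cand, gold=0):
    """Dot-product scores f(x_i, d_j) = h_ctx^T h_cand over M candidates,
    normalized with a softmax; the loss is the negative log-probability of
    the gold response, in the spirit of Henderson et al. (2019)."""
    scores = H_cand @ h_ctx
    shifted = scores - scores.max()              # numerical stability
    p_rank = np.exp(shifted) / np.exp(shifted).sum()
    return -np.log(p_rank[gold]), p_rank

rng = np.random.default_rng(0)
M, dim = 100, 128                                # M candidates, 128-dim projections
h_ctx = rng.normal(size=dim)
H_cand = rng.normal(size=(M, dim))
loss, p_rank = ranking_loss(h_ctx, H_cand)
```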
"We performed experiments on the two newly proposed tasks, AST and CDS.",
"AST consists of two subtasks, button-slot prediction and value-filling, while CDS builds on this with three additional subtasks of next step selection, utterance ranking, and intent classification.",
"For both tasks, we experimented with two types of frameworks, a pipeline version and an end-to-end version.",
"The pipeline version trains each subtask separately while the end-to-end optimizes all tasks jointly (Liang et al., 2020; Rastogi et al., 2020a; Ham et al., 2020).",
"The pipeline model uses a BERT model trained with the RAdam optimizer (Liu et al., 2020).",
"To test the performance of different pretrained models under the end-to-end framework, we experiment with three additional encoders: AlBERT (Lan et al., 2020), RoBERTa (Liu et al., 2019) and RoBERTa-Large.",
"Table 3 (Metrics for Action-State Tracking): B-Slot: Pipeline 86.7%, BERT 89.9%, AlBERT 90.9%, RoBERTa 93.6%; Value: Pipeline 42.1%, BERT 61.6%, AlBERT 61.0%, RoBERTa 67.2%; Action: Pipeline 32.3%, BERT 59.5%, AlBERT 59.2%, RoBERTa 65.8%.",
"The AlBERT model adds an inter-sentence coherence pretraining task and has a lighter memory footprint than BERT, while the RoBERTa model was pretrained with substantially more data and hyper-parameter tuning than BERT.",
"In the future, we also plan to include GPT-based models, such as DialoGPT (Zhang et al., 2020) in our comparison.",
"For both tasks, moving from the pipeline architecture to a jointly trained method displayed noticeable improvement in accuracy.",
"As hinted at in prior work (Liang et al., 2020), we suspect that joint training gives each subtask extra supervision from the other subtasks, enabling more data-efficient training.",
"In the AST task, we found steady improvements as we move from the older to the newer models with vanilla BERT at 59.5% accuracy and RoBERTa doing the best at 65.8%.",
"For the CDS task, we found a similar trend where RoBERTa-Large outperforms BERT, but only by a mere 0.6%.",
"We hypothesize this small gap between models is due to the fact that none were particularly trained on dialogue data which impacts their ability to produce a useful encoding (Wu and Xiong, 2020).",
"Separately, we evaluate CDS subtask difficulty by asking human volunteers to select the correct label from a list of possible options.",
"As an example, workers would be presented with 55 different classes for Intent Classification and asked to choose the right one.",
"Since humans typically struggle when choosing from large collections of items, fine-tuned models performed roughly on par with or better than humans in this unnatural setting.",
"On the other hand, human evaluation for the overall CDS task was judged by measuring the success rate in standard conversational scenarios where behavioral instincts apply, so humans were able to excel in this setting.",
"We perform an ablation study to test the significance of the key features in ABCD.",
"Recall that actions are characterized by their dual nature of requiring signals from both the customer and the company guidelines.",
"To that end, we provided the ground truth intent to measure the impact of the customer side.",
"Conversely, we also test the company side by masking out invalid buttons based on the insight that the Agent Guidelines are useful for narrowing down the range of possible actions.",
"In both situations, we would expect that providing such oracle guidance would boost performance.",
"Lastly, note that the appropriate action depends on the outcomes of prior actions, so for a final experiment we removed prior actions and their explanations from the context to test their impact on task success.",
"(See Appendix E for details.)",
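The guideline-masking ablation can be pictured as masking invalid action logits before the softmax. In this illustrative numpy sketch the logits and validity mask are made up; it also makes the failure mode concrete: if a wrong mask hides the correct button, the model can only pick among incorrect ones:

```python
import numpy as np

def masked_softmax(logits, valid_mask):
    """Softmax over action logits with invalid buttons masked out:
    invalid entries get -inf, so their probability is exactly zero."""
    masked = np.where(valid_mask, logits, -np.inf)
    masked = masked - masked[valid_mask].max()   # numerical stability
    e = np.exp(masked)
    return e / e.sum()

logits = np.array([2.0, 1.0, 0.5, 3.0])
valid = np.array([True, True, False, False])     # guidelines rule out buttons 2 and 3
probs = masked_softmax(logits, valid)
```

Note that button 3 had the highest raw logit; with the mask applied, button 0 is chosen instead, which is helpful when the mask is right and harmful when it is wrong.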
"We observe that supplying the intent information to the BERT model causes a noticeable boost in dialog success, bringing the score to 32.3%.",
"However, augmenting the model with knowledge of the guidelines unexpectedly dropped performance down to 30.6%.",
"Further analysis revealed the imperfect intent classifier would occasionally mask out valid buttons, leaving only incorrect ones to choose from.",
"As a result, the downstream action predictor would be prevented from doing its job, causing errors to accumulate.",
"To test this hypothesis, we ran another model (Intent+Guide) which had access to guidelines along with an oracle intent classifier.",
"This model reached the peak observed performance of 32.7%, highlighting the importance of both components.",
"As a final result, removing action information from action-based conversations unsurprisingly causes a major performance drop (Table 4).",
"In conclusion, we have presented ABCD which includes over 10K dialogues that incorporate procedural, dual-constrained actions.",
"Additionally, we established a scalable method for collecting live human conversations with unequal partners.",
"We found that pre-trained models perform decently on Action State Tracking, but there is a large gap between human agents and the top systems for Cascading Dialogue Success.",
"We plan to incorporate GPT-related models (Hosseini-Asl et al., 2020), as alternate forms of preprocessing have shown promise in other NLP tasks.",
"Other techniques could also be used to incorporate speaker info, action semantics and other meta-data.",
"Wholly new systems that attend to the Agent Guidelines in a fully differentiable manner are also worth exploring.",
"By grounding dialogues to in-depth scenarios with explicit policies, we hope to have pushed towards a better understanding of dialogue success.",
"The authors would like to thank Tao Lei, Felix Wu and Anmol Kabra for their feedback and support.",
"We would also like to thank the anonymous NAACL 2021 reviewers for pointing out specific areas of confusion in our submission, which we have tried our best to clarify.",
"This paper presents a new dataset which was collected through the use of crowdworkers.",
"All agent workers were compensated a fair wage based on their local standard of living, where their location was determined during the vetting process.",
"(Please refer to Appendix A for more details.)"
] | [
"abstain",
"abstain",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"result",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"result",
"result",
"abstain",
"abstain",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain"
] |
[
"Data-to-text generation focuses on generating fluent natural language responses from structured meaning representations (MRs).",
"Such representations are compositional and it is costly to collect responses for all possible combinations of atomic meaning schemata, thereby necessitating few-shot generalization to novel MRs. In this work, we systematically study the compositional generalization of the state-of-the-art T5 models in few-shot data-to-text tasks.",
"We show that T5 models fail to generalize to unseen MRs, and we propose a template-based input representation that considerably improves the model's generalization capability.",
"To further improve the model's performance, we propose an approach based on self-training using fine-tuned BLEURT for pseudo-response selection.",
"On the commonly-used SGD and Weather benchmarks, the proposed self-training approach improves tree accuracy by 46%+ and reduces the slot error rates by 73%+ over the strong T5 baselines in few-shot settings.",
"1 Introduction: Data-to-text generation (Dušek et al., 2020; Shen et al., 2020) is a critical component in today's task-oriented dialog systems for producing fluent natural language responses to users' requests.",
"The task takes structured meaning representations (MRs) as input for natural language text response generation.",
"Such representations are compositional, which allows for the combination of atomic meaning units in various ways to express the rich semantics encoded in languages.",
"Recently, large pre-trained language models (LMs) have shown impressive results on many language understanding and generation tasks (Howard and Ruder, 2018; Peters et al., 2018; Devlin et al., 2019; Raffel et al., 2020); however, it remains unclear how well these LMs generalize compositionally to novel semantic representations.",
"(Work performed during an internship at Google.)",
"(Footnote 1: Our code and data are available at github.com/googleresearch/google-research/tree/master/compgen_d2t.)",
"[Figure 1: tree accuracy vs. few-shot train split size (250-1000) for different semantic representations.]",
"There have been many studies revealing that large LMs often memorize the patterns from training data, while generalizing poorly to novel patterns.",
"Compositionality in languages (Banarescu et al., 2013; Konstas et al., 2017) further aggravates such issues as the number of novel structural combinations exponentially increases with the number of atomic semantic units.",
"In recent years, we have seen progress on benchmarking and measuring compositional generalization for languages (Andreas, 2019), from perspectives including specialized architectures (Lake, 2019; Rao et al., 2019) and learning strategies (Andreas, 2020; Akyürek et al., 2021).",
"However, most of these works study generalization for NLU tasks like question answering (Keysers et al., 2020) and semantic parsing (Kim and Linzen, 2020).",
"To the best of our knowledge, compositional generalization for natural language generation is still an under-explored problem, which is the focus of this work.",
"To answer the question of whether pre-trained LMs still suffer from lack of compositional generalization, we start with an empirical evaluation of T5 (Raffel et al., 2020), the state-of-the-art model on data-to-text generation tasks (Kale and Rastogi, 2020b).",
"In our study, we use the Weather dataset (Balakrishnan et al., 2019) consisting of tree-structured compositional MRs along with tree-structured output responses (see Figure 2 for (a) the naive MR and (c) the target response).",
"For evaluation, we compute the tree accuracy (Balakrishnan et al., 2019) which measures exact match between input and generated tree-structures.",
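Tree accuracy can be sketched as an exact match over the bracketed non-terminals of the MR and the generated response; the bracket notation here is a simplified stand-in for the dataset's actual annotation scheme:

```python
import re

def tree_structure(annotated):
    """Keep only brackets and non-terminal labels, dropping terminal
    words (the argument values)."""
    return " ".join(re.findall(r"\[\w+|\]", annotated))

def tree_accuracy(pairs):
    """Fraction of (input MR, predicted response) pairs whose
    tree structures match exactly."""
    hits = sum(tree_structure(mr) == tree_structure(pred) for mr, pred in pairs)
    return hits / len(pairs)

good = ("[INFORM [CONDITION light rain ] [DATE_TIME today ] ]",
        "[INFORM [CONDITION a light rain shower ] [DATE_TIME today ] ]")
bad = ("[INFORM [CONDITION light rain ] [DATE_TIME today ] ]",
       "[INFORM [CONDITION light rain ] ]")       # dropped the DATE_TIME subtree
acc = tree_accuracy([good, bad])                  # 0.5
```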
"In this study we observe a 47% to 80% (across different few-shot train splits) drop in tree accuracy when evaluating on validation splits containing unseen tree-structures in comparison to splits containing seen tree-structures (Figure 1).",
"Furthermore, simply increasing the model size from T5-small to T5-large does not close the generalization gap (Table 2), affirming our hypothesis that even strong seq-to-seq LMs fail to generalize compositionally.",
"Inspired by Kale and Rastogi (2020a), we examine whether template-guided MRs are effective over naive MRs for tackling compositional generalization in data-to-text tasks.",
"We introduce a simple template engine that traverses the compositional MR in a top-down manner and converts it to a text representation (Figure 2(b)).",
"We hypothesize that such a template-guided setup reduces the change in representation between LM pre-training and finetuning.",
"With template-guided MRs, we report up to a 2x increase in tree accuracy over naive MRs on the validation split with unseen structures, demonstrating improved model generalization.",
"We also propose to self-train the generation model to further boost performance by mitigating data sparsity in the low-data regime without requiring additional manual annotation.",
"Concretely, we augment the limited labeled MRs with unlabeled novel MRs to iteratively bootstrap the model.",
"To fil-ter out noisy pseudo responses during self-training, we repurpose BLEURT (Sellam et al., 2020), a learned metric, to be a quality estimator.",
"We synthetically generate datasets for finetuning BLEURT with the goal of identifying hallucinations, missing slot-values, and ungrammatical responses.",
"In sum, our overall approach improves the tree accuracy on unseen structures of the FewShotWeather dataset by 12.3% to 46.4% over strong T5 baselines.",
"On unseen schemata of the FewShotSGD dataset, we reduce the slot error rate by 54.4% to 73.0%.",
"In this section, we are interested in investigating the following with respect to data-to-text tasks:",
"(Q1) Do current state-of-the-art generation models compositionally generalize?",
"(Q2) What is an effective semantic representation for tackling compositional generalization?",
"Problem Setup: Data-to-text generation is the task of generating natural language text y from a meaning representation (MR) x.",
"In the context of task-oriented dialog systems, the choice of MR ranges from a flat list of slot-value pairs (Dušek et al., 2018) to a more expressive tree structure.",
"Balakrishnan et al. (2019) defines tree-structured MRs consisting of arguments, dialog acts, and discourse relations, which we use in this work.",
"They report significant gains in the naturalness of the generated responses with tree-structured MRs on the Weather domain dataset.",
"Figure 2(a) visualizes an instantiation of such a tree-structured MR, where the argument LOCATION is made up of a sub-argument (CITY), the dialog act RECOMMEND consists of three arguments (ATTIRE_NOT, LOCATION, DATE_TIME), and the discourse relation JUSTIFY captures the relationship between two dialog acts (RECOMMEND, INFORM).",
"We consider linearized versions of tree-structured MR x and output response y .",
"Generating the tree structure in the output enables us to compute the tree accuracy which helps to assess the structural correctness of the predicted response.",
"FewShotWeather Dataset: Due to the compositional nature of MRs, it is costly to collect responses for all combinations of discourse relations, dialog acts and arguments.",
"In order to keep data labeling costs under control, we simulate a more realistic few-shot (or limited labeled data) setup.",
"In the original Weather dataset, we have 25,390 training examples spanning 4,690 unique tree-structured MRs. A unique tree-structured MR is defined as a novel composition of discourse relations, dialog acts and argument names.",
"Basically, they constitute the non-terminals of a tree (Figure 2(a)) without terminals, i.e., argument values like extremely humid, light rain, today, Palo Alto, jacket, and cold.",
"For the Weather dataset (Balakrishnan et al., 2019), we construct 4 few-shot splits: 1shot-250, 1shot-500, 1shot-750, and 1shot-1000, where 1shot-X denotes a training split that includes one example per unique tree-structured MR and X unique tree-structured MRs in total. Further, all X examples in 1shot-X are included while constructing 1shot-Y splits, where X < Y.",
"We also make sure each discourse relation, dialog act and argument name is represented at least once in our few-shot splits.",
"However, all combinations of these may not exist, thus allowing us to simulate structural shifts and evaluate compositional generalization.",
"Based upon these splits, we construct two evaluation sets: seen tree-structures (overlapping with tree-structured MRs from 1shot-250) and unseen tree-structures (disjoint with tree-structured MRs from 1shot-1000) (see Section 4.1 for more details).",
"Henceforth, all of the above splits constitute the FewShotWeather dataset.",
"We release these splits for future studies.",
"To answer (Q2), we use linearized tree structures as input to the T5 model ( naive representation ).",
"However, T5 based models are pre-trained on normal text as input, thereby creating a representation discrepancy between pre-training and fine-tuning.",
"To alleviate this discrepancy, we introduce a simple template engine that recursively traverses the compositional MR in a top-down manner to generate a structure-aware text representation ( template guided representation ).",
"The templates used to convert the naive representation (Figure 2(a)) to the template-guided representation (Figure 2(b)) are listed in Table 1.",
"Each template, consisting of a name and a body, is invoked if a node in the MR (e.g., DG_INFORM) matches its name.",
"A template can also invoke other templates or some utility functions.",
"For example, template 3 could invoke templates 4 or 5 based on the returned value of the utility function IsSet($condition) (namely, whether the argument $condition is set or not).",
"Such a template engine requires developing only a linear number of templates with respect to the number of meaning units to convert a compositional MR to a text representation, without writing a template for each unique MR (4,690 unique MRs in the dataset).",
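The engine's top-down traversal can be sketched as a recursive renderer with one template per node type; the node names and templates below are illustrative, not the paper's actual template set:

```python
def render(node, templates):
    """Recursively render a compositional MR (nested dicts): resolve each
    child argument first, then fill this node's template. Only one template
    per meaning unit is needed, i.e., linear in the number of units."""
    args = {k: render(v, templates) if isinstance(v, dict) else v
            for k, v in node.get("args", {}).items()}
    return templates[node["name"]](**args)

# Hypothetical templates for a tiny weather MR
templates = {
    "DG_INFORM": lambda condition, date_time: f"there will be {condition} {date_time}",
    "DATE_TIME": lambda colloquial: colloquial,
}
mr = {"name": "DG_INFORM",
      "args": {"condition": "light rain",
               "date_time": {"name": "DATE_TIME",
                             "args": {"colloquial": "today"}}}}
text = render(mr, templates)   # "there will be light rain today"
```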
"In our study, we fine-tune the T5-small model using different few-shot train splits and report tree accuracy on validation splits.",
"We observe that current state-of-the-art generation models undergo a significant drop in performance when evaluated on unseen tree structures.",
"Specifically, with the naive input representation, we observe a 47% to 80% (across different few-shot train splits) drop in tree accuracy, thus providing evidence to answer (Q1): the current model does not generalize to novel MRs. On experimentation with template-guided MRs and the 1shot-250 train split, the tree accuracy on the unseen validation split increases from 8.77 to 26.3 (a 2x increase over naive MRs), thus answering (Q2) favorably (Figure 1).",
"However, across different few-shot train splits, template-guided MRs still undergo a significant 41% to 65% drop in tree accuracy on the unseen split compared to the seen split.",
"Recent studies (Kaplan et al., 2020; Tay et al., 2021) show that model scale can affect the performance on several pre-training and downstream tasks.",
"To understand how model scale affects the generalization to unseen structures, we consider three T5 variants: T5-small (77M), T5-base (120M), and T5-large (800M).",
"We fine-tune each of these models on the full training data (16,816 examples corresponding to 1000 unique tree-structured MRs from 1shot-1000 split) and convincingly answer (Q3): Increasing the model (and dataset) size does not close the performance gap between seen and unseen splits (Table 2).",
"Surprisingly, we observe that the T5-small model performs similarly or better than its larger counterparts.",
"We use T5-small for the remaining experiments.",
"As discussed earlier, the compositional nature of MRs makes it difficult to collect responses for all combinations.",
"However, with access to data simulators (Rastogi et al., 2020), it is feasible to automatically generate large amounts of unlabeled MRs. Given limited labeled MRs, S = {(x_i, y_i)}_{i=1}^{n}, and assuming access to unlabeled MRs, U = {x_i}_{i=1}^{m}, we investigate self-training (Scudder, 1965), a semi-supervised learning approach that effectively uses U to improve compositional generalization.",
"Self-training starts from a model trained on labeled data S , iteratively applies the current model to generate pseudo-labels on unlabeled data U , and then re-trains the current model on the augmented version of S and (subset of) U .",
"For self-training to be effective, one needs to carefully select confident pseudo labels to alleviate the risk of reinforcing the model's mistakes (He et al., 2020).",
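The loop can be sketched as follows, with a toy model and an exact-match scorer standing in for the fine-tuned T5 generator and the BLEURT-based quality estimator:

```python
def self_train(model, labeled, unlabeled, score_fn, threshold=0.99, iterations=3):
    """Iterative self-training: pseudo-label the unlabeled MRs, keep only
    responses whose quality score clears the threshold, then retrain on
    the labeled data plus the retained pseudo-labeled examples."""
    for _ in range(iterations):
        pseudo = [(x, model.predict(x)) for x in unlabeled]
        kept = [(x, y) for x, y in pseudo if score_fn(x, y) >= threshold]
        model = model.fit(list(labeled) + kept)
    return model

class ToyModel:
    """Stand-in for the generation model."""
    def __init__(self, data=()):
        self.data = list(data)
    def predict(self, x):
        return x.upper()                     # placeholder "generation"
    def fit(self, data):
        return ToyModel(data)

score_fn = lambda x, y: 1.0 if y == x.upper() else 0.0   # stand-in for BLEURT
model = self_train(ToyModel(), [("a", "A")], ["b", "c"], score_fn)
```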
"This issue gets further exacerbated in the context of generation tasks, where neural models are prone to hallucinate additional content not supported by the input (Maynez et al., 2020).",
"With recent developments in learned evaluation metrics that penalize the model for hallucination, fluency, etc., we pose the question: Can we repurpose those metrics to assess the quality of pseudo-responses during self-training?",
"Formally, given a pair of template guided MR (source) and model predicted response (candidate), we want a model that estimates the response quality by looking for hallucinations, fluency, coverage of argument value-pairs.",
"Ideally, to learn such a model we require a large amount of positive and negative text pairs.",
"To alleviate this requirement, we propose synthesizing the examples using the limited labeled task dataset.",
"Furthermore, we initialize our quality estimation model using a pre-trained BLEURT (Sellam et al., 2020), which is shown to be sample efficient and robust to data shifts as a learned evaluation metric.",
"[Figure 3] Source (text-to-text input): there will be light freezing fog with a temperature high of 74 low of 61 at next friday. Positive candidate (target response): next friday will have a high of 74, a low of 61, and a light freezing fog.",
"Negative candidates: [retrieving similar examples] next friday will be cloudy with a high of 74, a low of 61, and thunderstorms and rain; [pairing with reference] there will be light freezing fog with a temperature high of 74 low of 61 at next friday; [swapping words] next friday will of have a high of will 74, a low of 61, and a light freezing fog; [repeating phrases] next friday will have a high of 74, a low of 61 of 61, and a light freezing fog; [dropping phrases] next friday will have a high of 74, a low of 61, and a light freezing fog; [flipping digits] next friday will have a high of 78, a low of 61, and a light freezing fog.",
"Once we have a fine-tuned BLEURT model, we use it to select pseudo-responses using a selection threshold for self-training.",
"We synthetically generate the dataset for finetuning BLEURT using the labeled dataset available for each of our experiments.",
"Template guided inputs and ground truth target responses are paired as positive examples (rating: 1.0).",
"We use the following transformations on the target responses to create negative examples (rating: 0.0). Retrieving similar examples: For every input x, we rank all other inputs from the dataset using the BLEU score and select the top-k examples below a certain threshold (90.0).",
"Target responses corresponding to these top-k examples are paired with x to construct negative examples.",
"Intuitively, these responses partially overlap with input x in terms of the content and inform a fine-tuned model to handle hallucinations.",
"Pairing with reference: Template guided inputs need not be grammatically correct.",
"Pairing the input x with itself as a response provides grammatically incorrect negative examples.",
"Swapping, repeating and dropping phrases, flipping digits: Using these methods, we prepare the fine-tuned BLEURT to catch structurally inconsistent behaviors of the NLG system.",
"Figure 3 visualizes an instantiation of different transformations to construct negative examples.",
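The phrase-level corruptions can be sketched with simple string operations; these are illustrative implementations, not the authors' exact procedures:

```python
import random

def swap_words(text, rng):
    """Swap one pair of adjacent words."""
    words = text.split()
    i = rng.randrange(len(words) - 1)
    words[i], words[i + 1] = words[i + 1], words[i]
    return " ".join(words)

def repeat_phrase(text, rng, span=2):
    """Duplicate a short span of words in place."""
    words = text.split()
    start = rng.randrange(len(words) - span)
    return " ".join(words[:start + span] + words[start:start + span] + words[start + span:])

def drop_phrase(text, rng, span=2):
    """Delete a short span of words."""
    words = text.split()
    start = rng.randrange(len(words) - span)
    return " ".join(words[:start] + words[start + span:])

def flip_digits(text, rng):
    """Replace one digit with a different random digit (assumes a digit exists)."""
    idx = [i for i, c in enumerate(text) if c.isdigit()]
    i = rng.choice(idx)
    new = rng.choice([d for d in "0123456789" if d != text[i]])
    return text[:i] + new + text[i + 1:]

rng = random.Random(0)
resp = "next friday will have a high of 74 and a low of 61"
negatives = [f(resp, rng) for f in (swap_words, repeat_phrase, drop_phrase, flip_digits)]
```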
"FewShotWeather: The original Weather dataset (Balakrishnan et al., 2019) has 25,390 training examples.",
"Each example consists of a user query, the tree-structured MR, the tree-structured annotated response and metadata.",
"As discussed in Section 2, we create new canonical subsets for compositional generalization experiments, FewShotWeather with 1shot-250 (approx. 1% of original training data), 1shot-500, 1shot-750, and 1shot-1000 splits.",
"We repurpose all of the remaining 24k training examples as unlabeled examples for self-training.",
"Our evaluation splits have 1,087/1,121 (val/test) examples with seen tree-structures, and 1,095/1,170 (val/test) examples with novel tree-structures.",
"We report tree accuracy and BLEU-4 (Papineni et al., 2002) for the FewShotWeather dataset.",
"FewShotSGD: The original multi-domain Schema Guided Dialogue (SGD) dataset (Rastogi et al., 2020) has 160k examples spanning 20 domains (e.g., Banks, Travel, Weather, etc.).",
"For each of these domains, there are different services with a total of 45 different schemata.",
"Schema here refers to the combination of intents and slots, which change with services and domains.",
"Further, not all domains and services are observed during training.",
"Therefore, we use this dataset to study generalization to unseen schemata.",
"Specifically, we use the few-shot variant of the dataset, FewShotSGD, as introduced by Kale and Rastogi (2020a).",
"The FewShotSGD benchmark consists of k-shot splits (5/10/20/40), where k denotes the number of dialogues selected per train domain.",
"The few-shot train splits have 558/1,075/2,140/4,312 (5/10/20/40-shot) examples.",
"Evaluation splits have 13,748/10,216 (val/test) examples with seen schema, and 10,386/26,568 (val/test) examples with novel schema.",
"Following Kale and Rastogi (2020a), we report BLEU-4 and slot error rate (SER) (Dušek and Jurčíček, 2019).",
"SER measures the fraction of examples where at least one slot was incorrectly copied from the input (lower SER is better).",
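SER as defined above can be sketched as a substring check of each slot value against the generated response (a simplification; the benchmark's official computation may differ in details such as delexicalization):

```python
def slot_error_rate(examples):
    """SER: fraction of examples where at least one input slot value
    is missing from the generated response. Lower is better."""
    def has_error(slots, response):
        return any(value not in response for value in slots.values())
    errors = sum(has_error(slots, resp) for slots, resp in examples)
    return errors / len(examples)

# Hypothetical slot dictionaries and responses for illustration
examples = [
    ({"time": "7 pm", "city": "Fresno"}, "Booked for 7 pm in Fresno."),
    ({"time": "7 pm", "city": "Fresno"}, "Booked for 8 pm in Fresno."),
]
ser = slot_error_rate(examples)   # 0.5: one of two responses drops a slot value
```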
"For each of the experiments, we fine-tune the off-the-shelf T5.1.1.small checkpoint.",
"It has 6 layers each in the encoder and decoder, with a total of 77M parameters.",
"We set the maximum sequence length to 512, the batch size to 16, and a constant learning rate of 0.001 for the Adafactor optimizer (Shazeer and Stern, 2018).",
"All models are fine-tuned on a 4x4 TPU slice, each taking around 2-3 hours to finish 5,000 steps.",
"We evaluate models after every 200 steps and retain the checkpoint yielding best tree accuracy (for FewShotWeather) or BLEU (for FewShotSGD) on the held-out validation seen split.",
"During inference, we set the beam size to 4 and the length penalty to 0.6.",
"While constructing the fine-tuning dataset for BLEURT, we generate up to 4 different negative candidates for each of the 6 transformations.",
"We upsample the positive examples to be half the total number of negative examples and retain a random 10% of all examples as a validation set.",
"For finetuning the BLEURT model, we start with publicly available BLEURT-20-D12 (Sellam et al., 2020).",
"We set the maximum sequence length to 512, the batch size to 32, and a learning rate of 1e-6, and fine-tune for 100k steps.",
"We use the held-out validation set to select the best checkpoint for self-training.",
"In this section, we compare the performance of BLEURT based pseudo-response selection strategy with that of vanilla self-training.",
"For each experiment, we randomly sample an equal number of examples for vanilla self-training and the BLEURT model to explicitly control for the sample complexity.",
"We run 3 iterations of self-training unless explicitly specified and set the BLEURT score selection threshold to 0.99.",
"We study the performance on a dataset (FewShotWeather) with tree-structured outputs as well as show the generality of our method on a dataset (FewShotSGD) without explicit tree-structured outputs.",
"Note that naive T5 fine-tuning with template guided input representation constitutes a strong baseline for few-shot experiments as shown by Kale and Rastogi (2020a).",
"We include results from this baseline under the 'None' pseudo-response selection strategy, as it does not involve self-training.",
"Unseen tree structures (FewShotWeather): Table 3 reports the performance of different methods as a function of the number of labeled examples.",
"We observe that the performance for all methods improves with more training data.",
"Across all few-shot splits, we observe that BLEURT based self-training improves over vanilla self-training both in terms of tree accuracy and BLEU.",
"Empirically, we see that the relative gains in tree accuracy (over the T5-small baseline) from vanilla self-training are comparable on both unseen and seen splits (e.g., 7.15% vs. 6.72%, 1shot-500).",
"Table 4 (model performance over multiple self-training iterations with the FewShotWeather 1shot-250 train split; columns: self-training iteration, no. of training examples, seen BLEU / Tree Acc., unseen BLEU / Tree Acc.): Baseline: -, 250, 69.16 / 73.68, 50.40 / 29.83; Vanilla: iter. 1, +14,742, 69.25 / 73.77, 51.87 / 31.37; iter. 2, +4,170, 68.72 / 73.06, 51.92 / 31.11; BLEURT-250: iter. 1, +14,742, 69.64 / 83.85, 52.10 / 41.03; iter. 2, +4,170, 69.59 / 84.12, 52.34 / 43.68; BLEURT-1000: iter. 1, +14,021, 70.95 / 84.83, 52.13 / 45.47; iter. 2, +4,772, 70.47 / 85.64, 53.08 / 47.44. BLEURT-X denotes a BLEURT model fine-tuned using the 1shot-X train split. We observe that a BLEURT model fine-tuned with larger datasets further enhances self-training performance, especially on unseen structures.",
"On the other hand, BLEURT-based self-training significantly improves the relative performance on the unseen split in comparison to the seen split (e.g., 18.72% vs. 10.5%, 1shot-500), thus showcasing the effectiveness of selecting quality pseudo-responses for improving performance on unseen tree-structures.",
"Unseen schema (FewShotSGD): Table 3 reports the performance on the FewShotSGD dataset.",
"Similar to results on the FewShotWeather dataset, we observe that the performance improves with more training data.",
"Further, the performance of the baseline T5-small model is comparable on seen and unseen schemata.",
"These gains can be attributed to the benefits of using template-guided MRs. In comparison to vanilla self-training, the BLEURT-based approach improves overall performance across all few-shot splits on both seen and unseen schemata.",
"For example, in 5-shot experiments, the BLEURT-based selection strategy reduces the SER on unseen schemata from 19.93 to 5.39 (a 73% improvement) in comparison to the baseline T5 model.",
"On the other hand, vanilla self-training reduces the SER by only 3.97 (20%), showcasing the effectiveness of the proposed approach in filtering out pseudo-responses with missing slot-value pairs.",
"These results confirm that BLEURT based self-training is a generic method and can be plugged in to existing methods to improve the few-shot generalization capabilities of existing SOTA generation models.",
"Performance with respect to self-training iterations: We iteratively self-train the model starting from the T5-small baseline and continue adding unlabeled examples for up to 3 iterations.",
"From Table 4 and 9, we see that model performance improves across the self-training iterations.",
"However, the number of additional examples added decreases over iterations, thus suggesting that 2 3 iterations might be enough to obtain benefits from self-training.",
"Quality of fine-tuned BLEURT models For all our experiments, we use the few-shot labeled datasets for fine-tuning the BLEURT model.",
"To investigate self-training performance with a BLEURT model fine-tuned on a large dataset, we set up an experiment on the FewShotWeather dataset, where we fine-tune the BLEURT model on a 1shot-1000 train split (BLEURT-1000) and use it for self-training with 1shot-250.",
"From Table 4, we see that self-training with BLEURT-1000 performs significantly better than BLEURT-250, especially on unseen structures, thereby confirming the intuition that self-training is sensitive to the quality of the BLEURT model.",
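The selection step underlying these experiments can be sketched as a simple filtering loop. This is a minimal illustration, not the paper's implementation: the `model` and `score` callables and the acceptance threshold are hypothetical stand-ins for a fine-tuned generator and a fine-tuned BLEURT quality estimator.

```python
# Sketch of one BLEURT-filtered self-training round. `model` and `score` are
# hypothetical stand-ins for a fine-tuned generator and a fine-tuned BLEURT
# quality estimator; the threshold value is illustrative.

def self_train_round(model, unlabeled_inputs, score, threshold=0.5):
    """Generate pseudo-responses and keep only those the scorer accepts."""
    accepted = []
    for mr in unlabeled_inputs:
        response = model(mr)                  # pseudo-response for the input MR
        if score(mr, response) >= threshold:  # quality filter on the pair
            accepted.append((mr, response))
    return accepted  # added to the training set for the next fine-tuning pass

# Toy usage with stand-in callables:
toy_model = lambda mr: mr.upper()
toy_score = lambda mr, resp: 1.0 if resp else 0.0
pairs = self_train_round(toy_model, ["temp tomorrow", ""], toy_score)
```

Accepted pairs are appended to the labeled pool before the next fine-tuning pass, which is why a stronger scorer (e.g., BLEURT-1000) yields better iterations.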
"Aside from automatic metrics-based evaluation, we also perform a human evaluation study by asking annotators to assess the quality of the generated responses from different models.",
"For each example, human annotators are shown the user query, the generated response, and the ground-truth response.",
"They are asked to rate two dimensions, grammaticality and naturalness, on a scale of 1 (bad), 2 (slightly bad), and 3 (good); informativeness on a scale of 0 (less) to 1 (adequate); and accuracy with a binary rating.",
"Similar to (Balakrishnan et al., 2019), grammaticality evaluates the response for subject-verb agreement, repetitions, and grammatical completeness.",
"Naturalness measures whether the response sounds coherent and natural on its own.",
"Informativeness measures whether the response contains the right amount of relevant information to the user query, and accuracy evaluates the response for hallucinations (incorrectly added slots) and missing slots by comparing it against the reference.",
"[Table 5: Human evaluation results comparing different models (Gram/Nat/Info/Acc; statistical-significance superscripts omitted). FewShotWeather seen: Baseline 2.59/2.55/0.81/0.94, BLEURT 2.66/2.63/0.80/0.93, Full 2.66/2.61/0.80/0.95; unseen: Baseline 2.43/2.41/0.75/0.79, BLEURT 2.50/2.46/0.76/0.80, Full 2.53/2.50/0.79/0.86. FewShotSGD seen: Baseline 2.72/2.66/0.79/0.76, BLEURT 2.69/2.59/0.81/0.88, Full 2.83/2.74/0.81/0.94; unseen: Baseline 2.70/2.61/0.77/0.72, BLEURT 2.67/2.60/0.79/0.86, Full 2.83/2.73/0.82/0.94.]",
"For each evaluation split (seen/unseen), we randomly select 200 examples and collect ratings from 3 different annotators.",
"For the FewShotWeather/SGD datasets, we consider models trained with 1shot-250/5-shot splits and compare them with models fine-tuned on the full dataset.",
"In total, we collect 7,200 annotations, each with 3 ratings.",
"Table 5 reports results for human evaluation study.",
"FewShotWeather: Similar to the automatic metrics, we see a drop in human ratings on the unseen split (compared to the seen split), confirming the model's lack of generalization to novel MRs. On both evaluation splits, our approach outperforms the baseline model, with significant gains on the grammaticality and naturalness ratings.",
"Moreover, the responses from the self-trained model are comparable (in terms of the human ratings) with that of the model fine-tuned with the full dataset, demonstrating the effectiveness of our approach.",
"FewShotSGD: Apart from generating natural responses, model responses must be factually grounded in the input data and address user queries.",
"On FewShotSGD, we see that our approach significantly improves informativeness and accuracy rating over the baseline model.",
"Surprisingly, we see a drop on naturalness when evaluated on seen schemata.",
"In Table 6 (and Tables 7, 8 in Appendix A) we visualize the sample responses generated using different models for unseen test splits.",
"We consider three models: T5-small baseline, BLEURT based self-training, and model trained with full data.",
"For the FewShotWeather/ FewShotSGD datasets, we consider models trained with 1shot-250/ 5-shot train splits.",
"We see that the baseline model fails to generate responses that are coherent and factually grounded in the input.",
"They hallucinate novel concepts like 'cloudy hail', drop relevant details like 'cafe located in Emeryville', and are repetitive in nature.",
"We also report the BLEURT score along with human ratings per sample and see that they are reflective of the response quality.",
"Data-to-Text Generation: While early research focused on rule-based methods (Reiter and Dale, 2000), more recent work has relied heavily on neural methods (Wen et al., 2015; Marcheggiani and Perez-Beltrachini, 2018).",
"Some recent works (Kale and Rastogi (2020b), Peng et al. (2020), Kale and Roy (2020)) showed that transfer learning from pre-trained language models can improve generalization capabilities and sample efficiency.",
"In other lines of work, Ferreira et al. (2019); Moryossef et al. (2019) find that pipelined neural approaches with explicit planning steps can outperform their end-to-end counterparts, while Kale and Rastogi (2020a) and Du et al. (2020) showed the benefits of schema and template guided input representations.",
"Inspired by Kale and Rastogi (2020a) we propose a simple and generic way to produce text-to-text representation, and study how it impacts compositional generalization.",
"Self-training for NLG: He et al. (2020) revisit the problem of self-training for NLG.",
"They find that noise (from perturbing the input space) helps in self-training and propose a noisy version of self-training that augments vanilla training with inputs from a reconstruction model.",
"[Table 6 (excerpt), fields: BLEURT, Gram, Nat, Info, Acc, input or output response. User query: 'What will the temperature be tomorrow morning'; template: 'There will be temperatures between 76 and 80 tomorrow morning there will be partly cloudy tomorrow morning'; reference: 'The temperature for tomorrow morning will be between 76 and 80 fahrenheit along with partly cloudy skies'; Baseline prediction (BLEURT -0.002, ratings 2.17/1.67/0.67/1.0): 'Expect partly cloudy skies and tomorrow morning.']",
"Building on this idea, contemporary work on few-shot data-to-text generation (Heidari et al., 2021) proposes to self-train the model and shows efficacy",
"on the Weather dataset.",
"Another contemporary work (Li et al., 2021) proposes to use constrained decoding to generate valid pseudo-responses for self-training and shows convincing benefits.",
"However, our work focuses on compositional generalization, rather than the pure few-shot learning setup.",
"We propose a BLEURT-based self-training method, which is more generic than pseudo-response selection methods that rely on output structures.",
"We systematically study the problem of compositional generalization for data-to-text generation and show that existing state-of-the-art generation models do not generalize to unseen structures.",
"We propose a simple and generic way to produce template guided text representation for response generation, and demonstrate its effectiveness on both seen and unseen structures.",
"Further, we introduce a generic self-training approach that leverages fine-tuned BLEURT for pseudo response selection and show significant improvements over vanilla self-training on existing few-shot data-to-text generation benchmarks.",
"While our method requires only a small number of templates to start with, we still need to manually generate them for every unseen MR. Automatically generating templates by priming GPT-style models is an interesting line of future work.",
"Furthermore, the effectiveness of our self-training method is highly dependent on the quality of the underlying BLEURT model (see Table 4).",
"Given that the BLEURT-based quality estimator is a learned model, it may be susceptible to data distribution shifts.",
"We leave such analysis to future work.",
"Another interesting future direction is to investigate the effectiveness of our approach to languages other than English.",
"To study compositional generalization for data-to-text tasks, we introduce data splits based on the already existing, publicly available, and widely used compositional weather dataset (Balakrishnan et al., 2019).",
"We release our data splits to facilitate the development of new methods and their consistent evaluation in comparison with existing works.",
"In terms of use-case scenarios, we focus on task-oriented dialogue generation by using large pre-trained language models.",
"These models are known to exhibit and potentially amplify social biases found in the training data, such as gender biases (Dinan et al., 2020), and are capable of generating toxic or otherwise unsafe content (Weidinger et al., 2021).",
"Our method helps these models generate higher quality responses than considered baselines when evaluated in terms of grammaticality, naturalness, informativeness, and accuracy.",
"However, our work does not explicitly focus on mitigating social biases, unsafe content, or other potential ethical or social harms that might result from dialogue generation.",
"Therefore, we caution against the deployment of our system in environments where any such biases can negatively impact the individuals interacting with our system without further assessment of the safety of this system in that environment."
] | [
"abstain",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"objective",
"method",
"result",
"abstain",
"abstain",
"method",
"result",
"abstain",
"method",
"method",
"abstain",
"objective",
"objective",
"objective",
"objective",
"abstain",
"objective",
"result",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"method",
"objective",
"result",
"objective",
"result",
"method",
"result",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain"
] |
[
"Deep and large pre-trained language models are the state-of-the-art for various natural language processing tasks.",
"However, the huge size of these models could be a deterrent to using them in practice.",
"Some recent works use knowledge distillation to compress these huge models into shallow ones.",
"In this work we study knowledge distillation with a focus on multilingual Named Entity Recognition (NER).",
"In particular, we study several distillation strategies and propose a stage-wise optimization scheme leveraging teacher internal representations, that is agnostic of teacher architecture, and show that it outperforms strategies employed in prior works.",
"Additionally, we investigate the role of several factors like the amount of unlabeled data, annotation resources, model architecture and inference latency to name a few.",
"We show that our approach leads to massive compression of teacher models like mBERT, by up to 35x in terms of parameters and 51x in terms of latency for batch inference, while retaining 95% of its F1-score for NER over 41 languages.",
"Motivation: Pre-trained language models have shown state-of-the-art performance for various natural language processing applications like text classification, named entity recognition and question-answering.",
"A significant challenge facing practitioners is how to deploy these huge models in practice.",
"For instance, models like BERT Large (Devlin et al., 2019), GPT-2 (Radford et al., 2019), Megatron (Shoeybi et al., 2019) and T5 (Raffel et al., 2019) have 340M, 1.5B, 8.3B and 11B parameters respectively.",
"Although these models are trained offline, during prediction we need to traverse the deep neural network architecture stack involving a large number of parameters.",
"This significantly increases latency and memory requirements.",
"Knowledge distillation (Hinton et al., 2015; Ba and Caruana, 2014) earlier used in computer vision provides one of the techniques to compress huge neural networks into smaller ones.",
"In this, shallow models (called students) are trained to mimic the output of huge models (called teachers) based on a transfer set.",
"Similar approaches have been recently adopted for language model distillation.",
"Limitations of existing work: Recent works (Liu et al., 2019; Zhu et al., 2019; Tang et al., 2019; Turc et al., 2019) leverage soft logits from teachers as optimization targets for distilling students, with some notable exceptions from concurrent work.",
"Sun et al. (2019); Sanh (2019); Aguilar et al. (2019); Zhao et al. (2019) additionally use internal teacher representations as signals.",
"However, these methods are constrained by architectural considerations like embedding dimension in BERT and transformer architecture.",
"This makes it difficult to massively compress models (without being able to reduce network width) or adopt alternate architecture.",
"For instance, we observe BiLSTMs as students to be more accurate than Transformers in low-latency configurations.",
"Some concurrent works (Turc et al., 2019); (Zhao et al., 2019) adopt pre-training or dual training to distil students of arbitrary architecture.",
"However, pre-training is expensive in terms of time and computational resources.",
"Additionally, most of the above works are geared for distilling language models for GLUE tasks (Wang et al., 2018).",
"There has been some limited exploration of such techniques for sequence tagging tasks like NER (Izsak et al., 2019; Shi et al., 2019) or multilingual tasks (Tsai et al., 2019).",
"However, these works also suffer from similar drawbacks as mentioned before.",
"XtremeDistil: Multilingual pre-TRainEd ModEl Distillation",
"works and propose a new scheme outperforming prior ones.",
"In this, we leverage teacher internal representations to transfer knowledge to the student.",
"However, in contrast to prior work, we are not restricted by the choice of student architecture.",
"This allows representation transfer from Transformer-based teacher model to BiLSTM-based student model with different embedding dimensions and disparate output spaces.",
"We also propose a stagewise optimization scheme to sequentially transfer most general to task-specific information from teacher to student for better distillation.",
"Overview of our task: Unlike prior works mostly focusing on GLUE tasks in a single language, we employ our techniques to study distillation for massive multilingual Named Entity Recognition (NER) over 41 languages.",
"Prior work on multilingual transfer on the same dataset (Rahimi et al., 2019) (MMNER) requires knowledge of the source and target languages, judiciously selecting pairs for effective transfer and resulting in a customized model for each language.",
"In our work, we adopt Multilingual Bidirectional Encoder Representations from Transformers (mBERT) as our teacher and show that it is possible to perform language-agnostic joint NER for all languages with a single model that performs comparably to mBERT and MMNER while being massively compressed.",
"The work closest to ours is Tsai et al. (2019), where mBERT is leveraged for multilingual NER.",
"We discuss this in detail and use their strategy as a baseline.",
"We show our distillation strategy to be better leading to a higher compression and faster inference.",
"We also investigate several unexplored dimensions of distillation like the impact of unlabeled transfer data and annotation resources, choice of multilingual word embeddings, architectural variations and inference latency.",
"Our techniques obtain massive compression of teacher models like mBERT, by up to 35x in terms of parameters and 51x in terms of latency for batch inference, while retaining 95% of its performance for massive multilingual NER and matching or outperforming it on classification tasks.",
"Overall, our work makes the following contributions : Method: We propose a distillation method leveraging internal representations and parameter projection that is agnostic of teacher architecture.",
"Inference: To learn model parameters, we propose a stage-wise optimization schedule with gradual unfreezing that outperforms prior schemes.",
"Experiments: We perform distillation for multilingual NER on 41 languages with massive compression and comparable performance to huge models.",
"We also perform classification experiments on four datasets where our compressed models perform at par with significantly larger teachers.",
"Study: We study the influence of several factors on distillation like the availability of annotation resources for different languages, model architecture, quality of multilingual word embeddings, memory footprint and inference latency.",
"Problem Statement: Consider a sequence $x = \langle x_k \rangle$ with $K$ tokens and $y = \langle y_k \rangle$ as the corresponding labels.",
"Consider $D_l = \{\langle x_{k,l} \rangle, \langle y_{k,l} \rangle\}$ to be a set of $n$ labeled instances with $X = \{\langle x_{k,l} \rangle\}$ denoting the instances and $Y = \{\langle y_{k,l} \rangle\}$ the corresponding labels.",
"Consider $D_u = \{\langle x_{k,u} \rangle\}$ to be a transfer set of $N$ unlabeled instances from the same domain, where $n \ll N$.",
"Given a teacher $T(\theta^t)$, we want to train a student $S(\theta^s)$ with trainable parameters $\theta^s$ such that $|\theta^s| \ll |\theta^t|$ and the student is comparable in performance to the teacher based on some evaluation metric.",
"In the following section, the superscript 't' always represents the teacher and 's' denotes the student.",
"Model compression and knowledge distillation: Prior works in the vision community dealing with huge architectures like AlexNet and ResNet have addressed this challenge in two ways.",
"Works in model compression use quantization (Gong et al., 2014), low-precision training and pruning the network, as well as their combination (Han et al., 2016) to reduce the memory footprint.",
"On the other hand, works in knowledge distillation leverage student teacher models.",
"These approaches include using soft logits as targets (Ba and Caruana, 2014), increasing the temperature of the softmax to match that of the teacher (Hinton et al., 2015) as well as using teacher representations (Romero et al., 2015) (refer to (Cheng et al., 2017) for a survey).",
"Recent and concurrent works: Liu et al. (2019); Zhu et al. (2019); Clark et al. (2019) leverage ensembling to distil knowledge from several multitask deep neural networks into a single model.",
"Sun et al. (2019); Sanh (2019); Aguilar et al. (2019) train student models leveraging architectural knowledge of the teacher models, which adds architectural constraints (e.g., embedding dimension) on the student. [Footnote 1: Code and resources available at: https://aka.ms/XtremeDistil]",
"In order to address this shortcoming, more recent works combine task-specific distillation with pre-training the student model with arbitrary embedding dimension but still relying on transformer architectures (Turc et al., 2019); (Jiao et al., 2019); (Zhao et al., 2019).",
"Izsak et al. (2019); Shi et al. (2019) extend these for sequence tagging for Part-of-Speech (POS) tagging and Named Entity Recognition (NER) in English.",
"The one closest to our work Tsai et al. (2019) extends the above for multilingual NER.",
"Most of these works rely on general corpora for pre-training and task-specific labeled data for distillation.",
"To harness additional knowledge, (Turc et al., 2019) leverage task-specific unlabeled data.",
"(Tang et al., 2019; Jiao et al., 2019) use rule-and embedding-based data augmentation.",
"The Student: The inputs to the model are $E$-dimensional word embeddings for each token.",
"To capture sequential information in the sentence, we use a single layer Bidirectional Long Short Term Memory Network (BiLSTM).",
"Given a sequence of $K$ tokens, a BiLSTM computes a set of $K$ vectors $h(x_k) = [\overrightarrow{h}(x_k); \overleftarrow{h}(x_k)]$ as the concatenation of the states generated by a forward ($\overrightarrow{h}(x_k)$) and a backward ($\overleftarrow{h}(x_k)$) LSTM.",
"Assuming the number of hidden units in the LSTM to be $H$, each hidden state $h(x_k)$ is of dimension $2H$.",
"The probability distribution for the token label at timestep $k$ is given by: $p^{(s)}(x_k) = softmax(h(x_k) W_s)$ (1), where $W_s \in \mathbb{R}^{2H \times C}$ and $C$ is the number of labels.",
"Consider the one-hot encoding of the token labels, such that $y_{k,l,c} = 1$ for $y_{k,l} = c$, and $y_{k,l,c} = 0$ otherwise, for $c \in C$.",
"The overall cross-entropy loss, computed over each token obtaining a specific label in each sequence, is given by: $\mathcal{L}_{CE} = -\sum_{x_l, y_l \in D_l} \sum_k \sum_c y_{k,c,l} \log p_c^{(s)}(x_{k,l})$ (2). We train the student model end-to-end minimizing the above cross-entropy loss over labeled data.",
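The output head and loss above can be illustrated numerically. This is a sketch with toy shapes: the BiLSTM states are stubbed with random vectors rather than computed by an actual LSTM, and all sizes are illustrative.

```python
import numpy as np

# Numerical sketch of Eqs. 1-2 with toy shapes: the BiLSTM states h(x_k) are
# stubbed with random vectors (2H = 8) instead of being produced by an LSTM,
# and W_s maps them to C = 3 label scores.

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)  # shift for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def token_probs(h_states, W_s):
    # h_states: (K, 2H), W_s: (2H, C) -> per-token label distributions (Eq. 1)
    return softmax(h_states @ W_s)

def cross_entropy(probs, labels):
    # labels: (K,) integer label ids; mean negative log-likelihood (Eq. 2)
    K = probs.shape[0]
    return -np.log(probs[np.arange(K), labels]).mean()

rng = np.random.default_rng(0)
h_states = rng.normal(size=(5, 8))   # K = 5 tokens
W_s = rng.normal(size=(8, 3))
probs = token_probs(h_states, W_s)
loss = cross_entropy(probs, np.array([0, 1, 2, 1, 0]))
```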
"The Teacher: Pre-trained language models like ELMO (Peters et al., 2018), BERT (Devlin et al., 2019) and GPT (Radford et al., 2018, 2019) have shown state-of-the-art performance for several tasks.",
"We adopt BERT as the teacher; specifically, we use the multilingual version of BERT (mBERT) with 179MM parameters, trained over the 104 languages with the largest Wikipedias.",
"mBERT does not use any markers to distinguish languages during pre-training and learns a single language-agnostic model trained via masked language modeling over Wikipedia articles from all languages.",
"Tokenization: Similar to mBERT, we use WordPiece tokenization with a shared WordPiece vocabulary of 110K tokens.",
"We preserve casing, remove accents, and split on punctuation and whitespace.",
"Fine-tuning the Teacher : The pre-trained language models are trained for general language modeling objectives.",
"In order to adapt them to the given task, the teacher is fine-tuned end-to-end with task-specific labeled data $D_l$ to learn parameters $\theta^t$ using the cross-entropy loss as in Equation 2.",
"Distillation Features: Teacher fine-tuning gives us access to task-specific representations for distilling the student.",
"To this end, we use different kinds of teacher information.",
"Logits, as logarithms of predicted probabilities, provide a better view of the teacher by emphasizing the different relationships it has learned across different instances.",
"Consider $p^t(x_k)$ to be the classification probability of token $x_k$ as generated by the fine-tuned teacher, with $logit(p^t(x_k))$ representing the corresponding logits.",
"Our objective is to train a student model with these logits as targets.",
"Given the hidden state representation $h(x_k)$ for token $x_k$, we can obtain the corresponding classification score (since targets are logits) as: $r^s(x_k) = W_r h(x_k) + b_r$ (3), where $W_r \in \mathbb{R}^{C \times 2H}$ and $b_r \in \mathbb{R}^C$ are trainable parameters and $C$ is the number of classes.",
"We want to train the student neural network end-to-end by minimizing the element-wise mean-squared error between the classification scores given by the student and the target logits from the teacher as: $\mathcal{L}_{LL} = \frac{1}{2} \sum_{x_u \in D_u} \sum_k || r^s(x_{k,u}) - logit(p^t(x_{k,u}; \theta^t)) ||^2$ (4).",
"Internal Teacher Representations. Hidden representations: Recent works (Sun et al., 2019; Romero et al., 2015) have shown the hidden state information from the teacher to be helpful as hint-based guidance for the student.",
"Given a large collection of task-specific unlabeled data, we can transfer the teacher's knowledge to the student via its hidden representations.",
"However, this poses a challenge in our setting as the teacher and student models have different architectures with disparate output spaces.",
"Consider $h^s(x_k)$ and $z_l^t(x_k; \theta^t)$ to be the representations generated by the student and the $l$-th deep layer of the fine-tuned teacher, respectively, for a token $x_k$.",
"Consider $x_u \in D_u$ to be the set of unlabeled instances.",
"We will later discuss the choice of the teacher layer l and its impact on distillation.",
"Projection: To make all output spaces compatible, we perform a non-linear projection of the student representation $h^s$ to have the same shape as the teacher representation $z_l^t$ for each token $x_k$: $z^s(x_k) = Gelu(W_f h^s(x_k) + b_f)$ (5), where $W_f \in \mathbb{R}^{|z_l^t| \times 2H}$ is the projection matrix, $b_f \in \mathbb{R}^{|z_l^t|}$ is the bias, and Gelu (Gaussian Error Linear Unit) (Hendrycks and Gimpel, 2016) is the non-linear projection function.",
"$|z_l^t|$ represents the embedding dimension of the teacher.",
"This transformation aligns the output spaces of the student and teacher and allows us to accommodate arbitrary student architecture.",
"Also note that the projections (and therefore the parameters) are shared across tokens at different timepoints.",
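The projection and logit matching described above can be sketched as follows. All shapes here are toy stand-ins, not the paper's actual dimensions, and the GELU uses the common tanh approximation.

```python
import numpy as np

# Sketch of the cross-architecture transfer pieces: a GELU projection (Eq. 5)
# aligns the 2H-dim student states with the teacher layer's embedding
# dimension, and an element-wise MSE (Eq. 4) matches student scores to
# teacher logits. Toy shapes throughout.

def gelu(x):
    # tanh approximation of GELU (Hendrycks and Gimpel, 2016)
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2 / np.pi) * (x + 0.044715 * x**3)))

def project(h_s, W_f, b_f):
    # h_s: (K, 2H) -> z_s: (K, |z_t|), comparable with teacher representations
    return gelu(h_s @ W_f + b_f)

def logit_loss(student_scores, teacher_logits):
    # 0.5 * squared element-wise difference, averaged over tokens
    return 0.5 * np.mean(np.sum((student_scores - teacher_logits) ** 2, axis=-1))

rng = np.random.default_rng(1)
h_s = rng.normal(size=(4, 8))                     # K = 4 tokens, 2H = 8
W_f, b_f = rng.normal(size=(8, 16)), np.zeros(16)
z_s = project(h_s, W_f, b_f)                      # matches a 16-dim teacher layer
t_logits = rng.normal(size=(4, 3))
loss = logit_loss(rng.normal(size=(4, 3)), t_logits)
```

Because `W_f` and `b_f` are shared across tokens, the projection adds only a fixed number of parameters regardless of sequence length.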
"Multilingual word embeddings: A large number of parameters reside in the word embeddings.",
"For mBERT, a shared multilingual WordPiece vocabulary of $V = 110K$ tokens and an embedding dimension of $D = 768$ lead to 92MM parameters.",
"To have massive compression, we cannot directly incorporate mBERT embeddings in our model.",
"Since we use the same WordPiece vocabulary, we are likely to benefit more from these embeddings than from Glove (Pennington et al., 2014) or FastText (Bojanowski et al., 2016).",
"We use a dimensionality reduction algorithm like Singular Value Decomposition (SVD) to project the mBERT word embeddings to a lower dimensional space.",
"[Algorithm 1: Multi-stage distillation.]",
"Given the mBERT word embedding matrix of dimension $V \times D$, SVD finds the best $E$-dimensional representation that minimizes the sum of squares of the projections (of rows) onto the subspace.",
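The SVD-based compression can be sketched in a few lines; toy sizes stand in for mBERT's 110K x 768 matrix.

```python
import numpy as np

# Sketch of compressing teacher word embeddings with truncated SVD: keep the
# top-E singular directions of the V x D embedding matrix, the rank-E
# projection minimizing the sum of squared errors.

def svd_compress(embeddings, E):
    U, S, Vt = np.linalg.svd(embeddings, full_matrices=False)
    return U[:, :E] * S[:E]   # V x E student embedding table

rng = np.random.default_rng(2)
emb = rng.normal(size=(50, 12))   # toy stand-in for the V x D matrix
small = svd_compress(emb, E=4)
```

With `E = D`, the compressed rows preserve all pairwise inner products of the original rows; smaller `E` trades fidelity for fewer embedding parameters.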
"We want to optimize the loss functions for representation $\mathcal{L}_{RL}$, logits $\mathcal{L}_{LL}$ and cross-entropy $\mathcal{L}_{CE}$.",
"These optimizations can be scheduled differently to obtain different training regimens as follows.",
"In this, we optimize the following losses jointly:",
"$\frac{1}{|D_l|} \sum_{\{x_l, y_l\} \in D_l} \alpha \mathcal{L}_{CE}(x_l, y_l) + \frac{1}{|D_u|} \sum_{\{x_u, y_u\} \in D_u} \big( \beta \mathcal{L}_{RL}(x_u, y_u) + \gamma \mathcal{L}_{LL}(x_u, y_u) \big)$ (7)",
"where $\alpha$, $\beta$ and $\gamma$ weigh the contribution of the different losses.",
"A high value of $\alpha$ makes the student focus more on the easy targets, whereas a high value of $\gamma$ shifts focus to the difficult ones.",
"The above loss is computed over two different task-specific data segments.",
"The first part involves cross-entropy loss over labeled data, whereas the second part involves representation and logit loss over unlabeled data.",
"Instead of optimizing all loss functions jointly, we propose a stage-wise scheme to gradually transfer most general to task-specific representations from teacher to student.",
"In this, we first train the student to mimic teacher representations from its $l$-th layer by optimizing $\mathcal{L}_{RL}$ on unlabeled data.",
"The student learns the parameters for the word embeddings ($\theta_w$), the BiLSTM ($\theta_b$) and the projections $\langle W_f, b_f \rangle$.",
"In the second stage, we optimize the cross-entropy loss $\mathcal{L}_{CE}$ and the logit loss $\mathcal{L}_{LL}$ jointly, on labeled and unlabeled data respectively, to learn the corresponding parameters $W_s$ and $\langle W_r, b_r \rangle$.",
"The above can be further broken down into two stages, where we sequentially optimize the logit loss $\mathcal{L}_{LL}$ on unlabeled data and then the cross-entropy loss $\mathcal{L}_{CE}$ on labeled data.",
"Every stage learns parameters conditioned on those learned in previous stage followed by end-to-end fine-tuning.",
"One potential drawback of end-to-end fine-tuning for stage-wise optimization is 'catastrophic forgetting' (Howard and Ruder, 2018), where the model forgets information learned in earlier stages.",
"To address this, we adopt gradual unfreezing where we tune the model one layer at a time starting from the configuration at the end of previous stage.",
"We start from the top layer that contains the most task-specific information and allow the model to configure the task-specific layer first while others remain frozen.",
"The lower layers are gradually unfrozen one by one, and the model is trained till convergence.",
"Once a layer is unfrozen, it maintains the state.",
"When the last layer (word embeddings) is unfrozen, the entire network is trained end-to-end.",
"The order of this unfreezing scheme (top-to-bottom) is reverse of that in (Howard and Ruder, 2018) and we find this to work better in our setting with the following intuition.",
"At the end of the first stage of optimizing $\mathcal{L}_{RL}$, the student learns to generate representations similar to those of the $l$-th layer of the teacher.",
"Now, we need to add only a few task-specific parameters ($\langle W_r, b_r \rangle$) to optimize for the logit loss $\mathcal{L}_{LL}$, with all others frozen.",
"Next, we gradually give the student more flexibility to optimize for task-specific losses by tuning the layers below, where the number of parameters increases with depth ($|\langle W_r, b_r \rangle| \ll |\theta_b| \ll |\theta_w|$).",
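The top-to-bottom unfreezing order described above can be sketched as a simple schedule; the layer names are illustrative, not the paper's identifiers.

```python
# Sketch of the top-to-bottom gradual-unfreezing schedule: start with only the
# most task-specific layer trainable, then unfreeze lower layers one at a
# time, each keeping its state once unfrozen, until the whole network is
# trainable end-to-end. Layer names are illustrative.

def unfreezing_schedule(layers_top_to_bottom):
    """Yield the list of trainable layers at each unfreezing step."""
    trainable = []
    for layer in layers_top_to_bottom:
        trainable.append(layer)   # once unfrozen, a layer stays unfrozen
        yield list(trainable)

steps = list(unfreezing_schedule(
    ["output_head", "projection", "bilstm", "embeddings"]))
```

At each step the model is trained to convergence with the listed layers trainable; the final step corresponds to full end-to-end training.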
"loss on a held-out set.",
"Therefore, the model retains best possible performance from any iteration.",
"Algorithm 1 shows overall processing scheme.",
"Dataset Description: We evaluate our model XtremeDistil for multilingual NER on 41 languages in the same setting as (Rahimi et al., 2019).",
"This data is derived from WikiAnn NER corpus (Pan et al., 2017) and partitioned into training, development and test sets.",
"All NER results are reported on this test set for a fair comparison with existing works.",
"We report the average F1-score and the standard deviation of scores across the 41 languages for phrase-level evaluation.",
"Refer to Figure 2 for language codes and corresponding distribution of training labels.",
"We also perform experiments with data from four other domains (refer to Table 1): IMDB (Maas et al., 2011), SST-2 (Socher et al., 2013) and Elec (McAuley and Leskovec, 2013) for sentiment analysis for movie and electronics product reviews, DbPedia (Zhang et al., 2015) and Ag News (Zhang et al., 2015) for topic classification of Wikipedia and news articles.",
"NER Tags: The NER corpus uses IOB2 tagging strategy with entities like LOC, ORG and PER.",
"Following mBERT, we do not use language markers and share these tags across all languages.",
"[Table 3: distillation strategies and features evaluated with transfer sets of 0.7MM, 1.4MM and 7.2MM instances; D0 uses labels per language.]",
"We use additional syntactic markers like { CLS, SEP, PAD } and 'X' for marking segmented WordPieces, contributing a total of 11 tags (with a shared 'O').",
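Since evaluation is phrase-level, entity phrases must be read off the IOB2 tags. A minimal sketch of that extraction (tag strings illustrative):

```python
# Sketch of reading entity phrases off IOB2 tags, the unit behind phrase-level
# F1: a phrase starts at a "B-X" tag and extends over following "I-X" tags of
# the same type X.

def iob2_phrases(tags):
    """Return (start, end, type) spans, with end exclusive."""
    phrases, start, etype = [], None, None
    for i, tag in enumerate(tags + ["O"]):         # sentinel flushes last span
        if tag.startswith("I-") and etype == tag[2:]:
            continue                               # current phrase continues
        if start is not None:
            phrases.append((start, i, etype))      # close the open phrase
            start, etype = None, None
        if tag.startswith("B-"):
            start, etype = i, tag[2:]
    return phrases

spans = iob2_phrases(["B-PER", "I-PER", "O", "B-LOC", "B-ORG", "I-ORG"])
```

Phrase-level F1 then counts a predicted span as correct only if its boundaries and type exactly match a reference span.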
"Baselines: A trivial baseline (D0) is to learn models one per language using only corresponding labels for learning.",
"This can be improved by merging all instances and sharing information across all languages (D0-S).",
"Most of the concurrent and recent works (refer to Table 2 for an overview) leverage logits as optimization targets for distillation (D1).",
"A few exceptions also use teacher internal representations along with soft logits (D2).",
"For our model we consider multi-stage distillation, where we first optimize representation loss followed by jointly optimizing logit and cross-entropy loss (D3.1) and further improving it by gradual unfreezing of neural network layers (D3.2).",
"Finally, we optimize the loss functions sequentially in three stages (D4.1) and improve it further by unfreezing mechanism (D4.2).",
"We further compare all strategies while varying the amount of unlabeled transfer data for distillation (hyper-parameter settings in Appendix).",
"Results: From Table 3, we observe all strategies that share information across languages to work better (D0-S vs. D0) with soft logits adding more value than hard targets (D1 vs. D0-S).",
"Interestingly, we observe simply combining representation loss with logits (D3.1 vs. D2) hurts the model.",
"We observe this strategy to be vulnerable to the hyper-parameters used to combine the multiple loss functions in Eqn. 7.",
"We vary hyper-parameters in multiples of 10 and report best numbers.",
"Stage-wise optimizations remove these hyper-parameters and improve performance.",
"We also observe the gradual unfreezing scheme to improve both stage-wise distillation strategies significantly.",
"Focusing on the data dimension, we observe all models to improve as more and more unlabeled data is used for transferring teacher knowledge to student.",
"However, we also observe the improvement to slow down after a point where additional unlabeled data does not yield significant benefits.",
"Table 4 shows the gradual performance improvement in XtremeDistil after every stage and unfreezing various neural network layers.",
"Performance: We observe XtremeDistil in Table 5 to perform competitively with other models.",
"mBERT-single models are fine-tuned per language with corresponding labels, whereas mBERT is fine-tuned with data across all languages.",
"MMNER results are reported from Rahimi et al. (2019).",
"Figure 2 shows the variation in F1-score across different languages with variable amounts of training data for different models.",
"We observe all the models to follow the general trend with some aberrations.",
"Figure 1b shows the variation in F1-scores of XtremeDistil and inference speedup against mBERT with different (linked) parameter configurations as before.",
"As expected, the performance degrades with gradual speedup.",
"We observe that parameter compression does not necessarily lead to an inference speedup.",
"Reduction in the word embedding dimension leads to massive model compression, however, it does not have a similar effect on the latency.",
"The BiLSTM hidden states, on the other hand, constitute the real latency bottleneck.",
"One of the best configurations leads to 35x compression and 51x speedup over mBERT while retaining nearly 95% of its performance.",
"Parameter compression: XtremeDistil performs on par with MMNER in terms of F1-score while obtaining at least 41x compression.",
"Given L languages, MMNER learns (L − 1) ensembled and distilled models, one for each target language.",
"Each of the MMNER language-specific models is comparable in size to our single multilingual model.",
"We learn a single model for all languages, thereby obtaining a compression factor of at least L = 41.",
"Figure 1a shows the variation in F1-scores of XtremeDistil and compression against mBERT with different configurations corresponding to the embedding dimension (E) and the number of BiLSTM hidden states (2H).",
"We observe that reducing the embedding dimension leads to great compression with minimal performance loss.",
"Whereas, reducing the BiLSTM hidden states impacts the performance more and contributes less to the compression.",
"Inference speedup: We compare the runtime inference efficiency of mBERT and our model on a single P100 GPU for batch inference (batch size = 32) on 1000 queries of sequence length 32.",
"We average the time taken for predicting labels for all the queries for each model aggregated over 100 runs.",
"Compared to batch inference, the speedups are lower for online inference (batch size = 1) at 17x on an Intel(R) Xeon(R) CPU (E5-2690 v4 @2.60GHz) (refer to Appendix for details).",
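A batch-inference timing harness of the kind described (batching the queries, then averaging wall-clock time over repeated runs) might look like the sketch below; `toy_model` is a stand-in function, not the actual student:

```python
import time

def benchmark(model, queries, batch_size=32, runs=100):
    """Average wall-clock seconds to label all queries, aggregated over runs."""
    batches = [queries[i:i + batch_size]
               for i in range(0, len(queries), batch_size)]
    total = 0.0
    for _ in range(runs):
        start = time.perf_counter()
        for batch in batches:
            model(batch)  # run inference on one batch
        total += time.perf_counter() - start
    return total / runs

# Toy stand-in model: assigns the 'O' tag to every token of every query.
toy_model = lambda batch: [["O"] * 32 for _ in batch]
avg_seconds = benchmark(toy_model, ["query"] * 1000, runs=5)
```

A speedup figure is then simply the ratio of the two models' averaged times under identical batching.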
"Models in all prior experiments are trained on 705K labeled instances across all languages.",
"In this setting, we consider only 100 labeled samples for each language, with a total of 4.1K instances.",
"From Table 6, we observe mBERT to outperform MMNER by more than 17 percentage points with XtremeDistil closely following suit.",
"Furthermore, we observe our model's performance to improve with the transfer set size depicting the importance of unlabeled transfer data for knowledge distillation.",
"As before, a lot of additional data has marginal contribution.",
"From Table 7 we observe randomly initialized word embeddings to work quite well.",
"Multilingual FastText embeddings (Bojanowski et al., 2016) lead to minor improvement due to 38% overlap between FastText tokens and mBERT wordpieces.",
"English GloVe embeddings do much better.",
"We experiment with dimensionality reduction techniques and find SVD to work better leading to marginal improvement over mBERT embeddings before reduction.",
"As expected, fine-tuned mBERT embeddings perform better than that from pre-trained checkpoints.",
"Which teacher layer to distil from?",
"The topmost teacher layer captures more task-specific knowledge.",
"However, it may be difficult for a shallow student to capture this knowledge given its limited capacity.",
"On the other hand, the shallower representations in the middle of the teacher model are easier for a shallow student to mimic.",
"From Table 8, we observe the student to benefit most from distilling the 6th or 7th layer of the teacher.",
"Comparison of student architecture.",
"Recent works leverage both BiLSTM and Transformer as students.",
"In this experiment, we vary the embedding dimension and hidden states for BiLSTM-, and embedding dimension and depth for Transformer-based students to obtain configurations with similar inference latency.",
"Each of the 13 configurations in Figure 3 depicts the F1-scores obtained by students of different architectures but similar latency (refer to Table 15 in Appendix for statistics) for strategy D0-S in Table 3 (Figure 3: BiLSTM and Transformer F1-score, left y-axis, vs. inference latency, right y-axis, in 13 different settings with the corresponding embedding dimension and width/depth of the student as (E, W/D)).",
"We observe that for low-latency configurations, BiLSTMs with hidden states {2×100, 2×200} work better than 2-layer Transformers.",
"The latter starts performing better with more than 3 layers, although with a higher latency compared to the aforementioned BiLSTM configurations.",
"We switch gear and focus on classification tasks.",
"In contrast to sequence tagging, we use the last hidden state of the BiLSTM as the final sentence representation for projection, regression and softmax.",
"Table 9 shows the distillation performance of XtremeDistil with different teachers on four benchmark text classification datasets.",
"We observe the student to almost match the teacher performance for all of the datasets.",
"The performance also improves with a better teacher, although the improvement is marginal as the student capacity saturates.",
"Table 10 shows the distillation performance with only 500 labeled samples per class.",
"The distilled student improves over the non-distilled version by 19.4 percent and matches the teacher performance for all of the tasks, demonstrating the impact of distillation for low-resource settings.",
"Comparison with other distillation techniques: SST-2 (Socher et al., 2013) from GLUE (Wang et al., 2018) has been used as a test bed for other distillation techniques for single instance classification tasks (as in this work).",
"Table 11 shows the accuracy comparison of such methods reported in SST-2 development set with the same teacher.",
"We extract 11.7MM sentences from all IMDB movie reviews in Table 1 to form the unlabeled transfer set for distillation.",
"We obtain the best performance when distilling from BERT Large (uncased, whole-word masking) rather than BERT Base, demonstrating a better student performance with a better teacher and outperforming other methods.",
"Teacher hidden representation and distillation schedule: Internal teacher representations help in distillation, although a naive combination hurts the student model.",
"We show that a distillation schedule with stagewise optimization, gradual unfreezing with a cosine learning rate scheduler (D4.1 + D4.2 in Table 3) obtains the best performance.",
"We also show that the middle layers of the teacher are easier to distil by shallow students and result in the best performance (Table 8).",
"Additionally, the student performance improves with bigger and better teachers (Tables 9 and 11).",
"Student architecture: We compare different student architectures like BiLSTM and Transformer in terms of configuration and performance (Figure 3, Table 15 in Appendix), and observe BiLSTM to perform better at low-latency configurations, whereas the Transformer outperforms the former with more depth and higher latency budget.",
"Unlabeled transfer data: We explored data dimension in Tables 3 and 6 and observed unlabeled data to be the key for knowledge transfer from pre-trained teachers to shallow students and bridge the performance gap.",
"We observed a moderate amount of unlabeled transfer samples (0.7-1.5 MM) to lead to the best student, whereas larger amounts of transfer data do not result in significant gains.",
"This is particularly helpful for low-resource NER (with only 100 labeled samples per language as in Table 6).",
"Performance trade-off: Parameter compression does not necessarily reduce inference latency, and vice versa.",
"We explored model performance with parameter compression, inference latency and F 1 to show trade-off in Fig. 1 and Table 16 in Appendix.",
"Multilingual word embeddings: Random initialization of word embeddings works well.",
"A better initialization, which is also parameter-efficient, is obtained by applying Singular Value Decomposition (SVD) to fine-tuned mBERT word embeddings, yielding the best performance on the downstream task (Table 7).",
"Generalization: The outlined distillation techniques and strategies are model-, architecture-, and language-agnostic and can be easily extended to arbitrary tasks and languages, although we only focus on NER and classification in this work.",
"Massive compression: Our techniques demonstrate massive compression (35x for parameters) and inference speedup (51x for latency) while retaining 95% of the teacher performance, allowing deep pre-trained models to be deployed in practice.",
"We develop XtremeDistil for massive multilingual NER and classification; it performs close to huge pre-trained models like mBERT but with massive compression and inference speedup.",
"Our distillation strategy, which leverages teacher representations agnostic of the teacher's architecture together with a stage-wise optimization schedule, outperforms existing ones.",
"We perform an extensive study of several distillation dimensions, such as the impact of the unlabeled transfer set, embeddings, and student architectures, and make interesting observations outlined in the summary."
] | [
"abstain",
"abstain",
"abstain",
"method",
"objective",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"method",
"abstain",
"objective",
"abstain",
"method",
"result",
"objective",
"result",
"objective",
"objective",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"other",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"method"
] |
[
"While the recent tree-based neural models have demonstrated promising results in generating solution expression for the math word problem (MWP), most of these models do not capture the relationships and order information among the quantities well.",
"This results in poor quantity representations and incorrect solution expressions.",
"In this paper, we propose Graph2Tree , a novel deep learning architecture that combines the merits of the graph-based encoder and tree-based decoder to generate better solution expressions.",
"Included in our Graph2Tree framework are two graphs, namely the Quantity Cell Graph and Quantity Comparison Graph , which are designed to address limitations of existing methods by effectively representing the relationships and order information among the quantities in MWPs.",
"We conduct extensive experiments on two available datasets.",
"Our experiment results show that Graph2Tree outperforms the state-of-the-art baselines on two benchmark datasets significantly.",
"We also discuss case studies and empirically examine Graph2Tree 's effectiveness in translating the MWP text into solution expressions 1 .",
"Math Word Problem (MWP), which involves automatically answering a mathematical question according to a textual description, is an important natural language understanding task that has been studied by researchers since the 1960s (Bobrow, 1964).",
"A typical MWP is a short narrative that describes a problem and poses a question about an unknown quantity.",
"Table 1 provides an example of a typical MWP where the reader is required to infer the revenue of a store after selling all the teddy bears (code can be found at https://github.com/2003pro/Graph2Tree).",
"Problem: 348 teddy bears are sold for $23 each.",
"There are a total of 470 teddy bears in a store and the remaining teddy bears are sold for $17 each.",
"How much did the store earn after selling all the teddy bears?",
"Expression: x = 348 × 23 + (470 − 348) × 17; Solution: 10078 (Table 1: A math word problem).",
"Earlier studies have attempted to perform the MWP task via statistical machine learning methods (Kushman et al., 2014; Hosseini et al., 2014; Mitra and Baral, 2016; Roy and Roth, 2018) and semantic parsing approaches (Shi et al., 2015; Koncel-Kedziorski et al., 2015; Roy and Roth, 2015; Huang et al., 2017).",
"However, these methods are non-scalable as tremendous efforts are required to design suitable features and expression templates.",
"In recent years, deep learning-based models have been developed to solve MWPs.",
"These deep learning methods are able to automate the learning of features and generalize well by returning new solution expressions that are unseen in the training datasets.",
"Wang et al. (2017) proposed a large-scale MWP dataset and applied a vanilla sequence to sequence (seq2seq) model to translate the language text to a solution expression.",
"Since then, many research efforts mainly focused on improving the generation of solution expressions.",
"Some researchers have proposed seq2seq models to improve solution expression generation using implicit (Wang et al., 2018; Chiang and Chen, 2019) and explicit (Wang et al., 2019; Liu et al., 2019; Xie and Sun, 2019) tree structures.",
"Improving the representation of quantity is a potential approach to achieve better solution expressions.",
"For example, to get the correct solution expression for the problem described in Table 1, an ideal MWP model should be able to associate the quantity, i.e., 348 teddy bears, with its price attribute of $23, and understand the arithmetic order by deriving the 122 remaining teddy bears, i.e., 470 − 348, before associating the price attribute of $17.",
"The existing deep learning models are not effective in capturing such relationships and order information among the quantities in MWPs, thus resulting in an inaccurate representation of the final solution expressions.",
"To enrich the representation of a quantity, the relationships between the descriptive words associated with a quantity need to be modeled.",
"However, such relationships cannot be effectively modeled using recurrent models, which are commonly used in the existing MWP deep learning methods.",
"Inspired by the concept of Quantity Schema (Roy and Roth, 2015) and Qset (Koncel-Kedziorski et al., 2015), we design the Quantity Cell Graph to associate informative descriptive words with a quantity.",
"We first extract associated nouns, verbs, adjectives, units, and rates that describe a quantity in the MWP text.",
"Next, we construct a graph where the extracted descriptive words are represented as neighbor nodes directly linked to a quantity.",
"Finally, a neural network model is used to learn enriched latent representations of the quantities based on the constructed Quantity Cell Graph .",
"The loss of quantities' numerical qualities in existing MWP methods can also result in poor quantity representations.",
"Most of the existing MWP methods often replace quantities with special symbols (e.g., n 1 , n 2 , etc.) (Wang et al., 2017, 2018; Liu et al., 2019).",
"The loss of quantities' numerical qualities could be problematic when generating solution expressions.",
"Take the example in Table 1: without modeling the numerical qualities of quantities, an MWP method may learn a solution expression 348 − 470, which results in a negative number that is unlikely to occur in MWPs.",
"To address this limitation, we introduce the Quantity Comparison Graph , which was inspired by a numerical machine reading comprehension model proposed by Ran et al. (2019).",
"The intuition of Quantity Comparison Graph is to retain the numerical qualities of the quantity and leverage certain heuristics to represent the relationships among quantities in MWPs such that solution expressions reflect a more realistic arithmetic order.",
"Besides improving the quantity representation, we also aim to improve the solution expression generative process.",
"For longer solution expressions in MWPs, as some quantities are repeatedly used in different arithmetic sub-solution expressions, the existing methods which utilized recurrent neural networks may not be able to learn the underlying reasoning process and arithmetic order.",
"For example, in Table 1, the quantity 348 is used in both 348 × 23 and (470 − 348) × 17.",
"To address this limitation, we propose to use a graph encoder to guide the learning of representations of quantities and a tree decoder to explicitly model the multistage reasoning process.",
"Contribution.",
"In this paper, we combine the above-proposed solutions and introduce the Graph2Tree solver to address the existing MWPs methods' limitations.",
"The contributions of this paper are as follows: We construct the Quantity Cell Graph and Quantity Comparison Graph to enrich the quantity representations by capturing relationships between quantities and their attributes and retaining the quantities' numerical qualities.",
"We propose the Graph2Tree to improve the learning of solution expressions' generation.",
"The Graph2Tree model uses a graph transformer to learn the latent quantity representations from our proposed graphs, and a tree structure decoder to generate a solution expression tree.",
"To the best of our knowledge, this is the first graph-to-tree model for MWPs.",
"We conduct extensive experiments on two available large-scale MWPs datasets, and our results show that our proposed Graph2Tree model outperforms state-of-the-art baselines on MWP task.",
"We denote the text of the math word problem as P , where P is a sequence of word tokens and numeric values.",
"We let V_P = {v_1, …, v_m} denote the word tokens in P and n_P = {n_1, …, n_l} denote the set of quantities in P.",
"Our goal is to map P to a valid and correct mathematical expression E_p.",
"Solving MWPs requires an understanding of quantities in problem and their complex mathematical relationships.",
"MWPs are often expressed in a linear textual sequence form, which is not ideal for learning the quantities' complex interactions.",
"Thus, we propose to formulate the problem into graph form so that the relationships between quantities can be expressed more explicitly.",
"The problem text P is transformed into graph G by augmenting the text sequences with other structural information like dependency parsing and POS tagging.",
"The final mathematical expression E p that we aim to construct can always be represented as a solution expression tree T .",
"T may include constant quantities, operators and quantities in n P .",
"The set of constant quantities V_con contains special values that do not appear in the text, such as π and 1.",
"The set of math operators V_op contains {+, −, ×, /}.",
"Overall, the target vocabulary of P can be denoted as V_dec = V_op ∪ V_con ∪ n_P (V_dec varies across problems as n_P varies).",
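The per-problem target vocabulary can be assembled directly from these three sets. A minimal sketch follows; the concrete constant set is an illustrative assumption (the text mentions values like π and 1):

```python
# Build V_dec = V_op | V_con | n_P for one problem.
V_op = {"+", "-", "*", "/"}   # math operators
V_con = {"1", "pi"}           # constant quantities (illustrative choice)

def target_vocab(quantities):
    """Target vocabulary for a single MWP: operators, constants,
    and the quantities appearing in that problem's text."""
    return V_op | V_con | set(quantities)

vocab = target_vocab(["348", "23", "470", "17"])  # quantities from Table 1
```

Because n_P differs per problem, `target_vocab` is recomputed for every MWP, matching the remark that V_dec varies with n_P.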
"The goal of our Graph2Tree model here is to estimate the conditional probability P(E_p | P), which can be transformed into P(T | G, V_dec).",
"Figure 1 shows our proposed Graph2Tree framework.",
"Graph2Tree first encodes the MWP text input using BiLSTM and simultaneously constructs Quantity Cell Graph and Quantity Comparison Graph .",
"The output of BiLSTM, word-level representations, are used as node representations.",
"Together with the two constructed graphs, the node representations are input into a graph transformer to learn a graph representation of the MWP.",
"The multiGCN component of the graph transformer is modified to learn the graph representation based on the Quantity Cell Graph and Quantity Comparison Graph .",
"This enriches the final graph representation with quantities' relationship information and numerical qualities.",
"Pooling is used to aggregate all nodes into a pool-based graph embedding vector as the graph transformer's output.",
"Finally, the output graph representation and the updated node representations are used as input to a tree-structure decoder to infer the final solution expression tree.",
"There have been some graph-based models (Sahu et al., 2019) intending to grab the complicated relations in text.",
"The graph-based encoder in our Graph2Tree framework is inspired by the graph transformer model (Koncel-Kedziorski et al., 2016; Cai and Lam, 2019).",
"We first discuss the initialization of node representations based on MWPs' input problem text.",
"Next, we introduce the construction of the Quantity Cell Graph and Quantity Comparison Graph .",
"Finally, we discuss the learning of graph representation using the graph transformer module.",
"To initialize the node representations, we first learn the word-level hidden state representations of the input MWP text using a BiLSTM neural network: H = {h_1, …, h_N} ∈ R^{N×d}, N = m + l.",
"Here d denotes the dimension of hidden vectors, m represents the number of words, and l represents the number of quantities.",
"The learned hidden state representations will be used as the input node representations for the graph encoder.",
"We refer all quantities n P and words V p from the problem as nodes in the graph.",
"Next, we define a quantity cell as a subset of nodes in the graph that are associated with a quantity.",
"Formally, each MWP P is transformed into multiple quantity cells QC = {Q_1, Q_2, …, Q_m}, where m is the number of quantities in P.",
"Each quantity cell Q_i ∈ QC contains a quantity token {n_i} and the corresponding attributes {v_i^1, …, v_i^q}.",
"These quantity cells are sub-graph representations of quantity-related information in the MWPs.",
"Dependency parsing, constituency parsing, and POS tagging, implemented with the Stanford CoreNLP toolkit (Manning et al., 2014), are used to extract and construct the quantity cells.",
"A quantity cell in an MWP P consists of the following properties.",
"Quantity: the quantity's numeric value.",
"Associated Nouns.",
"We consider the nouns related to the Quantity in the dependency parse tree.",
"Associated Nouns are the nouns related by the num , number and prep of relations.",
"Associated Adjectives.",
"Associated Adjectives are the adjectives related to Quantity or Associated Nouns with the amod relation, which is detected by the dependency parser.",
"Associated Verbs.",
"For each Quantity , we detect the related verbs, Associated Verbs , according to nsubj and dobj relations.",
"Units and Rates.",
"We detect the nouns related to Associated Nouns by the prep_of relation as the Unit.",
"Nouns related to Associated Nouns that carry keywords such as 'each', 'every', and 'per' are regarded as Rates.",
"If the quantity cell detection process does not capture any attributes, we use a window centered on the Quantity to select neighboring words as the attributes of the Quantity.",
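The window-based fallback can be sketched without a parser. The window size below is an illustrative assumption, not a value given in the text:

```python
def window_attributes(tokens, quantity_index, window=3):
    """Fallback attribute selection: take up to `window` neighboring words
    on each side of the quantity (window size 3 is an illustrative choice)."""
    lo = max(0, quantity_index - window)
    hi = min(len(tokens), quantity_index + window + 1)
    return [t for i, t in enumerate(tokens[lo:hi], start=lo)
            if i != quantity_index]

tokens = "348 teddy bears are sold for $23 each".split()
attrs = window_attributes(tokens, 0)  # neighbors of the quantity '348'
```

In the full pipeline this only fires when the dependency-based extraction returns no attributes for a quantity.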
"An example of the quantity cell is illustrated in the left part of Figure 1.",
"From the quantity cells, we construct two graphs: Quantity Cell Graph and Quantity Comparison Graph.",
"The goal of the Quantity Cell Graph is to associate informative descriptive words with a quantity so as to enrich the quantity's representation.",
"Similarly, the goal of the Quantity Comparison Graph is to retain the numerical qualities of the quantity and leverage heuristics to improve representations of the relationships among quantities.",
"Formally, we define the construction of the two graphs as follows: Quantity Cell Graph G_qcell.",
"For each quantity cell Q_i = {n_i} ∪ {v_i^1, …, v_i^q}, an undirected edge e_ij between n_i and each v_j ∈ {v_i^1, …, v_i^q} is added to the graph G_qcell.",
"Quantity Comparison Graph G_qcomp.",
"For two quantity nodes n_i, n_j ∈ n_P, a directed edge e_ij = (n_i, n_j) pointing from n_i to n_j is added to the graph G_qcomp if n_i > n_j.",
"This heuristic constraint prevents subtracting a larger number from a smaller one, which would result in a negative number.",
"We represent the two graphs using adjacency matrices.",
"For a graph G, an adjacency matrix A ∈ R^{N×N} is first initialized.",
"If there exists an edge between the i-th and j-th nodes, we assign the value 1 to the corresponding entry A_{i,j} of the adjacency matrix; otherwise, 0 is assigned.",
"Thus, we compute the adjacency matrix A_qcomp for graph G_qcomp and A_qcell for G_qcell.",
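The two adjacency matrices can be built as plain nested lists. A minimal sketch for the quantities of Table 1 follows; the node ordering and the example cell edges are illustrative:

```python
def quantity_comparison_adjacency(quantities):
    """A_qcomp: directed edge i -> j whenever quantity n_i > n_j."""
    n = len(quantities)
    A = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i != j and quantities[i] > quantities[j]:
                A[i][j] = 1
    return A

def quantity_cell_adjacency(n_nodes, cell_edges):
    """A_qcell: undirected edges between a quantity node and its attribute nodes."""
    A = [[0] * n_nodes for _ in range(n_nodes)]
    for i, j in cell_edges:
        A[i][j] = A[j][i] = 1
    return A

A_qcomp = quantity_comparison_adjacency([348, 23, 470, 17])
# e.g. node 0 = quantity, nodes 1-2 = its attribute words
A_qcell = quantity_cell_adjacency(3, [(0, 1), (0, 2)])
```

Note that A_qcomp is deliberately asymmetric (edge direction encodes which quantity is larger), while A_qcell is symmetric.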
"The inputs to the graph transformer module are the adjacency matrices of multiple graphs {A_k}_{k=1}^K, A_k ∈ {A_qcomp, A_qcell}, and the initial node embeddings H, where K is the number of graphs and each A_k ∈ R^{N×N} is the adjacency matrix of the k-th graph.",
"K graphs are used because we adopt a multi-head structure in our model; the heads are split evenly between Quantity Cell Graphs and Quantity Comparison Graphs.",
"The graph transformer first utilizes graph convolution networks (GCNs) (Kipf and Welling, 2017) to learn the graph node features.",
"For multiple graphs, we use a K -head graph convolution setup.",
"This is similar to the transformer model proposed in Vaswani et al. (2017), where K separate graph convolution networks are used and concatenated before a residual connection is applied.",
"Specifically, a single GCN head has its own parameters W_g^k ∈ R^{d×d_k}, where d_k = d/K.",
"Given an adjacency matrix A_k representing the graph structure and a feature matrix X (initially set to H) containing the input features of all nodes, we define the GCN as: GCN(A_k, X) = GConv_2(A_k, GConv_1(A_k, X)) (1).",
"Here, the GCN contains two different graph convolution operations, each of the form: GConv(A_k, X) = relu(A_k X W_g^k) (2).",
"For the graphs {A_k}_{k=1}^K, we run the GCNs in parallel, yielding d_k-dimensional output values.",
"The output values are concatenated and projected, resulting in the final values: Z' = ∥_{k=1}^K GCN(A_k, H) (3), where ∥ denotes the concatenation of the K GCN heads.",
"The graph transformer then augments this K-head graph convolution network with a feed-forward network, a layer-norm layer, and residual connections: Z'' = Z' + LayerNorm(Z') (4) and Z = Z'' + LayerNorm(FFN(Z'')) (5), where FFN(x) is a two-layer feed-forward network with a relu function between layers: FFN(x) = max(0, x W_f1 + b_f1) W_f2 + b_f2 (6).",
"The resulting node representations Z represent quantities, entities, and relations.",
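The K-head graph convolution of Eqns. 1-3 can be sketched in pure Python (no deep-learning framework, no layer norm or FFN). This assumes the standard relu(A X W) form of a graph convolution, with a separate weight matrix per layer; all sizes below are toy values:

```python
import random

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def relu_mat(M):
    return [[max(0.0, x) for x in row] for row in M]

def gconv(A, X, W):
    # One graph-convolution layer: relu(A X W)   (cf. Eqn. 2)
    return relu_mat(matmul(matmul(A, X), W))

def gcn_head(A, X, W1, W2):
    # GCN(A, X) = GConv2(A, GConv1(A, X))        (cf. Eqn. 1)
    return gcn_out if (gcn_out := gconv(A, gconv(A, X, W1), W2)) else gcn_out

def multi_head_gcn(adjs, H, weights):
    # Concatenate the K head outputs row-wise    (cf. Eqn. 3)
    heads = [gcn_head(A, H, W1, W2) for A, (W1, W2) in zip(adjs, weights)]
    return [sum((h[i] for h in heads), []) for i in range(len(H))]

random.seed(0)
N, d, K = 4, 8, 2          # 4 nodes, hidden size 8, 2 heads
d_k = d // K
H = [[random.random() for _ in range(d)] for _ in range(N)]
adjs = [[[1 if i != j else 0 for j in range(N)] for i in range(N)]
        for _ in range(K)]  # toy fully-connected graphs per head
weights = [([[random.random() for _ in range(d_k)] for _ in range(d)],
            [[random.random() for _ in range(d_k)] for _ in range(d_k)])
           for _ in range(K)]
Z = multi_head_gcn(adjs, H, weights)   # N x (K * d_k) = N x d
```

The concatenated output Z has the same width d as the input, which is what allows the residual connections of Eqns. 4-5 to be applied afterwards.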
"In order to learn the global context graph representation, we apply the element-wise min-pooling operation on all learned node representations.",
"Finally, the global feature is fed into a fully connected neural network (FC) to generate the graph representation z_g: z_g = FC(MinPool(Z)) (7).",
"Inspired by the Goal-driven Tree Structure (GTS) (Xie and Sun, 2019), we build a tree-based decoder to construct the solution expressions.",
"We set the quantity nodes to be the leaf nodes and each operator node must have two child nodes.",
"As such, the specialized tree decoder generates an equation following the pre-order traversal ordering.",
"As part of the tree construction process, the centermost operator is first produced, followed by the left child node.",
"This process is repeated until the leaf node is produced.",
"Subsequently, we generate the right child nodes recursively.",
"To start the above-mentioned tree generation process, our model initializes the root node vector q_root according to the global context graph representation z_g.",
"For each token y in the target vocabulary V_dec of P, the representation e(y|P) of a token is defined as: e(y|P) = e(y, op) if y ∈ V_op; e(y, con) if y ∈ V_con; z_p^{loc(y,P)} if y ∈ n_P (8) The expression trees in our decoder contain three kinds of nodes: operators, constant quantities, and quantities that appeared in P.",
"Constant quantities and quantities in n_P are always placed at leaf node positions.",
"Operators always occupy the non-leaf node positions.",
"The quantities' representations in n_P depend on the specific MWP, i.e., y takes the corresponding z_p^{loc(y,P)} from Z.",
"The representations of operators and constant quantities are problem-independent, i.e., their representations are obtained from 2 independent embedding matrices M_op and M_con.",
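The three-way lookup of Eq. (8) can be sketched directly. `V_op`, `V_con`, `M_op`, `M_con`, `Z`, and `loc` mirror the symbols above, with list/dict placeholders standing in for the learned embedding matrices and encoder output:

```python
def token_embedding(y, V_op, V_con, M_op, M_con, Z, loc):
    # Eq. (8): operators and constants use independent embedding matrices
    # M_op and M_con; quantities that appear in the problem copy their
    # node representation from the encoder output Z.
    if y in V_op:
        return M_op[V_op[y]]
    if y in V_con:
        return M_con[V_con[y]]
    return Z[loc[y]]
```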
"We adopt the pre-order traversal manner to construct the expression tree: Step 1. The generation starts with a derivation tree containing only the root node q_root.",
"We use the attention module of GTS to encode the node embeddings Z into a global graph vector G_c: G_c = GTS_Attention(q_root, Z) (9) Step 2. The tree decoder applies the left sub-node generation module to the derivation in a top-down manner, generating a new left child node q_l conditioned on the parent node q_p and the global graph vector G_c.",
"Note that the token y is predicted when generating the new node: q_l = GTS_Left(q_p, G_c), y = GTS_Predict(q_l, G_c) (10) If the generated y is an operator, two empty child node positions are created and we keep executing Step 2.",
"This step works like decomposing the whole goal into multi-stage reasoning.",
"If the generated y is a quantity (constant or from n P ), we will get into Step 3 .",
"Step 3. The tree decoder switches to the right sub-node generation module and populates the empty right node position.",
"At every decoding step, we use the left child node q_l, the global graph vector G_c, and a sub-tree embedding t_l as the input to the right generation module, generating the right child node q_r and the corresponding token y_r: q_r = GTS_Right(q_l, G_c, t_l), y_r = GTS_Predict(q_r, G_c) (11) The addition of the sub-tree embedding works similarly to incorporating a sub-tree copying mechanism.",
"The additional sub-tree embedding t_l is computed using the sub-tree embedding component of GTS: t_l = GTS_SubTree(y_l, q_l) (12) If y_r is an operator, the next step goes back to Step 2.",
"If y r is a quantity, we will get into Step 4 .",
"Step 4. The model backtracks to find a new empty right node position.",
"If the model cannot find the new empty right node position, the generation is completed.",
"If the empty right node position still exists, go back to Step 2 .",
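The four decoding steps above amount to a single pre-order generation loop: each predicted token fills the current empty node position, and an operator opens two new child positions (left, then right via backtracking). A minimal sketch, where the `predict` callback is a stand-in for the GTS prediction modules (an assumption, not the authors' code):

```python
def preorder_decode(predict, operators, max_len=50):
    # Steps 1-4 as one loop: generation stops when no empty position remains.
    tokens, open_slots = [], 1   # Step 1: the root is the only empty position
    while open_slots and len(tokens) < max_len:
        y = predict(len(tokens))         # Steps 2-3: predict the current node's token
        tokens.append(y)
        open_slots -= 1                  # this position is now filled
        if y in operators:
            open_slots += 2              # an operator opens left and right child slots
    return tokens                        # Step 4: loop exits when backtracking finds no slot
```

Feeding it the scripted sequence `+ * 2 3 4` produces a complete pre-order tree ((2 * 3) + 4) and stops before consuming any further tokens.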
"For each problem-expression-tree example (P, T), the loss function L(T, P) is defined as the sum of the negative log-likelihoods of the probabilities for predicting the t-th node token y_t.",
"Formally, our training goal is to minimize the following loss function: L(T, P) = - sum_{t=1}^{E} log prob(y_t | q_t, G_c, P) (13) where q_t is the goal vector, G_c is the global graph context, E is the number of tokens in T, and prob is computed by the distribution computation function in GTS.",
"In this section, we compare our proposed Graph2Tree model with state-of-the-art baselines.",
"We also conduct ablation study and analysis to investigate the effectiveness of various components of our model.",
"Datasets.",
"Two commonly-used MWP datasets are used in our experiments: MAWPS (Koncel-Kedziorski et al., 2016) with 2,373 problems and Math23K (Wang et al., 2017) with 23,162 problems.",
"Baselines.",
"We compare Graph2Tree to an extensive set of baselines and state-of-the-art models: DNS (Wang et al., 2017) uses a vanilla seq2seq model to generate expressions.",
"Math-EN (Wang et al., 2018) benefits from an equation normalization to reduce target space.",
"T-RNN (Wang et al., 2019) applies recursive neural networks over predicted tree-structure templates.",
"S-Aligned (Chiang and Chen, 2019) designs the decoder with a stack to track the semantic meanings of operands.",
"GROUP-ATT (Li et al., 2019) borrows the idea of multihead attentions from Transformer (Vaswani et al., 2017).",
"AST-Dec (Liu et al., 2019) creates an expression tree with a tree LSTM decoder.",
"GTS (Xie and Sun, 2019) develops tree-structured neural networks in a goal-driven manner to generate expression trees.",
"IRE (Sahu et al., 2019) is another baseline that was first proposed for relation extraction and shares similarities with our method.",
"Implementation Details and Evaluation Metric.",
"In the Graph2Tree model, we use word embeddings (not pre-trained) with 128 units and a one-layer graph transformer with 4 GCNs, each of which has a hidden state dimension of 128.",
"The dimensions of the hidden state for all the other layers are set to 512 .",
"Our model is trained for 80 epochs.",
"Mini-batch size and dropout rate are set to 64 and 0.5, respectively.",
"For the optimizer, we use Adam with the learning rate set to 0.001, beta_1 = 0.94, and beta_2 = 0.99, and the learning rate is halved every 20 epochs.",
"Also, we use a beam size of 5 in beam search.",
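The learning-rate schedule described above (halving every 20 epochs from a base rate of 0.001) corresponds to a simple step decay; a one-line sketch:

```python
def lr_at_epoch(epoch, base_lr=0.001, halve_every=20):
    # learning rate is halved every `halve_every` epochs
    return base_lr * 0.5 ** (epoch // halve_every)
```

Over the 80 training epochs this yields four plateaus: 0.001, 0.0005, 0.00025, and 0.000125.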
"For the Math23K dataset, some methods are evaluated using 5-fold cross-validation (denoted Math23K*), and others are evaluated on the available test set (denoted Math23K).",
"We evaluate Graph2Tree on both settings.",
"For the MAWPS dataset, the models are evaluated with 5-fold cross-validation.",
"Following previous works, we use solution accuracy as the evaluation metric.",
"Table 2 shows the solution accuracy of Graph2Tree and various baselines.",
"We observe that Graph2Tree outperforms all baselines in the two MWP datasets.",
"As the code for GTS is made available 2 , we implemented GTS and tested it on all dataset settings.",
"We also statistically tested the improvement of Graph2Tree over the strongest baseline (i.e., GTS) and found the improvement to be significant at the 0.01 level using a paired t-test.",
"The superior performance of Graph2Tree demonstrates the importance of enriching quantities' representations in handling the MWP task (GTS code: https://github.com/ShichaoSun/math_seq2tree).",
"To understand the effects of the various components and hyperparameters in our Graph2Tree model, we conduct ablation studies and parameter analysis on the Math23K dataset.",
"We investigate the effects of Quantity Cell Graph and Quantity Comparison Graph in our model.",
"The results of our ablation study are shown in Table 3.",
"We find that Graph2Tree with both the Quantity Cell Graph and the Quantity Comparison Graph performs the best.",
"We also observe that having either the Quantity Cell Graph or the Quantity Comparison Graph still outperforms the implementation without either graph (i.e., a fully-connected graph).",
"More interestingly, we also noted that enriching the quantity representation with either graph would also outperform the baseline GTS model in this task, suggesting the importance of quantity representation in MWP task.",
"From this study, we also infer that improving quantity representation, modeling the relationships among quantities, and retaining their numerical qualities help to achieve better results for the MWP task.",
"Also, if the two types of graphs are merged into one integrated graph, the performance drops.",
"We postulate that a possible reason for the inferior performance may be due to the noise introduced by the integration of multiple graphs.",
"The number of GCNs is a tuneable hyperparameter in our Graph2Tree model.",
"Thus, we investigate the effect of the number of GCNs on our model's performance.",
"We vary the number of GCNs among 2, 4, and 8.",
"Note that even numbers are used as the GCNs are split evenly to model the Quantity Cell Graph and Quantity Comparison Graph .",
"Table 4 shows the study's results.",
"We observe that the 4-GCN version achieves the best performance.",
"A potential reason could be that the optimal capacity for information aggregation over the two quantity graphs is achieved with 4 GCNs.",
"To investigate how well our Graph2Tree model performs with increasing expression complexity compared to state-of-the-art models using explicit tree decoders, we group the test set by the number of operators in the expression.",
"From the results shown in Table 5, we note that: (1) our proposed Graph2Tree outperforms the other two models in most cases, except when the number of operators equals 5; in the other cases with fewer than 5 operators, our model shows statistically significant improvements over the other two models.",
"(2) All models' accuracy decreases as the expressions become longer.",
"This is intuitive, as longer expressions are often associated with more complex questions that are more difficult to solve and have fewer training examples.",
"#Op | Pro (%) | AST-Dec (%) | GTS (%) | Our (%)",
"1 | 17.3 | 82.7 | 84.9 | 85.5",
"2 | 52.2 | 74.5 | 80.6 | 83.7",
"3 | 19.1 | 59.9 | 70.7 | 71.7",
"4 | 6.6 | 42.4 | 50.0 | 51.5",
"5 | 3.4 | 44.1 | 38.2 | 38.2",
"6 | 0.9 | 55.6 | 44.4 | 55.6",
"Table 5: Accuracy for increasing length of templates.",
"#Op is the number of operators in expressions.",
"Pro denotes the proportion of MWPs for different expression lengths.",
"One of the primary goals of our Graph2Tree model is to address the situation where the wrong arithmetic order leads to incorrect solution expression generation.",
"We evaluate this aspect of our model by investigating how Graph2Tree has improved the arithmetic order errors.",
"We first retrieve the MWPs with incorrectly predicted expressions.",
"As we are interested in arithmetic order errors, we checked that the incorrectly predicted expressions' length is equal to their corresponding ground truth expressions' length.",
"(Table 6, Case 1 begins: The class organized students to climb the mountain. ...)",
"In total, we retrieved 103 incorrectly predicted expressions for Graph2Tree and 119 for GTS.",
"Next, we manually counted the number of incorrectly predicted expressions attributed to arithmetic order errors among the initially retrieved set.",
"We found that Graph2Tree has generated 7 expressions with arithmetic order error, while GTS has generated 27 arithmetic order error expressions.",
"This suggests that Graph2Tree is able to significantly improve the arithmetic order in the MWP task.",
"Finally, we perform a case study on the solution expressions generated by GTS and Graph2Tree .",
"Selected case studies are shown in Table 6.",
"In Case 1, there are essential words, i.e., each, group, and students, around the quantity 15, and students around the quantity 76.",
"However, GTS predicts the operator + between these two quantities with obviously different units, as GTS is unable to model quantity representations effectively using a BiLSTM.",
"For the second case, we observe that GTS gives a wrong prediction 3 5 as GTS does not model quantities' numerical qualities.",
"For the last case, this MWP requires models to handle situations where quantities are repeatedly and frequently used.",
"Graph2Tree is able to handle this situation better than the GTS model as our model encodes the MWP in richer graph representation.",
"The three case studies demonstrate the Graph2Tree model's strengths in generating more accurate and realistic solution expressions for MWPs.",
"Besides, further analysis is performed on error cases.",
"We found that our model, like other baselines, performed poorly in solving MWPs with long solution expressions.",
"Answering these MWPs requires complex reasoning which opens the possibility for future works.",
"The earlier works on math word problems (MWPs) are mainly tested on small-scale datasets.",
"These works can be broadly divided into statistical machine learning based (Kushman et al., 2014; Hosseini et al., 2014; Mitra and Baral, 2016; Roy and Roth, 2018; Zou and Lu, 2019a) and semantic parsing based (Shi et al., 2015; Koncel-Kedziorski et al., 2015; Roy and Roth, 2015; Huang et al., 2017; Zou and Lu, 2019b).",
"Recently, deep learning based models have become a new trend in solving math word problems.",
"Wang et al. (2017) applied a vanilla seq2seq model to map the language text to an expression.",
"Li et al. (2019) applied multi-head attention to model different types of MWP features.",
"Both Wang et al. (2018) and Chiang and Chen (2019) proposed to generate expressions with the implicit tree structure.",
"Huang et al. (2018) designed a new intermediate form for expression generation.",
"Other models (Wang et al., 2019; Liu et al., 2019; Xie and Sun, 2019) have generated an expression tree explicitly to derive the final answer.",
"Transformer is a self-attention based neural network which has shown potential in tasks like neural machine translation (Vaswani et al., 2017) and language modeling (Devlin et al., 2019).",
"However, there are only a few works that focus on extending the transformer to graph-structured data.",
"In the natural language processing community, the first graph transformer was introduced for a knowledge-graph-to-text task (Koncel-Kedziorski et al., 2019), where a Graph Attention Network (Velickovic et al., 2018) is used within a transformer-style architecture.",
"Another graph transformer (Cai and Lam, 2019) extends the vanilla multi-head attention mechanism into a relation-enhanced global attention mechanism.",
"Our work aims to explore the adaptation of the transformer to modeling multiple heterogeneous graphs in parallel for the MWP task.",
"In this paper, we proposed a novel MWP solver, Graph2Tree , which improves the task performance by enriching the quantity representations in the problem.",
"We conducted extensive experiments to evaluate our model against state-of-the-art baselines.",
"Our experiments show that Graph2Tree is able to outperform the baselines on the MWP task.",
"For future work, we aim to consider more complex relationships among the quantities and other attributes to enrich quantity representations further.",
"We will also explore adding heuristic in the tree-based decoder to guide and improve the generation of solution expression.",
"This work is supported by the National Natural Science Foundation of China (No. 61832001 and No. 61672133), Sichuan Science and Technology Program (No. 2019YFG0535 and No. 2018GZDZX0032) and the National Research Foundation, Prime Ministers Office, Singapore under its International Research Centres in Singapore Funding Initiative."
] | [
"abstain",
"abstain",
"objective",
"method",
"method",
"result",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"other",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"other",
"abstain",
"abstain",
"result",
"result",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"objective",
"method",
"result",
"objective",
"objective",
"other"
] |
[
"We describe a Question Answering (QA) dataset that contains complex questions with conditional answers, i.e., the answers are only applicable when certain conditions apply.",
"Answering the questions requires compositional logical reasoning across complex context.",
"We call this dataset ConditionalQA.",
"In addition to conditional answers, the dataset also features: (1) long context documents with information that is related in logically complex ways; (2) multi-hop questions that require compositional logical reasoning; (3) a combination of extractive questions, yes/no questions, questions with multiple answers, and not-answerable questions; (4) questions asked without knowing the answers.",
"We show that ConditionalQA is challenging for many of the existing QA models, especially in selecting answer conditions.",
"We believe that this dataset will motivate further research in understanding complex documents to answer hard questions.",
"1 Introduction Many reading comprehension (RC) datasets have been recently proposed (Rajpurkar et al., 2016, 2018; Kwiatkowski et al., 2019; Yang et al., 2018; Dasigi et al., 2021; Ferguson et al., 2020).",
"In a reading comprehension task, models are provided with a document and a question and asked to find the answers.",
"Questions in existing reading comprehension datasets generally have a unique answer or a list of answers that are equally correct, e.g. Who was the president of the US? with the answers George Washington, Thomas Jefferson, etc.",
"We say that these questions have deterministic answers.",
"However, questions in the real world do not always have deterministic answers, i.e. answers to the questions are different under different conditions.",
"(Footnote 1: https://haitian-sun.github.io/conditionalqa/)",
"For example, in Figure 1 (caption: An example of a question and document in the ConditionalQA dataset), the document discusses Funeral Expense Payment and a question asks about an applicant's eligibility.",
"This question cannot be deterministically answered: the answer is yes only if you're arranging a funeral in the UK, while the answer is no, if ... another close relative of the deceased is in work is true.",
"We call answers that are different under different conditions conditional answers .",
"A conditional answer consists of an answer and a list of conditions .",
"An answer is only true if its conditions apply.",
"In the example above, you are arranging a funeral in the UK is the condition for the answer yes.",
"An answer can have multiple conditions.",
"Conditional answers are commonly seen when the context is so complex that asking a complete question with a deterministic answer is impractical; for example, when a person asks a question with some prior knowledge in mind but cannot enumerate all the necessary details.",
"A practical way to answer incomplete questions is to find all possible answers to the question; if some answers are only true under certain conditions, the conditions should be output as well.",
"Answering such questions generally requires the models to understand the complex logic in the context and perform extensive reasoning to identify the answers and conditions.",
"We present the ConditionalQA dataset, which contains questions with conditional answers.",
"We take documents from the UK government website (e.g., https://www.gov.uk/parental-leave) as our corpus.",
"Documents in this corpus discuss public policies in the UK and were first used in the ShARC dataset (Saeidi et al., 2018).",
"This corpus is particularly interesting for constructing the ConditionalQA dataset because it contains complex content with complex internal logic, such as conjunction, disjunction, and exception (see the example in Figure 1).",
"Questions in ConditionalQA are asked by human annotators.",
"Each example contains a question, a scenario when the question is asked, and a document that discusses the policy that the question asks about.",
"The task is to find all possible answers to the questions that apply to the user's scenario.",
"If an answer is only true under certain conditions, the model should return the list of conditions along with the answer.",
"Answers and conditions are annotated by human annotators with the exact input, i.e. the question, the scenario, and the associated document.",
"We provide supporting evidence labeled by human annotators as additional supervision.",
"In addition to having conditional answers, ConditionalQA also features the following properties.",
"First, the documents in ConditionalQA have complex structure.",
"As opposed to Wikipedia pages, where most sentences or paragraphs contain stand-alone information, documents in ConditionalQA usually have complex internal logic that is crucial for answering the questions.",
"Second, many questions in the dataset are naturally multi-hop, as illustrated in the example in Figure 1: e.g., being the partner of the deceased satisfies the requirement on your relationship with the deceased, which is one of the high-level requirements to obtain the benefit.",
"Answering those questions requires models that understand the internal logic within the document and reason over it to find the correct answers.",
"Third, we decouple the asking and answering processes when annotating questions, as suggested by Ferguson et al. (2020), Dasigi et al. (2021), and Clark et al. (2020), so questions are asked without knowing the answers.",
"Fourth, ConditionalQA contains various types of questions, including yes/no questions and extractive questions.",
"Questions can have one or multiple answers, or can be not answerable, as a result of the decoupled annotation process.",
"We experimented with several strong baseline models on ConditionalQA (Ainslie et al., 2020; Sun et al., 2021; Izacard and Grave, 2021).",
"The best performing model achieves only 64.9% accuracy on yes/no questions, marginally better than the majority baseline (62.2% if always predicting yes), and 25.2% exact match (EM) on extractive answers.",
"We further measure the accuracy of jointly predicting answers and conditions, in which case the accuracy drops to 49.1% and 22.5%.",
"The best metrics with conditions are obtained if no condition is predicted, showing how challenging it is for existing models to predict conditional answers.",
"Many question answering datasets have been proposed in the past few years (Rajpurkar et al., 2016, 2018; Yang et al., 2018; Dasigi et al., 2021; Ferguson et al., 2020; Kwiatkowski et al., 2019), and research on these has significantly boosted the performance of QA models.",
"As large pretrained language models (Devlin et al., 2019; Liu et al., 2019; Ainslie et al., 2020; Beltagy et al., 2020; Guu et al., 2020; Verga et al., 2020) achieved better performance on traditional reading comprehension and question answering tasks, efforts have been made to make the questions more complex.",
"Several multi-hop QA datasets have been released (Yang et al., 2018; Ferguson et al., 2020; Talmor and Berant, 2018; Welbl et al., 2018) to test models' ability to solve complex questions.",
"However, most questions in these datasets are answerable by focusing on a small piece of evidence at a time, e.g. a sentence or a short passage, leaving reasoning through long and complex contents a challenging but unsolved problem.",
"Some datasets have been recently proposed for question answering over long documents.",
"QASPER (Dasigi et al., 2021) contains questions asked about academic papers, e.g., What are the datasets experimented on in this paper?.",
"To answer those questions, the model should read several sections and collect relevant information.",
"NarrativeQA (Mou et al., 2021) requires reading entire books or movie scripts to answer questions about their characters or plots.",
"Other datasets, e.g., HybridQA (Chen et al., 2021b), can also be viewed as question answering over long documents if tables with hyperlinked text from the cells are flattened into a hierarchical document.",
"ShARC (Saeidi et al., 2018) is a conversational QA dataset that also uses UK government websites as its corpus.",
"However, the ShARC dataset only contains yes/no questions, and the conversation history is generated by annotators with the original rule text in hand, making the conversations artificial.",
"The length of context in ShARC is usually short, such as a few sentences or a short paragraph.",
"While using the same corpus, ConditionalQA contains completely different questions and new types of answers.",
"It focuses on a new problem that has not been previously studied.",
"Most of the existing datasets, including the ones discussed above, contain questions with unique answers.",
"Answers are unique because questions are well specified, e.g. Who is the president of the US in 2010?.",
"However, questions can be ambiguous if not all information is provided in the question, e.g. When was the Harry Potter movie released? does not specify which Harry Potter movie.",
"AmbigQA (Min et al., 2020) contains questions that are ambiguous and requires the model to find all possible answers to an ambiguous question and rewrite the question to make it well specified.",
"Similar datasets Temp-LAMA (Dhingra et al., 2021), TimeQA (Chen et al., 2021a), and SituatedQA (Zhang and Choi, 2021) have been proposed, which include questions that require resolving temporal or geographic ambiguity in the context to find the answers.",
"They are similar to ConditionalQA in that questions are incomplete, but ConditionalQA focuses on understanding documents with complex logic and answering questions with conditions.",
"It is usually not possible to disambiguate questions in ConditionalQA, as rewriting the questions (or scenarios) to reflect all conditions of the answers and make the questions deterministic is impractical.",
"We create ConditionalQA in the public policy domain.",
"There are some existing domain-specific datasets, including PubMedQA and BioASQ (Nentidis et al., 2018; Jin et al., 2019) in the medical domain, UDC (Lowe et al., 2016) in the computer software domain, QASPER (Dasigi et al., 2021) in the academic paper domain, and PrivacyQA and PolicyQA (Ahmad et al., 2020; Ravichander et al., 2019) in the legal domain, among others.",
"PrivacyQA and PolicyQA have context similar to ConditionalQA's, but their questions do not require compositional reasoning and the answers are short text spans.",
"We use a corpus in the public policy domain because it is easy to understand by non-experts while being complex enough to support challenging questions.",
"In our task, the model is provided with a long document that describes a public policy, a question about this document, and a user scenario.",
"The model is asked to read the document and find all answers and their conditions if any.",
"Documents in ConditionalQA describe public policies in the UK, e.g. Apply for Visitor Visa or Punishment of Driving Violations.",
"Each document covers a unique topic and the contents are grouped into sections and subsections.",
"Contents in the same section are closely related but may also be referred to in other sections.",
"We create ConditionalQA in this domain because these documents are rather complex with internal logic, yet annotators are familiar with the content so they can ask natural yet challenging questions, compared to formal legal or financial documents with more sophisticated terms and language.",
"The input to a reading comprehension model consists of a document, a question, and a user scenario:",
"A document describes a public policy in the UK.",
"Content of a document is coherent and hierarchical, structured into sections and subsections.",
"Documents are crawled from the website and processed by serializing the DOM trees of the web pages into lists of HTML elements with tags, such as <h1>, <p>, <li>, and <tr>.",
"Please see Section 4.1 for more information.",
"A question asks about a specific aspect of the document, such as eligibility or other aspects with how, when, what, who, where, etc.",
"Questions are relevant to the content of the document, even though they may not be answerable.",
"A user scenario provides background information for the question.",
"Some information will be used to restrict the answers that can be possibly correct.",
"Not all information in the user scenario is relevant, because the scenarios are written by crowdsource workers without seeing the full document or knowing the answers.",
"Information in the scenario is also likely to be incomplete.",
"This setup simulates the real information-seeking process of having both irrelevant and incomplete information.",
"A reading comprehension model is asked to predict the answers and, for each answer, the list of conditions if there are any.",
"An answer to a question has three different types: (1) yes or no, for questions such as Can I get this benefit?; (2) an extracted text span, for questions asking how, when, what, etc.; (3) not answerable, if an answer does not exist in the document.",
"Since the information to get a definite answer is sometimes incomplete, besides predicting the answers, the model is asked to identify their conditions.",
"A condition contains information that must be satisfied in order to make the answer correct but is not mentioned in the user scenario.",
"In ConditionalQA, we restrict a condition to be one of the HTML elements in the document instead of an exact extracted text span.",
"Selected conditions are then evaluated as a retrieval task with F1 at the element level, i.e., the model should retrieve all HTML elements with unsatisfied information to get a perfect F1 score.",
"If no condition is required, the model must return an empty list.",
"Please see Section 3.4 for more details on evaluation.",
"We evaluate performance of models on the ConditionalQA dataset as a reading comprehension (RC) task.",
"Answers are measured with exact match (EM) and F1.",
"Some questions have multiple answers.",
"The model should correctly predict all possible answers to get the full score.",
"Since the order of answers does not matter, to compute the metrics, we compare all possible permutations of the predicted answers to the list of correct answers.",
"We take the best result among all permutations as the result for this example.",
"Let {â_1, ..., â_m} be the list of predicted answers and {a_1, ..., a_n} the reference answers.",
"The EM of the predicted answers is EM = max over permutations {â'_1, ..., â'_m} of (1/n) sum_{i=1}^{min(m,n)} s_em(â'_i, a_i) * gamma_{m,n} (1) where {â'_1, ..., â'_m} is a permutation of the predicted answers {â_1, ..., â_m} and s_em(., .) is the scoring function that measures EM between two text spans.",
"gamma_{m,n} is a penalty term that is smaller than 1 if more answers than reference answers are predicted, i.e., m > n.",
"(Footnote 3: We argue that selecting HTML elements as conditions is already very challenging (see the experimental results in Section 5.2) and leave extracting the exact text spans as future work.)",
"We compute token-level F1 in the similar way using the scoring function s f 1 ( , ) on the extracted answer spans.",
"For not answerable questions, EM and F1 are 1.0 if and only if no answer is predicted.",
"We additionally measure the performance of answers with conditions.",
"We adopt the same permutation strategy as above, except that the scoring function will also take into account the accuracy of predicted conditions.",
"Let C i be the set of predicted conditions for the predicted answer a i and C i be the oracle conditions for the answer a i .",
"The new scoring function for the predicted answer with conditions is s em + c ( a i , C i , a i , C i ) = s em ( a i , a i ) F1 ( C i , C i ) where F1 ( , ) measures the accuracy of the set of predicted conditions at HTML element level.",
"Recall that conditions are restricted to select from HTML elements in the document.",
"F1 ( C i , C i ) equals to 1 if and only if all required conditions are selected.",
"This is different from s f 1 ( , ) that measures token level F1 of the extracted answers.",
"If the answer does not require any conditions, the model should predict an empty set.",
"We simply replace the scoring function s em ( , ) in Eq.",
"1 with s em + c ( , ) to compute EM with conditions.",
"Documents are originally presented on the UK government website in the HTML format.",
"We crawled the pages from the website and processed it to only keep the crucial tags, that include: Headings <h1, h2, h3, h4>: We keep headings at different levels.",
"This can be used to identify the hierarchical structure in the documents.",
"Text <p>: This tag is used for general contents.",
"We replace descriptive tags, e.g. <strong>, with the plain tag <p> for simplicity.",
"List <li>: We keep the tags for list items, but drop their parent tags <ul> or <ol>.",
"We observe that very few ordered lists (<ol>) have 3630 been used in the dataset, so we will not distinguish them.",
"Table <tr>: Again, we drop their parent tags <table> to simplify the document format.",
"We further remove the <td> and <th> tags from cells and concatenate cells in the same row with the separation of | .",
"A processed document contains a list of strings that starts with a tag, follows with its content, and ends with the tag, e.g. [<h1> Overview </h1>, <p> You can apply for ... </p>, . . . ].",
"We drop some common sections that do not contain any crucial information, e.g. How to Apply, to make sure that questions are specific to the topic of the documents.",
"We further require that the document should contain at least 3 sections.",
"We end up with 652 documents as our corpus.",
"The max length of the documents is 9230 words (16154 sub-words in T5 (Raffel et al., 2020)).",
"We collect questions from crowd source workers on Amazon Mechanical Turk.",
"To encourage workers asking questions not be restricted to a specific piece of text, we hide the full document but instead provide a snippet of the document to the workers.",
"A snippet includes a table of content that contains section and subsection titles (from <h1> and <h2> tags), and the very first subsection in the document that usually provides a high level overview of the topic.",
"The snippet lets workers get familiar with the topic of this document so they can ask closely relevant questions.",
"We observe that restricting the geographic location of workers to the UK can significantly improve the quality of questions because local residents are more familiar with their policies.",
"We ask the workers to perform three sub-tasks when coming up with the questions.",
"First, we ask the workers to provide three attributes that can identify the group of people who may benefit from or be regulated by the policy discussed in the document.",
"Second, they are asked to come up with a scenario when they will want to read this document and a question about what they would like to know.",
"Third, workers are asked to mark which attributes have been mentioned in their question and scenario.",
"When assessing the annotation quality, we find that asking workers to provide attributes makes the questions and scenarios much more specific, significantly improving the quality of the dataset.",
"We assign 3 workers to documents with four or more sections and 2 workers to documents with three sections.",
"Each worker is asked to give two questions and the two questions have to be diverse.",
"We collect 3617 questions in this stage.",
"We hire another group of workers to work on the answer portion of this task.",
"Finding answers is very challenging to crowd source workers because it requires the workers to read the full document carefully to understand every piece of information in the document.",
"We provide one-on-one training for the workers to teach them how to select supporting evidences, answers, and conditions.",
"Workers are asked to perform three sub-tasks.",
"The first step is to select supporting evidences from the document.",
"Supporting evidences are HTML elements that are closely related to the questions, including elements that have content that directly justify the answers and the ones that will be selected as conditions in the next step.",
"In the second step, workers are asked to type answers and select associated conditions.",
"Workers can input as many answers as possible or mark the question as not answerable.",
"For each answer, they can select one or more supporting evidences as the answer's conditions if needed.",
"Workers are asked not to select conditions if there is sufficient information in the scenario to answer the question.",
"We give workers permission to slightly modify the questions or scenarios if the questions are not clearly stated, or they can mark it as a bad question (different from not answerable) so we will drop it from the dataset.",
"We additionally perform a revise step to improve the annotation quality.",
"We provide the union of selected evidences and answers from multiple annotations of a question to an additional group of annotators and let them deselect unrelated evidences and merge answers.",
"As the amount of information provided to workers at this step is significantly less than in the previous answer selection stage, the annotation quality improves significantly.",
"We end up with 3102 questions with annotated answers.",
"To encourage the model of learning subtle difference in user scenarios that affects the answers and conditions, we create new questions by modifying existing questions with conditional answers by moving one of the conditions to their scenarios.",
"Specifically, we show the workers the original questions, scenarios, and the annotated answers and conditions.",
"Evidences are also provided for workers to get them familiar with the background of the questions and reasoning performed to get the original answers.",
"Workers are asked to pick one of the conditions and modify the original scenario to reflect this condition.",
"The modified questions and scenarios are sent back to the answering stage to get their annotations.",
"We randomly select a small portion of the questions that have conditional answers as inputs to this stage so as to not affect the original distribution of the dataset.",
"We collected 325 additional examples from this stage.",
"We partition the dataset by documents to prevent leaking information between questions from the same document.",
"The dataset contains 436 documents and 2338 questions in the training set, 59 documents and 285 questions in the development set, and 136 documents and 804 questions in the test set.",
"Please see Appendix A for more statistics on ConditionalQA.",
"Evaluating existing models on ConditionalQA is challenging.",
"In addition to predicting answers to questions, the ConditionalQA task also asks the model to find the answers' conditions if any of them applies.",
"To the best of our knowledge, no existing model fits the purpose of this task.",
"We modified three competitive QA models as baselines to the ConditionalQA dataset.",
"In addition to the new form of answers, traditional reading comprehension models also face the challenge that the context of questions in ConditionalQA is too long to fit into the memory of many Transformer-based models like BERT (Devlin et al., 2019) and even ETC (Ainslie et al., 2020).",
"The baseline models we implemented are described below.",
"ETC : ETC (Ainslie et al., 2020) is a pretrained Transformer-based language model that is designed for longer inputs (up to 4096 tokens).",
"ETC achieved the state-of-the-art on several challenging tasks, e.g. HotpotQA and WikiHop (Yang et al., 2018; Welbl et al., 2018).",
"Since ETC cannot fit the entire document (with up to 16154 tokens) into its memory, we cannot let ETC to jointly predict answers and conditions, we designed a two stage 3632 Yes / No Extractive Conditional Overall answer w/ conds answer w/ conds answer w/ conds* answer w/ conds majority 62.2 / 62.2 42.8 / 42.8 / / / / / / ETC 63.1 / 63.1 47.5 / 47.5 8.9 / 17.3 6.9 / 14.6 39.4 / 41.8 2.5 / 3.4 35.6 / 39.8 26.9 / 30.8 DocHopper 64.9 / 64.9 49.1 / 49.1 17.8 / 26.7 15.5 / 23.6 42.0 / 46.4 3.1 / 3.8 40.6 / 45.2 31.9 / 36.0 FiD 64.2 / 64.2 48.0 / 48.0 25.2 / 37.8 22.5 / 33.4 45.2 / 49.7 4.7 / 5.8 44.4 / 50.8 35.0 / 40.6 human 91.4 / 91.4 82.3 / 82.3 72.6 / 84.9 62.8 / 69.1 74.7 / 86.9 48.3 / 56.6 82.6 / 88.4 73.3 / 76.2 Table 2: Experiment results on ConditionalQA (EM / F1).",
"pipeline to run ETC on ConditionalQA.",
"In the first stage, ETC is trained as a normal reading comprehension model to predict answers from the document by jointly encoding the questions and documents.",
"We adopt a sequential reading approach that reads one section at a time.",
"The answer with the highest probability among all sections will be considered as the final answer.",
"We append three special tokens yes , no , and not answerable for the yes/no and not answerable questions.",
"Since it is not clear how to extract multiple answers with the Transformer-based extractive QA model, we restrict to the number of predicted answers to one.",
"The second stage in the pipeline is to select conditions.",
"Questions, answers, and documents are concatenated together into a single input for ETC.",
"We then use the embeddings of global tokens for sentences in ETC to predict conditions.",
"Since the number of conditions for the answer is unknown, we train the condition selection process with a binary classification target, by labeling each global token as positive or negative.",
"The threshold of selecting conditions is a hyper-parameter.",
"DocHopper : DocHopper (Sun et al., 2021) is an iterative attention method that extends ETC for reading long documents to answer multi-hop questions.",
"It reads the full documents at once and jointly predicts answers and conditions.",
"The model iteratively attends to information at different levels in the document to gather evidences to predict the final answers.",
"We modify the iterative process in DocHopper for the purpose of this task: specifically, DocHopper is trained to run three iterative attention steps: (1) attend to the supporting evidences; (2) attend to the sentence that contains the answer; and (3) attend to the conditions.",
"Since the query vector in each attention step is updated with information from the previous steps, conditions attended at the third step are aware of the previously predicted answers.",
"Unfortunately, DocHopper is still restricted to predicting one answer for each question.",
"The condition selection step in DocHopper is also trained with binary classification loss.",
"Different from the ETC pipeline, the three attention steps are jointly optimized.",
"FiD : FiD (Izacard and Grave, 2021) is a generative model with an encoder-decoder architecture.",
"The encoder reads multiple contexts independently and generates their embeddings.",
"The decoder attends to all embeddings of the context to generate the final answers.",
"In this task, we train FiD to sequentially generate the answers with conditions, i.e. [ a 1 , c 11 , c 12 , . . . , a 2 , c 21 , c 22 , . . . ] where { a 1 , . . . , a n } are the correct answers and { C 1 , . . . , C n } are their conditions, i.e., c ij C i is the j 'th condition for the answer a i .",
"If C i is empty, the model is trained to predict NA as the only condition for the i 'th answer.",
"FiD can predict multiple answers as opposed to ETC and DocHopper.",
"Human We randomly sample 80 questions and ask human annotators to answer them.",
"Annotators are provided with the full instructions and 10 additional annotated examples to clarify the task.",
"We do not provide additional training to the annotators.",
"Experiment results are shown in Table",
"2. We report the numbers on yes/no questions and extractive questions separately.",
"The numbers in Table 2 show that the ConditionalQA task is very challenging the performance of the best model on yes/no questions is 64.9% (marginally higher than always predicting the majority answer yes), and the performance on extractive questions is 25.2% EM.",
"FiD has the best performance on extractive questions because FiD can predict multiple answers while ETC-pipeline and DocHopper only predict one.",
"The performance drops significantly if answers and conditions are jointly evaluated.",
"The best performance on jointly evaluating answers and conditions (w/ conditions) in Table 2 is only 49.1% for yes/no questions and 22.5% EM for extractive questions.",
"Even worse, this best result is obtained when no condition is selected, i.e. the threshold 3633 Error types % Examples Correct answers Predictions Not answerable 7.6 \"Am I eligible for a tax reduction?\" not_answerable \"yes\" Wrong answer type (yes/no vs. extractive) 4.2 \"How can I check if this design has been registered?\" \"ask the intellectual property office to search for you\" \"no\" Wrong answer (yes/no) 19.5 \"Will it be classed as a small vessel?\" \"yes\" \"no\" Wrong answer (extractive, right type) 20.3 \"How many points will I receive on my license?\" \"6\" \"3\" Wrong answer (extractive, wrong type) 9.3 \"What is the account number should I send the money to?\" \"12001020\" \"hmrc\" Correct answer w/ wrong conditions 14.4 \"Can I still send simpler annual accounts as a micro-entity?\" \"yes\",[\" $316,000 or less on its balance sheet\" ] \"yes\", [] Partial answer 24.5 \"What will not need to be repeated for each trip?\" \"a microchip\", \"rabies vaccination\" \"a microchip\" Table 3: Error analysis on the predictions of the best performed model (FiD).",
"of selecting conditions is 1 .",
"0 .",
"The difficulty of selecting conditions is more obvious if we focus on the subset of questions that have at least one conditional answer.",
"The accuracy drops by 90% if answers and conditions are jointly evaluated.",
"4 We also study how the threshold on the confidence scores of selecting conditions affects the evaluation results.",
"Results are shown in Figure",
"2. As we decrease the threshold for selecting conditions, the EM with conditions on the subset of questions that have conditional answers slightly improves, but the overall EM with conditions drops dramatically due to the false positive conditions.",
"FiD is a generative model so we can not evaluate it in the same way.",
"In our evaluation, predictions from the best performing FiD checkpoint also do not select any conditions.",
"Table 4 shows the best results on the subset of questions that have conditional answers.",
"Hyper-parameters are tuned on the subset of questions.",
"We could possibly get better results on questions with conditional answers with threshold < 1 .",
"0 , but the improvement is still marginal.",
"We manually check 200 examples in the prediction of the best performed model FiD and label the type",
"4 The EM/F1 w/ conditions* is non-zero on this subset of questions even if no condition is ever selected, because some questions have both conditional and deterministic answers.",
"Models get partial credits if they predicts the deterministic answers correctly.",
"of errors made.",
"The numbers are shown in Table",
"3. The most errors are made when only a subset of correct answers is predicted.",
"This is due to the fact that the model (FiD) has a tendency to predict one answer for each question.",
"The second most common errors are made by predicting answers with the correct type but wrong value.",
"Such errors are commonly made by reading comprehension models in many tasks.",
"The model made a lot of errors in yes/no questions because they consist of around 50% of the questions.",
"The model is good at distinguishing yes/no questions and extractive question as producing the wrong kind of answer only makes up of 4.2% of the errors.",
"We propose a challenging dataset ConditionalQA that contains questions with conditional answers.",
"The dataset requires models to understand complex logic in a document in order to find correct answers and conditions to the questions.",
"Experiments on state-of-the-art QA models show that their overall performance on ConditionalQA is relatively poor.",
"This also suggests that current QA models lack the reasoning ability to understand complex documents and answer hard questions with answers beyond single span extraction.",
"We hope that this dataset will stimulate further research in building NLP models with better reasoning abilities.",
"This dataset should be ONLY used for NLP research purpose.",
"Questions are artificial and do not contain any personal information.",
"Answers are NOT provided by legal professionals and should NOT be used for any legal purposes.",
"This work was supported in part by the NSF IIS1763562, ONR Grant N000141812861, Google Research.",
"We would also like to thank Vijay A. Saraswat <[email protected]> for valuable feedback."
] | [
"method",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"other"
] |
[
"Signed languages are the primary means of communication for many deaf and hard of hearing individuals.",
"Since signed languages exhibit all the fundamental linguistic properties of natural language, we believe that tools and theories of Natural Language Processing (NLP) are crucial towards its modeling.",
"However, existing research in Sign Language Processing (SLP) seldom attempt to explore and leverage the linguistic organization of signed languages.",
"This position paper calls on the NLP community to include signed languages as a research area with high social and sci-entific impact.",
"We first discuss the linguistic properties of signed languages to consider during their modeling.",
"Then, we review the limitations of current SLP models and identify the open challenges to extend NLP to signed languages.",
"Finally, we urge (1) the adoption of an efficient tokenization method; (2) the development of linguistically-informed models; (3) the collection of real-world signed language data; (4) the inclusion of local signed language communities as an active and leading voice in the direction of research.",
"Natural Language Processing (NLP) has revolutionized the way people interact with technology through the rise of personal assistants and machine translation systems, to name a few.",
"However, the vast majority of NLP models require a spoken language input (speech or text), thereby excluding around 200 different signed languages and up to 70 million deaf people 1 from modern language technologies.",
"Year Figure 1: Evolution of the number of publications referring to sign language in its title from computer science venues and in the ACL anthology.",
"Publications in computer science are extracted from the Semantic Scholar archive (Ammar et al., 2018).",
"Throughout history, Deaf communities fought for the right to learn and use signed languages, as well as for the recognition of signed languages as legitimate languages (2).",
"Indeed, signed languages are sophisticated communication modalities that are at least as capable as spoken languages in all manners, linguistic and social.",
"However, in a predominantly oral society, deaf people are constantly encouraged to use spoken languages through lipreading or text-based communication.",
"The exclusion of signed languages from modern language technologies further suppresses signing in favor of spoken languages.",
"This disregards the preferences of the Deaf communities who strongly prefer to communicate in signed languages both online and for in-person day-to-day interactions, among themselves and when interacting with spoken language communities (Padden and Humphries, 1988; Glickman and Hall, 2018).",
"Thus, it is essential to make signed languages accessible.",
"To date, a large amount of research on Sign Language Processing (SLP) has been focused on the visual aspect of signed languages, led by the Computer Vision (CV) community, with little NLP involvement (Figure 1).",
"This is not unreasonable, given that a decade ago, we lacked the adequate CV tools to process videos for further linguistic analyses.",
"However, like spoken languages, signed languages are fully-fledged systems that exhibit all the fundamental characteristics of natural languages (3), and current SLP techniques fail to address or leverage the linguistic structure of signed languages (4).",
"This leads us to believe that NLP tools and theories are crucial to process signed languages.",
"Given the recent advances in CV, this position paper argues that now is the time to incorporate linguistic insight into signed language modeling.",
"Signed languages introduce novel challenges for NLP due to their visual-gestural modality, simultaneity, spatial coherence, and lack of written form.",
"By working on signed languages, the community will gain a more holistic perspective on natural languages through a better understanding of how meaning is conveyed by the visual modality and how language is grounded in visuospatial concepts.",
"Moreover, SLP is not only an intellectually appealing area but also an important research area with a strong potential to benefit signing communities.",
"Examples of beneficial applications enabled by signed language technologies include better documentation of endangered sign languages; educational tools for sign language learners; tools for query and retrieval of information from signed language videos; personal assistants that react to signed languages; real-time automatic sign language interpretations, and more.",
"Needless to say, in addressing this research area, researchers should work alongside and under the direction of deaf communities, and to the benefit of the signing com-munities' interest above all (Harris et al., 2009).",
"After identifying the challenges and open problems to successfully include signed languages in NLP (5), we emphasize the need to: (1) develop a standardized tokenization method of signed languages with minimal information loss for its modeling; (2) extend core NLP technologies to signed languages to create linguistically-informed models; (3) collect signed language data of sufficient size that accurately represents the real world; (4) involve and collaborate with the Deaf communities at every step of research.",
"Over the course of modern history, spoken languages were dominant so much so that signed languages struggled to be recognized as languages in their own right and educators developed misconceptions that signed language acquisition may hinder the development of speech skills.",
"For example, in 1880, a large international conference of deaf educators called the Second International Congress on Education of the Deaf banned teaching signed languages, favoring speech therapy instead.",
"It was not until the seminal work on American Sign Language (ASL) by Stokoe (1960) that signed languages started gaining recognition as natural, independent, and well-defined languages, which then inspired other researchers to further explore signed languages as a research area.",
"Nevertheless, antiquated notions that deprioritized signed languages continue to do harm and subjects many to linguistic neglect (Humphries et al., 2016).",
"Several studies have shown that deaf children raised solely with spoken languages do not gain enough access to a first language during their critical period of language acquisition (Murray et al., 2020).",
"This language deprivation can lead to life-long consequences on the cognitive, linguistic, socioemo-tional, and academic development of the deaf (Hall et al., 2017).",
"Signed languages are the primary languages of communication for the Deaf 2 and are at the heart of Deaf communities.",
"Failing to recognize signed languages as fully-fledged natural language systems in their own right has had harmful effects in the past, and in an increasingly digitized world, the NLP community has an important responsibility to include signed languages in its research.",
"NLP research should strive to enable a world in which all people, including the Deaf, have access to languages that fit their lived experience.",
"Jaffe (1994); Ong and Ranganath (2005); Parton",
"2 When capitalized, Deaf refers to a community of deaf people who share a language and a culture, whereas the lowercase deaf refers to the audiological condition of not hearing.",
"(2006) survey early works in SLP that were mostly limited to using sensors to capture fingerspelling and isolated signs, or use rules to synthesize signs from spoken language text, due to the lack of adequate CV technology at the time to process videos.",
"This paper will instead focus on more recent vision-based and data-driven approaches that are nonintrusive and more powerful.",
"The introduction of a continuous signed language benchmark dataset (Forster et al., 2014; Cihan Camgoz et al., 2018), coupled with the advent of deep learning for visual processing, lead to increased efforts to recognize signed expressions from videos.",
"Recent surveys on SLP mostly review these different approaches for sign language recognition developed by the CV community (Koller, 2020; Rastgoo et al., 2020; Adaloglou et al., 2020).",
"Meanwhile, signed languages have remained relatively overlooked in NLP literature (Figure 1).",
"Bragg et al. (2019) argue the importance of an interdisciplinary approach to SLP, raising the importance of NLP involvement among other disciplines.",
"We take this argument further by diving into the linguistic modeling challenges for signed languages and providing a roadmap of open questions to be addressed by the NLP community, in hopes of stimulating efforts from an NLP perspective towards research on signed languages.",
"Signed languages consist of phonological, morphological, syntactic, and semantic levels of structure that fulfill the same social, cognitive, and communicative purposes as other natural languages.",
"While spoken languages primarily channel the oral-auditory modality, signed languages use the visual-gestural modality, relying on the face, hands, body of the signer, and the space around them to create distinctions in meaning.",
"We present the linguistic features of signed languages 3 that must be taken into account during their modeling.",
"Phonology Signs are composed of minimal units that combine manual features such as hand config-uration, palm orientation, placement, contact, path movement, local movement, as well as non-manual features including eye aperture, head movement, and torso positioning 4 (Liddell and Johnson, 1989; 3 We mainly refer to ASL, where most sign language research has been conducted, but not exclusively. 4 In this work, we focus on visual signed languages rather than tactile systems such as Pro-Tactile ASL which DeafBlind Johnson and Liddell, 2011; Brentari, 2011; Sandler, 2012).",
"In both signed and spoken languages, not all possible phonemes are realized, and inventories of two languages' phonemes/features may not overlap completely.",
"Different languages are also subject to rules for the allowed combinations of features.",
"Simultaneity Though an ASL sign takes about twice as long to produce than an English word, the rates of transmission of information between the two languages are similar (Bellugi and Fischer, 1972).",
"One way signed languages compensate for the slower production rate of signs is through simultaneity: signed languages make use of multiple visual cues to convey different information simul-taneously(Sandler, 2012).",
"For example, the signer may produce the sign for 'cup' on one hand while simultaneously pointing to the actual cup with the other to express that cup.",
"Similarly to tone in spoken languages, the face and torso can convey additional affective information (Liddell et al., 2003; Johnston and Schembri, 2007).",
"Facial expressions can modify adjectives, adverbs, and verbs; a head shake can negate a phrase or sentence; eye direction can help indicate referents.",
"Referencing The signer can introduce referents in discourse either by pointing to their actual locations in space, or by assigning a region in the signing space to a non-present referent and by pointing to this region to refer to it (Rathmann and Mathur, 2011; Schembri et al., 2018).",
"Signers can also establish relations between referents grounded in signing space by using directional signs or embodying the referents using body shift or eye gaze (Dudis, 2004; Liddell and Metzger, 1998).",
"Spatial referencing also impacts morphology when the directionality of a verb depends on the location of the reference to its subject and/or object (de Beuzeville, 2008; Fenlon et al., 2018): for example, a directional verb can move from the location of its subject and ending at the location of its object.",
"While the relation between referents and verbs in spoken language is more arbitrary, referent relations are usually grounded in signed languages.",
"The visual space is heavily exploited to make referencing clear.",
"Another way anaphoric entities are referenced in sign language is by using classifiers or depicting signs (Supalla, 1986; Wilcox and Hafer, 2004; Roy, 2011) that help describe the characteristics of the Americans sometimes prefer.",
"referent.",
"Classifiers are typically one-handed signs that do not have a particular location or movement assigned to them, or derive features from meaningful discourse (Liddell et al., 2003), so they can be used to convey how the referent relates to other entities, describe its movement, and give more details.",
"For example, to tell about a car swerving and crashing, one might use the hand classifier for a vehicle, move it to indicate swerving, and crash it with another entity in space.",
"To quote someone other than oneself, signers perform role shift (Cormier et al., 2015), where they may physically shift in space to mark the distinction, and take on some characteristics of the people they are representing.",
"For example, to recount a dialogue between a taller and a shorter person, the signer may shift to one side and look up when taking the shorter person's role, shift to the other side and look down when taking the taller person's role.",
"Fingerspelling Fingerspelling is a result of language contact between a signed language and a surrounding spoken language written form (Bat-tison, 1978; Wilcox, 1992; Brentari and Padden, 2001; Patrie and Johnson, 2011).",
"A set of manual gestures correspond with a written orthography or phonetic system.",
"Fingerspelling is often used to indicate names or places or new concepts from the spoken language but often have become integrated into the signed languages themselves as another linguistic strategy (Padden, 1998; Montemurro and Brentari, 2018).",
"In this section, we present the existing methods, resources, and tasks in SLP, and discuss their limitations to lay the ground for future research.",
"Representation is a significant challenge for SLP, as unlike spoken languages, signed languages have no widely adopted written form.",
"Figure 2 illustrates each signed language representation we will describe below.",
"Videos are the most straightforward representation of a signed language and can amply incorporate the information conveyed through sign.",
"One major drawback of using videos is their high dimensionality: they usually include more information than needed for modeling, and are expensive to store, transmit, and encode.",
"As facial features are essential in sign, anonymizing raw videos also remains an open problem, limiting the possibility of making these videos publicly available (Isard, 2020).",
"Poses reduce the visual cues from videos to skeleton-like wireframe or mesh representing the location of joints.",
"While motion capture equipment can often provide better quality pose estimation, it is expensive and intrusive, and estimating pose from videos is the preferred method currently (Pishchulin et al., 2012; Chen et al., 2017; Cao et al., 2019; Guler et al., 2018).",
"Compared to video representations, accurate poses are lower in complexity and anonymized, while observing relatively low information loss.",
"However, they remain a continuous, multidimensional representation that is not adapted to most NLP models.",
"Written notation systems represent signs as discrete visual features.",
"Some systems are written linearly and others use graphemes in two dimensions.",
"While various universal (Sutton, 1990; Prillwitz and Zienert, 1990) and language-specific notation systems (Stokoe Jr, 2005; Kakumasu, 1968; Bergman, 1979) have been proposed, no writing system has been adopted widely by any sign language community, and the lack of standard hinders the exchange and unification of resources and applications between projects.",
"Figure 2 depicts two universal notation systems: SignWriting (Sutton, 1990), a two-dimensional pictographic system, and HamNoSys (Prillwitz and Zienert, 1990), a linear stream of graphemes that was designed to be readable by machines.",
"Glossing is the transcription of signed languages sign-by-sign, where every sign has a unique identi-fier.",
"While various sign language corpus projects have provided gloss annotation guidelines (Mesch and Wallin, 2015; Johnston and De Beuzeville, 2016; Konrad et al., 2018), again, there is no single agreed-upon standard.",
"Linear gloss annotations are also an imprecise representation of signed language: they do not adequately capture all information expressed simultaneously through different cues (i.e. body posture, eye gaze) or spatial relations, which leads to an inevitable information loss up to a semantic level that affects downstream performance on SLP tasks (Yin and Read, 2020b).",
"Bilingual dictionaries for signed language (Mesch and Wallin, 2012; Fenlon et al., 2015; Crasborn et al., 2016; Gutierrez-Sigut et al., 2016) map a spoken language word or short phrase to a signed language video.",
"One notable dictionary is, SpreadTheSign 5 is a parallel dictionary containing around 23,000 words with up to 41 different spoken-signed language pairs and more than 500,000 videos in total.",
"While dictionaries may help create lexical rules between languages, they do not demonstrate the grammar or the usage of signs in context.",
"Fingerspelling corpora usually consist of videos of words borrowed from spoken languages that are signed letter-by-letter.",
"They can be synthetically created (Dreuw et al., 2006) or mined from online resources (Shi et al., 2018, 2019).",
"However, they only capture one aspect of signed languages.",
"Isolated sign corpora are collections of annotated single signs.",
"They are synthesized (Ebling et al., 2018; Huang et al., 2018; Sincan and Keles, 2020; Hassan et al., 2020) or mined from online resources (Vaezi Joze and Koller, 2019; Li et al., 2020), and can be used for isolated sign language recognition or for contrastive analysis of minimal signing pairs (Imashev et al., 2020).",
"However, like dictionaries, they do not describe relations between 5 https://www.spreadthesign.com/ signs nor do they capture coarticulation during signing, and are often limited in vocabulary size (20-1000 signs) Continuous sign corpora contain parallel sequences of signs and spoken language.",
"Available continuous sign corpora are extremely limited, containing 4-6 orders of magnitude fewer sentence pairs than similar corpora for spoken language machine translation (Arivazhagan et al., 2019).",
"Moreover, while automatic speech recognition (ASR) datasets contain up to 50,000 hours of recordings (Pratap et al., 2020), the largest continuous sign language corpus contain only 1,150 hours, and only 50 of them are publicly available (Hanke et al., 2020).",
"These datasets are usually synthesized (Databases, 2007; Crasborn and Zwitserlood, 2008; Ko et al., 2019; Hanke et al., 2020) or recorded in studio conditions (Forster et al., 2014; Cihan Camgoz et al., 2018), which does not account for noise in real-life conditions.",
"Moreover, some contain signed interpretations of spoken language rather than naturally-produced signs, which may not accurately represent native signing since translation is now a part of the discourse event.",
"Availability Unlike the vast amount and diversity of available spoken language resources that allow various applications, signed language resources are scarce and currently only support translation and production.",
"Unfortunately, most of the signed language corpora discussed in the literature are either not available for use or available under heavy restrictions and licensing terms.",
"Signed language data is especially challenging to anonymize due to the importance of facial and other physical features in signing videos, limiting its open distribution, and developing anonymization with minimal information loss, or accurate anonymous representations is a promising research problem.",
"The CV community has mainly led the research on SLP so far to focus on processing the visual features in signed language videos.",
"As a result, current SLP methods do not fully address the linguistic complexity of signed languages.",
"We survey common SLP tasks and limitations of current methods by drawing on linguistic theories of signed languages.",
"Detection Sign language detection is the binary classification task to determine whether a signed language is being used or not in a given video frame.",
"While recent detection models (Borg and Camilleri, 2019; Moryossef et al., 2020) achieve high performance, we lack well-annotated data that include interference and distractions with non-signing instances for proper evaluation.",
"A similar task in spoken languages is voice activity detection (VAD) (Sohn et al., 1999; Ramrez et al., 2004), the detection of when a human voice is used in an audio signal.",
"However, as VAD methods often rely on speech-specific representations such as spectrograms, they are not always applicable to videos.",
"Identification Sign language identification clas-sifies which signed language is being used in a given video automatically.",
"Existing works utilize the distribution of phonemes (Gebre et al., 2013) or activity maps in signing space (Monteiro et al., 2016) to identify the signed language in videos.",
"However, these methods only rely on low-level visual features, while signed languages have several distinctive features on a linguistic level, such as lexical or structural differences (McKee and Kennedy, 2000; Kimmelman, 2014; Ferreira-Brito, 1984; Shroyer and Shroyer, 1984) which have not been explored for this task.",
"Segmentation Segmentation consists of detecting the frame boundaries for signs or phrases in videos to divide them into meaningful units.",
"Current methods resort to segmenting units loosely mapped to signed language units (Santemiz et al., 2009; Farag and Brock, 2019; Bull et al., 2020), and does not leverage reliable linguistic predictors of sentence boundaries such as prosody in signed languages (i.e. pauses, sign duration, facial expressions, eye apertures) (Sandler, 2010; Ormel and Crasborn, 2012).",
"Recognition Sign language recognition (SLR) detects and label signs from a video, either on isolated (Imashev et al., 2020; Sincan and Keles, 2020) or continuous (Cui et al., 2017; Camgoz et al., 2018, 2020b) signs.",
"Though some previous works have referred to this as sign language translation, recognition merely determines the associated label of each sign, without handling the syntax and morphology of the signed language (Padden, 1988) to create a spoken language output.",
"Instead, SLR has often been used as an intermediate step during translation to produce glosses from signed language videos.",
"Translation Sign language translation (SLT) commonly refers to the translation of signed language to spoken language.",
"Current methods either perform translation with glosses (Camgoz et al., 2018, 2020b; Yin and Read, 2020a,b; Moryossef et al., 2021) or on pose estimations and sign articulators from videos (Ko et al., 2019; Camgoz et al., 2020a), but do not, for instance, handle spatial relations and grounding in discourse to resolve ambiguous referents.",
"Production Sign language production consists of producing signed language from spoken language and often use poses as an intermediate representation to overcome challenges in animation.",
"To overcome the challenges in generating videos directly, most efforts use poses as an intermediate representation, with the goal of either using computer animation or pose-to-video models to perform video production.",
"Earlier methods generate and concatenate isolated signs (Stoll et al., 2018, 2020), while more recent methods (Saun-ders et al., 2020b,a; Zelinka and Kanis, 2020; Xiao et al., 2020) autoregressively decode a sequence of poses from an input text.",
"Due to the lack of suitable automatic evaluation methods of generated signs, existing works resort to measuring back-translation quality, which cannot accurately capture the quality of the produced signs nor its usability in real-world settings.",
"A better understanding of how distinctions in meaning are created in signed language may help develop a better evaluation method.",
"The limitations in the design of current SLP models often stem from the lack of exploring the linguistic possibilities of signed languages.",
"We therefore invite the NLP community to collaborate with the CV community, for their expertise in visual processing, and signing communities and sign linguists, for their expertise in signed languages and the lived experiences of signers, in researching SLP.",
"We believe that first, the development of known tasks in the standard NLP pipeline to signed languages will help us better understand how to model them, as well as provide valuable tools for higher-level applications.",
"Although these tasks have been thoroughly researched for spoken languages, they pose interesting new challenges in a different modality.",
"We also emphasize the need for real-world data to develop such methods, and a close collaboration with signing communities to have an accurate understanding of how signed language technologies can benefit signers, all the while respecting the Deaf community's ownership of signed languages.",
"Although signed and spoken languages differ in modality, we argue that as both express the syntax, semantics, and pragmatics of natural languages, fundamental theories of NLP can and should be extended to signed languages.",
"NLP applications often rely on low-level tools such as tokenizers and parsers, so we invite more research efforts on these core NLP tasks that often lay the foundation of other applications.",
"We also discuss what considerations should be taken into account for their development to signed languages and raise open questions that should be addressed.",
"Tokenization The vast majority of NLP methods require a discrete input.",
"To extend NLP technologies to signed languages, we must first and foremost be able to develop adequate tokenization tools that maps continuous signed language videos to a discrete, accurate representation with minimal information loss.",
"While existing SLP systems and datasets often use glosses as discrete lexical units of signed phrases, this poses three significant problems: (1) linear, single-dimensional glosses cannot fully capture the spatial constructions of signed languages, which downgrades downstream performance (Yin and Read, 2020b); (2) glosses are language-specific and requiring new glossing models for each language is impractical given the scarcity of resources; (3) glosses lack standard across corpora which limits data sharing and adds significant overhead in modeling.",
"We thus urge the adoption of an efficient , universal , and standardized method for tokenization of signed languages, all the while considering: how do we define lexical units in signed languages?",
"(John-ston and Schembri, 1999; Johnston, 2010)",
"To what degree can phonological units of signed languages be mapped to lexical units?",
"Should we model the articulators of signs separately or together?",
"What are the cross-linguistic phonological differences to consider?",
"To what extent can ideas used in automatic speech recognition be applied to signed languages?",
"Syntactic Analysis Part-of-speech (POS) tagging and syntactic parsing are fundamental to understand the meaning of words in context.",
"Yet, no such linguistic tools for automatic syntactic analyses exist.",
"To develop such tools, we must first define to what extent POS tagging and syntactic parsing for spoken languages also generalize to signed languages do we need a new set of POS and dependency tags for signed languages?",
"How are morphological features expressed?",
"What are the annotation guidelines to create datasets on syntax?",
"Can we draw on linguistic theories to design features and rules that perform these tasks?",
"Are there typologically similar spoken languages for some signed languages we can perform transfer learning with?",
"Named Entity Recognition (NER) Recognizing named entities and finding relationships between them are highligh important in information retrieval and classification.",
"Named entities in signed languages can be produced by a finger-spelled sequence, a sign, or even through mouthing of the name while the referent is introduced through pointing.",
"Bleicken et al. (2016) attempt NER in German Sign Language (DGS) to perform anonymization, but only do so indirectly, by either performing NER on the gold DGS gloss annotations and German translations or manually on the videos.",
"We instead propose NER in a fully automated fashion while considering, what are the visual markers of named entities?",
"How are they introduced and referenced?",
"How are relationships between them established?",
"Coreference Resolution Resolving coreference is crucial for language understanding.",
"In signed languages, present referents, where the signer explicitly points to the entity in question, are relatively unambiguous.",
"In contrast, non-present referents and classifiers are heavily grounded in the signing space, so good modeling of the spatial coherence in sign language is required.",
"Evidence suggests that classic theoretical frameworks, such as discourse representation theory, may extend to signed languages (Steinbach and Onea, 2016).",
"We pose the following questions: to what extent can automatic coreference resolution of spoken languages be applied to signed languages?",
"How do we keep track of referents in space?",
"How can we leverage spatial relations to resolve ambiguity?",
"Towards Linguistically Informed and Multimodal SLP We highly encourage the collaboration of multimodal and SLP research communities to develop powerful SLP models informed by core NLP tools such as the ones discussed, all the while processing and relating information from both linguistic and visual modalities.",
"On the one hand, theories and methods to reason multimodal messages can enhance the joint modeling of vision and language in signed languages.",
"SLP is especially subject to three of the core technical challenges in multimodal machine learning (Baltrusaitis et al., 2018): translation how do we map visual-gestural information to/from audio-oral and textual information?",
"alignment how do we relate signed language units to spoken language units?",
"co-learning can we transfer high-resource spoken language knowledge to signed language?",
"On the other hand, meaning in spoken languages is not only conveyed through speech or text but also through the visual modality.",
"Studying signed languages can give a better understanding of how to model co-speech gestures, spatial discourse relations, and conceptual grounding of language through vision.",
"Data is essential to develop any of the core NLP tools previously described, and current efforts in SLP are often limited by the lack of adequate data.",
"We discuss the considerations to keep in mind when building datasets, challenges of collecting such data, and directions to facilitate data collection.",
"What is Good Signed Language Data?",
"For SLP models to be deployable, they must be developed using data that represents the real world accurately.",
"What constitutes an ideal signed language dataset is an open question, we suggest including the following requirements: (1) a broad domain; (2) sufficient data and vocabulary size; (3) real-world conditions; (4) naturally produced signs; (5) a diverse signer demographic; (6) native signers; and when applicable, (7) dense annotations.",
"To illustrate the importance of data quality during modeling, we first take as an example a current benchmark for SLP, the RWTH-PHOENIX-Weather 2014T dataset (Cihan Camgoz et al., 2018) of German Sign Language, that does not meet most of the above criteria: it is restricted to the weather domain (1); contains only around 8K segments with 1K unique signs (2); filmed in studio conditions (3); interpreted from German utterances (4); and signed by nine Caucasian interpreters (5,6).",
"Although this dataset successfully addressed data scarcity issues at the time and successfully rendered results comparable and fueled competitive research, it does not accurately represent signed languages in the real world.",
"On the other hand, the Public DGS Corpus (Hanke et al., 2020) is an open-domain (1) dataset consisting of 50 hours of natural signing (4) by 330 native signers from various regions in Germany (5,6), annotated with glosses, HamNoSys and German translations (7), meeting all but two requirements we suggest.",
"We train a gloss-to-text sign language translation transformer (Yin and Read, 2020b) on both datasets.",
"On RWTH-PHOENIX-Weather 2014T, we obtain 22.17 BLEU on testing; on Public DGS Corpus, we obtain a mere 3.2 BLEU.",
"Although Transformers achieve encouraging results on RWTH-PHOENIX-Weather 2014T (Saunders et al., 2020b; Camgoz et al., 2020a), they fail on more realistic, open-domain data.",
"These results reveal that firstly, for real-world applications, we need more data to train such types of models, and secondly, while available data is severely limited in size, less data-hungry and more linguistically-informed approaches may be more suitable.",
"This experiment reveals how it is crucial to use data that accurately represent the complexity and diversity of signed languages to precisely assess what types of methods are suitable, and how well our models would deploy to the real world.",
"Challenges of Data Collection Collecting and annotating signed data inline with the ideal requires more resources than speech or text data, taking up to 600 minutes per minute of an annotated signed language video (Hanke et al., 2020).",
"Moreover, annotation usually require a specific set of knowledge and skills, which makes recruiting or training qual-ified annotators challenging.",
"Additionally, there is little existing signed language data in the wild that are open to use, especially from native signers that are not interpretations of speech.",
"Therefore, data collection often requires significant efforts and costs of on-site recording as well.",
"Automating Annotation To collect more data that enables the development of deployable SLP models, one useful research direction is creating tools that can simplify or automate parts of the collection and annotation process.",
"One of the largest bottleneck in obtaining more adequate signed language data is the amount of time and scarcity of experts required to perform annotation.",
"Therefore, tools that perform automatic parsing, detection of frame boundaries, extraction of articulatory features, suggestions for lexical annotations, and allow parts of the annotation process to be crowdsourced to non-experts, to name a few, have a high potential to facilitate and accelerate the availability of good data.",
"Finally, when working with signed languages, it is vital to keep in mind who this technology should benefit, and what they need.",
"Researchers in SLP must honor that signed languages belong to the Deaf community and avoid exploiting their language as a commodity (Bird, 2020).",
"Solving Real Needs Many efforts in SLP have developed intrusive methods (e.g. requiring signers to wear special gloves), which are often rejected by signing communities and therefore have limited real-world value.",
"Such efforts are often marketed to perform sign language translation when they, in fact, only identify fingerspelling or recognize a very limited set of isolated signs at best.",
"These approaches oversimplify the rich grammar of signed languages, promote the misconception that signs are solely expressed through the hands, and are considered by the Deaf community as a manifestation of audism, where it is the signers who must make the extra effort to wear additional sensors to be understood by non-signers (Erard, 2017).",
"In order to avoid such mistakes, we encourage close Deaf involvement throughout the research process to ensure that we direct our efforts towards applications that will be adopted by signers, and do not make false assumptions about signed languages or the needs of signing communities.",
"Building Collaboration Deaf collaborations and leadership are essential for developing signed language technologies to ensure they address the community's needs and will be adopted, and that they do not rely on misconceptions or inaccuracies about signed language (Harris et al., 2009; Kusters et al., 2017).",
"Hearing researchers cannot relate to the deaf experience or fully understand the context in which the tools being developed would be used, nor can they speak for the deaf.",
"Therefore, we encourage the creation of a long-term collaborative environment between signed language researchers and users, so that deaf users can identify meaningful challenges, and provide insights on the considerations to take, while researchers cater to the signers' needs as the field evolves.",
"We also recommend reaching out to signing communities for reviewing papers on signed languages, to ensure an adequate evaluation of this type of research results published at ACL venues.",
"There are several ways to connect with Deaf communities for collaboration: one can seek deaf students in their local community, reach out to schools for the deaf, contact deaf linguists, join a network of researchers of sign-related technologies 6 , and/or participate in deaf-led projects.",
"We urge the inclusion of signed languages in NLP.",
"We believe that the NLP community is well-positioned, especially with the plethora of successful spoken language processing methods coupled with the recent advent of computer vision tools for videos, to bring the linguistic insight needed for better signed language models.",
"We hope to see an increase in both the interests and efforts in collecting signed language resources and developing signed language tools while building a strong collaboration with signing communities.",
"We would like to thank Marc Schulder, Claude Mauk, David Mortensen, Chaitanya Ahuja, Sid-dharth Dalmia, Shruti Palaskar and Graham Neu-big as well as the anonymous reviewers for their helpful feedback and insightful discussions."
] | [
"abstain",
"method",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"other"
] |
[
"Many efforts of research are devoted to semantic role labeling (SRL) which is crucial for natural language understanding.",
"Supervised approaches have achieved impressing performances when large-scale corpora are available for resource-rich languages such as English.",
"While for the low-resource languages with no annotated SRL dataset, it is still challenging to obtain competitive performances.",
"Cross-lingual SRL is one promising way to address the problem, which has achieved great advances with the help of model transferring and annotation projection.",
"In this paper, we propose a novel alternative based on corpus translation, constructing high-quality training datasets for the target languages from the source gold-standard SRL annotations.",
"Experimental results on Universal Proposition Bank show that the translation-based method is highly effective, and the automatic pseudo datasets can improve the target-language SRL performances significantly.",
"Semantic role labeling (SRL), which aims to capture the high-level meaning of a sentence, such as who did what to whom , is an underlying task for facilitating a broad range of natural language processing (NLP) tasks (Shen and Lapata, 2007; Liu and Gildea, 2010; Genest and Lapalme, 2011; Gao and Vogel, 2011; Wang et al., 2015; Khan et al., 2015).",
"Currently, the majority of research work on SRL is dedicated to the English language, due to the availability of large quantity of labeled data.",
"With this regard, cross-lingual SRL, especially the one transferring the advantage of the source language with affluent amount of resources (e.g., English) to the target language where the labeled data is scarce or even not available, is of great importance Corresponding author.",
"(Kozhevnikov and Titov, 2013; He et al., 2019; Aminian et al., 2019).",
"Previous work on cross-lingual SRL can generally be divided into two categories: model transferring and annotation projection.",
"The former builds cross-lingual models on language-independent features such as cross-lingual word representations and universal POS tags which can be transferred into target languages directly (McDonald et al., 2013; Swayamdipta et al., 2016; Daza and Frank, 2019).",
"The latter bases on a large-scale parallel corpus between the source and target languages where the source-side sentences are annotated with SRL tags automatically by a source SRL labeler, and then the source annotations are projected onto the target-side sentences in accordance of word alignments (Yarowsky et al., 2001; Hwa et al., 2005; van der Plas et al., 2011; Kozhevnikov and Titov, 2013; Pado and Lapata, 2014; Akbik et al., 2015).",
"In addition, the annotation projection can be combined with model transferring naturally.",
"Particularly, the projected SRL tags in annotation projection could contain much noise because of the source-side automatic annotations.",
"A straightforward solution is the translation-based approach, which has been demonstrated effective for cross-lingual dependency parsing (Tackstrom et al., 2012; Rasooli and Collins, 2015; Guo et al., 2016; Zhang et al., 2019).",
"The key idea is to translate the gold-standard source training data into target language side by translation directly, avoiding the problem of the low-quality source annotations.",
"Fortunately, due to recent great advances of neural machine translation (NMT) (Bahdanau et al., 2015; Wu et al., 2016), this approach could have great potentials for cross-lingual transferring.",
"To this end, in this paper, we study the translation-based method for cross-lingual SRL.",
"Figure 1 illustrates the differences between previous approaches.",
"Sentences of the source language training corpus are translated into the target language, and then the source SRL annotations are projected into the target side, resulting in a set of high-quality target language SRL corpus, which is used to train the target SRL model.",
"Further, we merge the gold-standard source corpus and the translated target together, which can be regarded as a combination of the translation-based method and the model transferring.",
"Our baseline is a simple BiLSTM CRF model by using multilingual contextualized word representations (Peters et al., 2018; Devlin et al., 2019).",
"For a better exploration of the blended corpus, we adopt a parameter generation network (PGN) to enhance the BiLSTM module, which can capture the language differences effectively (Platanios et al., 2018; Jia et al., 2019).",
"We conduct experiments based on Universal Proposition Bank corpus (v1.0) (Akbik et al., 2015; Akbik and Li, 2016) over seven languages.",
"First, we verify the effectiveness of our method on the single-source SRL transferring, where the English language is adopted as the source language and the remaining are used as the target languages.",
"Results show that the translation-based method is highly effective for cross-lingual SRL, and the performances are further improved when PGN-BiLSTM is used.",
"Further, we conduct experiments on the multi-source SRL transferring, where for each target language all the remaining six languages are used as the source languages.",
"The same tendencies as the single-source setting can be observed.",
"We conduct detailed analysis work for both settings to understand our proposed method comprehensively.",
"We present the first work of the translation-based approach for unsupervised cross-lingual SRL.",
"We build a high-quality of pseudo training corpus for a target language, and then verify the effectiveness of the corpus under a range of settings.",
"We take advantage of the multilingual contextualized word representations, and strengthen the multilingual model training with PGN-BiLSTM model.",
"There exists extensive work for cross-lingual transfer learning (van der Plas et al., 2011; Kozhevnikov and Titov, 2013; Pado and Lapata, 2014; Rasooli and Collins, 2015; Tiedemann and Agic, 2016; Zhao et al., 2018; Chen et al., 2018, 2019; Aminian et al., 2019).",
"Model transferring and annotation projection are two mainstream categories for the goal.",
"The first category aims to build a model based on the source language corpus, and then adapt it to the target languages (Yarowsky et al., 2001; Hwa et al., 2005; Tiedemann, 2015).",
"The second category attempts to produce a set of automatic training instances for the target language by a source language model and a number of parallel sentences, and then train a target model on the dataset (Bjorkelund et al., 2009; McDonald et al., 2013; Lei et al., 2015; Swayamdipta et al., 2016; Mulcaire et al., 2018; Daza and Frank, 2019).",
"For cross-lingual SRL, annotation projection has received the most attention (Pado and Lapata, 2014).",
"A range of strategies have been proposed to enhance the SRL performance of the target language, including improving the projection quality (Tiedemann, 2015), joint learning of syntax and semantics (Kozhevnikov and Titov, 2013), iterative bootstrapping to reduce the influence of noise target corpus (Akbik et al., 2015), and joint translation and SRL (Aminian et al., 2019).",
"Our work is mainly inspired by the recent work of treebank translation of cross-lingual dependency parsing (Tiedemann et al., 2014; Tiedemann, 2015; Rasooli and Collins, 2015; Guo et al., 2016; Tiedemann and Agic, 2016; Conneau et al., 2018; Zhang et al., 2019), which is referred to as the translation-based approaches.",
"These approaches directly project the gold-standard annotation into the target side, alleviating the problem of erroneous source annotations in standard annotation projection.",
"In addition, we combine the approach with model transferring, which has been concerned lit-tle for cross-lingual SRL.",
"The model transferring 1 https://github.com/scofield7419/ XSRL-ACL under Apache License 2.0.",
"benefits much from the recent advance of crosslingual contextualized word representations (He et al., 2019).",
"The development of universal annotation schemes for a variety of NLP tasks can greatly facilitate cross-lingual SRL, including POS tagging (Petrov et al., 2012), dependency parsing (Mc-Donald et al., 2013; Przepiorkowski and Patejuk, 2018), morphology (Sylak-Glassman et al., 2015) and SRL (Aminian et al., 2019).",
"Our work makes use of the publicly available Universal Proposition Bank (UPB) (Akbik et al., 2015; Akbik and Li, 2016), which annotates the predicate and semantic roles following the frame and role schemes of the English Proposition Bank 3.0 (Kingsbury and Palmer, 2002; Palmer et al., 2005).",
"(A0) (get.01) (A1) (A0) (get.01) (A1) (AM-MOD) (AM-MOD) (A0) (A1) (AM-TMP) (come.01) (AM-DIR) (A0) (A1) (AM-TMP) (come.01) you should get a cocker spaniel sie sollten einen cockerspaniel bekommen you love it when I come over du liebst es wenn ich rberkomme (AM-LOC) (A0) (provoke.01) (A0) (A1) (provoke.01) US assault provoked the battle US-angriff provozierte die schlacht (A1)",
"(A0) (get.01) (A1) (A0) (get.01) (A1) (AM-MOD) (AM-MOD) (A0) (A1) (AM-TMP) (come.01) (AM-DIR) (A0) (A1) (AM-TMP) (come.01) you should get a cocker spaniel sie sollten einen cockerspaniel bekommen you love it when I come over du liebst es wenn ich rberkomme (AM-LOC) (A0) (provoke.01) (A0) (A1) (provoke.01) US assault provoked the battle US-angriff provozierte die schlacht (A1)",
"Supervised SRL models are also closely related to our work (He et al., 2017, 2018a; Xia et al., 2019).",
"A great deal of work attempts for an end-to-end solution with sophistical neural networks, detecting the predicates as well as the corresponding argument roles in one shot (He et al., 2017; Tan et al., 2018; Li et al., 2019).",
"Also there exist a number of studies which aims for adapting various powerful features for the task (Strubell et al., 2018; Li et al., 2018).",
"In this work, we exploit a multilingual PGN-BiLSTM model (Jia et al., 2019) with contextualized word representations (He et al., 2019), which can obtain state-of-the-art performance for cross-lingual SRL.",
"We induce automatic target data from the gold-standard source data by full translation and then project the SRL predicates and arguments into their corresponding words by aligning, producing the final translated SRL corpus for the target language automatically.",
"The method has been demonstrated effective for cross-lingual dependency parsing (Tiedemann et al., 2014; Tiedemann, 2015; Tiedemann and Agic, 2016; Zhang et al., 2019).",
"Compared with annotation projection, we can ensure the annotation quality at the source side, thus higher quality target corpus is also expected.",
"In addition, dependency-based SRL could benefit more by this method, as only predicate words and their arguments are required to be projected into the target side, while dependency parsing should concern all sentential words.",
"The overall process is accomplished by two steps: translating and projecting.",
"Translating.",
"First, we use a state-of-the-art translation system to produce the target translations for the sentences of the source SRL data.",
"Give a source sentence e 1 e n , we translate it into f 1 f m of the target language.",
"It is worth noting that the recent impressive advances in NMT (Bahdanau et al., 2015; Wu et al., 2016) facilitate our work greatly, which enables our method to have high-quality translations.",
"Projecting.",
"Then we incrementally project the corresponding predicates or arguments of a source sentence e 1 e n to its target f 1 f m .",
"We adopt two kinds of information to assist the projection: (1) the alignment probabilities a ( f j | e i ) from the source word e i into f j , which can be calculated by a word-alignment tool, and (2) the POS tag distributions p ( t | f j ) of the target sentential words, which can be derived from a supervised target language POS tagger, where i [1 , n ] , j [1 , m ] , and t denotes an arbitrary POS tag.",
"We focus on SRL-related words of the source sentence only, and perform the process gradually at the predicate level.",
"For each predicate in a sentence, we collect the predicate word as well as its role words, and then project their role labels into the target sentence.",
"Formally, for each of these words (i.e., e i ), we have the SRL role tag r e i as well as its POS tag t e i , both of which have been already annotated in the UPB.",
"First, we find its target word f j with the highest alignment probability, regarding the word f j as the corresponding projection carrying the semantic role r e i .",
"Then we calculate the confidence score of this projection by the following formula: score( e i f j , r e i ) = a ( f j | e i ) p ( t e i | f j ) , (1) which is a joint probability of word alignment corresponding and POS tag consistency.",
"The one-one target-source alignment",
"2(a) is the ideal condition of the projection.",
"However, there could be many-to-one cases for the given words, leading to semantic role conflicts at the target language words.",
"For these cases, we take precedence for the predicate projections, and otherwise keep only the highest confidence projections.",
"Figure",
"2(b) shows a predicate-argument conflict example, where the predicate projection is reserved, and Figure",
"2(c) shows an argument-argument conflict example where the projection with the higher confidence score is reserved.",
"Finally, we set a threshold value to remove low confidence projections.",
"If the confidence score of a predicate projection is below , all the roles of this predicate are removed as well.",
"For the argument projections whose confidence is below , we remove the single arguments directly, with no influence on the other projections.",
"In this work, we focus on dependency-based SRL, recognizing semantic roles for a given predicate (He et al., 2017).",
"The task can be treated as a standard sequence labeling problem, and a simple multi-layer BiLSTM-CRF model is exploited here, which has archived state-of-the-art performance with contextualized word representations (He et al., 2018b; Xia et al., 2019; He et al., 2019).",
"In particular, we adapt the model to better support multilingual inputs by using a PGN module on the BiLSTM (Hochreiter and Schmidhuber, 1997).",
"Figure 3 shows the overall architecture.",
"Given an input sentence s = w 1 w n of a specific language L and w p ( p denotes the position) is the predicate word, we use three sources of features to represent each word: (1) the word form, (2) the",
"where t 1 t n is the universal POS tag sequence for the input sentence.",
"For the POS tags and the predicate indicators, we use the embedding method to obtain their vectorial representations.",
"We compare three kinds of word form representations for cross-lingual SRL: (1) multilingual word embeddings, (2) multilingual ELMo representation (Peters et al., 2018), and (3) multilingual BERT representation (Devlin et al., 2019).",
"Note that we use the averaged vector of the inner-word piece representations from BERT outputs as the full word representation.",
"We employ the PGN-BiLSTM (Platanios et al., 2018; Jia et al., 2019) to encode the input sequence x 1 x n , which is first introduced for cross-domain transfer learning to capture domain difference.",
"Here we use it for the multilingual setting aiming to model the language characteristics.",
"Compared with the vanilla BiLSTM module, PGN-BiLSTM dynamically selects the language-aware parameters for BiLSTM.",
"Let V be the flat-tened vector of all the parameters of a BiLSTM cell, the language-aware VL is produced by: VL = WPGN e L , (3) where WPGN denotes the parameters of vanilla BiLSTM part in the PGN-BiLSTM, including the weights of the input, forget, output gates and the cell modules, and e L is the embedding representation of language L .",
"The mechanism of parameter generation of PGN-BiLSTM is illustrated in Figure 4.",
"Following, we derive module parameters from BiLSTM Params: Parameter Generation Network Params: Figure 4: The mechanism of the PGN-BiLSTM.",
"VL to compute the BiLSTM outputs.",
"The overall process can be formalized as: h 1 h n = PGN-BiLSTM ( x 1 x n , e L ) = BiLSTM VL ( x 1 x n ) (4) which differs from the vanilla BiLSTM in that e L is one extra input to obtain BiLSTM parameters.",
"Specifically, we adopt a 3-layer bidirectional PGN-LSTM as the encoder.",
"Given the encoder output h 1 h n for sentence s = w 1 w n , we use CRFs (Lafferty et al., 2001) to compute the probability of each candidate output y = y 1 y n :",
"where W and T are the parameters of CRFs, and Z is a normalization factor for probability calculation.",
"The Viterbi algorithm is used to search for the highest-probability output SRL tag sequence.",
"Our experiments are based on the Universal Proposition Bank (UPB, v1.0) 2 , which is built upon Universal Dependency Treebank (UDT, v1.4) 3 and Proposition Bank (PB, v3.0) 4 .",
"In UPB, consistent dependency-based universal SRL annotations are constructed across all languages.",
"In particular, we assemble the English SRL dataset based on the English EWT subset from the UDT v1.4 and the English corpus in PB v3.0.",
"Finally, we choose a total of seven languages as our datasets, including English (EN) and German (DE) of the IE.German 2 https://github.com/System-T/ UniversalPropositions 3 https://lindat.mff.cuni.cz/ repository/xmlui/handle/11234/1-1827 4 http://propbank.github.io/ Fam.",
"family, French (FR), Italian (IT), Spanish (ES) 5 and Portuguese (PT) of the IE.Romance family, and Finnish (FI) of the Uralic family.",
"Table 1 shows the data statistics in detail.",
"We focus on unsupervised cross-lingual SRL, assuming that no gold-standard target-language SRL corpus is available.",
"Our goal is to construct pseudo training datasets by corpus translation from the gold-standard source-language SRL datasets.",
"The Google Translation System 6 is adopted for sentence translation, and the fastAlign toolkit (Dyer et al., 2013) is used to obtain word alignments.",
"In order to obtain accurate word alignment, we collect a set of parallel corpora to augment the training dataset of fastAlign.",
"7 The universal POS tags of the translated sentences are produced by supervised monolingual POS taggers, which are trained on the corresponding UDT v1.4 datasets, respectively.",
"8 5.3 Settings Multi-lingual word representations.",
"As mentioned in Section 4.1, we investigate three kinds of multilingual word representations: (1) Word Embedding (Emb): MUSE is exploited to align all monolingual fastText word embeddings into a universal space (Lample et al., 2018).",
"9 (2) ELMo: A blended dataset 10 of the seven languages is used to train multilingual ELMo (Mulcaire et al., 2019).",
"5 We merge the Spanish and Spanish-AnCora as one.",
"6 https://translate.google.com/ , Oct. 1 2019 7 http://opus.nlpl.eu/ , Europarl v8.",
"8 A simple BiLSTM-CRF POS tagging model with monolingual ELMo representations is used, which can achieve accuracies of 96.54%(EN), 97.15%(DE), 94.42%(FR), 97.21%(IT), 94.12%(ES), 95.86%(PT) and 92.16%(FI), respectively.",
"9 https://github.com/facebookresearch/ MUSE 10 CoNLL2017 corpus: https://lindat.mff.cuni.",
"(3) BERT: the official released multilingual BERT (base, cased version) is used directly (Devlin et al., 2019).",
"11 Hyperparameters.",
"For SRL translation, there is only one hyperparameter, the projection confidence threshold , for filtering low-quality translated SRL sentences.",
"Figure 5 shows the performances in the preliminary experiments for each languages under different .",
"Accordingly, we set universally for all languages to 0 .",
"4 .",
"For the neural SRL models, the dimension sizes of multilingual word embeddings, ELMo and BERT are 300, 1024 and 768, respectively.",
"The POS tag, predicate-indicator and language ID embedding sizes are 100, 100 and 32, respectively.",
"The hidden size of LSTM is set to 650.",
"We exploit online training with a batch size of 50, and the model parameters are optimized by using the Adam algorithm with an initial rate of 0.0005.",
"The training is performed over the whole training dataset without early-stopping for 80 iterations on bilingual transfer, and 300 iterations on multilingual transfer.",
"Baselines.",
"In order to test the effectiveness of our PGN model, we compare it with several baselines as well.",
"First, we denote our model by using the vanilla BiLSTM instead as BASIC , and in particular, this model is exploited for all monolingual training all through this work.",
"Further, we adopt two much stronger baselines, the MoE model proposed by Guo et al. (2018) and the MAN-MoE model proposed by Chen et al. (2019), respectively.",
"Both the two models are designed to train a model effectively based on corpora from multiple languages, similar to our PGN model.",
"Evaluation.",
"We use the F1 score as the major metric to measure the model performance for each 11 https://github.com/google-research/ bert Model DE FR IT ES PT FI Avg SRC Emb 42.7 51.0 42.6 40.1 43.9 30.0 41.7 BERT 43.2 53.1 44.4 41.2 44.2 31.6 43.0 ELMo 46.8 54.6 43.0 42.1 46.1 33.9 44.4 TGT Emb 49.4 51.3 45.5 48.4 46.9 38.7 46.7 BERT 53.0 54.3 49.1 51.3 48.8 41.1 49.6 ELMo 54.6 55.3 49.7 53.6 49.8 43.9 51.1 SRC & TGT (ELMo) BASIC 59.2 61.7 55.1 58.3 53.7 47.6 55.8 PGN 65.0 64.8 58.7 62.5 56.0 54.5 60.3 MoE 63.2 63.3 56.7 60.3 55.0 50.6 58.2 MAN-MoE 64.3 65.3 57.1 62.8 55.2 52.3 59.4 Table 2: Results of cross-lingual transfer from English.",
"target language.",
"Each model is trained five times and the averaged value is reported.",
"We conduct sig-nificance tests by using the Dan Bikel's randomized parsing evaluation comparator 12 .",
"We first conduct experiments on cross-lingual transfer from the English source to the rest of the other six target languages, respectively, which has been a typical setting for cross-lingual investigations (Wang et al., 2019).",
"The results are summarized in Table 2. We list the F-scores by using only the source corpus ( SRC), only the translated target corpus ( TGT) and the mixture corpus of source and target ( SRC & TGT), comparing the performances of different multilingual word representations as well as different multilingual SRL models.",
"Multilingual word representations.",
"First, we evaluate the effectiveness of the three different multilingual word representations exploited.",
"We compare their performances under two settings, by using S RC and T GT corpus, respectively.",
"According to the results, we find that the multilingual contextualized word representations (i.e. BERT and ELMo) are better in both two settings, which is consistent with previous studies (Mulcaire et al., 2019; Schuster et al., 2019).",
"Interestingly, the multilingual BERT performs worse than the ELMo, which can be explained by that the ELMo representation is pre-trained based on the corpus which involves in the focused seven languages.",
"This indicates that the official released multilingual BERT can be further improved, since monolingual BERT has been demonstrated to produce better performances than 12 http://www.cis.upenn.edu/dbikel/ software.html#comparator ELMo (Tenney et al., 2019).",
"Translated target.",
"Next, We consider taking the translated target as only the training data to examine the effectiveness of the pseudo datasets.",
"As shown in Table 2, we find that the translated datasets can bring significantly better performances than the source baseline overall languages, resulting in an averaged F1 score increase of 51 .",
"1 44 .",
"4 = 6 .",
"7 .",
"The results demonstrate that corpus translation is one effective way for crosslingual SRL.",
"The observation is in line with the previous work for cross-lingual dependency parsing (Tiedemann and Agic, 2016; Zhang et al., 2019).",
"By direct gold-standard corpus translation, the produced pseudo training data can not only remain high-quality SRL annotations but also capture the language divergences effectively, which leads to better performance than the source baseline model.",
"Combining source and pseudo target.",
"Further, we combine the pseudo translated target corpus with the source language corpus together to train the target SRL models.",
"According to the numbers in Table 2, we see that further gains can be achieved for all languages, where the averaged improvement is 55.8-51.1=4.7 ( BASIC is used for a fair compari-son).",
"Note that since several source sentences are filtered during translation which might be the reason for the gains, we make a fairer comparison off-the-line by setting =0 (i.e., no sentence filtering).",
"Similar gains can be achieved still.",
"Considering that the translated sentences are semantically equal to their counterparts in the gold-standard source, the possible reasons could be two hands: (1) the translated sentences may be biased in linguistic expression due to the data-driven translation models, (2) the discarded conflicted annotations in corpus translation are important, which are complementary to our model.",
"Language-aware encoder.",
"Finally, we investigate the effectiveness of PGN-BiLSTM module, which is exploited to capture language-specific information when the mixture corpus of both source and target datasets are used for training.",
"As shown in Table 2, we can see that the language-aware encoder by PGN can boost the F1 scores significantly, achieving an averaged improvement by 60.3-55.8=4.5.",
"In addition, we report the results of MoE and MAN-MoE , respectively, which also exploit the language information.",
"All the results demonstrate the usefulness of language-specific informa-Model EN DE FR IT ES PT FI Avg SRC Emb 50.3 49.2 52.4 44.9 46.7 51.0 36.4 47.3 BERT 51.8 50.6 54.0 45.3 51.3 51.8 38.1 49.0 ELMo 53.6 51.6 56.7 51.3 57.4 52.6 39.7 51.8 TGT Emb 56.5 51.6 55.2 47.1 50.0 53.2 40.4 50.6 BERT 59.8 55.5 57.0 52.6 54.3 56.6 44.0 54.3 ELMo 60.7 57.8 59.9 54.8 56.7 58.8 46.9 56.5 SRC & TGT (ELMo) BASIC 61.9 64.8 60.3 56.4 61.1 63.1 50.7 59.8 PGN 65.7 68.8 66.1 64.8 68.7 69.2 58.6 66.0 MoE 63.2 67.8 63.1 62.6 65.2 67.5 54.2 63.4 MAN-MoE 64.0 68.5 67.2 65.7 67.5 69.0 57.5 65.6 Table 3: Cross-lingual transfer with multiple sources.",
"Further, we investigate the setting of multi-source transfer learning, where all other languages except a given target language are used as the source languages, aiming to study the effectiveness of our translation-based method comprehensively.",
"Overall performances.",
"The results on multiple source SRL transferring are shown in Table 3. Generally, the results share similar tendencies with the single-source cross-lingual transfer from the source English, where the multilingual ELMo performs the best, the SRL models trained on the translated target datasets show better performances than those trained with the source datasets, and the mixture corpus with both source and target language datasets bring the best performances, which can be further improved by our final PGN model with language-aware encoders.",
"We compare the PGN model with the MoE and MAN-MoE as well, showing slightly better performances, which indicates the effectiveness of the PGN-BiLSTM module.",
"In addition, we can see that multi-source models outperform the single-source models in all cases, which is intuitive and consistent with previous studies (Lin et al., 2019).",
"Fine-grained bilingual transfer.",
"Following, we investigate the individual bilingual SRL transferring by examining the performance of each source-target language pair, aiming to uncover which language benefits a target most and trying to answer whether all source languages are useful for a target language.",
"Table 4 shows the results, where the cross-lingual models are trained on the mixture corpus of the source and translated target datasets.",
"First, we can see that the languages belonging to a single family can benefit each other greatly, bringing better performances than the other languages in the majority of cases (i.e., ENDE, FRITES PT).",
"Second, the multi-source transfer as indicated by All is able to obtain better performances across all languages, which further demonstrates its advantages over the single-source transfer.",
"Further, we look into the PGN model in detail, aiming to understand their capabilities of modeling linguistic-specific information.",
"We examine it by simply visualizing the language ID embeddings e L of each source-target language pair, respectively, where their Euclidean distances are depicted.",
"Intuitively, better performance can be achieved if the distance between the target and the source languages is closer.",
"Figure 6 shows the heatmap matrix.",
"We can see the overall tendency is highly similar to the results in Table 4, which is consistent with our intuition.",
"Here we conduct detailed analysis to understand the gains from the translated target datasets.",
"We select three representative languages for analysis, including German (DE), French (FR) and Finnish (FI), one language for each family, and compare 55 65 75 45 55 65 DE FR FI 30 45 60 DE FR FI 30 45 60 A0 A1 A2 AM-TMP SRC TGT SRC+TGT(sgl) SRC+TGT(mul) Figure 7: Performances on different argument label.",
"four models mainly, including three models (i.e., S RC, T GT and S RC & TGT with PGN ) of the single-source transfer from English and the final PGN model of multi-source transfer.",
"Performances by the SRL roles.",
"First, we investigate the cross-lingual SRL performances in terms of SRL Roles.",
"We select four representative roles for comparison, including A0 ( Agent ), A1 ( Patient ), A2 ( Instrument, Benefactive, Attribute ) and AM-TMP ( Temporal ), and report their F1 scores.",
"Figure 7 shows the results.",
"As a whole, the role A0 achieves the best F1 scores across all languages and all models, A1 ranks the second, and A2 and AM-TMP are slightly worse.",
"The tendency could be accounted for by the distribution of these labels, where A0 is the most frequent and A2 and AM-TMP have lower frequencies than A0 and A1 .",
"The second possible reason could be due to that the majority of the A0 and A1 words are notional words which could be more easily transferred by cross-lingual models.",
"In addition, we can see that the tendencies across different models for all three languages and all labels are identical, where multi-source transfer performs the best, single-source S RC+TGT ranks the second and our baseline model is the last.",
"The observation is consistent with the overall tendency, demonstrating the stability and also further verifying the effectiveness of our proposed models.",
"Performances by the distances to the predicate.",
"Second, we study the SRL performances in terms of the distance to the predicate word.",
"Intuitively, long-distance relations are more difficult, thus we expect that the SRL performance would decrease as the distance increases, as SRL actually detects the relationship between the role words and their 30 50 70 30 50 70 1-2 3-4 4-6 7-9 9 15 40 65 DE FR FI SRC TGT SRC+TGT(sgl) SRC+TGT(mul) Figure 8: Performances by surface distance between predicates and arguments.",
"predicates.",
"Figure 8 shows the F1 scores.",
"First, for all the settings we can see that the SRL performance drops by longer distances, which confirms our intuition.",
"In addition, the tendency between different models is the same as the overall results, demonstrating the effectiveness of our method.",
"We proposed a translation-based alternative for cross-lingual SRL.",
"The key idea is to construct high-quality datasets for the target languages by corpus translation from the gold-standard SRL annotations of the source languages.",
"In addition, we combined the gold-standard source SRL corpora and the pseudo translated target corpora together to enhance the cross-lingual SRL models.",
"We investigated cross-lingual SRL models with different kinds of multilingual word representations.",
"Further, we presented a PGN-BiLSTM encoder to better exploit the mixture corpora of different languages.",
"Experimental results on the UPB v1.0 dataset show that the translation-based method is an effective method for cross-lingual SRL transferring.",
"Significant improvements can be achieved by using the translated datasets for all selected languages, including both single-source and multi-source transfer.",
"Experiment analysis is offered to understand the proposed method in depth.",
"This work is supported by the National Natural Science Foundation of China (No.61772378 and 61602160), the National Key Research and Development Program of China (No.2017YFC1200500),",
"the Research Foundation of Ministry of Education of China (No.18JZD015), and the Major Projects of the National Social Science Foundation of China (No.11&ZD189)."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"objective",
"abstain",
"method",
"abstain",
"objective",
"objective",
"result",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"method",
"other",
"other",
"other",
"method",
"other",
"other",
"abstain",
"other",
"other",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"other",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"other",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"other",
"other"
] |
[
"This paper studies zero-shot cross-lingual transfer of vision-language models.",
"Specifically, we focus on multilingual text-to-video search and propose a Transformer-based model that learns contextual multilingual multimodal embeddings.",
"Under a zero-shot setting, we empirically demonstrate that performance degrades significantly when we query the multilingual text-video model with non-English sentences.",
"To address this problem, we introduce a multilingual multimodal pre-training strategy, and collect a new multilingual instructional video dataset (Multi-HowTo100M) for pre-training.",
"Experiments on VTT show that our method significantly improves video search in non-English languages without additional annotations.",
"Furthermore, when multilingual annotations are available, our method outperforms recent baselines by a large margin in multilingual text-to-video search on VTT and VATEX; as well as in multilingual text-to-image search on Multi30K.",
"Our model and Multi-HowTo100M is available at http://github.com/berniebear/ Multi-HT100M 1 Introduction One of the key challenges at the intersection of computer vision (CV) and natural language processing (NLP) is building versatile vision-language models that not only work in English, but in all of the world's approximately 7,000 languages.",
"Since collecting and annotating task-specific parallel multimodal data in all languages is impractical, a framework that makes vision-language models generalize across languages is highly desirable.",
"One technique that has shown promise to greatly improve the applicability of NLP models to new languages is zero-shot cross-lingual transfer , where models trained on a source language are applied Equal contribution.",
"as-is to a different language without any additional annotated training data (Tackstrom et al., 2012; Klementiev et al., 2012; Cotterell and Heigold, 2017; Chen et al., 2018; Neubig and Hu, 2018).",
"In particular, recent techniques for cross-lingual transfer have demonstrated that by performing unsupervised learning of language or translation models on many languages, followed by downstream task fine-tuning using only English annotation, models can nonetheless generalize to a non-English language (Wu and Dredze, 2019a; Lample and Con-neau, 2019; Huang et al., 2019a; Artetxe et al., 2020; Hu et al., 2020).",
"This success is attributed to the fact that many languages share a considerable amount of underlying vocabulary or structure.",
"At the vocabulary level, languages often have words that stem from the same origin, for instance, desk in English and Tisch in German both come from the Latin discus.",
"At the structural level, all languages have a recursive structure, and many share traits of morphology or word order.",
"For cross-lingual transfer of vision-language models, the visual information is clearly an essential element.",
"To this end, we make an important yet under-explored step to incorporate visual-textual relationships for improving multilingual models (De-vlin et al., 2019; Artetxe et al., 2020).",
"While spoken languages could be different, all humans share similar vision systems, and many visual concepts can be understood universally (Sigurdsson et al., 2020; Zhang et al., 2020).",
"For example, while is termed cat for an English speaker and chat for a French speaker; they understand similarly.",
"We leverage this observation to learn to associate sentences in different languages with visual concepts for promoting cross-lingual transfer of vision-language models.",
"In this work, we focus on multilingual text-to-video search tasks and propose a Transformer-based video-text model to learn contextual multilingual multimodal representations.",
"Our vanilla model yields state-of-the-art performance in multilingual text video search when trained with multilingual annotations.",
"However, under the zero-shot setting, rather surprisingly, there is a significant performance gap between English and non-English queries (see 5.5 for details).",
"To resolve this problem, motivated by recent advances in large-scale language model (Artetxe et al., 2020) and multimodal pre-training (Lu et al., 2019; Miech et al., 2019; Patrick et al., 2020), we propose a multilingual multimodal pre-training (MMP) strategy to exploit the weak supervision from large-scale multilingual text-video data.",
"We construct the Multilingual-HowTo100M dataset, that extends the English HowTo100M (Miech et al., 2019) dataset to contain subtitles in 9 languages for 1.2 million instructional videos.",
"Our method has two important benefits.",
"First, compared to pre-training on English-video data only, pre-training on multilingual text-video data exploits the additional supervision from a variety of languages, and therefore, enhances the search performance on an individual language.",
"Second, by exploiting the visual data as an implicit pivot at scale, our methods learns better alignments in the multilingual multimodal embedding space ( e . g ., cat--chat), which leads to improvement in zero-shot cross-lingual transfer ( e . g ., from cat-to chat) of vision-language models.",
"In our experiments on VTT (Xu et al., 2016) and VATEX (Wang et al., 2019), our method yields state-of-the-art English video search performance.",
"For zero-shot cross-lingual transfer, the proposed multilingual multimodal pre-training improves English-video pre-training by 2 2 .",
"5 in average R@1 across 9 languages.",
"Additionally, when trained with in-domain multilingual annotations as other baselines, our method outperforms them by a large margin in multilingual text video search on VATEX and text image search on Multi30K (El-liott et al., 2016).",
"To summarize, we make the following contributions: (1) We propose a transformer-based videotext model that learns contextual multilingual multimodal representations (3.1).",
"(2) We empirically demonstrate that vision-language models, unlike NLP models, have limited zero-shot cross-lingual transferrability.",
"(5.5).",
"(3) We introduce the multilingual multimodal pre-training strategy and construct a new Multi-HowTo100M dataset (4) for pre-training to improve zero-shot cross-lingual capability of vision-language models.",
"(4) We demonstrate the effectiveness of our approach, by achieving state-of-the-art multilingual text video search performance in both the zero-shot (5.5) and fully supervised setup (5.6).",
"Cross-lingual representations.",
"Early work on learning non-contextual cross-lingual representations used either parallel corpora (Gouws and Sgaard, 2015; Luong et al., 2015) or a bilingual dictionary to learn a transformation (Faruqui and Dyer, 2014; Mikolov et al., 2013).",
"Later approaches reduced the amount of supervision using self-training (Artetxe et al., 2017).",
"With the advances in monolingual transfer learning (McCann et al., 2017; Howard and Ruder, 2018; Peters et al., 2018; Devlin et al., 2019), multilingual extensions of pre-trained encoders have been proven effective in learning deep contextual cross-lingual representations (Eriguchi et al., 2017; Lample and Conneau, 2019; Wu and Dredze, 2019b; Siddhant et al., 2020; Pires et al., 2019; Pfeiffer et al., 2020).",
"We extend prior work to incorporate visual context.",
"Video-text representations.",
"The HowTo100M dataset (Miech et al., 2019) has attracted significant interest in leveraging multimodal pre-training for text video search (Korbar et al., 2020), captioning (Iashin and Rahtu, 2020), and unsupervised translation via image-based (Surs et al., 2020; Huang et al., 2020b) and video-based (Sig-urdsson et al., 2020) alignment.",
"This work studies a challenging and unexplored task: Zero-shot cross-lingual transfer of vision-language models.",
"Unlike prior image/video-text work that utilizes RNN (Dong et al., 2019; Chen et al., 2020a; Burns et al., 2020; Kim et al., 2020) and inter-modal contrastive objectives (Sigurdsson et al., 2020; Liu et al., 2019; Huang et al., 2019b; Patrick et al., 2021), we employ Transformers to learn contextual multilingual multimodal representations and uniquely models cross-lingual instances.",
"Moreover, we build Multi-HowTo100M, the largest text-video dataset for multilingual multimodal pre-training.",
"Cross-lingual Transfer.",
"Cross-lingual transfer has proven effective in many NLP tasks including dependency parsing (Schuster et al., 2019), named entity recognition (Rahimi et al., 2019), sentiment analysis (Barnes et al., 2019), document classification (Schwenk and Li, 2018), and question an-mBERT 3D-CNN TP TP a man performs shot put un hombre realiza lanzamiento de bala A tt e n t i o n A tt e n t i o n A tt e n t i o n A tt e n t i o n A tt e n t i o n A tt e n t i o n A tt e n t i o n 1 D-CNN 1 D-CNN 1 D-CNN 1 D-CNN 3DCNN Time GRU a man shot put GRU GRU GRU 1 D-CNNA tt e n t i o n A tt e n t i o n A tt e n t i o n A tt e n t i o n A tt e n t i o n L i n e a r 1 D-CNNA tt e n t i o n A tt e n t i o n A tt e n t i o n A tt e n t i o n L i n e a r A tt e n t i o n A tt e n t i o n A tt e n t i o n A tt e n t i o n A tt e n t i o n A tt e n t i o n A tt e n t i o n 1 D-CNN 1 D-CNN 1 D-CNN 1 D-CNN 3DCNN Time GRU a man shot put GRU GRU GRU 1 D-CNNA tt e n t i o n A tt e n t i o n A tt e n t i o n A tt e n t i o n A tt e n t i o n L i n e a r 1 D-CNNA tt e n t i o n A tt e n t i o n A tt e n t i o n A tt e n t i o n L i n e a r A tt e n t i o n A tt e n t i o n A tt e n t i o n A tt e n t i o n A tt e n t i o n A tt e n t i o n A tt e n t i o n 1 D-CNN 1 D-CNN 1 D-CNN 1 D-CNN 3DCNN Time GRU a man shot put GRU GRU GRU 1 D-CNNA tt e n t i o n A tt e n t i o n A tt e n t i o n A tt e n t i o n A tt e n t i o n L i n e a r 1 D-CNNA tt e n t i o n A tt e n t i o n A tt e n t i o n A tt e n t i o n L i n e a r time mBERT TP Contrastive attraction Contrastive repulsion Inter-modal Cross-lingual Intra-modal Transformer Pooling (TP) !",
"We utilize intra-modal , inter-modal , and conditional cross-lingual contrastive objectives to align ( x, v, y ) where x and y are the captions or transcriptions in different languages of a video v .",
"TP: Transformer pooling head.",
"swering (Lewis et al., 2020; Artetxe et al., 2020).",
"Recently, XTREME (Hu et al., 2020) was proposed to evaluate the cross-lingual transfer capabilities of multilingual representations across a diverse set of NLP tasks and languages.",
"However, a comprehensive evaluation of multilingual multimodal models on zero-shot cross-lingual transfer capabilities is still missing.",
"To our best knowledge, we are the first work that investigates and improves zero-shot cross-lingual transfer of vision-language models.",
"We consider the problem of learning multilingual multimodal representations from a corpus C of video-text pairs { ( x i , v i ) } Ci =1 , where v i is a video clip and x i is its corresponding text (caption or transcription) that is written in one of K languages.",
"Our goal is to learn a shared multilingual text encoder c x = ( x ) and a video encoder c v = ( v ) , both of which project the input to a shared D dimensional embedding space c v , c t RD , where semantically similar instances ( i .",
"e",
"., paired ( x i , v i ) ) are closer to each other than the dissimilar ones ( i .",
"e",
"., ( x i , v j ) , i (cid:54) = j ).",
"In the following, we denote a batch of multilingual text-video samples as B = { ( x i , v i ) } Bi =1 } where B C .",
"Figure 1 gives an overview of the proposed method.",
"Our text encoder consists of a multilingual Transformer ( e .",
"g .",
"multilingual BERT (Devlin et al., 2019)) and a text Transformer pooling head (ex-plained below).",
"Similarly, our video encoder consists of a 3D-CNN ( e .",
"g .",
"R(2+1)D network (Tran et al., 2018)) and a video Transformer pooling head.",
"We use these multilingual multimodal Transformers to encode text and video for alignment.",
"Unlike prior multilingual text-image models (Gella et al., 2017; Kim et al., 2020; Huang et al., 2019b) that utilize word embeddings and RNNs, our multilingual text encoder is built on a multilingual Transformer that generates contextual multilingual representations e x RN D to encode a sentence x containing N words.",
"We employ an additional 2-layer Transformer which we will call a Transformer pooling head (TP) as it serves as a pooling function to selectively encode variable-length sentences and aligns them with the corresponding visual content.",
"We use the first output token of the second Transformer layer as the final sentence representation.",
"Precisely, we set c x = Trans (2) x ( query=key=value= e x )[0] where Trans (2) x is a 2-layer stack of Transformers (Vaswani et al., 2017) with e x as the (query,key,value) in the multihead attention.",
"Note that we use the same text encoder to encode sentences in all languages.",
"For encoding videos, our model uses pre-trained 3D-CNNs that encode spatial-temporal context in a video.",
"For a M -second video v , we apply R(2+1)D (Tran et al., 2018) and S3D (Miech et al., 2020) networks to its frames, concatenate network outputs, and apply a linear layer to encode the visual input, e v RM D , to our model.",
"Similarly to the text part, we employ a two-layer Transformer as the pooling head to encode videos with different lengths into fixed-length representations.",
"Formally, we set c v = Trans (2) v ( query=key=value= e v )[0] .",
"Since videos are typically long and have a high frame rate ( e . g ., 30 fps), it is infeasible to update 3D-CNNs simultaneously and therefore, we use pre-extracted video features.",
"Our model is parameterized by = mBERT Trans x Trans v .",
"For learning multimodal representations, the common practice is to minimize a contrastive objective to map the associated (video, text) embeddings",
"to be near to each other in a shared embedding space.",
"The inter-modal max-margin triplet loss has been widely studied in video-text (Yu et al., 2018; Liu et al., 2019) and image-text (Kim et al., 2020; Burns et al., 2020; Huang et al., 2019b) research.",
"In this work, we generalize and model all inter-modal , intra-modal , and cross-lingual instances with a noise contrastive estimation objective (NCE) (Gut-mann and Hyvarinen, 2010; van den Oord et al., 2018; Chen et al., 2020b).",
"Inter-modal NCE.",
"Let X and V denote the subsets of the sampled sentences in multiple languages and videos in B , respectively.",
"And let s ( a, b ) = a T b (cid:107) a (cid:107)(cid:107) b (cid:107) be the cosine similarity measure.",
"We use an (inter-modal) NCE objective defined as: L ( X , V ) = 1 BB (cid:88) i =1 log (cid:96) NCE (( x i ) , ( v i )) , (1) where (cid:96) NCE ( c x , c v ) = e s ( c x ,c v ) e s ( c x ,c v ) + (cid:80) ( x (cid:48) ,v (cid:48) ) N e s ( c x (cid:48) ,c v (cid:48) ) (2) In inter-modal NCE, L inter = L ( X , V ) , the noise N is a set of negative video-text pairs sampled to enforce the similarity of paired ones are high and and those do not are low.",
"Following Miech et al. (2020), we set the negatives of ( x i , v i ) as other x j and v j , j (cid:54) = i in B .",
"Intuitively, inter-modal NCE draws paired (se-mantically similar) instances closer and pushes apart non-paired (dissimilar) instances.",
"Note that we do not distinguish language types in X and the sentences in all possible languages will be drawn towards their corresponding videos in the shared multilingual text-video embedding space.",
"Intra-modal NCE.",
"Beyond cross-modality matching, we leverage the intra-modal contrastive objective to learn and preserve the underlying structure within the video and text modality.",
"For example, Corgi should be closer to Husky than Balinese .",
"Prior image-text work (Gella et al., 2017; Huang et al., 2019c) utilizes a triplet loss to maintain such neighborhood relationships.",
"Inspired by recent success in self-supervised image and video representation learning (Yalniz et al., 2019; Ghadiyaram et al., 2019), our model leverages intra-modal NCE that constrains the learned representations to be invariant against noise and to maintain the within-modality structure simultaneously.",
"We minimize the following intra-modal NCE loss: L intra = L ( X , X m ) + L ( V , V m ) , (3) where X m and V m are the noised version of the original sentences and videos.",
"For noising, we randomly mask 5% of the multilingual text tokens and video clips.",
"We optimize our model by min L inter + L intra (4) 3.3 When Visually-Pivoted Multilingual Annotations Are Available In many multilingual multimodal datasets, there are sentences in different languages that describe a shared visual context.",
"For example, 10 English and 10 Chinese descriptions are available for each video in VATEX.",
"With these visually-pivoted (weakly paralleled) sentences ( x, y ) , we further revise the contrastive objectives to leverage this additional supervisory signal.",
"Given a visually-pivoted corpus C p that contains all possible combination of visually-pivoted pairs { ( x i , v i , y i ) } C p i =0 , we sample batches B p = { ( x i , v i , y i ) } B p i =1 , B p C p and revise the contrastive objective as: L inter = L ( X , V ) + L ( Y , V ) (5) L intra = L ( X , X m ) + L ( Y , Y m ) + L ( V , V m ) (6) Visual-pivoted Cross-lingual NCE.",
"Inspired by Translation Language Modeling (TLM) in XLM (Lample and Conneau, 2019), we propose a multimodal TLM-like contrastive objective which promotes alignments of descriptions in different languages that describe the same video.",
"We use the intuition that conditioned on a video, the descriptions (need not to be translation pairs) in different languages would likely be semantically similar.",
"To this end, we set the cross-lingual NCE as: L cross = L ( X |V , Y|V ) (7) For visually-pivoted sentences, as shown in Fig. 1, we generate their representations conditioned on the video they describe.",
"We extend the key and value of multihead attention with the additional visual content e v and generate new c x | v and c y | v for matching.",
"Specifically, our model employs c x | v = Trans (2) x ( query= e x , key=value= e x || e v )[0] .",
"With the access to (visually-pivoted) multilingual annotations, we optimize our model by min L inter + L intra + L cross (8) yafries yaKifaransaunawezapia kuandamananayo tambinla voyaacompaarcon un poco de papas fritas und dann ziehen Sie es so fest wie mglich It will also be accompanied with a little of frenchfries What it is, is a heat gun and I got this for ten bucks 00:00:37.160 --> 00:00:48.860 we just made our six-sided coaster so and then pull it as tight as possible nous venonsde faire notrecaboteursix ctsdoncceque 00:11:36.380 --> 00:11:44.390 00:08:35.289 --> 00:08:39.300 von Pommes Frites knnenSie es auchmitbegleiten khoaitychinbncngcthikmvin 00:01:16.290 --> 00:01:21.210 Figure 2: Video clips and the corresponding multilingual subtitles in Multi-HowTo100M.",
"At the inference time, we simply apply c x = ( x ) and c v = ( v ) to encode multilingual text queries and videos.",
"For text-to-video search, we sort videos according to their cosine similarity scores to the text query.",
"As large-scale pre-training has been shown important in recent NLP and vision-language models, we construct the Multilingual HowTo100M dataset (Multi-HowTo100M) to facilitate research in multilingual multimodal learning.",
"The original HowTo100M (Miech et al., 2019) dataset is a large-scale video collection of 1.2 million instructional videos (around 138 million clips/segments) on YouTube, along with their automatic speech recognition (ASR) transcriptions as the subtitles.",
"For each video in HowTo100M, we crawl and collect the multilingual subtitles provided by YouTube, which either consist of user-generated subtitles or those generated by Google ASR and Translate in the absence of user-generated ones.",
"Essentially, we collect video subtitles in 9 languages: English ( en ), German ( de ), French ( fr ), Russian ( ru ), Spanish ( es ), Czech ( cz ), Swahili ( sw ), Chinese ( zh ), Vietnamese ( vi ).",
"At the time of dataset collection (May 2020), there are 1.1 million videos available, each with subtitles in 7-9 languages.",
"The video length ranges from 1 minute to more than 20 minutes.",
"We utilize Multi-HowTo100M for multilingual multimodal pre-training to exploit the weak supervision from large-scale multilingual text-video data.",
"In Fig. 2, we provide a visualization of few instances sampled in Multi-HowTo100M with the corresponding video frame, timestamp, and transcriptions in different languages.",
"Please refer to Appendix for more details and dataset statistics.",
"In this section, we first describe our experimental setup (5.1-5.3).",
"In 5.4, we conduct ablation studies to validate the effectiveness of proposed multilingual text-video model .",
"With the best models at hand, we investigate their zero-shot cross-lingual transferability in 5.5, where we showcase that the proposed multilingual multimodal pre-training serves as the key facilitator.",
"We then verify the superior text video search performance of our method under the monolingual, multilingual, and cross-modality settings in 5.6.",
"MSR-VTT (VTT) (Xu et al., 2016) contains 10K videos, where each video is annotated with 20 captions.",
"Additionally, we created pseudo-multilingual data by translating the English captions into 8 languages with off-the-shelf machine translation models.",
"1 We use the official training set (6.5K videos) and validation set (497 videos).",
"We follow the protocol in Miech et al. (2019); Liu et al. (2019) which evaluates on text video search with the 1K testing set defined by Yu et al. (2018).",
"VATEX (Wang et al., 2019) is a multilingual (Chi-nese and English) video-text dataset with 35K videos.",
"Five ( en , zh ) translation pairs and five non-paired en and zh descriptions are available for each video.",
"We use the official training split (26K videos) and follow the testing protocol in Chen et al. (2020a) to split the validation set equally into 1.5K validation and 1.5K testing videos.",
"Multi30K (Elliott et al., 2016) is a multilingual extension of Flickr30K (Young et al., 2014).",
"For each image, there are two types of annotations available: (1) One parallel (English,German,French,Czech) translation pair and (2) five English and five Ger-1 https://marian-nmt.github.io/ man descriptions collected independently.",
"The training, validation, and testing splits contain 29K, 1K, and 1K images respectively.",
"For the video backbone, we use a 34-layer, R(2+1)-D (Tran et al., 2018) network pre-trained on IG65M (Ghadiyaram et al., 2019) and a S3D (Miech et al., 2020) network pre-trained on HowTo100M.",
"We pre-extract video features and concatenate the two 3D-CNN outputs to form e x RM 1024 as a video input.",
"For the text backbone, we use multilingual BERT (mBERT) (Devlin et al., 2019) or XLM-Roberta-large (XLM-R) (Artetxe et al., 2020), where the latter achieves near SoTA zero-shot cross-lingual transfer performance for NLP tasks.",
"Following Hu et al. (2020), instead of using the top layer, we output the 12-th layer in XLM-R and mBERT.",
"For vision-language tasks, we freeze layers below 9 as this setup empirically performs the best.",
"Our model employs a 2-layer Transformer with 4-head attention for the text and video transformer pooling (TP) modules.",
"The embedding dimension D is set to 1024.",
"We use the Adam (Kingma and Ba, 2015) optimizer and a 0 .",
"0002 learning rate to train our model for 16 (pre-training) and 10 (fine-tuning) epochs.",
"The softmax temperature in all noise contrastive objectives is set to 0 .",
"1 .",
"We use Multi-HowTo100M for multilingual multimodal pre-training (MMP).",
"For each video, we randomly sample the start and end time to construct a video clip.",
"For a video clip, we randomly sample one language type each time from 9 languages and use the consecutive ASR transcriptions that are closest in time to compose (text-video) pairs for training.",
"For simplicity and speed purposes, we follow the training protocol of XLM-R to pre-train on a multilingual corpus wihtout using translation pairs, i .",
"e",
"., we use multilingual text-video pairs ( x, v ) but no translation pairs from Multi-HowTo100M and utilize only interand intra-modal NCE (Eq. 1-3) for MMP.",
"We fine-tune our model on VTT, VATEX, and Multi30K to evaluate on text video search tasks.",
"In the zero-shot cross-lingual transfer experiments, we use only English-video data and fine-tune with Eq.",
"1-3.",
"We then test the model with non-English queries.",
"When annotations in additional languages are available (by humans in VATEX and Multi30K; Text-B Video-B R@1 R@5 R@10 XLM-R S3D 19.5 49.0 62.8 XLM-R R(2+1)D 19.0 49.5 63.2 XLM-R R+S 21.0 50.6 63.6 mBERT R+S 19.9 49.8 62.5 Table 1: Text and Video (B)ackbone comparison.",
"by MT models ( i . e ., translate-train ) in VTT), we utilize all available multilingual annotations ( i . e ., fully supervised) and iterate over all possible ( x, v, y ) pairs to train with Eq.",
"5-7 to demonstrate the strong performance target for evaluating zero-shot cross-lingual transfer on VTT and to compare fairly with other fully-supervised baselines in multilingual text video search on VATEX and Multi30K.",
"We report the standard recall at k (R@ k ) metrics (higher is better).",
"In this section, we ablate and compare different text/video encoders, Transformer model architectures, and learning objectives for English video search on VTT.",
"Text and Video Encoders.",
"Table 1 compares different text and video encoder backbones.",
"For the visual encoders, while R(2+1)D outperforms S3D, the simple concatenation ( i . e ., early-fusion) of their output features provides a 1 .",
"5 2 .",
"0 improvement in R@1.",
"For the text encoder, XLM-R significantly outperforms mBERT.",
"Transformer Pooling.",
"Table 2 compares various configurations of the proposed Transformer pooling module.",
"We observe that a simple 2-layer Transformer achieves the best performance.",
"Weight Model en de fr cs zh ru vi sw es Avg mBERT 19.9 11.1 11.6 8.2 6.9 7.9 2.7 1.4 12.0 9.1 mBERT-MP 20.6 11.3 11.9 8.0 7.1 7.7 2.5 1.1 12.5 9.2 mBERT-MMP 21.8 15.0 15.8 11.2 8.4 11.0 3.7 3.4 15.1 11.7 XLM-R 21.0 16.3 17.4 16.0 14.9 15.4 7.7 5.7 17.3 14.7 XLM-R-MP 23.3 17.4 18.5 17.1 16.3 17.0 8.1 6.2 18.5 15.8 XLM-R-MMP 23.8 19.4 20.7 19.3 18.2 19.1 8.2 8.4 20.4 17.5 mBERT + translated VTT 19.6 18.2 18.0 16.9 16.2 16.5 8.4 13.0 18.5 16.1 mBERT-MMP + translated VTT 21.5 19.1 19.8 18.3 17.3 18.3 8.9 14.1 20.0 17.4 XLM-R + translated VTT 21.5 19.6 20.1 19.3 18.9 19.1 10.3 12.5 18.9 17.8 XLM-R-MMP + translated VTT 23.1 21.1 21.8 20.7 20.0 20.5 10.9 14.4 21.9 19.4 Table 4: Recall@1 of multilingual text video search on VTT.",
"Learning Objective.",
"From Table 3, the intra-modal contrastive objective is important for both NCE and Triplet loss.",
"In general, the NCE loss outperforms the Triplet loss.",
"The proposed inter-modal and intra-modal NCE objective achieves the best performance.",
"When captions in multiple languages are available, cross-lingual NCE additionally provides a consistent improvement.",
"Table 4 shows the multilingual text video search results on VTT.",
"With the best English-video models at hand (with either mBERT or XLM-R as the text backbone), we first investigate how well these models transfer to other non-English languages under the zero-shot setting.",
"We then analyze the benefit of the proposed multilingual multimodal pre-training.",
"The upper section shows the zero-shot results.",
"Unlike cross-lingual transfer in NLP tasks, employing multilingual Transformers in vision-language tasks apparently does not generalize well across languages.",
"For example, there is a significant drop in R@1 (19.9 11.1 (-44%) with mBERT, 21.0 16.3 (-24%) with XLM-R) when directly applying English-finetuned model to German video search.",
"For comparison, there is only a -10% degradation for XLM-R on en de cross-lingual transfer in XNLI (Conneau et al., 2018).",
"Multimodal (English-video) pre-training (MP) on HowTo100M only improves average R@1 (+0.1 or mBERT and +1.1 for XLM-R) compared to model-from-scratch.",
"In contrast, our proposed multilingual multimodal pre-training (MMP) is shown to be the key facilitator for zero-shot cross-lingual transfer.",
"MMP improves German Video search (11.1 15.0, +35% for mBERT, and 16.3 19.4, +20% for XLM-R) and achieves 2 .",
"6 2 .",
"8 improvement in average R@1.",
"We attribute the effectiveness of MMP to learning improved alignments between multilingual textual and visual context in the shared embedding space, as relatively balanced improvements between English video and non-English video is observed with fine-tuning.",
"Fig. 3 demonstrates the trend of R@1 while incrementally incorporating additional languages for MMP.",
"For XLM-R, the improvement in R@1 asymptotically converges when pre-training with more multilingual text-video pairs.",
"On the other hand, for zero-shot German video search, pretraining with more languages keeps improving the search performance, even though the additional language ( e . g ., French) is different from the target language ( i . e ., German).",
"The lower section of Table 4 shows the results of models fine-tuned with (synthesized) pseudo-multilingual annotations.",
"It can be regarded as the translate-train scenario, which serves as a strong performance target for evaluating zero-shot cross-lingual transfer, as discussed in (Lample and Conneau, 2019; Hu et al., 2020).",
"Both mBERT and XLM-R yield better performance across non-a soccer team walking out on the field 1 2 3 1 2 3 (0.69) (0.58) (0.53) (0.71) (0.47) (0.54) Rank 1 2 3 (0.52) (0.48) mt ngi n ng ang ni v d n khng gian adam (0.44) 1 2 3 (0.45) (0.42) (0.46) Figure 4: Qualitative multilingual ( en , ru , vi , zh ) text video search results on VTT.",
"English languages with the in-domain translated pseudo-multilingual annotations.",
"However, for English video search, a 0 .",
"7 degradation is observed compared to the zero-shot setting.",
"It is likely due to the noise in the translated captions.",
"Notably, there is still a performance gap between zero-shot and translate-train settings for models with mBERT.",
"In contrast, the gap is much smaller for models with XLM-R.",
"In the following sections, we refer Ours-MMP as our best model with XLM-R as the text backbone and compare it with other state-of-the-art methods.",
"Qualitative Results Fig. 4 shows the multilingual text video search results with Ours-MMP (VTT: en -only) on VTT under the zero-shot setup.",
"Note that only one shared English-finetuned model is used for text video search in all languages.",
"As demonstrated, the proposed model successfully retrieves the correct videos with English ( en ) and Russian ( ru ) queries.",
"The other top-ranked videos also share similar visual appearance to the correct one.",
"For zero-shot transferring of the English-finetuned model to distant languages such as Vietnamese ( vi ) and Chinese ( zh ), we observe that there is still limitation for our zero-shot models to understand abstract concepts ( e . g ., space project) and associate small objects ( e . g ., microphone) with the text queries in distant languages.",
"English Video Search on VTT.",
"Table 5 shows the comparison of English video models on VTT.",
"For a fair comparison to other baselines, our model fine-tunes only with the original English annotations on VTT.",
"The results show that our model outperforms other baselines by a large margin.",
"Specifically, our model achieves 8.9 R@1 improvement over the original HowTo100M model (Miech et al., 2019) and other recent baselines with pre-training on HowTo100M.",
"Using a smaller set of visual features and training on a smaller (6,513 vs. 9,000) training set (see footnote 2), our model also outperforms CE (Liu et al., 2019) with or without pre-training.",
"Multilingual Text-to-Video Search on VATEX.",
"Table 6 summarizes English-to-video and Chinese-to-video search performance on the VATEX dataset.",
"Under the zero-shot setting where we train with only English-video pairs, our model already outperforms other baselines.",
"However, a clear performance gap between English-to-video and Chinese-to-video search is observed, indicating that cross-lingual transfer to a distant language remains challenging even with XLM-R.",
"With the proposed MMP, the gap is significantly closed by 5.8/8.1/7.7 in R@1/5/10.",
"When in-domain human-annotated Chinese captions are available, the performance of our model can further be improved for both languages and our model yields new state-of-the-art performance.",
"(Footnote 2) CE uses 9,000 videos (the VTT training set and part of the exclusive testing set) for training, while the other baselines and our model in Table 5 are trained on the official VTT training set, which contains 6,513 videos.",
"Cross-Modality Transfer to Multi30K: From Video-Text to Image-Text.",
"To extend our study on zero-shot cross-lingual transfer for image-text tasks, we investigate the feasibility of transferring our video-text model across modalities.",
"We replace the 3D-CNN in the original video-text model with a 2D-CNN to encode the image.",
"In practice, following MHA-D (Huang et al., 2019b), we utilize a Faster-RCNN (Ren et al., 2015) pre-trained on Visual Genome (Krishna et al., 2016) to extract regional visual features.",
"Essentially, an image is encoded as e_v ∈ R^{M×H}, where M = 36 is the maximum number of visual objects in an image.",
"For models with MMP, we initialize their weights with the model pre-trained on Multi-HowTo100M.",
"To tackle the feature mismatch between 2D-CNN and 3D-CNN, we leverage a linear layer with a doubled learning rate to map 2D-CNN features to the same dimension as 3D-CNN features.",
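The dimension-bridging layer described above can be sketched as follows. All dimensions here (2048-d Faster-RCNN region features, an assumed 1024-d video-feature space) are illustrative assumptions rather than values from the paper, and the doubled learning rate is a training-time detail not shown:

```python
import numpy as np

# Illustrative (assumed) dimensions: 2048-d Faster-RCNN region features
# mapped into a 1024-d space shared with the 3D-CNN video branch.
D_2D, D_3D, M = 2048, 1024, 36  # M = max number of visual objects per image

rng = np.random.default_rng(0)
W = rng.standard_normal((D_2D, D_3D)) * 0.01  # learnable linear layer weights
b = np.zeros(D_3D)                            # learnable bias

regions = rng.standard_normal((M, D_2D))      # one image: M regional features
e_v = regions @ W + b                         # e_v now matches the video branch
```

In a real implementation this would be a trainable module whose parameters sit in a separate optimizer parameter group with twice the base learning rate.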
"Table 7 shows the results on Multi30K.",
"For zero-shot cross-lingual transfer, when trained from scratch (M30K: en -only), our model achieves comparable performance to MHA-D but lags in German image search since it only uses English annotations.",
"In Ours-MMP, pre-training improves all recall metrics even with the modality gap.",
"The average R@1 improvement is 3.2.",
"A larger gain is observed for (relatively) low-resource languages such as Czech.",
"Without using any Czech annotations, our zero-shot model with MMP achieves comparable Czech image search performance to SMALR (Burns et al., 2020), which uses 10 languages including Czech.",
"However, when transferring across modalities and using only English annotations, there are performance gaps between English-to-image and German/Czech-to-image search, implying that transferring models across modalities is feasible but remains challenging.",
"We consider zero-shot cross-modal cross-lingual transfer as our future work.",
"When trained with annotations in all 4 languages provided by Multi30K, for a fair comparison with the other baselines, our model outperforms all baselines by large margins in multilingual text-to-image search.",
"We have presented a multilingual multimodal pretraining (MMP) strategy, the Multi-HowTo100M dataset, and a Transformer-based text-video model for learning contextual multilingual multimodal representations.",
"The results in this paper have convincingly demonstrated that MMP is an essential ingredient for zero-shot cross-lingual transfer of vision-language models.",
"Meanwhile, there are many remaining challenges, such as resolving the performance gap between zero-shot and training with in-domain non-English annotations, as well as techniques to transfer varieties of vision-language models (e.g., VQA (Goyal et al., 2017), TVQA (Lei et al., 2020)) or visually-enhanced NLP models such as unsupervised multimodal machine translation (Huang et al., 2020b).",
"We believe the proposed methodology, and the corresponding resources we release, will be an important first step towards spurring more research in this direction.",
"This work is supported by the DARPA grants funded under the AIDA program (FA8750-18-2-0018) and the GAILA program (award HR00111990063) (P.Y.).",
"This work is also supported by EPSRC Centre for Doctoral Training in Autonomous Intelligent Machines & Systems [EP/L015897/1] (M.P.).",
"The authors appreciate Prahal Arora, Shengxin Zha, Polina Kuznetsova, Xu Hu, and Geoffrey Zweig for their suggestions on this work.",
"The authors would also like to thank the anonymous reviewers for their feedback."
] | [
"abstain",
"objective",
"objective",
"objective",
"result",
"result",
"other",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"objective",
"abstain",
"objective",
"objective",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"method",
"other",
"method",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"objective",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"objective",
"other",
"other",
"other",
"other"
] |
[
"Election manifestos document the intentions, motives, and views of political parties.",
"They are often used for analysing a party's fine-grained position on a particular issue, as well as for coarse-grained positioning of a party on the left–right spectrum.",
"In this paper we propose a two-stage model for automatically performing both levels of analysis over manifestos.",
"In the first step we employ a hierarchical multi-task structured deep model to predict fine- and coarse-grained positions, and in the second step we perform post-hoc calibration of coarse-grained positions using probabilistic soft logic.",
"We empirically show that the proposed model outperforms state-of-the-art approaches at both granularities, using manifestos from twelve countries written in ten different languages.",
"The adoption of NLP methods has led to significant advances in the field of computational social science (Lazer et al., 2009), including political science (Grimmer and Stewart, 2013).",
"Among a myriad of data sources, election manifestos are a core artifact in political analysis.",
"One of the most widely used datasets by political scientists is the Comparative Manifesto Project (CMP) dataset (Volkens et al., 2017), which contains manifestos in various languages, covering over 1000 parties across 50 countries, from elections dating back to 1945.",
"In CMP, a subset of the manifestos has been manually annotated at the sentence-level with one of 57 political themes, divided into 7 major categories.",
"Such categories capture party positions (FAVORABLE, UNFAVORABLE, or NEITHER) on fine-grained policy themes, and are also useful for downstream tasks, including calculating manifesto-level (policy-based) left–right position scores (Budge et al., 2001; Lowe et al., 2011; Daubler and Benoit, 2017). (Footnote 1: https://manifesto-project.wzb.eu/coding_schemes/mp_v5)",
"An example sentence from the Green Party of England and Wales 2015 election manifesto where they take an UNFAVORABLE position on MILITARY is: We would: Ensure that ... less is spent on military research.",
"Such manual annotations are labor-intensive and prone to annotation inconsistencies (Mikhaylov et al., 2012).",
"In order to overcome these challenges, supervised sentence classification approaches have been proposed (Verberne et al., 2014; Subramanian et al., 2017).",
"Other than the sentence-level labels, the manifesto text also has a document-level score that quantifies its position on the left–right spectrum.",
"Different approaches have been proposed to derive this score, based on alternate definitions of left–right (Slapin and Proksch, 2008; Benoit and Laver, 2007; Lo et al., 2013; Daubler and Benoit, 2017).",
"Among these, the RILE index is the most widely adopted (Merz et al., 2016; Jou and Dalton, 2017), and has been shown to correlate highly with other popular scores (Lowe et al., 2011).",
"RILE is defined as the difference between RIGHT and LEFT positions on (pre-determined) policy themes across sentences in a manifesto (Volkens et al., 2013); for instance, UNFAVORABLE position on MILITARY is categorized as LEFT .",
"RILE is popular in CMP in particular, as mapping individual sentences to LEFT/RIGHT/NEUTRAL categories has been shown to be less sensitive to systematic errors than other sentence-level class sets (Klingemann et al., 2006; Volkens et al., 2013).",
"Finally, expert survey scores are gaining popularity as a means of capturing manifesto-level political positions, and are considered to be context-and time-specific, unlike RILE (Volkens et al., 2013; Daubler and Benoit, 2017).",
"We use the Chapel Hill Expert Survey (CHES) (Bakker et al., 2015), which comprises aggregated expert surveys on the ideological position of various political parties.",
"Although CHES is more subjective than RILE, the CHES scores are considered to be the gold-standard in the political science domain.",
"In this work, we address both fine- and coarse-grained multilingual manifesto text policy position analysis, through joint modeling of the sentence-level classification and document-level positioning (or ranking) tasks.",
"We employ a two-level structured model, in which the first level captures the structure within a manifesto, and the second level captures context and temporal dependencies across manifestos.",
"Our contributions are as follows: we employ a hierarchical sequential deep model that encodes the structure in manifesto text for the sentence classification task; we capture the dependency between the sentence- and document-level tasks, and also utilize additional label structure (categorization into LEFT/RIGHT/NEUTRAL: Volkens et al. (2013)) using a joint-structured model; and we incorporate contextual information (such as political coalitions) and encode temporal dependencies to calibrate the coarse-level manifesto position using probabilistic soft logic (Bach et al., 2015), which we evaluate on the prediction of the RILE index or expert survey party position score.",
"Analysing manifesto text is a relatively new application at the intersection of political science and NLP.",
"One line of work in this space has been on sentence-level classification, including classifying each sentence according to its major political theme (1-of-7 categories) (Zirn et al., 2016; Glavas et al., 2017a), its position on various policy themes (Verberne et al., 2014; Biessmann, 2016; Subramanian et al., 2017), or its relative disagreement with other parties (Menini et al., 2017).",
"Recent approaches (Glavas et al., 2017a; Subramanian et al., 2017) have also handled multilingual manifesto text (given that manifestos span multiple countries and languages; see Section 5.1) using multilingual word embeddings.",
"At the document level, there has been work on using label count aggregation of (manually-annotated) fine-grained policy positions, as features for inductive analysis (Lowe et al., 2011; Daubler and Benoit, 2017).",
"Text-based approaches have used dictionary-based supervised methods, unsupervised factor-analysis-based techniques, and graph-propagation-based approaches (Hjorth et al., 2015; Bruinsma and Gemenis, 2017; Glavas et al., 2017b).",
"A recent paper closely aligned with our work is Subramanian et al. (2017), who address both the sentence- and document-level tasks jointly in a multilingual setting, showing that a joint approach outperforms previous approaches.",
"However, they do not exploit the structure of the text and use a much simpler model architecture (averages of word embeddings, versus our bi-LSTM encodings), and they do not leverage domain information and temporal regularities that can influence policy positions (Greene, 2016).",
"This work will act as a baseline in our experiments in Section 5.",
"Policy-specific position classification can be seen as related to target-specific stance classification (Mohammad et al., 2017), except that the target is not explicitly mentioned in most cases.",
"Secondly, manifestos have both fineand coarse-grained positions, similar to sentiment analysis (McDonald et al., 2007).",
"Finally, manifesto text is well structured within and across documents (based on coalition), has temporal dependencies, and is multilingual in nature.",
"In this section, we detail the first step of our two-stage approach.",
"We use a hierarchical bidirectional long short-term memory (bi-LSTM) model (Hochreiter and Schmidhuber, 1997; Graves et al., 2013; Li et al., 2015) with a multi-task objective for the sentence classification and document-level regression tasks.",
"A post-hoc calibration of the coarse-grained manifesto position is given in Section 4. Let D be the set of manifestos, where a manifesto d ∈ D is made up of L sentences, and a sentence s_i has T words: w_{i1}, w_{i2}, ..., w_{iT}.",
"The set D_s ⊆ D is annotated at the sentence level with positions on fine-grained policy issues (57 classes).",
"The task here is to learn a model that can:",
"(a) classify sentences according to policy issue classes; and",
"(b) score the overall document on the policy-based left–right spectrum (RILE), in an inter-dependent fashion.",
"Word encoder : We initialize word vector representations using a multilingual word embedding matrix, W e .",
"We construct W e by aligning the embedding matrices of all the languages to English, in a pair-wise fashion.",
"Bilingual projection matrices are built using pre-trained Fast-Text monolingual embeddings (Bojanowski et al., 2017) and a dictionary D constructed by translating 5000 frequent English words using Google Translate.",
"Given a pair of embedding matrices E (English) and O (Other), we use the singular value decomposition of O^T D E (= U Σ V^T) to get the projection matrix W = U V^T, since it also enforces monolingual invariance (Artetxe et al., 2016; Smith et al., 2017).",
"Finally, we obtain the aligned embedding matrix, W e , as OW .",
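A minimal sketch of this alignment step on toy data. The 300-d dimensionality follows the cited FastText embeddings; the random matrices stand in for real embedding tables, and the dictionary D reduces to row alignment of the two matrices here:

```python
import numpy as np

# Toy stand-ins for the real embedding matrices: rows are vectors for the
# 5000 dictionary word pairs, already aligned row-by-row (so D is implicit).
rng = np.random.default_rng(1)
n_pairs, dim = 5000, 300
E = rng.standard_normal((n_pairs, dim))  # English embeddings
O = rng.standard_normal((n_pairs, dim))  # other-language embeddings

# SVD of O^T E gives U, Sigma, V^T; the projection is W = U V^T.
U, _, Vt = np.linalg.svd(O.T @ E)
W = U @ Vt

aligned = O @ W  # rows of the aligned embedding matrix W_e

# W is orthogonal, which is what enforces monolingual invariance:
# distances within the other language are unchanged by the projection.
assert np.allclose(W @ W.T, np.eye(dim), atol=1e-8)
```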
"We use a bi-LSTM to derive a vector representation of each word in context.",
"The bi-LSTM traverses the sentence s_i in both the forward and backward directions, and the encoded representation for a given word w_{it} ∈ s_i is defined by concatenating its forward (→h_{it}) and backward (←h_{it}) hidden states, t ∈ [1, T].",
"Sentence model : Similarly, we use a bi-LSTM to generate a sentence embedding from the word-level bi-LSTM, where each input sentence s i is represented using the last hidden state of both the forward and backward LSTMs.",
"The sentence embedding is obtained by concatenating the hidden representations of the sentence-level bi-LSTM in both directions, h_i = [→h_i; ←h_i], i ∈ [1, L].",
"With this representation, we perform fine-grained classification (to one-of-57 classes), using a softmax output layer for each sentence.",
"We minimize the cross-entropy loss for this task, over the sentence-level labeled set D s D .",
"This loss is denoted LS .",
"Document model: To represent a document d, we use average-pooling over the sentence representations h_i and the predicted output distributions (y_i) of individual sentences, i.e., (Footnote 2: preliminary experiments suggested that this representation performs better than using either the hidden representations or just the output distributions.)",
"V_d = (1/L) Σ_{i∈d} [y_i; h_i].",
"The range of RILE is [−100, 100], which we scale to the range [−1, 1] and model using a final tanh layer.",
"We minimize the mean-squared error loss between the predicted score r̂_d and the actual RILE score r_d, denoted L_D: L_D = (1/|D|) Σ_{d=1}^{|D|} ||r̂_d − r_d||² (Equation (1)). Overall, the loss function for the joint model (Figure 1), combining L_S and L_D, is L_J = λ L_S + (1 − λ) L_D (Equation (2)), where 0 ≤ λ ≤ 1 is a hyper-parameter tuned on a development set.",
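As a toy numeric check of the two loss terms and their convex combination, the probabilities, scores, and λ below are illustrative only:

```python
import math

# Sentence task: cross-entropy over predicted class distributions.
def cross_entropy(dist, gold):
    return -math.log(dist[gold])

sent_preds = [([0.7, 0.2, 0.1], 0), ([0.1, 0.8, 0.1], 1)]  # (softmax, label)
L_S = sum(cross_entropy(p, y) for p, y in sent_preds) / len(sent_preds)

# Document task: mean-squared error between tanh output and scaled RILE.
doc_preds = [(0.3, 0.25), (-0.4, -0.5)]  # (predicted, gold), both in [-1, 1]
L_D = sum((r_hat - r) ** 2 for r_hat, r in doc_preds) / len(doc_preds)

lam = 0.3  # lambda, tuned on the development set
L_J = lam * L_S + (1 - lam) * L_D
```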
"3.1 Joint-Structured Model The RILE score is calculated directly from the sentence labels, based on mapping each label according to its positioning on policy themes, as LEFT , RIGHT and NEUTRAL (Volkens et al., 2013).",
"Specifically, 13 out of 57 classes are categorized as LEFT , another 13 as RIGHT , and the rest as NEUTRAL .",
"We employ an explicit structured loss which minimizes the deviation between sentence-level LEFT / RIGHT / NEUTRAL polarity predictions p and the document-level RILE score.",
"The motivation to do this is two-fold:",
"(a) enabling interaction between the sentence- and document-level tasks with a homogeneous target space (polarity and RILE); and",
"(b) since we have more documents with just RILE and no sentence-level labels (see footnote 3), augmenting an explicit semi-supervised learning objective could propagate down the RILE label to generate sentence labels that concord with the document score.",
"For the sentence-level polarity prediction (shown in Figure 1), we use cross-entropy loss over the sentence-level labeled set D s D , which is denoted as LSP .",
"The explicit structured sentence–document loss is given as: L_struc = (1/|D|) Σ_{d=1}^{|D|} ( (1/L_d) Σ_{i∈d} (p_i^right − p_i^left) − r_d )² (Equation (3))",
"(Footnote 3) Strictly speaking, even for these documents, sentence annotations were used to derive the RILE score, but the sentence-level labels were never made available.",
"where p_i^right and p_i^left are the predicted RIGHT and LEFT class probabilities for a sentence s_i (∈ d), r_d is the actual RILE score for the document d, and L_d is the length of each document d ∈ D. We augment the joint model's loss function (Equation (2)) with L_SP and L_struc to generate a regularized multi-task loss: L_T = L_J + α L_SP + β L_struc (Equation (4)), where α, β ≥ 0 are hyper-parameters which are, once again, tuned on the development set.",
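A toy computation of the structured loss and the combined objective. The per-sentence probabilities, RILE scores, the squared-deviation reading of the garbled equation, and the α, β values are all illustrative assumptions:

```python
# Each document: a list of (p_right, p_left) per sentence, plus its scaled
# RILE score r_d; the structured loss penalizes the squared deviation between
# the mean right-minus-left probability mass and r_d.
docs = [
    ([(0.6, 0.2), (0.5, 0.3)], 0.35),
    ([(0.1, 0.7), (0.2, 0.6)], -0.45),
]

L_struc = 0.0
for sents, r_d in docs:
    polarity = sum(pr - pl for pr, pl in sents) / len(sents)
    L_struc += (polarity - r_d) ** 2
L_struc /= len(docs)

# Regularized multi-task loss, with placeholder values for the other loss
# terms and illustrative hyper-parameters (assumed, not from the paper).
alpha, beta = 0.1, 0.7
L_J, L_SP = 0.09, 0.40
L_T = L_J + alpha * L_SP + beta * L_struc
```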
"We refer to the model trained with Equation (2) as Joint , and that trained with Equation (4) as Joint struc .",
"We leverage party-level information to enforce smoothness and regularity in manifesto positioning on the left–right spectrum (Greene, 2016).",
"For example, manifestos released by parties in a coalition are more likely to be closer in RILE score, and a party's position in an election is often a relative shift from its position in the previous election, so temporal information can provide smoother estimations.",
"To address this, we propose an approach using hinge-loss Markov random fields (HL-MRFs), a scalable class of continuous, conditional graphical models (Bach et al., 2013).",
"HL-MRFs have been used for many tasks including political framing analysis on Twitter (Johnson et al., 2017) and user stance classification on socio-political issues (Sridhar et al., 2014).",
"These models can be specified using Probabilistic Soft Logic (PSL) (Bach et al., 2015), a weighted first-order logical template language.",
"An example of a PSL rule is λ : P(a) ∧ Q(a, b) → R(b), where P, Q, and R are predicates, a and b are variables, and λ is the weight associated with the rule.",
"PSL uses soft truth values for predicates in the interval [0, 1].",
"The degree of ground rule satisfaction is determined using the Lukasiewicz t-norm and its corresponding co-norm as the relaxation of the logical AND and OR, respectively.",
"The weight of a rule indicates its importance in the HL-MRF probabilistic model, which defines a probability density function of the form: P(Y | X) ∝ exp( − Σ_{r=1}^{M} λ_r φ_r(Y, X) ), with φ_r(Y, X) = max{ l_r(Y, X), 0 }^{ρ_r} (Equation (5)), where φ_r(Y, X) is a hinge-loss potential corresponding to an instantiation of a rule, and is specified by a linear function l_r and an optional exponent ρ_r ∈ {1, 2}.",
"Note that the hinge-loss potential captures the distance to satisfaction.",
"4.2 PSL Model. Here we elaborate our PSL model (given in Table 1), based on coalition information, manifesto content-based features (manifesto similarity and right–left ratio), and temporal dependency.",
"Our target pos (calibrated RILE) is a continuous variable ∈ [0, 1], where 1 indicates that a manifesto occupies an extreme right position, 0 denotes an extreme left position, and 0.5 indicates the center.",
"Each instance of a manifesto and its party affiliation are denoted by the predicates Manifesto and Party .",
"Coalition: We model multi-relational networks based on regional coalitions within a given country (RegCoalition; see footnote 5), and also cross-country coalitions in the European Parliament",
"(Footnote 4) The degree of satisfaction for the example PSL rule P ∧ Q → R, using the Lukasiewicz co-norm, is min{2 − P − Q + R, 1}. From this, the distance to satisfaction is max{P + Q − R − 1, 0}, where P + Q − R − 1 indicates the linear function l_r.",
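The footnote's formulas can be checked directly; a small sketch with arbitrary truth values:

```python
# Lukasiewicz relaxation of the example rule P(a) AND Q(a, b) -> R(b),
# over soft truth values in [0, 1].
def satisfaction(p, q, r):
    # ~(P AND Q) OR R under the Lukasiewicz t-norm and co-norm
    return min(2.0 - p - q + r, 1.0)

def distance_to_satisfaction(p, q, r):
    # hinge-loss potential max{l_r, 0}, with l_r = P + Q - R - 1
    return max(p + q - r - 1.0, 0.0)

assert satisfaction(1.0, 1.0, 1.0) == 1.0                      # satisfied
assert abs(distance_to_satisfaction(1.0, 1.0, 0.2) - 0.8) < 1e-12  # violated
assert satisfaction(0.3, 0.4, 0.0) == 1.0                      # weak body
```

A true body with a false head gives the largest distance; a weak body makes the rule hold vacuously.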
"(Footnote 5: http://www.parlgov.org/) Table 1 (PSL model; values for Similarity, LwRightLeftRatio, and pos are obtained from the joint-structured model, Figure 1). PSL_coal, coalition features: Manifesto(x) ∧ Party(x, a) ∧ Manifesto(y) ∧ Party(y, b) ∧ SameElec(x, y) ∧ RegCoalition(a, b) ∧ pos(x) → pos(y), paired with its negated (¬pos) counterpart; the analogous pair with Recent(x, y) ∧ EUCoalition(a, b); and transitivity rules chaining two coalition links, e.g., Manifesto(x) ∧ Party(x, a) ∧ Manifesto(y) ∧ Party(y, b) ∧ Manifesto(z) ∧ Party(z, c) ∧ SameElec(x, y) ∧ SameElec(y, z) ∧ RegCoalition(a, b) ∧ RegCoalition(b, c) ∧ pos(x) → pos(z), again for both RegCoalition (SameElec) and EUCoalition (Recent), each with its negated counterpart. PSL_esim, similarity-based relational feature: Manifesto(x) ∧ Manifesto(y) ∧ Similarity(x, y) ∧ Recent(x, y) ∧ pos(x) → pos(y), and its negated counterpart. PSL_ploc, right–left ratio: Manifesto(x) ∧ LwRightLeftRatio(x) → pos(x), and its negated counterpart. PSL_temp, temporal dependency: Manifesto(x) ∧ Party(x, a) ∧ PreviousManifesto(x, a, t) ∧ pos(t) → pos(x), and its negated counterpart.",
"(EUCoalition).",
"We set the scope of interaction between manifestos (x and y) from a country to the same election (SameElec).",
"For manifestos across countries, we consider only the most recent manifesto ( Recent ) from each party ( y ), released within 4 years relative to x .",
"We use a logistic transformation of the number of times two parties have been in a coalition in the past (to get a value between 0 and 1), for both RegCoalition and EUCoalition .",
"We also construct rules based on transitivity for both the relational features, i.e., parties which have had common coalition partners, even if they were not allies themselves, are likely to have similar policy positions.",
"level label distributions), similar to the modeling intuition captured by Burford et al. (2015) in the context of congressional debate vote prediction.",
"For a pair of recent manifestos ( Recent ) we use the cosine similarity ( Similarity ) between their respective document vectors V d (Figure 1).",
"Right–left ratio: For a given manifesto, we compute the ratio of sentences categorized as RIGHT to all sentences, #RIGHT / (#RIGHT + #LEFT + #NEUTRAL), where the categorization of sentences is obtained using the joint-structured model (Equation (4)).",
"We also encode the location of sentence l_s in a document, by weighting the count of sentences for each class C by its location value Σ_{s∈C} log(l_s + 1) (referred to as loc_lr).",
"The intuition here is that the beginning parts of a manifesto tend to contain generic information such as a preamble, compared to later parts, which are more policy-dense.",
"We perform a logistic transformation of loc lr to derive the LwRightLeftRatio .",
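A small sketch of this feature on a toy document; the class sequence is invented, and positions l_s are 1-indexed:

```python
import math

# Toy document: polarity class per sentence, in order of position l_s.
classes = ["NEUTRAL", "NEUTRAL", "RIGHT", "LEFT", "RIGHT", "RIGHT"]

def weighted_count(target):
    # location-weighted count: a sentence at position l contributes log(l + 1)
    return sum(math.log(l + 1) for l, c in enumerate(classes, start=1)
               if c == target)

right = weighted_count("RIGHT")
total = right + weighted_count("LEFT") + weighted_count("NEUTRAL")
loc_lr = right / total                      # location-weighted right ratio
lw_ratio = 1.0 / (1.0 + math.exp(-loc_lr))  # logistic transform -> (0, 1)
```

Because later sentences get larger weights, a RIGHT-heavy ending raises loc_lr more than a RIGHT-heavy preamble would.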
"Temporal dependency : We capture the temporal dependency between a party's current manifesto position and its previous manifesto position ( PreviousManifesto ).",
"Other than for the look-up based random variables, the network is instantiated with predictions (for Similarity , LwRightLeftRatio and pos ) from the joint-structured model (Figure 1).",
"All the random variables, except pos (which is the target variable), are fixed in the network.",
"These values are then used inside a PSL model for collective probabilistic reasoning, where the first-order logic given in Table 1 is used to define the graphical model (HL-MRF) over the random variables detailed above.",
"Inference on the HL-MRF is used to obtain the most probable interpretation such that it satisfies most ground rule instances, i.e., considering the relational and temporal dependencies.",
"As our dataset, we use manifestos from CMP for European countries only, as in Section 5.5 we will validate the manifesto's overall position on the left-right spectrum, using the Chapel Hill Expert Survey (CHES), which is only available for European countries (Bakker et al., 2015).",
"From this, we sample 1004 manifestos from 12 European countries, written in 10 different languages: Danish (Denmark), Dutch (Netherlands), English (Ireland, United Kingdom), Finnish (Finland), French (France), German (Austria, Germany), Italian (Italy), Portuguese (Portugal), Spanish (Spain), and Swedish (Sweden).",
"Out of the 1004 manifestos, 272 are annotated with both sentence-level labels and RILE scores, and the remainder only have RILE scores (see Table 2 for further statistics).",
"There are some scenarios where a natural sentence is segmented into sub-sentences and annotated with different classes (Daubler et al., 2012).",
"Hence we use the NLTK sentence tokenizer, followed by heuristics from Daubler et al. (2012), to obtain sub-sentences.",
"Consistent with previous work (Subramanian et al., 2017), we present results with manually segmented and annotated test documents.",
"Sentence-level baseline approaches include:",
"BoW-NN: TF-IDF-weighted unigram bag-of-words representation of sentences (Biessmann, 2016), and monolingual training using a multi-layer perceptron (MLP) model.",
"BoT-NN : Similar to above, but trigram bag-of-words.",
"AE-NN : MLP model with average multilingual word embeddings as the sentence representation (Subramanian et al., 2017).",
"CNN : Convolutional neural network (CNN: Glavas et al. (2017a)) with multilingual word embeddings.",
"Bi-LSTM : Simple bi-LSTM over multilingual word embeddings, last hidden units are concatenated to form the sentence representation, and fed directly into a softmax sentence-level layer.",
"We evaluate two scenarios: (1) with a trainable embedding matrix W e ( Bi-LSTM(+up) ); and (2) without a trainable W e .",
"Document-level baseline approaches include: BoC : Bag-of-centroids (BoC) document representation based on clustering the word embeddings (Lebret and Collobert, 2014), fed into a neural network regression model.",
"HCNN : Hierarchical CNN, where we encode both the sentence and document using stacked CNN layers.",
"HNN : State-of-the-art hierarchical neural network model of Subramanian et al. (2017), based on average embedding representations for sentences and the document.",
"We present results evaluated under two different settings:",
"(a) an 80–20% random split averaged across 10 runs, to validate the hierarchical model (Section 5.3 and Section 5.4); and",
"(b) a temporal setting, where the train and test sets are split chronologically, to validate both the hierarchical deep model and especially the PSL approach, since we encode temporal dependencies (Section 5.5).",
"We present sentence-level results with an 80–20% random split in Table 3, stratified by country, averaged across 10 runs.",
"For Bi-LSTM , we found the setting with a trainable embedding matrix ( Bi-LSTM(+up) ) to perform better than the non-trainable case ( Bi-LSTM ).",
"Hence we use a similar setting for Joint and Joint struc .",
"We show the effect of λ (from Equation (2)) in Figure 2a, based on which we set λ = 0.3 hereafter.",
"With the chosen model, we study the effect of the structured loss (Equation (4)) by varying β with fixed α = 0.1, as shown in Figure 2b.",
"We observe that β = 0.7 gives the best performance, and varying α with β at 0.7 does not result in any further improvement (see Figure 2c).",
"Sentence-level results measured using F-measure, for the baseline approaches and the proposed models selected from Figure 2a (Joint) and Figures 2b and 2c (Joint struc), are given in Table 3. We also evaluate the special case of λ = 1, in the form of the sentence-only model Joint sent.",
"For the document-level task, results for overall manifesto positioning measured using Pearson's correlation (r) and Spearman's rank correlation (ρ) are given in Table 4. We also evaluate the hierarchical bi-LSTM model with the document-level objective only, Joint doc.",
"We observe that hierarchical modeling ( Joint sent , Joint and Joint struc ) gives the best performance for sentence-level classification for all the languages except Portuguese, on which it performs slightly worse than Bi-LSTM(+up) .",
"Also, Joint struc does not improve over Joint sent.",
"We perform further analysis to see the effect of joint-structured model on the sentence-level task under sparsely-labeled conditions in Section 5.4.",
"On the other hand, for the document-level task,",
"the joint model ( Joint ) performs better than Joint doc and all the baseline approaches.",
"Lastly, the joint-structured model ( Joint struc ) provides further improvement over Joint .",
"To understand the utility of joint modeling, especially given that there are more manifestos with document-level labels only than both sentence-and document-level labels, we compare the following two settings: (1) Joint struc , which uses additional manifestos with document-level supervision (RILE); and (2) Joint sent , which uses manifestos with sentence-level supervision only.",
"We vary the proportion of labeled documents at the sentence-level, from 10% to 80%, to study the effect under sparsely-labeled conditions.",
"Note that 80% is the maximum labeled training data under the cross-validation setting.",
"In other cases, a subset (say 10%) is randomly sampled for training.",
"From Figure 3, having more manifestos with document-level supervision demonstrates the advantage of semi-supervised learning: especially when the sentence-level supervision is sparse (≤ 40%), Joint struc performs better than Joint sent .",
"Finally, we present the results using PSL, which calibrates the overall manifesto position on the left-right spectrum, obtained using the joint-structured model ( Joint struc ).",
"As we evaluate the effect of temporal dependency, we use manifestos before 2008-09 for training (868 in total) and the later ones (until 2015, 136 in total) for testing.",
"This test set covers one recent set of election manifestos for most countries, and two for the Netherlands, Spain and the United Kingdom.",
"Table 5: Micro-averaged F-measure for manifestos released after 2008-09. Approach / F-measure: AE-NN 0.31; Bi-LSTM(+up) 0.36; Joint struc 0.42.",
"To avoid variance in right-to-left ratio and the target variable ( pos , initialized using Joint struc ) between the training and test sets, we build a stacked network (Fast and Jensen, 2008), whereby we estimate values for the training set using cross-validation across the training partition, and estimate values for the test-set with a model trained over the entire training data.",
"Note that we build the Joint struc model afresh using the chronologically split training set, and the parameters are tuned again using an 80-20 random split of the training set.",
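The stacked estimation described above can be sketched as follows; `stacked_estimates`, `kfold`, and `MeanModel`-style model objects are hypothetical helper names, not the authors' code, and any first-stage model with fit/predict methods would apply (following the stacking idea of Fast and Jensen (2008)):

```python
# Sketch of stacked estimation: training-set feature values come from
# out-of-fold predictions (avoiding leakage), while test-set values come
# from a model fit on the full training data.

def kfold(n, k):
    """Yield (train_idx, held_out_idx) pairs for k contiguous folds."""
    fold = (n + k - 1) // k
    for start in range(0, n, fold):
        held = list(range(start, min(start + fold, n)))
        train = [i for i in range(n) if i not in held]
        yield train, held

def stacked_estimates(model_factory, X_train, y_train, X_test, k=5):
    train_est = [None] * len(X_train)
    for tr, held in kfold(len(X_train), k):
        m = model_factory()
        m.fit([X_train[i] for i in tr], [y_train[i] for i in tr])
        for i in held:
            train_est[i] = m.predict(X_train[i])
    # Test-set estimates from a model trained over the entire training data.
    full = model_factory()
    full.fit(X_train, y_train)
    return train_est, [full.predict(x) for x in X_test]
```

The point of the cross-validated first pass is that the `pos` values seen by the second-stage PSL model during training have the same "predicted, not gold" character as the values it will see at test time.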
"For a consistent view of results for both the tasks (and stages), we provide micro-averaged results for sentence-classification with the competing approaches (from Table 3): AE-NN (Subramanian et al., 2017), Bi-LSTM(+up) , and Joint struc .",
"Results are presented in Table 5, noting that the results for a given method will differ from earlier due to the different data split.",
"For the document-level regression task, we also evaluate other approaches based on manifesto similarity and automated scaling with sentence-level policy positions. Cross-lingual scaling ( CLS ): a recent unsupervised approach for cross-lingual political speech text scoring (Glavas et al., 2017b), based on TF-IDF-weighted average word embeddings to represent documents, and a graph constructed using pair-wise document similarity.",
"Table 6: Manifesto regression task using the two-stage approach, reporting r and ρ against RILE and CHES: CLS 0.11 / 0.10 / 0.09 / 0.07; PCA 0.26 / 0.17 / 0.01 / 0.02; Joint struc 0.46 / 0.42 / 0.42 / 0.42; PSL coal 0.51 / 0.45 / 0.49 / 0.45; PSL coal+esim 0.52 / 0.47 / 0.50 / 0.46; PSL coal+esim+ploc 0.54 / 0.56 / 0.53 / 0.56; PSL coal+esim+ploc+temp 0.54 / 0.57 / 0.55 / 0.61.",
"Given two pivot texts (for left and right), a label propagation approach is used to position other documents.",
"PCA : apply principal component analysis (Gabel and Huber, 2000) to the distribution of sentence-level policy positions (56 classes, without 000), and use the projection onto the first principal component, which explains the maximum variance in sentence-level positions, as a latent manifesto-level position score.",
"Joint struc : We evaluate the scores obtained using Joint struc , which we calibrate using PSL.",
"We validate the calibrated position scores using both RILE and CHES scores (https://www.chesdata.eu/).",
"We use CHES 2010-14, and map each manifesto to the closest survey year with respect to its election date.",
"CHES scores are used only for evaluation and not during training.",
"We provide results in Table 6 by augmenting features for the PSL model (Table 1) incrementally.",
"We observed that the coalition-based feature, and the polarity of sentences together with their position information, improve the overall ranking ( r , ρ ).",
"Document similarity based relational feature provides only mild improvement (similarly to Burford et al. (2015)), and temporal dependency provides further improvement against CHES.",
"That is, combining content, network and temporal features provides the best results.",
"This work has been targeted at both fine- and coarse-grained manifesto text position analysis.",
"We have proposed a two-stage approach, where in the first step we use a hierarchical multi-task deep model to handle the sentence- and document-level tasks together.",
"We also utilize additional information on label structure, to augment an auxiliary structured loss.",
"Since the first step places the manifesto on the left-right spectrum using text only, we leverage context information, such as coalition and temporal dependencies, to calibrate the position further using PSL.",
"We observed that:",
"(a) a hierarchical bi-LSTM model performs best for the sentence-level classification task, offering a 10% improvement over the state-of-the-art approach (Subramanian et al., 2017);",
"(b) modeling the document-level task jointly, and also augmenting the structured loss, gives the best performance for the document-level task and also helps the sentence-level task under sparse supervision scenarios; and",
"(c) the inclusion of a calibration step with PSL provides significant gains in performance against both RILE and CHES, in the form of an increase in ρ from 0.42 to 0.61 with respect to CHES survey scores.",
"There are many possible extensions to this work, including:",
"(a) learning multilingual word embeddings with domain information; and",
"(b) modeling other policy related scores from text, such as support for EU integration.",
"We thank the anonymous reviewers for their insightful comments and valuable suggestions.",
"This work was funded in part by the Australian Government Research Training Program Scholarship, and the Australian Research Council."
] | [
"abstain",
"abstain",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"method",
"abstain",
"other",
"other",
"other",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"objective",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other"
] |
[
"It remains an open question whether incorporating external knowledge benefits commonsense reasoning while maintaining the flexibility of pretrained sequence models.",
"To investigate this question, we develop generated knowledge prompting, which consists of generating knowledge from a language model, then providing the knowledge as additional input when answering a question.",
"Our method does not require task-specific supervision for knowledge integration, or access to a structured knowledge base, yet it improves performance of large-scale, state-of-the-art models on four commonsense reasoning tasks, achieving state-of-the-art results on numerical commonsense (NumerSense), general commonsense (CommonsenseQA 2.0), and scientific commonsense (QASC) benchmarks.",
"Generated knowledge prompting highlights large-scale language models as flexible sources of external knowledge for improving commonsense reasoning.",
"Our code is available at github.com/liujch1998/GKP . It remains an open research question whether external knowledge is needed for commonsense reasoning.",
"On one hand, a substantial body of prior work has reported that integrating external knowledge can help improve task performance (Mitra et al., 2019; Bian et al., 2021, inter alia ), especially if the knowledge is high quality (e.g. hand-crafted by experts).",
"On the other hand, recent leaderboards are often dominated by large-scale pretrained models that are fine-tuned on a target benchmark (Khashabi et al., 2020; Lourie et al., 2021), suggesting that the benefits of external knowledge may wash away as the underlying models increase in size and are pretrained on ever larger amounts of raw text.",
"Even if external knowledge is found to be effective on a particular task, flexibility remains a fundamental hurdle to integrating external knowledge, as many benchmarks currently lack appropriate knowledge bases with sufficient coverage.",
"Figure 1: Generated knowledge prompting involves (i) using few-shot demonstrations to generate question-related knowledge statements from a language model; (ii) using a second language model to make predictions with each knowledge statement, then selecting the highest-confidence prediction.",
"Furthermore, prior methods often require task-specific, custom supervision for knowledge integration (Mitra et al., 2019; Chang et al., 2020), introducing a burden for rapidly adapting new pretrained models to a wide variety of tasks.",
"In this paper, we investigate whether external knowledge can be helpful for commonsense reasoning, even on top of the largest state-of-the-art pretrained models (e.g. T5-11b (Raffel et al., 2019) and its variants), with a focus on four recent commonsense benchmarks.",
"To facilitate easier adaptation with any zero-shot or finetuned models, we propose an approach that does not require access to a structured knowledge base or joint finetuning for knowledge integration.",
"The key insight behind our method, Generated Knowledge Prompting (sketched in Figure 1), is that we can generate useful knowledge from a language model, then provide the knowledge as an input prompt that is concatenated with a question.",
"To support a variety of settings without finetuning, the quality and flexibility of the knowledge are crucial.",
"We propose a simple, yet effective, method that elicits knowledge statements (i.e. knowledge expressed as natural language statements) from generic language models in a few-shot setting.",
"Compared to prior work that elicits knowledge via clarification questions (Shwartz et al., 2020) or contrastive explanations (Paranjape et al., 2021), our approach can generate knowledge flexibly, beyond the scope of pre-defined templates (Table 1).",
"Experiments show that our method improves both zero-shot and finetuned models on numerical commonsense (NumerSense (Lin et al., 2020)), general commonsense (CommonsenseQA (Talmor et al., 2019), CommonsenseQA 2.0 (Talmor et al., 2021)), and scientific commonsense (QASC (Khot et al., 2020)) benchmarks, setting a new state-of-the-art on three of these datasets.",
"It outperforms the template-based knowledge generation method self-talk (Shwartz et al., 2020), while performing comparably to retrieval-based systems.",
"We find three factors contribute to the performance of generated knowledge prompting:",
"(i) the quality of knowledge,",
"(ii) the quantity of knowledge where the performance improves with more knowledge statements, and",
"(iii) the strategy for integrating knowledge during inference.",
"Our qualitative analysis suggests that the generated knowledge statements cover a variety of types, and can transform commonsense question answering to explicit reasoning procedures, e.g. deduction, that are supported by off-the-shelf and finetuned language models.",
"A multiple-choice commonsense reasoning task involves predicting an answer a ∈ A_q given a question q ∈ Q , where the set of choices A_q is finite and can vary by question, and both questions and answers are variable-length text sequences.",
"Our method answers commonsense questions in two steps.",
"The first step is knowledge generation , where we use a language model p_G(k | q) to generate knowledge statements conditioned on the question: K_q = { k_m : k_m ∼ p_G(k | q), m = 1 ... M }, where each knowledge statement k_m is a variable-length text sequence.",
"Intuitively, each statement contains information that is helpful for answering the question (e.g. Table 1).",
"The second step is knowledge integration , where generated knowledge is integrated into the decision process of a language model used for inference: â = arg max_{a ∈ A_q} p_I(a | q, K_q). In contrast, the vanilla setting of using the inference model without knowledge is represented by â = arg max_{a ∈ A_q} p_I(a | q).",
"Next, we describe the knowledge generation and integration steps in detail.",
"We generate question-related knowledge statements by prompting a language model.",
"The prompt consists of an instruction, a few demonstrations that are fixed for each task, and a new-question placeholder.",
"The demonstrations are human-written, and each consists of a question in the style of the task and a knowledge statement that is helpful for answering this question.",
"For a given task, we write five demonstrations using the format in Table 2.",
"These demonstrations are written to reflect the challenges posed by the task (e.g. numerical commonsense, scientific commonsense).",
"We pair each question with a knowledge statement that turns the commonsense problem posed by the question into an explicit reasoning procedure, without directly answering the question.",
"For example, the knowledge statement Birds have two wings.",
"Penguin is a kind of bird.",
"is helpful for the question Penguins have <mask> wings , because it turns the problem into deductive reasoning.",
"Meanwhile, Penguins have two wings.",
"would be a poor knowledge statement to demonstrate according to our guideline.",
"When generating knowledge for a new question q , we plug the question into the placeholder, and repeatedly sample generated continuations of this prompt to obtain a set of knowledge statements K q = { k 1 , k 2 , . . . , k M } .",
"For full prompts on all the tasks we evaluate on, see Appendix A.2.",
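The knowledge generation step above can be sketched as follows; `build_prompt`, `generate_knowledge`, and `sample_fn` are hypothetical helper names (the actual prompts are in the paper's Appendix A.2, and sampling is done with GPT-3), with `sample_fn` standing in for any language-model sampling call:

```python
# Sketch of few-shot knowledge generation: assemble a prompt from an
# instruction, fixed demonstrations, and the new question, then sample
# M continuations and keep the distinct, non-empty ones.

def build_prompt(instruction, demonstrations, question):
    """Instruction, then (question, knowledge) demonstrations, then the
    new question in the placeholder slot."""
    parts = [instruction]
    for q, k in demonstrations:
        parts.append(f"Input: {q}\nKnowledge: {k}")
    parts.append(f"Input: {question}\nKnowledge:")
    return "\n\n".join(parts)

def generate_knowledge(sample_fn, prompt, m=20):
    """Repeatedly sample continuations; discard repetitions and empty strings."""
    seen, statements = set(), []
    for _ in range(m):
        k = sample_fn(prompt).strip()
        if k and k not in seen:
            seen.add(k)
            statements.append(k)
    return statements
```

In the paper's setting `sample_fn` would call GPT-3 with nucleus sampling, stopping at 64 tokens or a newline.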
"In the knowledge integration step, we use a language model called the inference model to make predictions with each generated knowledge statement, then select the highest-confidence prediction.",
"Specifically, we use each knowledge statement to prompt the model, forming M knowledge-augmented questions: q_0 = q, q_1 = [k_1 || q], ..., q_M = [k_M || q], where [ || ] denotes text concatenation.",
"We compute an aggregated score for each answer choice a using the augmented question that best supports it under the inference model: p_I(a | q, K_q) ∝ max_{0 ≤ m ≤ M} p_I(a | q_m). (1)",
"The prediction is thus the choice that gets the most support from one of the knowledge statements.",
"This prediction uses a single knowledge statement, which we refer to as the selected knowledge : k̂ = k_m̂ , where m̂ = arg max_{0 ≤ m ≤ M} max_{a ∈ A_q} p_I(a | q_m).",
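The integration step above can be sketched as follows; `integrate` and `p_i` are hypothetical names, with `p_i` standing in for any inference model's choice-probability function (zero-shot or finetuned):

```python
# Minimal sketch of knowledge integration: score each answer choice with
# each knowledge-augmented question, take the max over prompts per choice,
# and recover the single "selected knowledge" behind the prediction.

def integrate(p_i, question, choices, knowledge):
    # q_0 is the bare question; q_m prepends knowledge statement k_m.
    augmented = [question] + [f"{k} {question}" for k in knowledge]
    # Aggregated score per answer: best augmented question that supports it.
    scores = {a: max(p_i(a, q_m) for q_m in augmented) for a in choices}
    prediction = max(scores, key=scores.get)
    # Selected knowledge: statement behind the highest-confidence prediction.
    best_m = max(range(len(augmented)),
                 key=lambda m: max(p_i(a, augmented[m]) for a in choices))
    selected = None if best_m == 0 else knowledge[best_m - 1]
    return prediction, selected
```

Note that `selected` is `None` when the bare question (m = 0) already yields the most confident prediction, i.e. no knowledge statement is used.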
"The inference model may be any existing language model taken off-the-shelf (i.e. zero-shot) or finetuned on the task.",
"We do not do any further finetuning with knowledge prompting.",
"Here, we describe the implementation details of our method and how they are adapted to each task.",
"For knowledge generation, we use GPT-3 (Brown et al., 2020) as the underlying language model, where our few-shot prompting method is most effective.",
"We generate M = 20 knowledge statements for each question with nucleus sampling p = 0.5 (Holtzman et al., 2019), and discard repetitions and empty strings.",
"Generation is terminated when it exceeds 64 tokens or hits the \\n token.",
"1 For inference, we use off-the-shelf T5 (Raffel et al., 2019) and GPT-3, as well as finetuned models that are state-of-the-art on each dataset, including UnifiedQA (UQA) (Khashabi et al., 2020) and Unicorn (Lourie et al., 2021).",
"See details in the task setup below.",
"We evaluate our method on four commonsense reasoning datasets which cover a variety of challenges and problem formats.",
"1 An exception is with the CSQA2 dataset, where for the best results we choose M = 5 and allow for up to 128 tokens in each generation.",
"NumerSense (Lin et al., 2020) consists of numerical statements about common objects and concepts where for each sentence we need to recover a masked number word.",
"The choices are integers ranging from zero to ten, plus the word no , so the task can be framed as a multiple-choice problem.",
"Since NumerSense is a diagnostic dataset, we only use zero-shot inference models, which is the current SOTA.",
"We follow Zhang (2021) who uses the state-of-the-art zero-shot T5 with text-infilling setup and select the choice with highest likelihood on its token(s).",
"We also implement zero-shot GPT-3 inference, where we plug in each choice to the question and compute the choice probability as the generative probability of the entire sentence, normalized over all the choices.",
"CommonsenseQA (CSQA) (Talmor et al., 2019) is a 5-way multiple-choice QA dataset about common world scenarios.",
"We do inference with the zero-shot and finetuned T5 models.",
"For zero-shot T5, we format the question as text-infilling, and predict the choice with highest sequence-to-sequence language modeling probability.",
"For finetuned T5 (including UnifiedQA which is SOTA), we use the same setup as Khashabi et al. (2020).",
"CommonsenseQA 2.0 (CSQA2) (Talmor et al., 2021) is a binary classification dataset where we need to judge whether commonsense statements are true or false.",
"We only do inference with the finetuned model, due to poor calibration of zero-shot models on this dataset.",
"We use finetuned Unicorn (Lourie et al., 2021), which is the current SOTA, following the setup in Talmor et al. (2021).",
"QASC (Khot et al., 2020) is an 8-way multiple-choice QA dataset about grade school science.",
"This dataset also includes two pieces of background knowledge per question, whose composition fully answers the question.",
"We do inference with zero-shot T5 and finetuned T5 (including UnifiedQA which is SOTA), using the same setups as CSQA.",
"We study the impact of our knowledge generation method (shorthanded as K ) by comparing with the following baselines:",
"No knowledge ( ) We refer to inference without any knowledge statements as the vanilla baseline.",
"Random sentences ( R ) Sampling random sentences from the language model without conditioning on the question.",
"We use the same implementation setup as our knowledge generation method (i.e. also using GPT-3, with the same hyperparameters).",
"Context sentences ( C ) Sampling sentences from the context of the question.",
"This is implemented by sampling text continuations of the question from the language model.",
"We use the same implementation setup as our knowledge generation method.",
"Template-generated knowledge ( T ) Self-talk (Shwartz et al., 2020) uses manually-designed templates to elicit knowledge statements from language models.",
"For fair comparison, we use GPT-3 as the knowledge generator in self-talk, and bound the number of generations to M = 20 per question.",
"Templates and other hyperparameters are kept the same as their original paper.",
"Retrieval-based knowledge ( IR ) Instead of being generated, knowledge can be retrieved from appropriate sources.",
"We consider the following retrieval-based methods.",
"For NumerSense, knowledge is retrieved from sentences in Wikipedia and GenericsKB.",
"For CSQA2, we use snippets returned by Google when querying the question.",
"For QASC, we use the associated fact sentences that are used to create each question.",
"Answers ( A ) Instead of generating knowledge, GPT-3 can be prompted to generate direct answers to questions.",
"In the prompts, we use the same input questions as those in knowledge generation, while replacing the knowledge statement with the ground truth answer.",
"We consider two baselines: (1) Generate one answer per question and use this to measure the performance of the few-shot GPT-3 inference model; (2) Generate M = 20 answers per question, and use these answers to prompt the SOTA inference models.",
"As we will show, our generated knowledge prompting method sets new state-of-the-art results on most datasets we evaluate on, and works well under both zero-shot and finetuned settings.",
"In particular, our knowledge generation outperforms naive baselines as well as template-based knowledge generation, and is on-par with retrieval-based systems.",
"New state-of-the-art.",
"We apply our method on top of the same inference model used in the previous state-of-the-art.",
"On NumerSense, we achieve a 6% (66.18 to 72.47) improvement over the previous best method based on the zero-shot T5 model.",
"Table 3 column settings: A = NumerSense / T5-11b; B1 = CSQA / T5-11b; B2 = CSQA / UQA-11b-ft; C = CSQA2 / Unicorn-ft; D1 = QASC / T5-11b; D2 = QASC / UQA-11b-ft.",
"The previous state-of-the-art among non-retrieval methods on CSQA2 is based on the finetuned Unicorn model, upon which we improve by 2% (70.2 to 73.03).",
"For QASC, the previous best is based on the finetuned UnifiedQA model, upon which we improve by 3% (76.74 to 80.33).",
"Zero-shot settings.",
"Columns A , B 1 , and D 1 in Table 3 show that our method substantially improves zero-shot inference models, by 7% to 10% across NumerSense (64.05 to 72.47), CSQA (39.89 to 47.26), and QASC (44.89 to 55.00).",
"Finetuned settings.",
"Columns B 2 , C , and D 2 in Table 3 indicate that our method consistently improves upon the vanilla baseline set by finetuned inference models (though by smaller margins than in the zero-shot settings).",
"Table 3 reports the performance with different knowledge generation baselines.",
"Generally, random sentences barely help and even hurt the inference model, whereas context sentences of the question provide some gain.",
"In contrast, knowledge generated by our method consistently leads to substantial performance improvements, which implies that our knowledge is of high quality.",
"Directly generating answers with few-shot GPT-3 falls short on these commonsense questions, underperforming our best models by 14% to 20% across all tasks.",
"Even when we use answers generated by few-shot GPT-3 to prompt the SOTA inference models, this still significantly falls behind our method on almost all the tasks and models we consider (with one exception CSQA with T5 inference).",
"Through the medium of knowledge , our method can effectively leverage useful information possessed by GPT-3 to help improve even the SOTA models on various commonsense reasoning tasks.",
"Our knowledge outperforms template-generated knowledge.",
"We compare our knowledge generation method with the template-based self-talk on the CSQA dev set.",
"(CSQA is the only task we experiment with that has self-talk templates available.)",
"Our method leads to a larger improvement over the T5-11b baseline than self-talk (by 1.89%), showing that it is better at eliciting helpful knowledge from models.",
"Our knowledge is comparable with retrieval-based knowledge.",
"On NumerSense, the retrieved knowledge only improves inference performance by 0.18% on test-core and 1.02% on test-all, while our method further outperforms it by 8.83% and 7.37%, respectively.",
"This shows that knowledge retrieved from a loosely-related knowledge base can be far less useful than our generated knowledge.",
"On CSQA2, although we are not able to beat the web-retrieved knowledge, our method still bridges the performance gap without referring to Google search.",
"Figure 2: Performance with different numbers of generated knowledge statements per question (QASC dev set, T5-11b inference model).",
"For QASC, the retrieved knowledge is actually gold knowledge from a knowledge base that was used to construct the dataset.",
"As a result, our generated knowledge falls significantly short of the retrieved knowledge.",
"In summary, our generated knowledge is roughly comparable with retrieved knowledge in terms of downstream performance, and is most valuable when there is no appropriate in-domain knowledge base to retrieve from.",
"Better performance with more knowledge.",
"We analyze the impact of the number of generated knowledge statements, M , and show the results in Figure 2.",
"Generally, the performance increases with the quantity of knowledge statements.",
"It saturates at M = 20 and begins to decline when more knowledge statements are introduced, which may be because more noisy knowledge is generated.",
"The knowledge integration method.",
"In addition to the knowledge integration method described in 2.2, we experiment with two alternatives: Mixture-of-Experts (MoE) and Product-of-Experts (PoE) (Hinton, 2002).",
"These make the following modifications to Equation 1, respectively: MoE: p_I(a | q, K_q) ∝ Σ_{0 ≤ m ≤ M} p_I(a | q_m), (2) and PoE: p_I(a | q, K_q) ∝ Π_{0 ≤ m ≤ M} p_I(a | q_m). (3)",
"Figure 3: Improvement on top of different sizes of inference model (NumerSense dev set).",
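The three aggregation strategies being compared can be sketched as follows; `aggregate` and `per_prompt_scores` are hypothetical names, and the per-prompt probabilities would come from any inference model:

```python
import math

# Sketch of the three ways to combine per-prompt scores p_I(a | q_m),
# m = 0..M, into a final prediction: max (single best knowledge statement),
# mixture-of-experts (sum), and product-of-experts (product).

def aggregate(per_prompt_scores, mode="max"):
    """per_prompt_scores maps each answer choice to its list of
    per-prompt probabilities; returns the winning choice under `mode`."""
    agg = {}
    for a, probs in per_prompt_scores.items():
        if mode == "max":
            agg[a] = max(probs)            # Equation 1
        elif mode == "moe":
            agg[a] = sum(probs)            # mixture of experts, Equation 2
        elif mode == "poe":
            agg[a] = math.prod(probs)      # product of experts, Equation 3
    return max(agg, key=agg.get)
```

The max variant lets one strongly supportive knowledge statement decide the answer, whereas MoE and PoE average that signal away across prompts.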
"The original integration method (Equation 1), which relies on the single best knowledge statement, is best among the three.",
"Lightweight inference models and amplification.",
"We found that the size of inference model affects the magnitude of improvement.",
"Figure 3 shows the NumerSense performance gain on top of different sizes of inference model.",
"As we use smaller inference models, the performance gain increases drastically.",
"In particular, with our method the smallest T5 model is as powerful as the T5-3b baseline, and T5-large outperforms the GPT-3 baseline.",
"This indicates that model-generated knowledge can enable high performing, yet lightweight, inference models.",
"Furthermore, the improvement does not diminish as the inference model becomes as big as the knowledge generation model, as the inference by GPT-3 can benefit by 9.0% from the knowledge elicited from itself.",
"This indicates that our method can somewhat amplify the useful knowledge already possessed by the model, leading to better predictions.",
"The size of knowledge generation model.",
"Figure 4 shows the NumerSense performance gain when using different sizes of GPT-3 as the knowledge generation model.",
"On top of the T5-11b inference model, the 6.7B knowledge model gives a 5.0% improvement, narrower than the 10.5% improvement given by the 175B knowledge model.",
"Figure 5: Human evaluation of generated knowledge.",
"The 1.3B and 0.4B knowledge models do not give a significant improvement.",
"Therefore, we do not necessarily need the largest version of GPT-3 as the knowledge source, though we do need the model to be relatively large in order to generate useful and reliable knowledge.",
"We conduct a human evaluation on NumerSense and QASC to study the quality of generated knowledge and the interpretability of its impact on task performance.",
"Evaluation.",
"We report the quality of knowledge statements along four axes: (1) Grammaticality : whether it is grammatical; (2) Relevance : whether it is relevant to the topic or concepts mentioned on the question; (3) Factuality : whether it is (mostly) factually correct; and (4) Helpfulness : whether it helps answering the question in an either direct or indirect way, and may fall into one of the three categories: helpful (i.e. supports the correct answer), harmful (i.e. negates the correct answer or supports an incorrect answer), or neutral (neither helpful nor harmful).",
"These metrics are adapted from Shwartz et al. (2020) and are defined in Appendix A.3.",
"From each dataset, we sample up to 50 selected knowledge (2.2) that change the correctness of T5-11b's prediction (i.e. rectifies model prediction from wrong to right, or misleads model prediction from right to wrong).",
"The knowledge statements were labeled by two NLP experts, and a moderate level of agreement was reached (Fleiss' kappa = 0.57 (Landis and Koch, 1977)).",
"To ensure objectivity, it is not revealed to the annotators whether the knowledge rectifies or misleads the model prediction.",
"Results.",
"Figure 5 summarizes the results.",
"The vast majority of selected knowledge are grammatical and relevant to the question, and 83% of them are factually correct.",
"72% are seen as helpful for answering the question according to the human evaluators, whereas 13% are harmful.",
"Out of the knowledge statements that rectify the model predictions, 93% are labeled as helpful by the human evaluators; in contrast, when the knowledge statement misleads the model, only 21% are labeled as helpful, and 39% harmful.",
"Of the knowledge statements deemed helpful by humans that rectify the model prediction, 95% are factual, while of those deemed harmful by humans that mislead the model prediction, 86% are non-factual, suggesting that improving knowledge factuality is a promising path towards more helpful knowledge.",
"We also analyzed the non-selected knowledge and found that these statements have slightly lower factuality and helpfulness than the selected knowledge.",
"Table 5 shows a few examples where the generated knowledge rectifies model prediction.",
"Due to space constraints we only show the selected knowledge (2.2) for each question.",
"In all examples, the model without prompted knowledge assigns a higher score to an incorrect answer than the correct answer, while with knowledge prompting, the correct answer is assigned a much higher score.",
"Prompting with generated knowledge can transform commonsense reasoning into explicit reasoning procedures such as paraphrasing, induction, deduction, analogy, abductive reasoning, logical elimination, negation, and numerical reasoning.",
"Knowledge can be elicited from pretrained language models.",
"Numerous works have shown that pretrained language models implicitly contain a large amount of knowledge that can be queried via conditional generation (Davison et al., 2019; Petroni et al., 2019; Jiang et al., 2020).",
"Consequently, these models can directly perform inference on tasks like commonsense reasoning (Trinh and Le, 2018; Yang et al., 2020), text classification (Shin et al., 2020; Puri and Catanzaro, 2019), and natural language inference (Shin et al., 2020; Schick and Schütze, 2021).",
"Inspired by these observations, we elicit question-related knowledge in an explicit form from language models and use them to guide the inference.",
"Leveraging external knowledge for commonsense reasoning.",
"Some work uses external commonsense knowledge bases to make improvements on various NLP tasks, including commonsense reasoning.",
"One approach is to inject commonsense knowledge into language models, either by pretraining on knowledge bases (Ma et al., 2021; Chang et al., 2020; Mitra et al., 2019; Zhong et al., 2019) or finetuning the model so that it can reason with additional retrieved knowledge (Chang et al., 2020; Mitra et al., 2019; Bian et al., 2021).",
"Another direction is to ground the question into a knowledge graph and do inference with graph-based reasoning (Lin et al., 2019; Lv et al., 2020; Yasunaga et al., 2021).",
"A common prerequisite of these methods is a high-quality, high-coverage, in-domain commonsense knowledge base (Ma et al., 2019).",
"Some commonsense reasoning datasets are derived from existing knowledge bases; for example, CommonsenseQA (Talmor et al., 2019) is derived from ConceptNet (Speer et al., 2017), and Social IQA (Sap et al., 2019b) is derived from ATOMIC (Sap et al., 2019a).",
"For such datasets, it is natural to elicit related knowledge from the underlying knowledge base from which they were derived, and this typically yields considerable gains (Mitra et al., 2019; Chang et al., 2020).",
"However, if there is a domain mismatch between the dataset and the knowledge base, such gains tend to diminish (Mitra et al., 2019; Ma et al., 2019).",
"This becomes a bottleneck when encountering datasets that have no suitable knowledge base (e.g. NumerSense (Lin et al., 2020) and CommonsenseQA 2.0 (Talmor et al., 2021)), or when the system needs to handle commonsense queries that do not fit in any of the commonsense domains represented by an existing knowledge base.",
"Our work overcomes this difficulty by leveraging pretrained language models as the source of commonsense knowledge.",
"Adding generated text during inference.",
"Recently, several works show that model performance on commonsense reasoning can be boosted by augmenting the question with model-generated text, such as clarifications, explanations, and implications.",
"Self-talk (Shwartz et al., 2020) elicits clarifications to concepts in the question and appends them to the inference model input.",
"Contrastive explanations (Paranjape et al., 2021) prompts inference models with generated explanations that contrast between two answer choices.",
"The aforementioned methods depend on task-specific templates to query the generator, which means they can elicit only a limited variety of knowledge and require careful hand-crafting to transfer to new tasks.",
"Other explanation-based methods (Latcinnik and Berant, 2020; Rajani et al., 2019) finetune the generator model so that it produces explanations that are used for question augmentation.",
"DynaGen (Bosselut et al., 2021) uses pretrained commonsense models to generate implications of a question and expands the inference input with these generations.",
"However, its usage of COMeT (Bosselut et al., 2019) as the generator confines its applicability to the social commonsense domain.",
"Our work contributes to this general line of research; however, unlike previous methods that elicit knowledge with task-specific templates or from finetuned knowledge generators, our method requires only a few human-written demonstrations in the style of the task, making it much more flexible, easy to transfer, and engineering-efficient.",
"We introduce generated knowledge prompting, a simple method to elicit and integrate knowledge from language models so as to improve performance on commonsense reasoning tasks.",
"In particular, we generate knowledge statements by prompting a language model with task-specific, human-written, few-shot demonstrations of question-knowledge pairs.",
"We show that knowledge can be integrated by simply plugging it in at inference time, with no need to finetune the model for knowledge integration.",
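The pipeline described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: `build_knowledge_prompt` and `answer_with_knowledge` are hypothetical helper names, and the `score` callable stands in for a real language-model scoring function.

```python
# Illustrative sketch of generated knowledge prompting: few-shot prompt a
# generator LM for knowledge statements, then prepend each statement to the
# question before scoring answer choices with an inference LM.
def build_knowledge_prompt(demos, question):
    """Assemble a few-shot prompt from human-written (question, knowledge) pairs."""
    lines = ["Generate some knowledge about the input."]
    for q, k in demos:
        lines.append(f"Input: {q}\nKnowledge: {k}")
    lines.append(f"Input: {question}\nKnowledge:")
    return "\n\n".join(lines)

def answer_with_knowledge(question, choices, knowledge_statements, score):
    """Prepend each knowledge statement to the question, score every answer
    choice, and return the choice backed by the highest-scoring pairing."""
    best_score, best_choice = float("-inf"), None
    for k in knowledge_statements:
        for c in choices:
            s = score(f"{k} {question}", c)
            if s > best_score:
                best_score, best_choice = s, c
    return best_choice
```

Note that no finetuning is involved: knowledge is integrated purely by concatenating it to the inference model's input.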
"Our method shows effectiveness across multiple datasets, sets the new state-of-the-art on three commonsense reasoning tasks, and works under a variety of settings.",
"The method's success highlights language models as sources of flexible, high-quality knowledge for commonsense reasoning.",
"This work was funded in part by the Natural Sciences and Engineering Research Council of Canada (NSERC) (funding reference number 401233309), DARPA MCS program through NIWC Pacific (N66001-19-2-4031), and the Allen Institute for AI.",
"We also thank Google Cloud Compute, as well as OpenAI.",
"We thank Daniel Khashabi, Vered Shwartz, Bhargavi Paranjape, Bill Yuchen Lin, Jonathan Herzig for their help with the experiments and evaluation."
] | [
"abstain",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"objective",
"method",
"other",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"result",
"method",
"result",
"objective",
"abstain",
"other",
"other",
"other"
] |
[
"Current summarization systems only produce plain, factual headlines, but do not meet the practical needs of creating memorable titles to increase exposure.",
"We propose a new task, Stylistic Headline Generation (SHG), to enrich the headlines with three style options (humor, romance and clickbait), in order to attract more readers.",
"With no style-specific article-headline pairs (only a standard headline summarization dataset and mono-style corpora), our method TitleStylist generates style-specific headlines by combining the summarization and reconstruction tasks into a multitasking framework.",
"We also introduce a novel parameter sharing scheme to further disentangle the style from the text.",
"Through both automatic and human evaluation, we demonstrate that TitleStylist can generate relevant, fluent headlines with three target styles: humor, romance, and clickbait.",
"The attraction score of our model's generated headlines surpasses that of the state-of-the-art summarization model by 9.",
"68%, and even outperforms human-written references.",
"1 Introduction Every good article needs a good title, which should not only condense the core meaning of the text, but also sound appealing to readers for more exposure and memorability.",
"However, currently even the best Headline Generation (HG) systems can only fulfill the first requirement, while performing poorly on the second.",
"For example, in Figure 1, the plain headline produced by an HG model (Summ: 'Leopard Frog Found in New York City') is less eye-catching than style-carrying ones such as 'What's That Chuckle You Hear? It May Be the New Frog From NYC'.",
"To bridge the gap between the practical needs for attractive headlines and the plain HG by the current summarization systems, we propose a new task of Stylistic Headline Generation (SHG).",
"Given an article, it aims to generate a headline with a target style such as humorous, romantic, and click-baity.",
"It has broad applications in reader-adapted title generation, slogan suggestion, auto-fill for online post headlines, and many others.",
"SHG is a highly skilled creative process, usually mastered only by expert writers.",
"One of the most famous headlines in American publications, Sticks Nix Hick Pix , could be such an example.",
"In contrast, the current best summarization systems are at most comparable to novice writers who provide a plain descriptive representation of the text body as the title (Cao et al., 2018b,a; Lin et al., 2018; Song et al., 2019; Dong et al., 2019).",
"These systems usually use a language generation model that mixes styles with other linguistic patterns and inherently lacks a mechanism to control the style explicitly.",
"More fundamentally, the training data comprise a mixture of styles (e.g., the Gigaword dataset (Rush et al., 2017)), obstructing the models from learning a distinct style.",
"In this paper, we propose the new task SHG, to emphasize the explicit control of style in headline generation.",
"We present a novel headline generation model, TitleStylist, to produce enticing titles with target styles including humorous, romantic, and click-baity.",
"Our model leverages a multitasking framework to train both a summarization model on headline-article pairs, and a Denoising Autoencoder (DAE) on a style corpus.",
"In particular, based on the transformer architecture (Vaswani et al., 2017), we use the style-dependent layer normalization and the style-guided encoder-attention to disentangle the language style factors from the text.",
"This design enables us to use the shared content to generate headlines that are more relevant to the articles, as well as to control the style by plugging in a set of style-specific parameters.",
"We validate the model on three tasks: humorous, romantic, and click-baity headline generation.",
"Both automatic and human evaluations show that TitleStylist can generate headlines with the desired styles that appeal more to human readers, as in Figure",
"1. The main contributions of our paper are listed below: To the best of our knowledge, this is the first work on generating attractive news headlines with styles without any supervised style-specific article-headline paired data.",
"Through both automatic and human evaluation, we demonstrated that our proposed TitleStylist can generate relevant, fluent headlines with three styles (humor, romance, and clickbait), and they are even more attractive than human-written ones.",
"Our model can flexibly incorporate multiple styles, thus efficiently and automatically providing humans with various creative headline options for reference and inspiring them to think outside the box.",
"Our work is related to summarization and text style transfer.",
"Headline generation is a very popular area of research.",
"Traditional headline generation methods mostly focus on the extractive strategies using linguistic features and handcrafted rules (Luhn, 1958; Edmundson, 1964; Mathis et al., 1973; Salton et al., 1997; Jing and McKeown, 1999; Radev and McK-eown, 1998; Dorr et al., 2003).",
"To enrich the diversity of the extractive summarization, abstractive models were then proposed.",
"With the help of neural networks, Rush et al. (2015) proposed attention-based summarization (ABS) to make Banko et al. (2000)'s framework of summarization more powerful.",
"Many recent works extended ABS by utilizing additional features (Chopra et al., 2016; Takase et al., 2016; Nallapati et al., 2016; Shen et al., 2016, 2017a; Tan et al., 2017; Guo et al., 2017).",
"Other variants of the standard headline generation setting include headlines for community question answering (Higurashi et al., 2018), multiple headline generation (Iwama and Kano, 2019), user-specific generation using user embeddings in recommendation systems (Liu et al., 2018), bilingual headline generation (Shen et al., 2018) and question-style headline generation (Zhang et al., 2018a).",
"Only a few works have recently started to focus on increasing the attractiveness of generated headlines (Fan et al., 2018; Xu et al., 2019).",
"Fan et al. (2018) focuses on controlling several features of the summary text such as text length, and the style of two different news outlets, CNN and DailyMail.",
"These controls serve as a way to boost the model performance, and the CNN- and DailyMail-style control shows a negligible improvement.",
"Xu et al. (2019) utilized reinforcement learning to encourage the headline generation system to generate more sensational headlines, using the readers' comment rate as the reward; however, this approach cannot explicitly control or manipulate the styles of headlines.",
"Shu et al. (2018) proposed a style transfer approach to transfer a non-clickbait headline into a clickbait one.",
"This method requires paired news articles-headlines data for the target style; however, for many styles such as humor and romance, there are no available headlines.",
"Our model does not have this limitation, thus enabling transferring to many more styles.",
"Our work is also related to text style transfer, which aims to change the style attribute of the text while",
"preserving its content.",
"First proposed by Shen et al. (2017b), it has achieved great progress in recent years (Xu et al., 2018; Lample et al., 2019; Zhang et al., 2018b; Fu et al., 2018; Jin et al., 2019; Yang et al., 2018; Jin et al., 2020).",
"However, all these methods demand a text corpus for the target style; in our case, it is expensive and technically challenging to collect news headlines with humor and romance styles, which makes this category of methods inapplicable to our problem.",
"The model is trained on a source dataset S and target dataset T .",
"The source dataset $S = \\{(a^{(i)}, h^{(i)})\\}_{i=1}^{N}$ consists of pairs of a news article $a$ and its plain headline $h$.",
"We assume that the source corpus has a distribution $P(A, H)$, where $A = \\{a^{(i)}\\}_{i=1}^{N}$ and $H = \\{h^{(i)}\\}_{i=1}^{N}$.",
"The target corpus $T = \\{t^{(i)}\\}_{i=1}^{M}$ comprises sentences $t$ written in a specific style (e.g., humor).",
"We assume that it conforms to the distribution P ( T ) .",
"Note that the target corpus $T$ only contains style-carrying sentences, not necessarily headlines; it can be just book text.",
"Also no sentence t is paired with a news article.",
"Overall, our task is to learn the conditional distribution $P(T|A)$ using only $S$ and $T$.",
"This task is fully unsupervised because there is no sample from the joint distribution $P(A, T)$.",
"For summarization, we adopt a sequence-to-sequence (Seq2Seq) model based on the Transformer architecture (Vaswani et al., 2017).",
"As in Figure 2, it consists of a 6-layer encoder $E(\\cdot\\,; \\theta_E)$ and a 6-layer decoder $G(\\cdot\\,; \\theta_G)$ with a hidden size of 1024 and a feed-forward filter size of 4096.",
"For better generation quality, we initialize with the MASS model (Song et al., 2019).",
"MASS is pretrained by masking a sentence fragment in the encoder, and then predicting it in the decoder on large-scale English monolingual data.",
"This pretraining is adopted in the current state-of-the-art systems across various summarization benchmark tasks including HG.",
"To disentangle the latent style from the text, we adopt a multitask learning framework (Luong et al., 2015), training on summarization and DAE simultaneously (as shown in Figure 3).",
"Supervised Seq2Seq Training for $E_S$ and $G_S$: With the source domain dataset $S$, based on the encoder-decoder architecture, we can learn the conditional distribution $P(H|A)$ by training $z_S = E_S(A)$ and $H_S = G_S(z_S)$ to solve the supervised Seq2Seq learning task, where $z_S$ is the learned latent representation in the source domain.",
"The loss function of this task is $\\mathcal{L}_S(\\theta_{E_S}, \\theta_{G_S}) = -\\mathbb{E}_{(a,h)\\sim S}[\\log p(h|a; \\theta_{E_S}, \\theta_{G_S})]$ (1), where $\\theta_{E_S}$ and $\\theta_{G_S}$ are the sets of model parameters of the encoder and decoder in the source domain and $p(h|a)$ denotes the overall probability of generating an output sequence $h$ given the input article $a$, which can be further expanded as follows: $p(h|a; \\theta_{E_S}, \\theta_{G_S}) = \\prod_{t=1}^{L} p(h_t | \\{h_1, \\ldots, h_{t-1}\\}, z_S; \\theta_{G_S})$ (2), where $L$ is the sequence length.",
"DAE Training for $E_T$ and $G_T$: For the target style corpus $T$, since we only have sentences $t$ without paired news articles, we train $z_T = E_T(\\tilde{t})$ and $\\hat{t} = G_T(z_T)$ by solving an unsupervised reconstruction learning task, where $z_T$ is the learned latent representation in the target domain, and $\\tilde{t}$ is the corrupted version of $t$, obtained by randomly deleting or blanking some words and shuffling the word order.",
"To train the model, we minimize the reconstruction error $\\mathcal{L}_T(\\theta_{E_T}, \\theta_{G_T}) = -\\mathbb{E}_{t\\sim T}[\\log p(t|\\tilde{t})]$ (3), where $\\theta_{E_T}$ and $\\theta_{G_T}$ are the sets of model parameters for the encoder and generator in the target domain.",
"We train the whole model by jointly minimizing the supervised Seq2Seq training loss $\\mathcal{L}_S$ and the unsupervised denoising auto-encoding loss $\\mathcal{L}_T$ via multitask learning, so the total loss becomes $\\mathcal{L}(\\theta_{E_S}, \\theta_{G_S}, \\theta_{E_T}, \\theta_{G_T}) = \\lambda \\mathcal{L}_S(\\theta_{E_S}, \\theta_{G_S}) + (1-\\lambda)\\mathcal{L}_T(\\theta_{E_T}, \\theta_{G_T})$ (4), where $\\lambda$ is a hyper-parameter.",
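The total loss in Eq. (4) reduces to a one-line convex combination; a minimal sketch, assuming the two scalar losses have already been computed and writing the mixing hyper-parameter as `lam`:

```python
# Sketch of the multitask total loss (Eq. 4): a convex combination of the
# supervised summarization loss L_S and the unsupervised DAE loss L_T.
def multitask_loss(loss_summ, loss_dae, lam=0.5):
    """Return lam * L_S + (1 - lam) * L_T."""
    assert 0.0 <= lam <= 1.0
    return lam * loss_summ + (1.0 - lam) * loss_dae
```

Setting `lam=1.0` recovers pure supervised summarization, and `lam=0.0` recovers pure DAE training.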
"More constraints are necessary in the multitask training process.",
"We aim to infer the conditional distribution as $P(T|A) = G_T(E_S(A))$.",
"However, without samples from $P(A, T)$, this is a challenging or even impossible task if $E_S$ and $E_T$, or $G_S$ and $G_T$, are completely independent of each other.",
"Hence, we need to add some constraints to the network by relating $E_S$ and $E_T$, and $G_S$ and $G_T$.",
"The simplest design is to share all parameters between $E_S$ and $E_T$, and apply the same strategy to $G_S$ and $G_T$.",
"The intuition behind this design is that by exposing the model to both summarization task and style-carrying text reconstruction task, the model would acquire some sense of the target style while summarizing the article.",
"However, to encourage the model to better disentangle the content and style of text and more explicitly learn the style contained in the target corpus $T$, we share all parameters of the encoder between the two domains, i.e., between $E_S$ and $E_T$, whereas we divide the parameters of the decoder into two types: style-independent parameters $\\theta_{ind}$ and style-dependent parameters $\\theta_{dep}$.",
"This means that only the style-independent parameters are shared between $G_S$ and $G_T$, while the style-dependent parameters are not.",
"More specifically, the parameters of the layer normalization and encoder attention modules are made style-dependent as detailed below.",
"Type 1. Style Layer Normalization Inspired by prior work (et al., 2016), we make the scaling and shifting parameters for layer normalization in the transformer architecture un-shared for each style.",
"This style layer normalization approach aims to transform a layer's activation $x$ into a normalized activation $z$ specific to the style $s$: $z = \\gamma_s \\frac{x - \\mu}{\\sigma} + \\beta_s$ (5), where $\\mu$ and $\\sigma$ are the mean and standard deviation of the batch of $x$, and $\\gamma_s$ and $\\beta_s$ are style-specific parameters learned from data.",
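Eq. (5) can be illustrated with a toy pure-Python sketch: the mean/std normalization is shared, while the gain and bias are looked up per style. The dictionaries, style names, and numeric values below are illustrative assumptions, not the paper's trained parameters.

```python
import math

# Toy sketch of style-dependent layer normalization (Eq. 5): normalize the
# activations, then scale and shift with per-style parameters gamma_s, beta_s.
def style_layer_norm(x, style, gamma, beta, eps=1e-5):
    mu = sum(x) / len(x)
    sigma = math.sqrt(sum((v - mu) ** 2 for v in x) / len(x) + eps)
    g, b = gamma[style], beta[style]
    return [g * (v - mu) / sigma + b for v in x]

# Hypothetical per-style parameter tables (one entry per target style).
gamma = {"humor": 1.2, "romance": 0.9}
beta = {"humor": 0.1, "romance": -0.1}
normalized = style_layer_norm([1.0, 2.0, 3.0], "humor", gamma, beta)
```

Because the normalized activations have zero mean, the mean of the output equals the style-specific bias, which is what lets each style impose its own activation statistics.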
"Specifically, for the transformer decoder architecture, we use a style-specific self-attention layer normalization and final layer normalization for the source and target domains on all six decoder layers.",
"Type",
"2. Style-Guided Encoder Attention Our model architecture contains an attention mechanism, where the decoder infers the probability of the next word conditioned not only on the previous words but also on the encoded input hidden states.",
"The attention patterns should be different for the summarization and the reconstruction tasks due to their different inherent nature.",
"We incorporate this idea into the model by introducing style-guided encoder attention into the multi-head attention module, which is defined as follows: $Q = \\text{query} \\cdot W_q^s$ (6), $K = \\text{key} \\cdot W_k$ (7), $V = \\text{value} \\cdot W_v$ (8), $\\text{Att}(Q, K, V) = \\text{Softmax}\\left(\\frac{Q K^{\\top}}{\\sqrt{d_{model}}}\\right) V$ (9), where query, key, and value denote the triple of inputs into the multi-head attention module; $W_q^s$, $W_k$, and $W_v$ denote the projection matrices for affine transformation; and $d_{model}$ is the dimension of the hidden states.",
"We specialize the query projection matrix $W_q^s$ for different styles, so that $Q$ can differ across styles and induce diverse attention patterns.",
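Eqs. (6)-(9) amount to standard scaled dot-product attention with a per-style query projection; a single-head, pure-Python sketch under toy shapes and values (the real model uses multi-head attention with learned high-dimensional matrices):

```python
import math

# Sketch of style-guided encoder attention (Eqs. 6-9): only the query
# projection W_q is looked up per style; W_k and W_v are shared.
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def softmax(row):
    m = max(row)
    e = [math.exp(v - m) for v in row]
    return [v / sum(e) for v in e]

def style_attention(query, key, value, W_q_by_style, W_k, W_v, style):
    Q = matmul(query, W_q_by_style[style])   # style-specific projection
    K = matmul(key, W_k)                     # shared across styles
    V = matmul(value, W_v)                   # shared across styles
    d = len(Q[0])
    scores = [[sum(q * k for q, k in zip(qr, kr)) / math.sqrt(d) for kr in K]
              for qr in Q]
    return matmul([softmax(r) for r in scores], V)
```

Swapping the `style` key changes only which $W_q$ is applied, which is exactly how the shared decoder can produce style-specific attention patterns.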
"We compile a rich source dataset by combining the New York Times (NYT) and CNN, as well as three target style corpora on humorous, romantic, and click-baity text.",
"The average sentence length in the NYT, CNN, Humor, Romance, and Clickbait datasets are 8.8, 9.2, 12.6, 11.6 and 8.7 words, respectively.",
"The source dataset contains news articles paired with corresponding headlines.",
"To enrich the training corpus, we combine two datasets: the New York Times (56K) and CNN (90K).",
"After combining these two datasets, we randomly selected 3,000 pairs as the validation set and another 3,000 pairs as the test set.",
"We first extracted the archival abstracts and headlines from the New York Times (NYT) corpus (Sandhaus, 2008) and treated the abstracts as the news articles.",
"Following the standard preprocessing procedures (Kedzie et al., 2018), we filtered out advertisement-related articles (as they are very different from news reports), resulting in 56,899 news abstract-headline pairs.",
"We then add into our source set the CNN summarization dataset, which is widely used for training abstractive summarization models (Hermann et al., 2015).",
"We use the short summaries in the original dataset as the news abstracts and automatically parsed the headlines for each news item from the dumped news web pages, collecting 90,236 news abstract-headline pairs in total.",
"Humor and Romance For the target style datasets, we follow Chen et al. (2019) and use the humor and romance novel collections in BookCorpus (Zhu et al., 2015) as the Humor and Romance datasets.",
"We split the documents into sentences, tokenized the text, and collected 500K sentences as our datasets.",
"Clickbait We also tried to learn the writing style from the click-baity headlines since they have shown superior attraction to readers.",
"Thus we used The Examiner SpamClickBait News dataset, denoted as the Clickbait dataset.",
"We collected 500K headlines for our use.",
"Some examples from each style corpus are listed in Table",
"1. Footnote 2: https://github.com/kedz/summarization-datasets; footnote 3: We use CNN instead of the DailyMail dataset since DailyMail headlines are very long and read more like short summaries.",
"Footnote 4: https://cs.nyu.edu/kcho/DMQA/; footnote 5: https://www.smashwords.com/; footnote 6: https://www.kaggle.com/therohk/examine-the-examiner. Table 1, Style Examples, Humor: The crowded beach like houses in the burbs and the line ups at Walmart.",
"Neural Headline Generation (NHG) We train the state-of-the-art summarization model, MASS (Song et al., 2019), on our collected news abstracts-headlines paired data.",
"Gigaword-MASS We test an off-the-shelf headline generation model, MASS (Song et al., 2019), which is already trained on Gigaword, a large-scale headline generation dataset with around 4 million articles.",
"Neural Story Teller (NST) This baseline breaks the task down into two steps: it first generates headlines with the aforementioned NHG model, then applies style-shift techniques to generate style-specific headlines (Kiros et al., 2015).",
"In brief, this method uses the Skip-Thought model to encode a sentence into a representation vector and then manipulates its style by a linear transformation.",
"Afterward, this transformed representation vector is used to initialize a language model pretrained on a style-specific corpus so that a stylistic headline can be generated.",
"More details of this method can be found on the official website.",
"Footnote 7: https://github.com/harvardnlp/sent-summary; footnote 8: https://github.com/ryankiros/neural-storyteller. Fine-Tuned We first train the NHG model as mentioned above, then further fine-tune it on the target style corpus via DAE training.",
"Multitask We share all parameters between $E_S$ and $E_T$, and between $G_S$ and $G_T$, and train the model on both the summarization and DAE tasks.",
"The model architecture is the same as NHG.",
"To evaluate the performance of the proposed TitleStylist in generating attractive headlines with styles, we propose a comprehensive twofold strategy of both automatic evaluation and human evaluation.",
"We randomly sampled 50 news abstracts from the test set and asked three native-speaker annotators to score the generated headlines.",
"Specifically, we conduct two tasks to evaluate on four criteria: (1) relevance, (2) attractiveness, (3) language fluency, and (4) style strength.",
"For the first task, the human raters are asked to evaluate these outputs on the first three aspects, relevance, attractiveness, and language fluency on a Likert scale from 1 to 10 (integer values).",
"For relevance , human annotators are asked to evaluate how semantically relevant the headline is to the news body.",
"For attractiveness , annotators are asked how attractive the headlines are.",
"For fluency , we ask the annotators to evaluate how fluent and readable the text is.",
"After the collection of human evaluation results, we averaged the scores as the final score.",
"In addition, we have another independent human evaluation task for the style strength: we present the generated headlines from TitleStylist and the baselines to human judges and let them choose the one that most conforms to the target style, such as humor.",
"Then we define the style strength score as the proportion of choices.",
"Apart from the comprehensive human evaluation, we use automatic evaluation to measure the generation quality through two conventional aspects: summarization quality and language fluency.",
"Note that the purpose of this two-way automatic evaluation is to confirm that the performance of our model is in an acceptable range.",
"Good automatic evaluation performance is a necessary proof point to complement human evaluations of model effectiveness.",
"Summarization Quality We use the standard automatic evaluation metrics for summarization with the original headlines as the reference: BLEU (Papineni et al., 2002), METEOR (Denkowski and Lavie, 2014), ROUGE (Lin, 2004) and CIDEr (Vedantam et al., 2015).",
"For ROUGE, we used the Files2ROUGE toolkit (footnote 9), and for the other metrics, we used the pycocoeval toolkit (footnote 10).",
"Language Fluency We fine-tuned the GPT-2 medium model (Radford et al., 2019) on our collected headlines and then used it to measure the perplexity (PPL) on the generated outputs (footnote 11).",
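The fluency metric boils down to corpus perplexity, PPL = exp(-(1/N) Σ log p(token)); a minimal sketch, assuming the per-token log-probabilities have already been obtained from the fine-tuned GPT-2 model (the numbers in any call below would be illustrative):

```python
import math

# Sketch of perplexity from token log-probabilities: lower PPL means the
# fluency model finds the text more predictable, i.e. more fluent.
def perplexity(token_log_probs):
    n = len(token_log_probs)
    return math.exp(-sum(token_log_probs) / n)
```

For example, if every token had probability 0.5, the perplexity would be exactly 2.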
"4.4 Experimental Details We used the fairseq code base (Ott et al., 2019).",
"During training, we use the Adam optimizer with an initial learning rate of $5 \\times 10^{-4}$, and the batch size is set as 3072 tokens for each GPU with the parameter update frequency set as",
"4. For the random corruption for DAE training, we follow the standard practice of randomly deleting or blanking a word with a uniform probability of 0.",
"2, and randomly shuffling the word order within 5 tokens.",
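The DAE noising above can be sketched as follows. This is an illustrative interpretation, assuming "shuffle within 5 tokens" means each token may only move a few positions (implemented via noisy sort keys, a common convention); the function name and the deletion/blanking split are assumptions, not the authors' code.

```python
import random

# Sketch of DAE corruption: randomly delete or blank words with total
# probability p, then locally shuffle the surviving tokens within a window k.
def corrupt(tokens, p=0.2, k=5, seed=0):
    rng = random.Random(seed)
    kept = []
    for t in tokens:
        r = rng.random()
        if r < p / 2:
            continue                              # delete the word
        kept.append("<blank>" if r < p else t)    # blank the word
    # Each token may move at most k-1 positions under the noisy sort key.
    order = sorted(range(len(kept)), key=lambda i: i + rng.uniform(0, k))
    return [kept[i] for i in order]
```

With `p=0` and `k=1` the function is the identity, which makes the noise level easy to tune and test.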
"All datasets are lower-cased.",
"$\\lambda$ is set as 0.5 in our experiments.",
"For each iteration of training, we randomly draw a batch of data either from the source dataset or from the target style corpus; the probability of drawing from the source dataset is equal to $\\lambda$.",
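The batch-scheduling rule above can be sketched in a few lines; `pick_task` is a hypothetical helper name, and the weighting value mirrors the 0.5 setting mentioned in the text:

```python
import random

# Sketch of the per-iteration task sampler: draw a summarization batch with
# probability lam, otherwise a DAE batch from the target style corpus.
def pick_task(lam, rng):
    return "summarization" if rng.random() < lam else "dae"

rng = random.Random(0)
tasks = [pick_task(0.5, rng) for _ in range(10000)]
frac_summ = tasks.count("summarization") / len(tasks)  # ~0.5 in expectation
```

Over many iterations the two objectives therefore receive roughly equal numbers of updates.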
"The human evaluation provides a comprehensive measurement of the model performances.",
"We conduct experiments on four criteria, relevance, attraction, fluency, and style strength.",
"We summarize the human evaluation results on the first three criteria in Table 2, and the last criteria in Table",
"4. Note that in the automatic evaluation, the baselines NST, Fine-tuned, and Gigaword-MASS perform worse than the other methods (Section 5.2), so we excluded them from the human evaluation to save unnecessary work for the human raters.",
"Relevance We first look at the relevance scores in Table",
"2. It is interesting but not surprising that the pure summarization model NHG achieves the highest relevance score.",
"Footnote 9: https://github.com/pltrdy/files2rouge; footnote 10: https://github.com/Maluuba/nlg-eval; footnote 11: PPL on the development set is 42.5. Table 2 (human evaluation on relevance / attraction / fluency): None: NHG 6.21 / 8.47 / 9.31, Human 5.89 / 8.93 / 9.33; Humor: Multitask 5.51 / 8.61 / 9.11, TitleStylist 5.87 / 8.93 / 9.29; Romance: Multitask 5.67 / 8.54 / 8.91, TitleStylist 5.86 / 8.87 / 9.14; Clickbait: Multitask 5.67 / 8.71 / 9.21, TitleStylist 5.83 / 9.29 / 9.44. The outputs from NHG",
"are usually like an organic reorganization of several keywords in the source context (as shown in Table 3), thus appearing most relevant.",
"It is noteworthy that the generated headlines of our TitleStylist for all three styles are close to the original human-written headlines in terms of relevance, validating that our generation results are qualified in this aspect.",
"Another finding is that more attractive or more stylistic headlines would lose some relevance since they need to use more words outside the news body for improved creativity.",
"Attraction In terms of attraction scores in Table 2, we have three findings: (1) The human-written headlines are more attractive than those from NHG, which agrees with our observation in Section",
"1. (2) Our TitleStylist can generate more attractive headlines than the NHG and Multitask baselines for all three styles, demonstrating that adapting the model to these styles improves the attraction, and that specializing some of the model parameters for different styles can further enhance it.",
"(3) Adapting the model to the Clickbait style creates the most attractive headlines, even outweighing the original ones, which agrees with the fact that click-baity headlines are better at drawing readers' attention.",
"Notably, although we incorporated the Clickbait style into our summarization system, we still ensured that we generate relevant headlines rather than overly exaggerated ones, which can be verified by our relevance scores.",
"Fluency The human-annotated fluency scores in Table 2 verify that the headlines generated by TitleStylist are comparable or superior to the human-written headlines in terms of readability.",
"Apart from the human evaluation of the overall generation quality on four criteria, we also conducted a conventional automatic assessment to gauge only the summarization quality.",
"This evaluation does not take other measures such as the style strength into consideration, but it serves as important complementary proof that the model has an acceptable level of summarization ability.",
"Table 5 summarizes the automatic evaluation results of our proposed TitleStylist model and all baselines.",
"We use the summarization-related evaluation metrics, i.e., BLEU, ROUGE, CIDEr, and METEOR, to measure how relevant the generated headlines are to the news articles, to some extent, by comparing them to the original human-written headlines.",
"In Table 5, the first row NHG shows the performance of the current state-of-the-art summarization model on our data, and Table 3 provides two examples of its generation output.",
"Our ultimate goal is to generate more attractive headlines than these while maintaining relevance to the news body.",
"From Table 5, the baseline Gigaword-MASS scored worse than NHG, revealing that directly applying an off-the-shelf headline generation model to new in-domain data is not feasible, even though this model has been trained on a dataset more than 20 times larger.",
"Both NST and Fine-tuned baselines present very poor summarization performance, and the reason could be that both of them cast the problem into two steps: summarization and style transfer, and the latter step is absent of the summarization task, which prevents the model from maintaining its summarization capability.",
"In contrast, the Multitask baseline involves the summarization and style transfer (via reconstruction training) processes at the same time and shows superior summarization performance even compared with NHG.",
"This reveals that the unsupervised reconstruction task can indeed help improve the supervised summarization task.",
"More importantly, we use two different types of corpora for the reconstruction task: one consists of headlines that are similar to the news data for the summarization task, and the other consists of text from novels that are entirely different from the news data.",
"Example news abstract: Turkey's bitter history with Kurds is figuring prominently in its calculations over how to deal with the Bush administration's request to use Turkey as the base for thousands of combat troops if there is a war with Iraq; Recep Tayyip Erdogan, leader of Turkey's governing party, says publicly for the first time that the future of Iraq's Kurdish area, which abuts a border region of Turkey also heavily populated by Kurds, is weighing heavily on negotiations; he hints at what Turkish officials have been saying privately for weeks: if war comes to Iraq, the overriding Turkish objective would be less helping Americans topple Saddam Hussein than preventing Kurds in Iraq from forming their own state.",
"Reunified Berlin is commemorating 40th anniversary of the start of construction of Berlin wall, almost 12 years since Germans jubilantly celebrated reopening between east and west and attacked hated structure with sledgehammers; Some Germans are championing the preservation of wall at the time when little remains beyond few crumbling remnants to remind Berliners of unhappy division that many have since worked hard to heal and put behind them; What little remains of physical wall embodies era that Germans have yet to resolve for themselves; They routinely talk of 'wall in the mind' to describe social and cultural differences that continue to divide easterners and westerners.",
"However, unsupervised reconstruction training on both types of data can contribute to the summarization task, which sheds light on potential future work in summarization that incorporates unsupervised learning as augmentation.",
"In Table 5, we find that TitleStylist-F achieves the best summarization performance.",
"This implies that, compared with the Multitask baseline, where the two tasks share all parameters, specializing the layer normalization and encoder-attention parameters lets GS focus more on summarization.",
"It is noteworthy that the summarization scores for TitleStylist are lower than TitleStylist-F but still comparable to NHG.",
"This agrees with the fact that the GT branch focuses more on bringing stylistic linguistic patterns into the generated summaries, so the outputs deviate from pure summarization to some degree.",
"However, their relevance remains close to that of the baseline NHG, which is the starting point we want to improve on.",
"In the next section, we will further validate through human evaluation that these headlines are faithful to the news article.",
"We also report the perplexity (PPL) of the generated headlines to evaluate language fluency, as shown in Table 5.",
"All outputs from the baselines NHG and Multitask and from our proposed TitleStylist show PPL similar to that of the test set used in the fine-tuning stage (42.5), indicating that they are all fluent expressions for news headlines.",
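Perplexity itself is computed from per-token log-probabilities as PPL = exp(−mean log p); a minimal sketch with toy probabilities, not scores from the models above:

```python
import math

def perplexity(token_logprobs):
    """Corpus perplexity from per-token natural-log probabilities:
    PPL = exp(-mean log p(token)). Lower is more fluent under the scoring LM."""
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

# toy per-token probabilities assigned by some language model (illustrative)
logps = [math.log(p) for p in [0.2, 0.1, 0.25, 0.05, 0.15]]
print(perplexity(logps))
```

In practice the log-probabilities would come from a language model scoring each generated headline token by token.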
"We progressively expand TitleStylist to include all three target styles (humor, romance, and clickbait) to demonstrate the flexibility of our model.",
"That is, we simultaneously trained the summarization task on the headlines data and the DAE task on the three target style corpora.",
"We specialized the layer normalization and encoder-attention parameters for these four styles (fact, humor, romance, and clickbait) and shared the remaining parameters.",
"We compared this multi-style version, TitleStylist-Versatile, with the previously presented single-style counterpart, as shown in Table 6.",
"From this table, we see that the BLEU and ROUGE-L scores of TitleStylist-Versatile are comparable to those of TitleStylist for all three styles.",
"Besides, we conducted another human study to determine the better headline between the two models in terms of attraction, allowing annotators to choose both options if they deemed them equivalent.",
"The result is presented in the last column of Table 6, which shows that the attraction of TitleStylist-Versatile outputs is competitive with TitleStylist.",
"TitleStylist-Versatile thus generates multiple headlines in different styles simultaneously, which is a novel and efficient capability.",
"We have proposed a new task of Stylistic Headline Generation (SHG) to emphasize explicit control of styles in headline generation for improved attraction.",
"To this end, we presented a multitask framework to induce styles into summarization and proposed a parameter-sharing scheme to enhance both the summarization and stylization capabilities.",
"Through experiments, we validated that our proposed TitleStylist can generate more attractive headlines than state-of-the-art HG models.",
"We appreciate all the volunteer native speakers (Shreya Karpoor, Lisa Orii, Abhishek Mohan, Paloma Quiroga, etc.) for the human evaluation of our study,",
"and we thank the reviewers for their inspiring comments.",
"Joey Tianyi Zhou is partially supported by the Agency for Science, Technology and Research (A*STAR) under its AME Programmatic Funding Scheme (Project No. A18A1b0045)."
] | [
"abstain",
"objective",
"method",
"objective",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"method",
"method",
"method",
"abstain",
"abstain",
"objective",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"other",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"other",
"other",
"other"
] |
[
"In this paper, we explore slot tagging with only a few labeled support sentences (i.e., the few-shot setting).",
"Few-shot slot tagging faces a unique challenge compared to other few-shot classification problems, as it calls for modeling the dependencies between labels.",
"But it is hard to apply previously learned label dependencies to an unseen domain, due to the discrepancy between label sets.",
"To tackle this, we introduce a collapsed dependency transfer mechanism into the conditional random field (CRF) to transfer abstract label dependency patterns as transition scores.",
"In the few-shot setting, the emission score of CRF can be calculated as a word's similarity to the representation of each label.",
"To calculate such similarity, we propose a Label-enhanced Task-Adaptive Projection Network (L-TapNet) based on the state-of-the-art few-shot classification model TapNet, by leveraging label name semantics in representing labels.",
"Experimental results show that our model significantly outperforms the strongest few-shot learning baseline by 14.64 F1 points in the one-shot setting.",
"Slot tagging (Tur and De Mori, 2011), a key module in task-oriented dialogue systems (Young et al., 2013), is usually formulated as a sequence labeling problem (Sarikaya et al., 2016).",
"Slot tagging faces rapidly changing domains, and labeled data is usually scarce for new domains, with only a few samples available.",
"Few-shot learning techniques (Miller et al., 2000; Fei-Fei et al., 2006; Lake et al., 2015; Vinyals et al., 2016) are appealing in this scenario since they learn models that borrow prior experience from old domains and adapt to new domains quickly with only very few examples (usually one or two examples per class).",
"Previous few-shot learning studies mainly focused on classification problems, which have been widely explored with similarity-based methods (Vinyals et al., 2016; Snell et al., 2017; Sung et al., 2018; Yan et al., 2018; Yu et al., 2018).",
"The basic idea of these methods is to classify a (query) item in a new domain according to its similarity to the representation of each class.",
"The similarity function is usually learned in prior rich-resource domains and per class representation is obtained from few labeled samples (support set).",
"It is straightforward to decompose few-shot sequence labeling into a series of independent few-shot classifications and apply the similarity-based methods.",
"However, sequence labeling benefits from taking the dependencies between labels into account (Huang et al., 2015; Ma and Hovy, 2016).",
"To consider both the item similarity and label dependency, we propose to leverage the conditional random fields (Lafferty et al., 2001, CRFs) in few-shot sequence labeling (see Figure 1).",
"In this paper, we translate the emission score of CRF into the output of the similarity-based method and calculate the transition score with a specially designed transfer mechanism.",
"The few-shot scenario poses unique challenges in learning the emission and transition scores of CRF.",
"It is infeasible to learn the transitions from the few labeled data, and the label dependencies learned in source domains cannot be directly transferred due to the discrepancy between label sets.",
"To tackle the label discrepancy problem, we introduce the collapsed dependency transfer mechanism.",
"It transfers label dependency information from source domains to target domains by abstracting domain-specific labels into abstract domain-independent labels and modeling the label dependencies between these abstract labels.",
"It is also challenging to compute the emission scores (word-label similarity in our case).",
"Popular few-shot models, such as Prototypical Network (Snell et al., 2017), average the embeddings of each label's support examples as label representations, which often distribute closely in the embedding space and thus cause misclassification.",
"To remedy this, Yoon et al. (2019) propose TapNet that learns to project embedding to a space where words of different labels are well-separated.",
"We introduce this idea to slot tagging and further propose to improve label representation by leveraging the semantics of label names.",
"We argue that label names are often semantically related to slot words and can help word-label similarity modeling.",
"For example in Figure 1, word rain and label name weather are highly related.",
"To use label name semantics and achieve good separation of label representations, we propose Label-enhanced TapNet (L-TapNet), which constructs an embedding projection space using label name semantics, where label representations are well-separated and aligned with the embeddings of both label names and slot words.",
"Then we calculate similarities in the projected embedding space.",
"Also, we introduce a pair-wise embedding mechanism to represent words with domain-specific context.",
"One-shot and five-shot experiments on slot tagging and named entity recognition show that our model achieves significant improvement over the strong few-shot learning baselines.",
"Ablation tests demonstrate improvements coming from both L-TapNet and collapsed dependency transfer.",
"Further analysis of the label dependencies shows that they capture non-trivial information and outperform rule-based transitions.",
"Our contributions are summarized as follows: (1) We propose a few-shot CRF framework for slot tagging that computes the emission score as word-label similarity and estimates the transition score by transferring previously learned label dependencies.",
"(2) We introduce the collapsed dependency transfer mechanism to transfer label dependencies across domains with different label sets.",
"(3) We propose the L-TapNet to leverage the semantics of label names to enhance label representations, which helps model the word-label similarity.",
"We define a sentence x = (x_1, x_2, ..., x_n) as a sequence of words and define the label sequence of the sentence as y = (y_1, y_2, ..., y_n).",
"A domain D = {(x^(i), y^(i))}_{i=1}^{N_D} is a set of (x, y) pairs.",
"For each domain, there is a corresponding domain-specific label set L_D = {ℓ_i}_{i=1}^{N}.",
"To simplify the description, we assume that the number of labels N is the same for all domains.",
"As shown in Figure 2, few-shot models are usually first trained on a set of source domains {D_1, D_2, ...}, then directly work on another set of unseen target domains {D'_1, D'_2, ...} without fine-tuning.",
"A target domain D'_j only contains a few labeled samples, which is called the support set S = {(x^(i), y^(i))}_{i=1}^{N_S}.",
"S usually includes K examples (K-shot) for each of N labels (N-way).",
"The K-shot sequence labeling task is defined as follows: given a K-shot support set S and an input query sequence x = (x_1, x_2, ..., x_n), find x's best label sequence y*: y* = (y*_1, y*_2, ..., y*_n) = argmax_y p(y | x, S).",
"In this section, we first present an overview of the proposed CRF framework (Section 3.1).",
"Then we discuss how to compute the label transition score with collapsed dependency transfer (Section 3.2) and how to compute the emission score with L-TapNet (Section 3.3).",
"Conditional Random Field (CRF) considers both the transition score and the emission score to find the global optimal label sequence for each input.",
"Following the same idea, we build our few-shot slot tagging framework with two components: Transition Scorer and Emission Scorer.",
"We apply the linear-chain CRF to the few-shot setting by modeling the probability of a label sequence y given query sentence x and a K-shot support set S as p(y | x, S) = (1/Z) exp(TRANS(y) + λ · EMIT(y, x, S)), where Z = Σ_{y' ∈ Y} exp(TRANS(y') + λ · EMIT(y', x, S)), and TRANS(y) = Σ_{i=1}^{n} f_T(y_{i-1}, y_i) is the Transition Scorer output (Figure 2 gives overviews of training and testing, with example support sets and queries from source and target domains).",
"EMIT(y, x, S) = Σ_{i=0}^{n} f_E(y_i, x, S) is the Emission Scorer output.",
"λ is a scaling parameter that balances the weights of the two scores.",
"We take L_CRF = −log(p(y | x, S)) as the loss function and minimize it on data from the source domains.",
"After the model is trained, we employ Viterbi algorithm (Forney, 1973) to find the best label sequence for each input.",
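Decoding with the Viterbi algorithm over transition and emission scores can be sketched as follows; a generic sketch with toy scores (labels 0/1/2 stand for O/B/I, and none of the numbers come from the paper):

```python
import numpy as np

def viterbi_decode(emissions, transitions):
    """Find argmax_y of sum_i transitions[y_{i-1}, y_i] + emissions[i, y_i].
    emissions: (n_words, n_labels) scores; transitions: (n_labels, n_labels)."""
    n, k = emissions.shape
    score = emissions[0].copy()          # best score ending in each label so far
    back = np.zeros((n, k), dtype=int)   # backpointers
    for i in range(1, n):
        # cand[a, b] = best path ending in a, then transitioning to b
        cand = score[:, None] + transitions + emissions[i][None, :]
        back[i] = cand.argmax(axis=0)
        score = cand.max(axis=0)
    path = [int(score.argmax())]
    for i in range(n - 1, 0, -1):        # follow backpointers
        path.append(int(back[i][path[-1]]))
    return path[::-1]

emit = np.array([[2.0, 0.1, 0.1],   # word 1 strongly O
                 [0.1, 2.0, 0.1],   # word 2 strongly B
                 [0.1, 0.5, 1.0]])  # word 3: I wins only after a B
trans = np.array([[0.5, 0.5, -9.0],  # O -> I is illegal (large penalty)
                  [0.2, 0.2, 1.0],
                  [0.2, 0.2, 0.5]])
print(viterbi_decode(emit, trans))  # → [0, 1, 2], i.e. (O, B, I)
```

The illegal-transition penalty illustrates why a transition scorer helps: per-word argmax alone cannot forbid O → I.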
"The transition scorer component captures the dependencies between labels.",
"We model the label dependency as the transition probability between two labels: f_T(y_{i-1}, y_i) = p(y_i | y_{i-1}).",
"Conventionally, such probabilities are learned from training data and stored in a transition matrix T of size N × N, where N is the number of labels.",
"For example, T_{B-loc, B-team} corresponds to p(B-loc | B-team).",
"But in the few-shot setting, a model faces different label sets in the source domains (train) and the target domains (test).",
"This mismatch between label sets prevents the trained transition scorer from working directly on a target domain.",
"Here, we ignore the Start and End labels for simplicity.",
"In practice, Start and End are included as two additional abstract labels.",
"We address this mismatch by modeling the transition probabilities between abstract labels.",
"Intuitively, we collapse specific labels into three abstract labels: O , B and I .",
"To distinguish whether two labels are of the same or different slot types, we model transitions from B and I to the same B (sB), a different B (dB), the same I (sI), and a different I (dI).",
"We record such abstract label transitions in a 3 × 5 table T̃ (see Figure 3).",
"For example, T̃_{B,sB} = p(B-ℓ_m | B-ℓ_m) is the transition probability between two B labels of the same type.",
"And T̃_{B,dI} = p(I-ℓ_n | B-ℓ_m) is the transition probability from a B label to an I label of a different type, where ℓ_m ≠ ℓ_n.",
"T̃_{O,sB} and T̃_{O,sI} respectively stand for the probabilities of transitioning from O to any B or any I label.",
"To calculate the label transition probabilities for a new domain, we construct the transition matrix T by filling it with values from the collapsed table T̃.",
"Figure 3 shows the filling process, where positions in the same color are filled with the same values.",
"For example, we fill T_{B-loc, B-team} with the value in T̃_{B,dB}.",
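The matrix-filling step above can be sketched as follows; the collapsed-table probabilities and the slot types are illustrative assumptions, not the transition values learned in the paper:

```python
import numpy as np

# Collapsed transition table: rows = previous abstract label (O, B, I);
# keys = O, same-B, different-B, same-I, different-I (toy numbers, assumed)
COLLAPSED = {
    "O": {"O": 0.6, "sB": 0.2, "dB": 0.0, "sI": 0.2, "dI": 0.0},
    "B": {"O": 0.3, "sB": 0.0, "dB": 0.2, "sI": 0.5, "dI": 0.0},
    "I": {"O": 0.4, "sB": 0.0, "dB": 0.2, "sI": 0.3, "dI": 0.1},
}

def expand(collapsed, slot_types):
    """Fill a full transition matrix for a new domain's label set
    (O plus B-t / I-t for every slot type t) from the collapsed table."""
    labels = ["O"] + [f"{p}-{t}" for t in slot_types for p in ("B", "I")]
    T = np.zeros((len(labels), len(labels)))
    for i, prev in enumerate(labels):
        for j, nxt in enumerate(labels):
            p_abs, n_abs = prev.split("-")[0], nxt.split("-")[0]
            if n_abs == "O":
                key = "O"
            elif p_abs == "O":
                key = "s" + n_abs                     # O -> any B / any I
            else:
                same = prev.split("-")[1] == nxt.split("-")[1]
                key = ("s" if same else "d") + n_abs  # same vs different type
            T[i, j] = collapsed[p_abs][key]
    return labels, T

labels, T = expand(COLLAPSED, ["loc", "team"])
# T[B-loc -> I-loc] takes the sI entry; T[B-loc -> B-team] takes the dB entry
```

Because only the three abstract rows are learned, the same table can populate a transition matrix for any unseen label set.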
"As shown in Figure 4, the emission scorer independently assigns each word an emission score with regard to each label.",
"In the few-shot setting, a word's emission score is calculated according to its similarity to the representation of each label.",
"To compute such emission, we propose the L-TapNet by improving TapNet (Yoon et al., 2019) with label semantics and prototypes.",
"TapNet is the state-of-the-art few-shot image classification model.",
"Previous few-shot models, such as the Prototypical Network, average the embeddings of each label's support examples as label representations and directly compute word-label similarity in the word embedding space.",
"Different from them, TapNet calculates word-label similarity in a projected embedding space, where the words of different labels are well-separated.",
"That allows TapNet to reduce misclassification.",
"To achieve this, TapNet leverages a set of per-label reference vectors Φ = [φ_1; …; φ_N] as label representations",
"and constructs a projection space based on these references.",
"Then, a word x's emission score for label ℓ_j is calculated as its similarity to the reference φ_j: f_E(y_j, x, S) = Softmax{SIM(M(E(x)), M(φ_j))}, where M is a projection function, E is an embedder, and SIM is a similarity function.",
"TapNet shares the references across different domains and constructs M for each specific domain by randomly associating the references to the specific labels.",
"Task-Adaptive Projection Space Construction Here, we present a brief introduction for the construction of projection space.",
"Let c_j be the average of the embedded features of the words with label ℓ_j in the support set S.",
"Given Φ = [φ_1; …; φ_N] and the support set S, TapNet constructs the projector M such that (1) each c_j and the corresponding reference vector φ_j align closely when projected by M,",
"and (2) words of different labels are well-separated when projected by M.",
"To achieve these, TapNet first computes the alignment bias between c_j and φ_j in the original embedding space, then finds a projection M that eliminates this alignment bias and effectively separates different labels at the same time.",
"Specifically, TapNet takes the matrix solution of a linear error nulling process as the embedding projector M .",
"For the detailed process, refer to the original paper.",
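The alignment part of linear error nulling can be sketched with an SVD-based null-space projection; a simplified illustration with random stand-in vectors (the paper's construction additionally enforces separation between labels):

```python
import numpy as np

rng = np.random.default_rng(0)
D, N = 16, 5                               # embedding dim, number of labels
phi = rng.normal(size=(N, D))              # per-label reference vectors
c = phi + 0.3 * rng.normal(size=(N, D))    # prototypes, roughly aligned already

def unit(v):
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

# alignment errors between (normalized) references and prototypes
err = unit(phi) - unit(c)                  # (N, D)
# M projects onto the null space of the errors, so projected phi_j and c_j align
_, _, vt = np.linalg.svd(err)              # full SVD: vt is (D, D)
M = vt[N:]                                 # (D - N, D): basis of the null space

proj_gap = np.linalg.norm(unit(phi) @ M.T - unit(c) @ M.T)
print(proj_gap)  # close to zero: the alignment errors are nulled after projection
```

Any direction in which a prototype disagrees with its reference lies in the row space of `err`, so projecting onto its orthogonal complement makes the two coincide exactly.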
"As mentioned in the introduction, we argue that label names often semantically relate to slot words and can help word-label similarity modeling.",
"To enhance TapNet with such information, we use label semantics in both label representation and construction of projection space.",
"Projection Space with Label Semantics: Let the prototype c_j be the average of the embeddings of the words with label ℓ_j in the support set.",
"And s_j is the semantic representation of label ℓ_j; Section 3.3.3 will introduce how to obtain it in detail.",
"Intuitively, slot values (c_j) and the corresponding label name (s_j) often have related semantics and should be close in the embedding space.",
"(Figure 4 shows the Emission Scorer with L-TapNet.)",
"So, we find a projector M that aligns c_j to both φ_j and s_j.",
"The difference from TapNet is that TapNet only aligns c_j to the references φ_j, whereas we also require alignment with the label representations.",
"The label-enhanced reference is calculated as ψ_j = (1 − α) φ_j + α s_j, where α is a balance factor.",
"The label semantics s_j make M specific to each domain.",
"And the references φ_j provide cross-domain generalization.",
"Then we construct M by linear error nulling of the alignment error between the label-enhanced references ψ_j and the prototypes c_j, following the same steps as TapNet.",
"Emission Score with Label Semantics: For the emission score calculation, compared to TapNet, which only uses the domain-agnostic references as label representations, we also consider the label semantics and use the label-enhanced reference ψ_j in the label representation.",
"Besides, we further incorporate the idea of the Prototypical Network and mix in the prototype, representing each label as ω_j = (1 − β) c_j + β ψ_j, where ψ_j is the label-enhanced reference and β is a balance factor.",
"Finally, the emission score of x is calculated as its similarity to the label representation ω_j: f_E(y_j, x, S) = Softmax{SIM(M(E(x)), M(ω_j))}, where SIM is the dot-product similarity function and E is a word embedding function that will be introduced in the next section.",
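Under the reconstructed notation above (φ_j, s_j, c_j, ψ_j, ω_j, α, β), the emission computation can be sketched as follows; all inputs are random stand-ins, and M here is just a random projection rather than one obtained by error nulling:

```python
import numpy as np

def emission_scores(x_emb, phi, s, c, M, alpha=0.5, beta=0.7):
    """Sketch of L-TapNet's emission: word-label similarity in a projected space.
    Label repr: omega_j = (1 - beta) c_j + beta * ((1 - alpha) phi_j + alpha s_j)."""
    psi = (1 - alpha) * phi + alpha * s        # label-enhanced references
    omega = (1 - beta) * c + beta * psi        # mix in the prototypes
    logits = (x_emb @ M.T) @ (omega @ M.T).T   # dot-product SIM in projected space
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)   # softmax over labels

rng = np.random.default_rng(1)
D, N, n_words, d_proj = 16, 4, 3, 8
probs = emission_scores(rng.normal(size=(n_words, D)),   # word embeddings E(x)
                        rng.normal(size=(N, D)),         # references phi
                        rng.normal(size=(N, D)),         # label semantics s
                        rng.normal(size=(N, D)),         # prototypes c
                        rng.normal(size=(d_proj, D)))    # projector M
print(probs.shape)  # (3, 4): one distribution over the 4 labels per word
```

These per-word distributions play the role of the emission scores that the CRF combines with the transition scores.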
"For the word embedding function E, we propose a pair-wise embedding mechanism.",
"As shown in Figure 5, a word tends to have a different meaning when paired with a different context.",
"To tackle this representation challenge for similarity computation, we consider the special query-support setting in few-shot learning and embed query and support words pair-wisely.",
"(Figure 5 illustrates this with the word 'blackbird', which may refer to a pet or a song depending on its context, and contrasts separate embedding with pair-wise embedding.)",
"Such pair-wise embedding can make use of domain-related context in support sentences and provide domain adaptive embeddings for the query words.",
"This will further help to model the query words' similarity to domain-specific labels.",
"To achieve this, we represent each word with self-attention over both query and support words.",
"We first copy the query sentence x N_S = |S| times and pair the copies with all support sentences.",
"Then the N_S pairs are passed to BERT (Devlin et al., 2019) to get N_S embeddings for each query word.",
"We represent each word as the average of its N_S embeddings.",
"Now, representations of query words are conditioned on domain-specific context.",
"We use BERT as it can naturally capture the relation between sentence pairs.",
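The pair-wise embedding mechanism can be sketched as follows; `toy_encode` is a deterministic placeholder standing in for a BERT sentence-pair encoder, not a real API:

```python
import numpy as np

def pairwise_embed(query, support_sents, encode):
    """Embed each query word conditioned on every support sentence, then average.
    `encode(pair_text)` must return one vector per query token; here it is a
    placeholder for a sentence-pair encoder such as BERT."""
    per_pair = [encode(query + " [SEP] " + s) for s in support_sents]  # N_S x (n, d)
    return np.mean(per_pair, axis=0)                                   # (n, d)

def toy_encode(pair_text):
    # toy "encoder": token vectors shifted by a context-dependent bias,
    # so the same query word embeds differently under different support contexts
    q, ctx = pair_text.split(" [SEP] ")
    bias = len(ctx) % 5
    return np.array([[float(len(w) + bias)] for w in q.split()])

emb = pairwise_embed("will it rain tonight",
                     ["is it strong wind outside", "will it snow next friday"],
                     toy_encode)
print(emb.shape)  # (4, 1): one averaged vector per query word
```

With a real pair encoder, the averaged representation of each query word absorbs domain-specific context from the support sentences.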
"To get the label representation s_j, we first concatenate the abstract label name (e.g., begin or inner) and the label name (e.g., weather).",
"Then, we insert a [CLS] token at the first position and input them into BERT.",
"Finally, the representation of [CLS] is used as the label semantic embedding s_j.",
"We also evaluate our method on another sequence labeling task: named entity recognition (NER).",
"Due to space limitations, we only present the detailed results for 1-shot/5-shot slot tagging, which transfers the learned knowledge from source domains (training) to an unseen target domain (testing) containing only a 1-shot/5-shot support set.",
"The results for NER are consistent, and we present them in Appendix B. For slot tagging, we exploit the Snips dataset (Coucke et al., 2018), because it contains 7 domains with different label sets and makes it easy to simulate the few-shot situation.",
"The domains are Weather (We), Music (Mu), PlayList (Pl), Book (Bo), Search Screen (Se), Restaurant (Re) and Creative Work (Cr).",
"Information about the original datasets is shown in Appendix A. To simulate the few-shot situation, we construct few-shot datasets from the original ones, where each sample is the combination of a query data point (x^q, y^q) and a corresponding K-shot support set S.",
"Table 1 shows the overview of the experiment data.",
"Few-shot Data Construction Different from the simple classification of single words, slot tagging is a structural prediction problem over the entire sentence.",
"So we construct support sets with sentences rather than single words under each tag.",
"As a result, the standard N-way K-shot few-shot definition is inapplicable to few-shot slot tagging.",
"We cannot guarantee that each label appears K times while sampling the support sentences, because different slot labels randomly co-occur in one sentence.",
"For example in Figure 1, in the 1-shot support set, label [B-weather] occurs twice to ensure all labels appear at least once.",
"So we approximately construct a K-shot support set S following two criteria: (1) all labels within the domain appear at least K times in S;",
"(2) at least one label would appear fewer than K times in S if any (x, y) pair were removed from it.",
"Algorithm 1 shows the detailed process.",
"Here, we take 1-shot slot tagging as an example to illustrate the data construction procedure.",
"For each domain, we sample 100 different 1-shot support sets.",
"Then, for each support set, we sample 20 unincluded utterances as queries (query set).",
"Each support-query-set pair forms one few-shot episode .",
"Due to the removing step, Algorithm 1 has a preference for sentences with more slots.",
"So in practice, we randomly skip the removing step with a chance of 20%.",
"Eventually, we get 100 episodes and 100 × 20 samples (1 query utterance with a support set) for each domain.",
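The two construction criteria can be sketched as a greedy build-then-prune procedure; an approximation of Algorithm 1, omitting the 20% random-skip heuristic:

```python
import random

def build_support_set(data, k, seed=0):
    """Approximate K-shot support set: (1) every label appears >= k times;
    (2) removing any sentence would break criterion (1) for some label.
    data: list of (sentence, label_list) pairs."""
    rng = random.Random(seed)
    labels = {l for _, ys in data for l in ys}
    pool = list(data)
    rng.shuffle(pool)
    support, counts = [], {l: 0 for l in labels}
    for x, ys in pool:                   # greedily cover under-represented labels
        if any(counts[l] < k for l in ys):
            support.append((x, ys))
            for l in ys:
                counts[l] += 1
    for x, ys in list(support):          # prune sentences that became redundant
        if all(counts[l] - ys.count(l) >= k for l in set(ys)):
            support.remove((x, ys))
            for l in ys:
                counts[l] -= 1
    return support

data = [("a", ["O", "B-x"]), ("b", ["O"]), ("c", ["O", "B-x"]), ("d", ["O", "I-x"])]
support = build_support_set(data, 1)
```

After the prune pass every remaining sentence is necessary, since removing it would drop some of its labels below k occurrences.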
"Evaluation To test the robustness of our framework, we cross-validate the models on different domains.",
"Each time, we pick one target domain for testing, one domain for development, and use the remaining domains as source domains for training.",
"So for slot tagging, all models are trained on 10,000 samples, and validated as well as tested on 2,000 samples respectively.",
"When testing model on a target domain, we evaluate F1 scores within each few-shot episode.",
"Then we average the 100 F1 scores from all 100 episodes as the final result, to counter the randomness from support sets.",
"All models are evaluated on same support-query-set pairs for fairness.",
"To control the nondeterminism of neural network training (Reimers and Gurevych, 2017), we report the average score over 10 random seeds.",
"Hyperparameters We use the uncased BERT-Base (Devlin et al., 2019) to calculate contextual embeddings for all models.",
"We use ADAM (Kingma and Ba, 2015) to train the models with batch size 4 and a learning rate of 1e-5.",
"For the CRF framework, we learn the scaling parameter λ during training, which is important for obtaining stable results.",
"For L-TapNet, we set the balance factors α and β to 0.5 and 0.7, respectively.",
"We fine-tune BERT with Gradual Unfreezing trick (Howard and Ruder, 2018).",
"For both the proposed and baseline models, we apply early stopping in training and fine-tuning when there is no loss decay within a fixed number of steps (for each episode, we calculate the F1 score on query samples with the conlleval script: https://www.clips.uantwerpen.be/conll2000/chunking/conlleval.txt).",
"Bi-LSTM is a bidirectional LSTM (Schuster and Paliwal, 1997) with GloVe (Pennington et al., 2014) embedding for slot tagging.",
"It is trained on the support set and tested on the query samples.",
"SimBERT is a model that predicts labels according to cosine similarity of word embedding of non-fine-tuned BERT.",
"For each word x_j, SimBERT finds its most similar word x'_k in the support set, and the label of x_j is predicted to be the label of x'_k.",
"TransferBERT is a domain transfer model with the NER setting of BERT (Devlin et al., 2019).",
"We pretrain it on the source domains and select the best model on the same dev set as for our model.",
"We deal with the label mismatch by transferring only the bottleneck features.",
"Before testing, we fine-tune it on target domain support set.",
"The learning rate is set to 1e-5 for both training and fine-tuning.",
"WarmProtoZero (WPZ) (Fritzler et al., 2019) is a few-shot sequence labeling model that regards sequence labeling as classification of every single word.",
"It pre-trains a prototypical network (Snell et al., 2017) on the source domains and utilizes it for word-level classification on target domains without further training.",
"Fritzler et al. (2019) use randomly initialized word embeddings.",
"To eliminate the influence of different embedding methods, we further implement WPZ with the pre-trained embeddings of GloVe (Pennington et al., 2014) and BERT.",
"Matching Network (MN) is similar to WPZ.",
"The only difference is that we employ the matching network (Vinyals et al., 2016) with BERT embedding for classification.",
"Results of 1-shot Setting Table 2 shows the 1-shot slot tagging results.",
"Each column respectively shows the F1 scores of taking a certain domain as target domain (test) and use others as source domain (train & dev).",
"As shown in the tables, our L-TapNet+CDT achieves the best performance.",
"It outperforms the strongest few-shot learning baseline, WPZ+BERT, by an average of 14.64 F1 points.",
"Our model significantly outperforms Bi-LSTM and TransferBERT, indicating that the amount of labeled data under the few-shot setting is too scarce for both conventional machine learning and transfer learning models.",
"Moreover, the performance of SimBERT demonstrates the superiority of metric-based methods over conventional machine learning models in the few-shot setting.",
"The original WarmProtoZero (WPZ) model suffers from the weak representation ability of its word embeddings.",
"When we enhance it with GloVe and BERT word embeddings, its performance improves significantly.",
"This shows the importance of embedding in the few-shot setting.",
"Matching Network (MN) performs poorly in both settings.",
"This is largely due to the fact that MN pays equal attention to all support words, which makes it vulnerable to the unbalanced amount of O-labels.",
"More specifically, those models that are fine-tuned on support set, such as Bi-LSTM and TransferBERT, tend to predict tags randomly.",
"Those systems can only handle cases that are easy to generalize from support examples, such as tags for proper-noun tokens (e.g., city names and times).",
"This shows that fine-tuning on extremely limited examples leads to poor generalization ability and an undertrained classifier.",
"For metric-based methods such as WPZ and MN, label prediction is much more reasonable.",
"However, these models are easily confused by similar labels, such as current location and geographic poi.",
"It indicates the necessity of well-separated label representations.",
"Also, illegal label transitions are very common, which can be well tackled by the proposed collapsed dependency transfer.",
"To eliminate unfair comparisons caused by the additional information in label names, we propose L-WPZ+CDT, which enhances the WarmProtoZero (WPZ) model with the same label name representation as L-TapNet and incorporates it into the proposed CRF framework.",
"It combines label name embedding and prototype as each label representation.",
"Its improvements over WPZ mainly come from label semantics, collapsed dependency transfer and pair-wise embedding.",
"L-TapNet+CDT outperforms L-WPZ+CDT by 4.79 F1 points, demonstrating the effectiveness of embedding projection.",
"When compared with TapNet+CDT, L-TapNet+CDT achieves an improvement of 4.54 F1 points on average, which shows that considering label semantics together with prototypes helps improve emission score calculation.",
"Results of 5-shot Setting Table 3 shows the results of the 5-shot experiments, which verify the proposed model's generalization ability in settings with more shots.",
"The results are consistent with the 1-shot setting in their general trend.",
"Ablation Test To gain a further understanding of each component of our method (L-TapNet+CDT), we conduct an ablation analysis in both the 1-shot and 5-shot settings in Table 4.",
"Each component of our method is removed in turn, including: collapsed dependency transfer, pair-wise embedding, label semantics, and prototype reference.",
"When collapsed dependency transfer is removed, we directly predict labels with the emission score, and large F1-score drops are observed in all settings.",
"This ablation demonstrates a great necessity for considering label dependency.",
"For our method without pair-wise embedding, we represent query and support sentences independently.",
"We attribute the drop to the fact that support sentences can provide domain-related context, and pair-wise embedding can leverage such context to provide domain-adaptive representations for words in query sentences.",
"This helps a lot when computing a word's similarity to domain-specific labels.",
"When we remove label semantics from L-TapNet, the model degenerates into TapNet+CDT enhanced with a prototype in the emission score.",
"The drops in the results show that considering label names provides better label representations and helps to model word-label similarity.",
"Further, we also tried to remove the inner and beginning words in label representation and observe a 0.97 F1-score drop on 1-shot SNIPS.",
"It shows that distinguishing B-I labels in label semantics can help tagging.",
"If we calculate the emission score without the prototype reference, the model loses more performance in the 5-shot setting.",
"This matches the intuition that prototypes allow the model to benefit more from an increase in support shots, as prototypes are directly derived from the support set.",
"Analysis of Collapsed Dependency Transfer While collapsed dependency transfer (CDT) brings significant improvements, two natural questions arise: whether CDT just learns simple transition rules and why it works.",
"To answer the first question, we replace CDT with transition rules in Table 5, which shows that CDT brings more improvement than transition rules.",
"To have a deeper insight into the effectiveness of CDT, we conduct an accuracy analysis of it.",
"We assess the label predicting accuracy of different types of label bi-grams.",
"The result is shown in Table 6.",
"We further summarize the bi-grams into 2 categories: Border includes the bi-grams across the border of a slot span; Inner includes the bi-grams within a slot span.",
"We argue that improvements of Inner show successful reduction of illegal label transition from CDT.",
"Interestingly, we observe that CDT also brings improvements by correctly predicting the first and last tokens of a slot span.",
"The results on Border verify our observation that CDT helps to decide the boundaries of slot spans more accurately, which is hard to achieve by adding transition rules.",
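A minimal sketch of the collapsed-transition idea, as we read it (toy scores and a hypothetical `expand` helper, not the paper's code): transition scores are stored for a small set of abstract label pairs learned on source domains, then expanded to a target domain's concrete B-/I-/O label set, so that illegal transitions such as B-city followed by I-time receive low scores regardless of the domain's label names.

```python
# Toy abstract transition scores; in the paper these are learned on source domains.
collapsed = {
    ("O", "O"): 0.5, ("O", "B"): 0.4, ("B", "sameI"): 0.6,
    ("B", "otherB"): 0.2, ("B", "O"): 0.3, ("I", "sameI"): 0.5,
    ("I", "O"): 0.3, ("O", "I"): -1.0,  # starting a span with I is illegal
}

def expand(collapsed, slots):
    """Map abstract pair scores onto the concrete label set of a target domain."""
    labels = ["O"] + [prefix + slot for slot in slots for prefix in ("B-", "I-")]
    trans = {}
    for a in labels:
        for b in labels:
            if a == "O":
                key = ("O", "O") if b == "O" else ("O", b[0])
            elif b == "O":
                key = (a[0], "O")
            elif a[2:] == b[2:] and b[0] == "I":
                key = (a[0], "sameI")        # continue the same slot span
            elif b[0] == "B":
                key = (a[0], "otherB")       # start a new span
            else:
                key = ("O", "I")             # I- of a different slot: illegal
            trans[(a, b)] = collapsed.get(key, -1.0)
    return trans

trans = expand(collapsed, ["city", "time"])
print(trans[("B-city", "I-city")], trans[("B-city", "I-time")])
```

The expansion step is what lets one learned table transfer across domains whose label sets never overlap.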
"Traditional few-shot learning methods depend highly on hand-crafted features (Fei-Fei, 2006; Fink, 2005).",
"Classical methods primarily focus on metric learning (Snell et al., 2017; Vinyals et al., 2016), which classifies an item according to its similarity to each class's representation.",
"Recent efforts (Lu et al., 2018; Schwartz et al., 2019) propose to leverage the semantics of class name to enhance class representation.",
"However, different from us, these methods focus on image classification where effects of name semantic are implicit and label dependency is not required.",
"Transition rule (footnote): we greedily predict the label for each word and block any result that conflicts with the previous label.",
"Few-shot learning in natural language processing has been explored for classification tasks, including text classification (Sun et al., 2019; Geng et al., 2019; Yan et al., 2018; Yu et al., 2018), entity relation classification (Lv et al., 2019; Gao et al., 2019; Ye and Ling, 2019), and dialog act prediction (Vlasov et al., 2018).",
"However, few-shot learning for slot tagging is less investigated.",
"Luo et al. (2018) investigated few-shot slot tagging using additional regular expressions, which makes their model not directly comparable to ours.",
"Fritzler et al. (2019) explored few-shot named entity recognition with the Prototypical Network, which has a setting similar to ours.",
"Compared to it, our model achieves better performance by considering both label dependency transfer and label name semantics.",
"Zero-shot slot tagging methods (Bapna et al., 2017; Lee and Jha, 2019; Shah et al., 2019) share a similar idea to ours in using label name semantics, but have a different setting, as few-shot methods are additionally supported by a few labeled sentences.",
"Chen et al. (2016) investigate using label name in intent detection.",
"In addition to learning directly from limited examples, another line of research on solving the data scarcity problem in NLP is data augmentation (Fader et al., 2013; Zhang et al., 2015; Liu et al., 2017).",
"For data augmentation of slot tagging, sentence generation based methods are explored to create additional labeled samples (Hou et al., 2018; Shin et al., 2019; Yoo et al., 2019).",
"In this paper, we propose a few-shot CRF model for slot tagging of task-oriented dialogue.",
"To compute transition score under few-shot setting, we propose the collapsed dependency transfer mechanism, which transfers the prior knowledge of the label dependencies across domains with different label sets.",
"We also propose L-TapNet to calculate the emission score, which improves label representation with label name semantics.",
"Experiment results validate that both the collapsed dependency transfer and L-TapNet can improve the tagging accuracy.",
"We sincerely thank Ning Wang and Jiafeng Mao for the help on both paper and experiments.",
"We are grateful for the helpful comments and suggestions from the anonymous reviewers.",
"This work was supported by the National Natural Science Foundation of China (NSFC) via grant 61976072, 61632011 and 61772153."
] | [
"objective",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"method",
"method",
"result",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"method",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"other",
"other",
"other",
"method",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"method",
"other",
"other",
"other",
"objective",
"objective",
"objective",
"abstain",
"other",
"other",
"other"
] |
[
"Abstract meaning representation (AMR) highlights the core semantic information of text in a graph structure.",
"Recently, pre-trained language models (PLMs) have advanced tasks of AMR parsing and AMR-to-text generation, respectively.",
"However, PLMs are typically pretrained on textual data, thus are sub-optimal for modeling structural knowledge.",
"To this end, we investigate graph self-supervised training to improve the structure awareness of PLMs over AMR graphs.",
"In particular, we introduce two graph auto-encoding strategies for graph-to-graph pre-training and four tasks to integrate text and graph information during pre-training.",
"We further design a unified framework to bridge the gap between pre-training and fine-tuning tasks.",
"Experiments on both AMR parsing and AMR-to-text generation show the superiority of our model.",
"To our knowledge, we are the first to consider pre-training on semantic graphs.",
"Abstract meaning representation (AMR; Banarescu et al. (2013)) is a semantic structure formalism.",
"It represents the meaning of a text in a rooted directed graph, where nodes represent basic semantic units such as entities and predicates, and edges represent their semantic relations, respectively.",
"One example is shown in Figure 1(a), with the corresponding sentence in Figure 1(b).",
"Serving as a structural representation, AMR has been shown useful for NLP tasks such as text summarization (Liu et al., 2015; Liao et al., 2018; Chen et al., 2021), machine translation (Song et al., 2019), information extraction (Huang et al., 2016; Zhang and Ji, 2021) and dialogue systems (Bai et al., 2021).",
"There are two fundamental NLP tasks concerning AMR, namely AMR parsing (Flanigan et al., 2014; Konstas et al., 2017; Lyu and Titov, 2018; Guo and Lu, 2018; Zhang et al., 2019a; Cai and Lam, 2020; Bevilacqua et al., 2021) and AMR-to-text generation (Konstas et al., 2017; Song et al., 2018; Zhu et al., 2019; Zhao et al., 2020; Bai et al., 2020; Ribeiro et al., 2021a).",
"As shown in Figure 1, the former transforms a textual input ( e.g. , a sentence) into a corresponding AMR structure, and the latter transforms an AMR input into a fluent and grammatical sentence that conveys the same meaning.",
"A common challenge to both tasks is that AMR exists in the form of a graph structure, which is difficult for neural models to learn with limited human-curated data.",
"Recently, large-scale pre-trained sequence-to-sequence (seq2seq) language models (Lewis et al., 2020; Raffel et al., 2020) have been shown useful for both tasks above.",
"The basic idea is to linearize AMR structures into a sequence form, so that both AMR parsing and AMR-to-text generation can be solved as standard seq2seq tasks, using a pre-trained language model fine-tuned on task-specific data.",
"In this way, semantic knowledge learned in self-supervised text-to-text ( t2t ) pretraining can benefit both text-to-graph ( t2g ) and graph-to-text ( g2t ) transformation.",
"Intuitively, structural knowledge from AMR can be a useful complement to semantic knowledge from text.",
"A natural question is whether a similar self-supervision strategy can be useful for AMR graphs, so that graph-to-graph ( g2g ) denoising auto-encoder training can serve as an effective addition to t2t pre-training, before a model is fine-tuned on t2g and g2t tasks.",
"We investigate this problem in this paper.",
"In particular, there are three questions of interest.",
"First, as mentioned before, is g2g pre-training complementary to t2t pre-training?",
"Second, what is the most effective way to combine t2t and g2g training?",
"Third, is silver data useful for AMR self-supervision training, and what is the most effective way of making use of such data?",
"Taking BART (Lewis et al., 2020) as the seq2seq model, we introduce two strategies for g2g pre-training and propose four tasks to combine t2t and g2g training.",
"To reduce the gap among different pre-training tasks and between pre-training and fine-tuning, we unify all pre-training and fine-tuning tasks in a general framework.",
"Experimental results on standard benchmarks show that: 1) graph pre-training achieves significant improvements over state-of-the-art systems; 2) silver data are useful for our pre-training framework; 3) our pre-training framework is a better way than fine-tuning to make use of silver data; and 4) our model is more robust than existing systems in unseen domains.",
"Our final models give the best reported results on both AMR parsing and AMR-to-text generation tasks, with a large margin of improvement over the previous best results.",
"To our knowledge, we are the first to consider graph-to-graph self-supervised training on semantic graphs.",
"We release code at https://github.com/muyeby/AMRBART .",
"AMR Parsing.",
"Early AMR parsing systems use statistical methods (Flanigan et al., 2014, 2016; Wang et al., 2015a,b).",
"With the advance in deep learning, various neural models are developed for AMR parsing.",
"Those models can be categorized into: 1) neural transition-based parsers (Ballesteros and Al-Onaizan, 2017; Liu et al., 2018; Fernandez Astudillo et al., 2020; Zhou et al., 2021); 2) sequence-to-graph parsers (Zhang et al., 2019a; Lyu et al., 2020; Cai and Lam, 2020); and 3) sequence-to-sequence parsers (Konstas et al., 2017; Peng et al., 2017, 2018; Zhang et al., 2019b; Xu et al., 2020; Bevilacqua et al., 2021).",
"Recently, pretraining techniques have significantly boosted the performance of AMR parsing.",
"For example, Lyu and Titov (2018), Zhang et al. (2019a,b) and Cai and Lam (2020) use BERT (Devlin et al., 2019) for sentence encoding; Bevilacqua et al. (2021) fine-tune BART for sequence-to-AMR generation.",
"Xu et al. (2020) pre-train a model on relevant seq2seq learning tasks (e.g., machine translation (Bahdanau et al., 2015), syntactic parsing (Zhu et al., 2013)) before fine-tuning on AMR parsing.",
"Similar to those methods, we consider using pre-trained models to improve the model capacity.",
"However, previous studies focus on fine-tuning language models trained on text data for the AMR parsing task; in contrast, we focus on integrating structural information into the pre-training.",
"In addition, our method does not require information from auxiliary tasks.",
"AMR-to-Text Generation.",
"On a coarse-grained level, we categorize existing AMR-to-text generation approaches into two main classes: Graph-to-sequence models that adopt a graph encoder to process an AMR graph and use a sequence decoder for generation (Beck et al., 2018; Damonte and Cohen, 2019; Zhu et al., 2019), and sequence-to-sequence models that linearize an AMR graph into a sequence and solve it as a seq2seq problem using randomly initialized (Konstas et al., 2017) or pretrained models (Mager et al., 2020; Ribeiro et al., 2021a; Bevilacqua et al., 2021).",
"This work follows a seq2seq manner, but we use an encoder that integrates AMR and text information.",
"The closest to our work, Ribeiro et al. (2021b) integrate AMR structures into pre-trained T5 (Raffel et al., 2020) using adapters (Houlsby et al., 2019) for AMR-to-text generation.",
"However, they do not pre-train on AMR graphs, and their method cannot solve both AMR parsing and AMR-to-text generation tasks as they require the full AMR structure as the input.",
"Graph Self-supervised Learning.",
"Kipf and Welling (2016) introduce a variational graph auto-encoder to allow self-supervised learning on graph data.",
"Hu et al. (2020a,b) propose local and global learning strategies to pre-train a graph neural network on large-scale protein ego-networks, academic graphs and recommendation data.",
"Lu et al. (2021) enhance the graph learning strategies of Hu et al. (2020b) with dual adaptations.",
"While existing work considers graph neural networks, we pre-train a seq2seq model on AMR graphs.",
"In addition, we jointly pre-train on graphs and text for graph-text correlation modeling.",
"In contrast, existing work pre-trains models on graphs in isolation from text pre-training.",
"To our knowledge, we are the first to consider AMR as a graph pre-training target.",
"We take BART (Lewis et al., 2020) as the basic seq2seq model (Section 3.1), and introduce graph pre-training strategies (Section 3.2) and a unified pre-training framework (Section 3.3) for both AMR parsing and AMR-to-text generation.",
"BART (Lewis et al., 2020) is a pre-trained denoising auto-encoder, which is implemented as a seq2seq model based on standard Transformer (Vaswani et al., 2017) architecture.",
"Typically, BART is trained to reconstruct original text based on a corrupted text generated by 5 noising functions: 1) Token Masking.",
"Tokens are randomly replaced by [mask] elements; 2) Token Deletion.",
"Tokens are randomly deleted from the input; 3) Text Infilling.",
"Text spans are randomly replaced by a single [mask] token; 4) Sentence Permutation.",
"Text is divided into segments and then shuffled; 5) Document Rotation.",
"A document is rotated to start with a random token.",
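Two of the noising functions listed above can be illustrated with toy helpers. These are hypothetical sketches operating on whitespace tokens; BART's actual corruption is applied to subword sequences inside its training pipeline.

```python
import random

def token_mask(tokens, rate, seed=0):
    """Token Masking: each token is independently replaced by [mask] with prob `rate`."""
    rng = random.Random(seed)
    return ["[mask]" if rng.random() < rate else t for t in tokens]

def text_infill(tokens, start, length):
    """Text Infilling: a span of `length` tokens is replaced by a single [mask]."""
    return tokens[:start] + ["[mask]"] + tokens[start + length:]

print(text_infill("the boy did not go".split(), 2, 2))
```

Note that infilling also hides the span's length, so the decoder must predict how many tokens to generate, which token masking does not require.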
"In the fine-tuning, BART takes a complete text as input and maps it into a task-specific output sequence.",
"We linearize an AMR graph into a sequence, so that both AMR parsing and AMR-to-text generation can be performed using a seq2seq model.",
"In addition, it allows pre-training on AMR structures using BART.",
"Following Konstas et al. (2017), we adopt the depth-first search (DFS) algorithm, which is closely related to the linearized natural language syntactic trees (Bevilacqua et al., 2021).",
"For instance, the AMR graph in Figure 1 is linearized into: ( <Z0> possible :domain ( <Z1> go :arg0 ( <Z2> boy ) ) :polarity ( <Z3> negative ) ) , where <Z0> , <Z1> and <Z2> are special tokens to handle co-referring nodes.",
"To deal with such AMR symbols, we follow previous work (Bevilacqua et al., 2021) and expand the vocabulary by adding all relations and frames.",
"In addition, to distinguish between texts and AMR graphs, we add two special tokens, <g> and < / g> , to mark the beginning and end of AMR graphs, respectively.",
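The DFS linearization described above can be sketched for the Figure 1 graph. This is a toy recursive walk over a nested-dict graph that assigns every node a fresh <Zi> token; the real linearizer additionally reuses variables for re-entrant (co-referring) nodes.

```python
def linearize(node, counter=None):
    """Depth-first linearization of a toy AMR: ( <Zi> concept :rel child ... )."""
    if counter is None:
        counter = [0]
    idx = counter[0]
    counter[0] += 1
    parts = [f"( <Z{idx}>", node["concept"]]
    for rel, child in node.get("edges", []):
        parts.append(rel)
        parts.append(linearize(child, counter))
    parts.append(")")
    return " ".join(parts)

# The Figure 1 example: "possible :domain (go :arg0 boy) :polarity negative".
amr = {"concept": "possible", "edges": [
    (":domain", {"concept": "go", "edges": [(":arg0", {"concept": "boy"})]}),
    (":polarity", {"concept": "negative"}),
]}
print(linearize(amr))
```

The output reproduces the linearization quoted in the text, which is what the seq2seq model actually reads and writes in place of the graph.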
"We introduce two self-supervised training strategies to further pre-train a BART model on AMR graphs.",
"As shown in Figure 2(a), the node/edge level denoising strategy encourages the model to capture local knowledge about nodes and edges.",
"The graph level denoising strategy (Figure 2(c)) enforces the model to predict a sub-graph, thus facilitating graph-level learning.",
"1) Node/edge level denoising.",
"We apply a noise function on AMR nodes/edges to construct a noisy input graph.",
"In particular, the noise function is implemented by masking 15% of the nodes and 15% of the edges in each graph.",
"As shown in Figure 2(a), the node [go-01] and edge [:arg0] are replaced with two [mask] tokens.",
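A sketch of the node/edge level noise function, under the assumption that nodes and edges can be told apart by surface form in the linearized string (edge labels start with ':', variables with '<'); the 15% rates from the text are plain parameters here, not the authors' code.

```python
import random

def mask_graph(tokens, node_rate=0.15, edge_rate=0.15, seed=0):
    """Mask node tokens with prob node_rate and edge tokens with prob edge_rate."""
    rng = random.Random(seed)
    out = []
    for tok in tokens:
        is_edge = tok.startswith(":")
        is_node = tok not in "()" and not is_edge and not tok.startswith("<")
        rate = edge_rate if is_edge else (node_rate if is_node else 0.0)
        out.append("[mask]" if rng.random() < rate else tok)
    return out

graph = "( <Z0> possible :domain ( <Z1> go :arg0 ( <Z2> boy ) ) )".split()
print(mask_graph(graph, node_rate=1.0, edge_rate=0.0))
```

Structural tokens (parentheses and variables) are never masked, so the corrupted sequence still delimits the graph the model must reconstruct.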
"2) Sub-graph level denoising.",
"This task aims to recover the complete graph when given part of the graph.",
"We randomly remove a sub-graph from the graph and replace it with a [mask] token (cf. Figure 2(c)).",
"The masking probability is 0.35.",
"The above standard pre-training and fine-tuning strategies are shown in Table 1(a), using <s> and <g> to differentiate text and graph information, respectively.",
"However, the model does not fully learn the interaction between textual and AMR information during pre-training.",
"To further address this issue, we consider a unified pre-training framework, which combines text and AMR sequences as input to the denoising auto-encoder.",
"In this way, dynamic masking can be carried out on the text, AMR or both ends, so that the model can learn to make use of one source of information for inferring the other.",
"This can benefit both a parser and a generation model by enforcing the learning of correspondence between text and AMR structures.",
"In addition, as shown in Table 1, there is a gap between standard pre-training and fine-tuning for AMR from/to text transduction.",
"Specifically, the input and output formats are the same in pre-training ( i.e., t2t and g2g ) but different in fine-tuning ( i.e., t2g and g2t ).",
"(Footnote) We define a sub-graph as having at least one edge and one node.",
"This gap prevents models from making the best use of pre-trained knowledge in the fine-tuning phase.",
"The unified pre-training framework can also benefit task-specific fine-tuning by eliminating the difference of input and output formats between pre-training and fine-tuning.",
"Formally, we denote the text as t = { x 1 , x 2 , ..., x n } and the linearized graph sequence as g = { g 1 , g 2 , ..., g n } .",
"t̄ and ḡ represent the noisy (masked) text and graph, respectively, and t∅ and g∅ refer to the empty text and graph, respectively.",
"As shown in Table 1(b), we unify the input format for both pre-training and fine-tuning to tg.",
"For consistency, all input sequences start with a text sequence and end with a graph sequence.",
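The unified "text then graph" input format can be sketched as a simple string template. The marker tokens <s>/</s> and <g>/</g> follow the description above, but treat the exact formatting as an assumption rather than the released code; either segment may be empty.

```python
def unified_input(text="", graph=""):
    """Build the unified input: text segment first, graph segment second.
    Empty strings model the t-empty / g-empty cases used in fine-tuning."""
    segments = ["<s>", text, "</s>", "<g>", graph, "</g>"]
    return " ".join(s for s in segments if s)

print(unified_input("a boy", "( go )"))
print(unified_input("", "( go )"))  # graph-only input, as in AMR-to-text fine-tuning
```

Because every task shares this shape, switching between pre-training and fine-tuning only changes which segment is noisy or empty, not the input format itself.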
"Joint Text and Graph Pre-training.",
"We introduce 4 auxiliary pre-training tasks to encourage information exchange between graphs and text.",
"As shown in Table 1(b), the auxiliary tasks are: 1) graph-augmented text denoising ( t̄g2t ), where an AMR graph is taken as additional input to help masked text reconstruction; 2) text-augmented graph denoising ( tḡ2g ), where the text helps masked graph reconstruction; 3) noisy-graph-augmented text denoising ( t̄ḡ2t ), where the target text is generated based on a pair of masked text and masked graph; and 4) noisy-text-augmented graph denoising ( t̄ḡ2g ), where the target graph is generated based on a pair of masked text and masked graph.",
"Dynamic masking rate.",
"Different from standard masking (Devlin et al., 2019), which uses a static masking rate, we adopt a dynamic masking rate p for the tasks t̄g2t and tḡ2g.",
"Formally, at step t, we calculate the masking probability p as an increasing function of training progress, starting from an initial masking rate of 0.1, where T denotes the total number of training steps; p increases as t grows, and as t approaches T, the pre-training tasks t̄g2t and tḡ2g become closer to the fine-tuning tasks.",
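A schedule matching this description is a linear interpolation from the initial rate toward a higher final rate. The final value used below (0.5) is a placeholder of ours, since the exact end value of the schedule is not recoverable from this excerpt.

```python
def masking_rate(t, T, init=0.1, final=0.5):
    """Linearly grow the masking rate from `init` at step 0 toward `final` at step T.
    `final`=0.5 is an assumed placeholder, not a value taken from the paper."""
    return init + (final - init) * (t / T)

print(masking_rate(0, 100), masking_rate(50, 100), masking_rate(100, 100))
```

As the rate grows, the masked segment carries less information, so the singly-masked tasks drift toward the fine-tuning tasks where one segment is empty.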
"Unified Pre-training and Fine-tuning.",
"In our unified framework, fine-tuning tasks can be viewed as having an empty text/graph in the original input, resulting in an input format of t∅g2t for AMR-to-text generation and tg∅2g for AMR parsing.",
"In this way, pre-training and fine-tuning tasks share the same input format, thus facilitating knowledge transfer from pre-training to fine-tuning.",
"To pre-train our model, we optimize the total loss ( L total ) which is calculated as:",
"L_t2t = -log P ( t | t̄, g∅ ) ; L_g2g = -log P ( g | t∅, ḡ ) ; L_t̄g2t = -log P ( t | t̄, g ) ; L_tḡ2g = -log P ( g | t, ḡ ) ; L_t̄ḡ2t = -log P ( t | t̄, ḡ ) ; L_t̄ḡ2g = -log P ( g | t̄, ḡ ) ; L_total = L_t2t + L_g2g + L_t̄g2t + L_tḡ2g + L_t̄ḡ2t + L_t̄ḡ2g (2)",
"where L_t2t and L_g2g are the standard pre-training losses on text (Section 3.1) and graph (Section 3.2), respectively.",
"L_t̄g2t, L_tḡ2g, L_t̄ḡ2t and L_t̄ḡ2g denote the joint pre-training losses (Section 3.3).",
"For fine-tuning, the training objectives are: L_amr2text = -log P ( t | t∅, g ) and L_text2amr = -log P ( g | t, g∅ ) (3), where L_amr2text and L_text2amr are the training losses of AMR-to-text generation and AMR parsing, respectively.",
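How the six pre-training losses fit one seq2seq model can be shown schematically: each task is just a different pairing of clean/noisy/empty text and graph segments with a text or graph target, and the total loss is a plain sum. `nll` below is a toy stand-in for the model's -log P(target | input), not a real scoring function.

```python
def nll(inputs, target):
    """Toy stand-in for -log P(target | inputs): here just the target's length."""
    return float(len(target))

def total_loss(t, g, t_noisy, g_noisy, EMPTY=""):
    """Sum the six pre-training objectives; each row is one (input pair, target)."""
    tasks = [
        ((t_noisy, EMPTY), t),    # t2t  : denoise text alone
        ((EMPTY, g_noisy), g),    # g2g  : denoise graph alone
        ((t_noisy, g), t),        # graph-augmented text denoising
        ((t, g_noisy), g),        # text-augmented graph denoising
        ((t_noisy, g_noisy), t),  # noisy-graph-augmented text denoising
        ((t_noisy, g_noisy), g),  # noisy-text-augmented graph denoising
    ]
    return sum(nll(inp, tgt) for inp, tgt in tasks)

print(total_loss("the boy", "( go :arg0 boy )", "the [mask]", "( [mask] :arg0 boy )"))
```

The fine-tuning objectives reuse the same machinery with one input segment empty, which is why the framework transfers so directly.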
"We evaluate the effectiveness of our model on five benchmarks and compare the results with state-of-the-art models on AMR parsing and AMR-to-text generation, respectively.",
"In addition to standard supervised training settings, we evaluate the robustness of our model in a zero-shot domain adaptation setting.",
"Table 2 shows the statistics of datasets.",
"Following Bevilacqua et al. (2021), we use the AMR2.0 (LDC2017T10) and AMR3.0 (LDC2020T02).",
"We also evaluate the model performance on New3 , The Little Prince ( TLP ) and Bio AMR ( Bio ) corpora.",
"For pre-training, we additionally use 200k silver data parsed by SPRING (Bevilacqua et al., 2021).",
"These data are randomly selected from Gigaword (LDC2011T07) corpus, which shares the same textual source with AMR data.",
"Settings.",
"We follow Bevilacqua et al. (2021) in preprocessing and post-processing AMR graphs, except for omitting the recategorization step, which does not consistently improve model performance in our preliminary experiments.",
"Our model is built on a vanilla BART.",
"The best model and hyper-parameters are selected by performance on the validation set.",
"The detailed hyper-parameters are given in Appendix A.",
"Metrics.",
"Following Bevilacqua et al. (2021), we evaluate on the AMR parsing benchmarks by using Smatch (Cai and Knight, 2013) and other fine-grained metrics.",
"Regarding AMR-to-text, we use three common natural language generation measures, including BLEU (Papineni et al., 2002), CHRF++ (Popovic, 2017) and METEOR (Banerjee and Lavie, 2005), tokenizing with the script provided with JAMR (Flanigan et al., 2014).",
"For AMR parsing , we consider following systems for comparison: 1) Lyu and Titov (2018; LyuT), a neural parser trained by jointly modeling alignments, concepts and relations; 2) Zhang et al. (2019b; Zhang+), a seq2seq approach that incrementally builds up an AMR via predicting semantic relations; 3) Zhou et al. (2020; Zhou+), an aligner-free parser enhanced by explicit dependency and latent structures; 4) Cai and Lam (2020a; CaiL), a graph-based parser that enhances incremental sequence-to-graph model with a graph-sequence iterative inference mechanism; 5) Bevilacqua et al. (2021; Bevilacqua+), a fine-tuned BART model that predicts a linearized AMR graph.",
"For AMR-to-text generation , the compared models are: 1) Zhu et al. (2019; Zhu+), a Transformer-based model that enhances self-attention with graph relations; 2) Zhang et al. (2020; Zhang+), a graph-to-sequence model which uses a dynamic graph convolutional networks for better graph modeling.",
"3) Bai et al. (2020; Bai+), a graph encoder (Zhu et al., 2019) with a structural decoder that jointly predicts the target text and the input structure; 4) Mager et al. (2020; Mager+), a fine-tuned GPT that predicts text based on a PENMAN linearized AMR graph; 5) Bevilacqua et al. (2021; Bevilacqua+), a fine-tuned BART that predicts text based on a DFS linearized AMR graph; 6) Ribeiro et al. (2021; Ribeiro+), a fine-tuned BART based on a PENMAN linearized AMR graph.",
"For a fair comparison, we leave out models based on T5 (Ribeiro et al., 2021a,b), which has about two times more parameters than BART.",
"Table 4 (impact of the two masking strategies; Smatch / BLEU): Full Model 83.6 / 45.6; w/o node/edge masking 83.4 / 45.1; w/o sub-graph masking 83.1 / 44.7.",
"Table 3 shows results on the validation set of AMR2.0 under different model settings, where we take a fine-tuned BART-based model (Bevilacqua et al., 2021) as our baseline.",
"We first study the effectiveness of pre-training only on text and graphs.",
"As shown in Table 3, both pre-training on text ( t2t ) and on graphs ( g2g ) leads to better results, and combining them gives better results on both tasks.",
"Also, adding joint pre-training tasks improves the performance.",
"In particular, tḡ2g gives a Smatch improvement of 0.7 for AMR parsing, and t̄g2t reaches a BLEU of 45.3 for AMR-to-text generation, which is 2.8 points higher than the baseline.",
"Adding t̄ḡ2g gives a Smatch of 83.2 for AMR parsing, and t̄ḡ2t improves the baseline by 1.7 BLEU points for generation.",
"By combining tḡ2g and t̄g2t, the performance increases by 0.6 and 2.5 points on AMR parsing and AMR-to-text generation, respectively.",
"A similar trend can be observed by combining t̄ḡ2g and t̄ḡ2t.",
"Finally, using all 6 pre-training tasks, our model reaches 83.6 Smatch and 45.6 BLEU, respectively.",
"We also study the impact of two graph self-supervised training strategies.",
"In particular, we evaluate the performance after removing the node/edge or the sub-graph masking task independently.",
"As shown in Table 4, the performance decreases on both AMR parsing and AMR-to-text generation tasks without the node/edge level masking strategy.",
"The performance drop is larger when removing the sub-graph masking task, with a margin of 0.5 Smatch and 0.9 BLEU, respectively.",
"Figure 3(a) compares the performance of standard pre-training ( t2t , g2g ) and fine-tuning ( t2g , g2t ) with our unified framework.",
"The unified framework gives better results than standard versions on both tasks.",
"This confirms our assumption that our unified framework is helpful for reducing the gap between pre-training and fine-tuning.",
"Besides, we find that by unifying pre-training and fine-tuning formats, our model converges faster than the baseline during fine-tuning ( cf. Appendix C.1).",
"Figure 3(b) shows the model performance with respect to different scales of silver data.",
"Even without silver data, the performance of our model is better than the baseline, indicating that graph pretraining is beneficial for downstream tasks when using various auxiliary tasks.",
"When silver data are available, the performance on both AMR parsing and AMR-to-text generation increases as the scale of silver data increases, with a margin of 2 BLEU points.",
"We also fine-tune a BART model on silver data under our unified framework (i.e., tg2t and tg2g ), and find that our dual graph and text denoising tasks are more useful ( cf. Appendix C.2 for more analysis and discussion).",
"AMR parsing .",
"Table 5 lists the result of different models on AMR2.0 and AMR3.0.",
"Among previous works, Bevilacqua+ (2021, large) achieves the best results, consistently outperforming other systems.",
"Compared with the system of Bevilacqua et al. (2021), our model obtains significantly ( p <0.01) better Smatch scores in both base and large settings on both datasets.",
"In particular, our base model outperforms the Bevilacqua+ (2021, base) by 0 .",
"9 Smatch point on AMR2.0, and our large model obtains a Smatch of 85 .",
"4 and 84 .",
"2 on AMR2.0 and AMR3.0, respectively.",
"To our knowledge, these are the best-reported results, showing the effectiveness of our method.",
"Besides, Bevilacqua+ (2021, large) s uses silver data for fine-tuning, yet does not lead to consistent improvement over Bevilacqua+ (2021, large).",
"In contrast, our large model gives 1 .",
"1 and 1 .",
"2 higher Smatch than Bevilacqua+ (2021, large) s on AMR2.0 and AMR3.0, respectively.",
"This indicates that our pre-training framework is a better way than fine-tuning to make use of silver data.",
"The main 6006 Model Smatch Unlab.",
"reason is that our models are pre-trained using a denoising auto-encoding manner, which is less sensitive to silver (or noisy) data than fine-tuning.",
"We also find that further fine-tuning our models on silver data (same with pre-training) cannot bring improvement ( cf. Appendix C.3).",
"AMR-to-text generation .",
"We report the results of different systems on AMR2.0 and AMR3.0 in Table 6, respectively.",
"With the help of BART, Ribeiro+ (2021) and Bevilacqua+ (2021, large) obtain significantly better results than previous graph-to-sequence and GPT-based models.",
"Compared with Bevilacqua+ (2021), our models ( base and large ) give significantly ( p <0.001) better results in terms of all evaluation metrics.",
"In particular, our base model achieves comparable or better performance than Bevilacqua+ (2021, large).",
"Compared with Bevilacqua+ (2021, large) s , our large model improves the performance by 3 .",
"9 and 2 .",
"7 points on AMR2.0 and AMR3.0, respectively.",
"Similar with AMR parsing, we observe that when fine-tuning our models on silver data cannot bring improvement for AMR-to-text generation task (Table 6 and Appendix C.3).",
"Zero-shot Domain Adaption.",
"We use the model trained on AMR2.0 to get predictions on out-of-domain test sets.",
"Table 7 shows the results on AMR parsing and AMR-to-text generation tasks.",
"Similar to in-domain experiments, our models achieve better results than existing methods.",
"In particular, our base model can give comparable performance than Bevilacqua+ (2021, large), and our large model obtains the best-reported results.",
"This indicates that Model BLEU CH.",
"our model is more robust to new domains, thanks to joint graph and text pre-training.",
"Regarding different domains, our method achieves bigger improvements on New3 than the other two domains.",
"This is intuitive, as pre-training strengthens the model representation power on the domain of graph pretraining data, and New3 is closer to it than other two datasets.",
"New3 (both tasks) and TLP (only AMR-to-text generation).",
"In contrast, our model gives consistent improvements on all 3 domains.",
"This can be because fine-tuning leads to catastrophic forgetting of distributional knowledge (Kirkpatrick et al., 2017).",
"Table 8 shows the effects of the graph size, graph diameter and reentrancies on the performance.",
"We split the test set of AMR2.0 into different groups and report the performance improvement over the baseline model (Bevilacqua et al., 2021).",
"All models are trained on AMR2.0.",
"We first consider graph size, which records the number of nodes in an AMR graph.",
"Our model consistently outperforms the baseline model on both tasks, with the performance gap growing on larger graphs.",
"This indicates that our system is more powerful in dealing with larger graphs.",
"The main reason is that our joint text and graph pre-training mechanism enhances the model with the ability to capture word or span level correlation between text and graph, which is helpful for dealing with long sequence and large graphs.",
"The graph depth is defined as the longest distance between the AMR node and root node.",
"A graph with deeper depth has more long-range dependencies.",
"For AMR parsing, our model gives a better Smatch than the baseline model on the first two groups of graphs, and a comparable score on graphs with a depth bigger than 6 .",
"For AMR-to-text generation, our model consistently improves over the baseline model on all graphs, and the improvements are bigger on deeper graphs.",
"This shows that our model is better for learning more complex graphs.",
"It can be that our graph masking strategies train the model to learn the relationships between a sub-graph and the remaining graph context, making it easier to understand deep graphs.",
"Reentrancy is the number of nodes that has multiple parents.",
"Reentrancies pose difficulties to both AMR parsing and AMR-to-text tasks (Damonte and Cohen, 2019; Szubert et al., 2020).",
"The more reentrancies, the harder the graph is to be understood.",
"Our method gives significantly ( p <0.01) better results on both tasks when the input graphs have less than 4 reentrancies.",
"For graphs with more than 4 reentrancies, the proposed model is 0 .",
"4 better on AMR-to-text generation task and comparable than the baseline model on AMR parsing task.",
"This means that our system has an overall better ability on learning reentrancies.",
"Table 9 presents two cases of AMR parsing, with the model outputs generated by our model and the baseline model, and the gold output given the same input sentence.",
"As shown in the first case, the baseline model omits the semantic unit hard , thus generates an incomplete AMR graph of a different meaning compared with the input sentence.",
"In contrast, our system preserves the concept hard and transfers the semantic relations correctly, thanks to the modeling of correspondence between text and graph during pre-training.",
"In the second case, the baseline output includes a cyclic sub-graph ( i.e. , ( z1 harm-01 :ARG1 z1 ) ), which is contrary to the grammar that AMRs should be acyclic .",
"Our system gives a valid AMR graph which is semantically similar with gold graph.",
"Table 10 lists two AMR graphs and model outputs of our AMR-to-text model and the baseline model.",
"In the first case, although the baseline generates a fluent sentence, it ignores the concept have-purpose-91 , resulting in that the generated sentence is of a different meaning compared with the input graph.",
"In the second AMR graph, before modifies the phrase won many championships .",
"However, in the baseline output, before is used to 6008 Text#1: It's getting hard to keep strong and keep carrying on with life.",
"system recovers all concepts and maps the modification relationship from the AMR graph to text correctly.",
"This indicates that our model generates more faithful sentences than the baseline.",
"We investigated graph pre-training as a complement to text pre-training for AMR parsing and AMR-to-text generation tasks, using a novel unified framework with dual graph and text denoising.",
"We find that graph pre-training is highly effective for both AMR parsing and AMR -to-text generation, and is a more effective way of making use of silver data compared with fine-tuning.",
"Our methods give the best results on multiple benchmarks for both tasks.",
"Yue Zhang is the corresponding author.",
"We would like to thank anonymous reviewers for their insightful comments.",
"This work is supported by the National Natural Science Foundation of China under grant No.61976180 and the Tencent AI Lab Rhino-Bird Focused Research Program."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"result",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"method",
"abstain",
"other",
"other",
"method",
"method",
"other",
"other",
"other",
"other",
"other",
"method",
"method",
"other",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"other",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"result",
"other",
"other",
"other"
] |
[
"Multi-source sequence generation (MSG) is an important kind of sequence generation tasks that takes multiple sources, including automatic post-editing, multi-source translation, multi-document summarization, etc.",
"As MSG tasks suffer from the data scarcity problem and recent pretrained models have been proven to be effective for low-resource downstream tasks, transferring pretrained sequence-to-sequence models to MSG tasks is essential.",
"Although directly finetuning pretrained models on MSG tasks and concatenating multiple sources into a single long sequence is regarded as a simple method to transfer pretrained models to MSG tasks, we conjecture that the direct finetuning method leads to catastrophic forgetting and solely relying on pretrained self-attention layers to capture cross-source information is not sufficient.",
"Therefore, we propose a two-stage finetuning method to alleviate the pretrain-finetune discrepancy and introduce a novel MSG model with a fine encoder to learn better representations in MSG tasks.",
"Experiments show that our approach achieves new state-of-the-art results on the WMT17 APE task and multi-source translation task using the WMT14 test set.",
"When adapted to document-level translation, our framework outperforms strong baselines significantly.",
"1 1 Introduction Thanks to the continuous representations widely used across text, speech, and image, neural networks that accept multiple sources as input have gained increasing attention in the community (Ive et al., 2019; Dupont and Luettin, 2000).",
"For example, multi-modal inputs that are complementary have proven to be helpful for many sequence generation tasks such as question answering (Antol et al., Corresponding author: Yang Liu 1 The source code is available at https://github.",
"2015), machine translation (Huang et al., 2016), and speech recognition (Dupont and Luettin, 2000).",
"In natural language processing, multiple textual inputs have also been shown to be valuable for sequence generation tasks such as multi-source translation (Zoph and Knight, 2016), automatic post-editing (Chatterjee et al., 2017), multi-document summarization (Haghighi and Vanderwende, 2009), system combination for NMT (Huang et al., 2020), and document-level machine translation (Wang et al., 2017).",
"We refer to this kind of tasks as multi-source sequence generation (MSG).",
"Unfortunately, MSG tasks face a severe challenge: there are no sufficient data to train MSG models.",
"For example, multi-source translation requires parallel corpora involving multiple languages, which are usually restricted in quantity and coverage.",
"Recently, as pretraining language models that take advantage of massive unlabeled data have proven to improve natural language understanding (NLU) and generation tasks substantially (Devlin et al., 2019; Liu et al., 2019; Lewis et al., 2020), a number of researchers have proposed to leverage pretrained language models to enhance MSG tasks (Correia and Martins, 2019; Lee et al., 2020; Lee, 2020).",
"For example, Correia and Martins (2019) show that pretrained autoencoding (AE) models Single-source finetuning Transfer A BCBC Transfer on labeled data for SSG Multi-source finetuning on labeled data for MSG Pretraining on unlabeled data for SSG Decoder Coarse-Enc.",
"As most recent pretrained sequence-to-sequence (Seq2Seq) models (Song et al., 2019; Lewis et al., 2020; Liu et al., 2020) have demonstrated their effectiveness in improving single-source sequence generation (SSG) tasks, we believe that pretrained Seq2Seq models can potentially bring more bene-fits to MSG than pretrained AE models.",
"Although it is easy to transfer Seq2Seq models to SSG tasks, transferring them to MSG tasks is challenging because MSG takes multiple sources as the input, leading to severe pretrain-finetune discrepancies in terms of both architectures and objectives.",
"A straightforward solution is to concatenate the representations of multiple sources as suggested by Correia and Martins (2019).",
"However, we believe this approach suffers from two major drawbacks.",
"First, due to the discrepancy between pretraining and MSG, directly transferring pretrained models to MSG tasks might lead to catastrophic forgetting (McCloskey and Cohen, 1989; Kirkpatrick et al., 2017) that results in reduced performance.",
"Second, the pretrained self-attention layers might not fully learn the representations of the concatenation of multiple sources because they do not make full use of the cross-source information.",
"Inspired by adding intermediate tasks for NLU (Pruksachatkun et al., 2020; Vu et al., 2020), we conjecture that inserting a proper intermediate task between them can alleviate the discrepancy.",
"In this paper, we propose a two-stage finetuning method named gradual finetuning .",
"Different from prior studies, our work aims to transfer pretrained Seq2Seq models to MSG (see Table 1).",
"Our approach first transfers from pretrained models to SSG and then transfers from SSG to MSG (see Figure 1).",
"Furthermore, we propose a novel MSG model with coarse and fine encoders to differentiate sources and learn better representations.",
"On top of a coarse encoder (i.e., the pretrained encoder), a fine encoder equipped with cross-attention layers (Vaswani et al., 2017) is added.",
"We refer to our approach as TRICE (a task-agnostic Transferring fRamework for multI-sourCe sEquence gener-ation), which achieves new state-of-the-art results on the WMT17 APE task and the multi-source translation task using the WMT14 test set.",
"When adapted to document-level translation, our framework outperforms strong baselines significantly.",
"Figure 1 shows an overview of our framework.",
"First, the problem statement is described in Section 2.1.",
"Second, we propose to use the gradual finetuning method (Section 2.2) to reduce the pretrain-finetune discrepancy.",
"Third, we introduce our MSG model, which consists of the coarse encoder (Sec-tion 2.3), the fine encoder (Section 2.4), and the decoder (Section 2.5).",
"As shown in Figure 1, there are three kinds of dataset: (1) the unlabeled multilingual dataset D p containing monolingual corpora in various languages, (2) the single-source parallel dataset D s involving multiple language pairs, and (3) the multi-source parallel dataset D m .",
"The general objective is to leverage these three kinds of dataset to improve multi-source sequence generation tasks.",
"Formally, let x 1: K = x 1 . . . x K be K source sentences, where x k is the k -th sentence.",
"We use x k,i to denote the i -th word in the k -th source sentence and y = y 1 . . . y J to denote the target sentence with J words.",
"The MSG model is given by P m ( y | x 1: K ; ) = J (cid:89) j =1 P ( y j | x 1: K , y <j ; ) , (1) where y j is the j -th word in the target, y <j = y 1 . . . y j 1 is a partial target sentence, P ( y j | x 1: K , y <j ; ) is a word-level generation probability, and are the parameters of the MSG model.",
"As training neural models on large-scale unlabeled datasets is time-consuming, it is a common practice to utilize pretrained models to improve downstream tasks by using transfer learning methods (Devlin et al., 2019).",
"As a result, we focus on leveraging single-source and multi-source parallel datasets to transfer pretrained Seq2Seq models to MSG tasks.",
"Curriculum learning (Bengio et al., 2009) aims to learn from examples organized in an easy-to-hard order, and intermediate tasks (Pruksachatkun et al., 2020; Vu et al., 2020) are introduced to alleviate the pretrain-finetune discrepancy for NLU.",
"Inspired by these studies, we expect that changing the training objective from pretraining to MSG gradually can reduce the difficulty of transferring pretrained models to MSG tasks.",
"Therefore, we propose a two-stage finetuning method named gradual finetuning.",
"The transferring process is divided into two stages (see Figure 1).",
"In the first stage, the SSG model is transferred from denoising autoencoding to the single-source sequence generation task, and the model architecture is kept unchanged.",
"In the second stage, an additional fine encoder (see Section 2.4) is introduced to transform the SSG model to the MSG model, and the MSG model is optimized on the multi-source parallel corpus.",
"Formally, we use p to denote the parameters of the SSG model.",
"Without loss of generality, the pretraining process can be described as follows: L p ( p ) = 1 |D p | (cid:88) z D p (cid:16) log P s ( z | z ; p ) (cid:17) , (2) p = argmin p (cid:110) L p ( p ) (cid:111) , (3) where z is a sentence that could be in many languages, z is the corrupted sentence obtained from z , P s is the probability modeled by the SSG model, and p are the learned parameters.",
"In this way, a powerful multilingual model is obtained by pretraining on the unlabeled multilingual dataset D p .",
"Then, in the first finetuning stage, let s be the parameters of the SSG model, which are initialized by p .",
"As the single-source parallel dataset D s is not always available, we can build it from the K -source parallel dataset D m .",
"Assume (cid:104) x 1: K , y (cid:105) is a training example in D m , a training example (cid:104) x , y (cid:105) in D s can be constructed by sampling one source from each K -source training example with a probability of 1 /K .",
"The first finetuning process is given by L s ( s ) = 1 |D s | (cid:88) (cid:104) x , y (cid:105)D s (cid:16) log P s ( y | x ; s ) (cid:17) , (4) s = argmin s (cid:110) L s ( s ) (cid:111) , (5) where s are the learned parameters.",
"The learned SSG model is capable of taking inputs in multiple languages.",
"In the second finetuning stage, m , the parameters of the coarse encoder, the decoder, and the embeddings, are initialized by s while are the randomly initialized parameters of the fine encoder.",
"Thus, = m are the parameters of the MSG model.",
"The second finetuning process can be described as L m ( ) = 1 |D m | (cid:88) (cid:104) x 1: K , y (cid:105)D m (cid:16) log P m ( y | x 1: K ; ) (cid:17) , (6) = argmin (cid:110) L m ( ) (cid:111) , (7) where P m is given by Eq.",
"(1).",
"As a result, the model is expected to learn from abundant unlabeled data and perform well on the MSG task.",
"In the following subsections, we will describe the MSG model architecture (see Figure 2) applied in the second finetuning stage.",
"In general, pretrained encoders are considered as strong feature extractors to learn meaningful representations (Zhu et al., 2019).",
"For this reason, Correia and Martins (2019) propose to use the pretrained multilingual encoder to encode the bilingual input pair of APE.",
"Since MSG tasks usually KVI like music .",
"have multiple sources involving different languages and pretrained multilingual Seq2Seq models like mBART (Liu et al., 2020) usually rely on special tokens (e.g., < en > ) to differentiate languages, concatenating multiple sources into a single long sentence will make the model confused about the language of the concatenated sentence (see Table 6).",
"Therefore, we propose to add additional segment embedding to differentiate sentences in different languages and encode source sentences jointly by a single pretrained multilingual encoder.",
"Formally, the input representation can be denoted by X k,i = E tok [ x k,i ] + E pos [ i ] + E seg [ k ] , (8) where X k,i is the input representation of the i th word in the k -th source sentence, and E tok , E pos , and E seg are the token, position, and seg-ment/language embedding matrices, respectively.",
"E tok and E pos are initialized by pretrained embedding matrices.",
"E seg is implemented as constant sinusoidal embeddings (Vaswani et al., 2017), which is denoted by E seg [ k ] 2 i = sin(1000 k/ 10000 2 i/d ) , where E seg [ k ] 2 i +1 is similar to E seg [ k ] 2 i and i is the dimension index while d is model dimension.",
"2 2 If the pretrained model already contains the seg-ment/language embedding matrix, then the pretrained one is used.",
"Then, the pretrained encoder is utilized to encode multiple sources: R ( i ) 1: K = FFN (cid:16) SelfAtt (cid:16) R ( i 1) 1: K (cid:17)(cid:17) , (9) where SelfAtt( ) and FFN( ) are the self-attention and feed-forward networks, respectively.",
"R ( i ) 1: K is the representation output by the i -th encoder layer, and R (0)1: K refers to X 1 . . . XK , where X k is equivalent to X k, 1 . . . X k,I k and I k is the number of tokens in the k -th source sentence.",
"However, we conjecture that indiscriminately modeling dependencies between words by the pretrained self-attention layers cannot capture cross-source information adequately.",
"To this end, we regard the pretrained encoder as the coarse encoder and introduce a novel fine encoder to learn better multi-source representations.",
"To alleviate the pretrain-finetune discrepancy, we adopt the gradual finetuning method to better transfer from single-source to multi-source.",
"In the first finetuning step, the coarse encoder is used to encode different sources individually.",
"As multiple sources are concatenated as a single source in which words interact by pretrained self-attentions, we conjecture that the cross-source information cannot be fully captured.",
"Hence, we propose to add a randomly initialized fine encoder, which consists of self-attentions, cross-attentions, and FFNs, on top of the pretrained coarse encoder to learn meaningful multi-source representations.",
"Specifically, the cross-attention sublayer is an essential part of the fine encoder because they perform fine-grained interaction between sources (see Table 5).",
"Formally, the architecture of the fine encoder can be described as follows.",
"First, the representations of multiple sources output by the coarse encoder are divided according to the boundaries of sources: R ( N c ) 1 , . . . , R ( N c ) K = Split (cid:16) R ( N c ) 1: K (cid:17) , (10) where N c is the number of the coarse encoder layers, Split( ) is the split operation.",
"Second, for each fine encoder layer, the representations are fed into a self-attention sublayer: B ( i ) k = SelfAtt (cid:16) A ( i 1) k (cid:17) , (11) where A ( i 1) k is the representation corresponding to the k -th source sentence output by the ( i 1) -th layer of the fine encoder, in other words, A (0) k = R ( N c ) k .",
"B ( i ) k is the representation output by the self-attention sublayer of the i -th layer.",
"Third, representations of source sentences interact through a cross-attention sublayer: O ( i ) \\ k = Concat (cid:16) B ( i ) 1 , . . . , B ( i ) k 1 , B ( i ) k +1 , . . . , B ( i ) K (cid:17) , (12) C ( i ) k = CrossAtt (cid:16) B ( i ) k , O ( i ) \\ k , O ( i ) \\ k (cid:17) , (13) where Concat( ) is the concatenation operation, O ( i ) \\ k is the concatenated representation except B ( i ) k , CrossAtt( Q, K, V ) is the cross-attention sublayer, C ( i ) k is the representation output by the cross-attention sublayer of the i -th layer.",
"Finally, the last sublayer is a feedforward network: A ( i ) k = FFN (cid:16) C ( i ) k (cid:17) .",
"After the N f -layer fine encoder, the representations corresponding to multiple sources are given to the decoder.",
"Given that representations of multiple sources are different from that of a single source, to better leverage representations of multiple sources, we let the",
"cross-attention sublayer take each source's representation as key/value separately and then combine the outputs by mean pooling.",
"3 Formally, the differences between our decoder and the traditional Transformer decoder are described below.",
"First, the input representations of the i -th decoder layer are fed into the self-attention sublayer to obtain G ( i ) j .",
"Second, a separated cross-attention sublayer is adopted by our framework to replace the traditional cross-attention sublayer: P ( i ) j,k = CrossAtt (cid:16) G ( i ) j , A ( N f ) k , A ( N f ) k (cid:17) , (15) H ( i ) j = MeanPooling (cid:16) P ( i ) j, 1 , . . . , P ( i ) j,K (cid:17) , (16) where A ( N f ) k is the output of the fine encoder derived by Eq.",
"(14), P ( i ) j,k is the representation corresponding to the k -th source, H ( i ) j is the combined result of the separated cross-attention sublayer, and the parameters of separated cross-attentions to leverage each source are shared.",
"Finally, a feedforward network is the last sublayer of a decoder layer.",
"In this way, the decoder in our framework can better handle representations of multiple sources.",
"We evaluated our framework on three MSG tasks: (1) automatic post-editing (APE), (2) multi-source translation, and (3) document-level translation.",
"For the APE task, following Correia and Martins (2019), we used the data from the WMT17 APE task (English-German SMT) (Chatterjee et al., 2019).",
"The dataset contains 23K dual-source examples (e.g., (cid:104) English source sentence, German translation, German post-edit (cid:105) ) for training in an extremely low -resource setting.",
"We also followed Correia and Martins (2019) to adopt pseudo data (Junczys-Dowmunt and Grundkiewicz, 2016; Ne-gri et al., 2018), which contains about 8M pseudo training examples, to evaluate our framework in a high -resource setting.",
"We adopted the dev16 for development and used test16 and test17 for testing.",
"For the multi-source translation task, following Zoph and Knight (2016), we used a subset of the WMT14 news dataset (Bojar et al., 2014), 3 There is little difference between the parallel attention combination strategy proposed by Libovick ` y et al. (2018) and our method.",
"which contains 2.4M dual-source examples (e.g., (cid:104) German source sentence, French source sentence, English translation (cid:105) ) for training, 3,000 from test13 for development, and 1,503 from test14 for testing.",
"4 It can be seen as a medium -resource setting.",
"For the document-level translation task, we used the dataset provided by Maruf et al. (2019) from IWSLT2017 (TED) and News Commentary (News), both including about 200K English-German training examples, which can be seen as low -resource settings.",
"For IWSLT2017, test16 and test17 were combined as the test set, and the rest served as the development set.",
"For News Commentary, test15 and test16 in WMT16 were used for development and testing, respectively.",
"We took the nearest preceding sentence as the context, and then constructed the dual-source example like (cid:104) German context, German current sentence, English translation (cid:105) .",
"We adopted mBART (Liu et al., 2020) as the pretrained Seq2Seq model.",
"We set both N c and N d to 12, and N f to",
"1. The model dimension, the filter size, and the number of heads are the same as mBART.",
"We adopted the vocabulary of mBART, which contains 250K tokens.",
"We used minibatch sizes of 256, 1,024, 4,096, and 16,384 tokens for extremely low-, low-, medium, and highresource settings, respectively.",
"We used the development set to tune the hyper-parameters and select the best model.",
"In inference, the beamsize was set to",
"4. Please refer to Appendix A.1 for more details.",
"We used case-sensitive BLEU ( multi-bleu.perl ) and TER for automatic post-editing.",
"For multi-source translation and document-level translation, SACREBLEU 5 (Post, 2018) and METEOR 6 was adopted for evaluation.",
"We used the paired bootstrap resampling (Koehn, 2004) for statistical significance tests.",
"Table 2 shows the results on the automatic post-editing task.",
"Our framework outperforms previous methods without pretraining (i.e., FORCEDATT , 4 A dual-source example can be obtained by matching two single-source examples.",
"DUALTRANS , and L2C OPY ) by a large margin and surpasses strong baselines with pretraining (i.e., DUALBERT and DUALBART ), which concatenate multiple sources into a single source, significantly in both extremely low and high -resource settings.",
"Notably, the performances of our framework in the extremely low -resource setting are comparable to results of strong baselines without pretraining in the high -resource setting and we achieve new state-of-the-art results on this benchmark.",
"Table 3 demonstrates the results on the multi-source translation task.",
"Our framework substantially outperforms both baselines without pretraining (i.e., MULTIRNN and DUALTRANS ) and with pretraining (i.e., single-source model MBARTTRANS and dual-source model DUALBART ).",
"Surprisingly, the single-source models with pretraining are inferior to the multi-source model without pretraining, which indicates that multiple sources play an important role in the translation task.",
"Table 4 shows the results on the document-level translation task.",
"Our framework achieves signifi-cant improvements over all strong baselines.",
"Unusually, the previous method for handling multiple sources (i.e., DUALBART ) fails to consistently outperform simple sentenceand document-level Transformer (i.e., MBART-TRANS and MBARTDOCTRANS ) while our framework outperforms these strong baselines significantly.",
"In general, our framework shows a strong generalizability across three different MSG tasks and four different data scales, which indicates that it is useful to alleviate the pretrain-finetune discrepancy by gradual finetuning and learn multi-source representations by fully capturing cross-source information.",
"In this subsection, we further conduct studies regarding the variants of the fine encoder, ablations of the other proposed components, and effect of freezing parameters.",
"Experiments are conducted on the APE task in the extremely low-resource setting.",
"The BLEU scores calculated on the development set are adopted as the evaluation metric.",
"Comparisons with the variants of the fine encoder.",
"Table 5 demonstrates comparisons with the variants of the fine encoder.",
"We find that the fine encoder (see Section 2.4) is effective (compared to None ), the cross-attention sublayer is important (compared to the one without cross-attention), and our approach outperforms FFN adapter, which is proposed by Zhu et al. (2019) to incorporate BERT into sequence generation tasks by inserting FFNs into each encoder layer.",
"We find that stacking more fine encoder layers actually harms performance (see the last three rows in Table 5), which rules out the possibility that the improvements are merely due to the increased number of parameters.",
"Ablations on the other proposed components.",
"Table 6 shows the results of the ablation study.",
"We find that the gradual finetuning method (see Section 2.2) is significantly beneficial.",
"The segment embedding and concatenated encoding lines show that concatenating multiple sources into a long sequence and adding sinusoidal segment embeddings for the coarse encoder are helpful (see Section 2.3).",
"The separated cross-attention line reveals that taking each source's representation as key/value separately and then combining the outputs is better than concatenating all the representations and performing cross-attention jointly (see Section 2.5).",
"Distinguishing between parameters initialized by pretrained models and parameters initialized randomly is essential for achieving good performance on MSG tasks.",
"We adopt an adversarial evaluation similar to Libovický et al. (2018), which replaces one source with a randomly selected sentence.",
"As shown in Table 8, both sources play important parts and the French side is more important than the German side (Randomized Fr vs. Randomized De).",
"An example in multi-source translation task is shown in Table 9.",
"The four outputs at the bottom of the table are generated by the last four models in Table 3.",
"We find that single-source models make different errors (e.g., each hospitals and travelling clinics) and multi-source models fix some errors because they take both sources into account.",
"Additionally, DUALBART still outputs the erroneous weekly, while TRICE successfully outputs weekend.",
"We believe TRICE is better than baselines because multiple sources are complementary and the fine encoder could capture finer cross-source information, which helps correct translation errors.",
"Multi-source sequence generation includes multi-source translation (Zoph and Knight, 2016), automatic post-editing (Chatterjee et al., 2017), multi-document summarization (Haghighi and Vanderwende, 2009), system combination for NMT (Huang et al., 2020), and document-level machine translation (Wang et al., 2017), among others.",
"For these tasks, researchers usually leverage multi-encoder architectures to achieve better performance (Zoph and Knight, 2016; Zhang et al., 2018; Huang et al., 2019).",
"[Table 8: Adversarial evaluation on the multi-source translation task (BLEU/METEOR). MBART-TRANS (De): normal 31.8/33.9. MBART-TRANS (Fr): normal 34.8/37.9. DUALBART: normal 40.2/38.9, randomized Fr 11.3/13.1, randomized De 24.9/26.4. TRICE: normal 41.5/39.8, randomized Fr 13.5/15.0, randomized De 23.0/23.9.] To address the data scarcity problem in MSG, some researchers generate pseudo corpora (Negri et al., 2018; Nishimura et al., 2020) to augment the corpus size, while others try to make use of pretrained autoencoding models (e.g., BERT",
"(Devlin et al., 2019) and XLM-R (Conneau et al., 2020)) to enhance specific MSG tasks (Correia and Martins, 2019; Lee et al., 2020; Lee, 2020).",
"Different from these works, we propose a task-agnostic framework to transfer pretrained Seq2Seq models to multi-source sequence generation tasks and demonstrate the generalizability of our framework.",
"In recent years, self-supervised methods have achieved remarkable success in a wide range of NLP tasks (Devlin et al., 2019; Liu et al., 2019; Conneau et al., 2020; Radford et al., 2019; Song et al., 2019; Lewis et al., 2020; Liu et al., 2020).",
"The architectures of pretrained models can be roughly divided into three categories: autoencoding (Devlin et al., 2019; Liu et al., 2019; Conneau et al., 2020), autoregressive (Radford et al., 2019), and Seq2Seq (Song et al., 2019; Raffel et al., 2020; Lewis et al., 2020; Liu et al., 2020).",
"Some researchers propose to use pretrained autoencoding models to improve sequence generation tasks (Zhu et al., 2019; Guo et al., 2020) and the APE task (Correia and Martins, 2019).",
"For pretrained Seq2Seq models, it is convenient to use them to initialize single-source sequence generation models without further modification.",
"Different from these works, we transfer pretrained Seq2Seq models to multi-source sequence generation tasks.",
"We propose a novel task-agnostic framework, TRICE, for transfer learning from single-source sequence generation (including self-supervised pretraining and supervised generation) to multi-source sequence generation.",
"With the help of the proposed gradual finetuning method and the novel MSG model equipped with coarse and fine encoders, our framework outperforms all baselines on three different MSG tasks in four different data scales, which shows the effectiveness and generalizability of our framework.",
"This work was supported by the National Key R&D Program of China (No. 2017YFB0202204) and the National Natural Science Foundation of China (Nos. 61925601 and 61772302).",
"We thank all anonymous reviewers for their valuable comments and suggestions on this work."
] | [
"abstain",
"abstain",
"method",
"objective",
"objective",
"result",
"abstain",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"objective",
"objective",
"objective",
"objective",
"abstain",
"objective",
"result",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"abstain",
"objective",
"objective",
"other",
"other"
] |
[
"Visual Question Answering (VQA) is a well-known and challenging task that requires systems to jointly reason about natural language and vision.",
"Deep learning models in various forms have been the standard for solving VQA.",
"However, some of these VQA models are better at certain types of image-question pairs than other models.",
"Ensembling VQA models intelligently to leverage their diverse expertise is, therefore, advantageous.",
"Stacking With Auxiliary Features (SWAF) is an intelligent ensembling technique which learns to combine the results of multiple models using features of the current problem as context.",
"We propose four categories of auxiliary features for ensembling for VQA.",
"Three out of the four categories of features can be inferred from an image-question pair and do not require querying the component models.",
"The fourth category of auxiliary features uses model-specific explanations.",
"In this paper, we describe how we use these various categories of auxiliary features to improve performance for VQA.",
"Using SWAF to effectively ensemble three recent systems, we obtain a new state-of-the-art.",
"Our work also highlights the advantages of explainable AI models.",
"Visual Question Answering (VQA), the task of addressing open-ended questions about images (Malinowski and Fritz, 2014; Antol et al., 2015), has attracted significant attention in recent years (Andreas et al., 2016a; Goyal et al., 2016; Agrawal et al., 2016; Teney et al., 2017).",
"Given an image and a natural language question about the image, the task is to provide an accurate natural language answer.",
"VQA requires visual and linguistic comprehension, language grounding as well as common-sense knowledge.",
"A variety of methods to address these challenges have been developed in recent years (Fukui et al., 2016; Xu and Saenko, 2016; Lu et al., 2016; Chen et al., 2015).",
"The vision component of a typical VQA system extracts visual features using a deep convolutional neural network (CNN), and the linguistic component encodes the question into a semantic vector using a recurrent neural network (RNN).",
"An answer is then generated conditioned on the visual features and the question vector.",
"Most VQA systems have a single underlying method that optimizes a specific loss function and do not leverage the advantage of using multiple diverse models.",
"One recent ensembling approach to VQA (Fukui et al., 2016) combined multiple models that use multimodal compact bilinear pooling with attention and achieved state-of-the-art accuracy on the VQA 2016 challenge.",
"However, their ensemble uses simple softmax averaging to combine outputs from multiple systems.",
"Also, their model is pre-trained on the Visual Genome dataset (Krishna et al., 2017) and they concatenate learned word embeddings with pre-trained GloVe vectors (Pennington et al., 2014).",
"Several other deep and non-deep learning approaches for solving VQA have also been proposed (Lu et al., 2016; Zhou et al., 2015; Noh et al., 2016).",
"Although these models perform fairly well on certain image-question (IQ) pairs, they fail spectacularly on certain other IQ pairs.",
"This led us to conclude that the various VQA models have learned to perform well on specific types of questions and images.",
"Therefore, there is an opportunity to combine these models intelligently so as to leverage their diverse strengths.",
"Ensembling multiple systems is a well known standard approach to improving accuracy in machine learning (Dietterich, 2000).",
"Stacking with Auxiliary Features (SWAF) (Rajani and Mooney, 2017) is a recent ensembling algorithm that learns to combine outputs of multiple systems using features of the current problem as context.",
"(Figure 1 shows a random sample of images with questions and ground-truth answers taken from the VQA dataset.)",
"In this paper, we use SWAF to more effectively combine several VQA models.",
"Traditional stacking (Wolpert, 1992) trains a supervised meta-classifier to appropriately combine multiple system outputs.",
"SWAF further enables the stacker to exploit additional relevant knowledge of both the component systems and the problem by providing auxiliary features to the meta-classifier.",
"Our approach extracts features from the IQ pair under consideration, as well as the component models and provides this information to the classifier.",
"The meta-classifier then learns to predict whether a specific generated answer is correct or not.",
"Explanations attempt to justify a system's predicted output and provide context for their decision that may also help SWAF.",
"We extract visual explanations from various deep learning models and use those as auxiliary features for SWAF.",
"Our contributions can be summarized as follows:",
"(a) developing novel auxiliary features that can be inferred from VQA questions and images;",
"(b) extracting visual explanations from several component models for each IQ pair and using those to also generate auxiliary features; and",
"(c) using SWAF to ensemble various VQA models and evaluating ablations of features while comparing our approach extensively to several individual as well as ensemble systems.",
"By effectively ensembling three leading VQA systems with SWAF, we demonstrate state-of-the-art performance.",
"VQA is the task of answering a natural language question about the content of an image by returning an appropriate word or phrase.",
"Figure 1 shows a sample of images and questions from the VQA 2016 challenge.",
"The dataset consists of images taken from the MS COCO dataset (Lin et al., 2014) with three questions and answers per image obtained through Mechanical Turk (Antol et al., 2015).",
"Table 1 summarizes the splits in the VQA dataset.",
"Several deep learning models have been developed that combine a computer vision component with a linguistic component in order to solve the VQA challenge.",
"Some of these models also use data-augmentation for pre-training.",
"We discuss the VQA models we use in Section 5.",
"Table 1: VQA dataset splits. Training: 82,783 images, 248,349 questions. Validation: 40,504 images, 121,512 questions. Test: 81,434 images, 244,302 questions.",
"Stacking With Auxiliary Features (SWAF) is an ensembling technique that combines outputs from multiple systems using their confidence scores and task-relevant features.",
"It has previously been applied effectively to information extraction (Viswanathan et al., 2015), entity linking (Rajani and Mooney, 2016) and ImageNet object detection (Rajani and Mooney, 2017).",
"To the best of our knowledge, there has been no prior work on stacking for VQA, and we are the first to show how model-specific explanations can serve as an auxiliary feature.",
"The auxiliary features that we use are motivated by an analysis of the VQA dataset and also inspired by related work, such as using a Bayesian framework to predict the form of the answer from the question (Kafle and Kanan, 2016).",
"Deep learning models have been used widely on several vision and language problems.",
"However, they frequently lack transparency and are unable to explain their decisions (Selvaraju et al., 2017).",
"On the other hand, humans can justify their decisions with natural language as well as point to the visual evidence that supports their decision.",
"There are several advantages of having AI systems that can generate explanations that support their predictions (Johns et al., 2015; Agrawal et al., 2016).",
"These advantages have motivated recent work on explainable AI systems, particularly in computer vision (Antol et al., 2015; Goyal et al., 2016; Hendricks et al., 2016; Park et al., 2016).",
"However, there has been no prior work on using explanations for ensembling multiple models or improving performance on a challenging task.",
"In this 2218 Figure 2: Ensemble Architecture using Stacking with Auxiliary Features.",
"paper, we generate visual explanations for three different VQA models and use these explanations to develop auxiliary features that aid in effectively ensembling VQA systems.",
"In stacking, a meta-classifier is learned to combine the outputs of multiple underlying systems (Wolpert, 1992).",
"The stacker learns a classification boundary based on the confidence scores provided by individual systems for each possible output.",
"However, many times the scores produced by systems are not probabilities or not well calibrated and cannot be meaningfully compared.",
"In such circumstances, it is beneficial to also have other reliable auxiliary features, as in the SWAF approach.",
"SWAF provides the meta-classifier additional information, such as features of the current problem and provenance or explanation information about the output from individual systems.",
"This allows SWAF to learn which systems do well on which types of problems and when to trust agreements between specific systems.",
"The learned meta-classifier makes a binary decision whether or not to accept a particular output.",
"Figure 2 gives an overview of the SWAF approach.",
"For stacking VQA systems, we first form unique question-answer pairs across all of the systems' outputs before passing them through the stacker.",
"If a system generates a given output, then we use its probability estimate for that output, otherwise, the confidence is considered zero.",
"If a question-answer pair is classified as correct by the stacker, and if there are other answers that are also classified as correct for the same question, the output with the highest meta-classifier confidence is chosen.",
"For questions that do not have any answer classified as correct by the stacker, we choose the answer with the lowest classifier confidence, which means it is least likely to be incorrect.",
"The reason we do this is that the online VQA scorer expects an answer for each question in the test set and penalizes the model for every unanswered question.",
"The confidence scores along with other auxiliary features form the complete set of features used by the stacker.",
"The auxiliary features are the backbone of the SWAF approach, enabling the stacker to intelligently learn to rely on systems' outputs conditioned on the supporting evidence.",
"We use a total of four different categories of auxiliary features for VQA.",
"Three of these types can be inferred directly from the image-question (IQ) pair and do not require querying the individual models.",
"For the fourth category of auxiliary features, we generate visual explanations for the component models and use these to create the explanation auxiliary features.",
"The first three categories of features are discussed below and the fourth category is discussed in the next section.",
"Antol et al. (2015) analyzed the VQA data and found that most questions fall into several types based on the first few words (e.g. questions beginning with What is..., Is there..., How many..., or Does the...).",
"Using the validation data, we discover such lexical patterns to define a set of question types.",
"The questions were tokenized and a question type was formed by adding one token at a time, up to a maximum of five, to the current substring.",
"The question What is the color of the vase? has the following types: What, What is, What is the, What is the color, What is the color of.",
"The prefixes that contain at least 500 questions were then retained as types.",
"We added a final type other for questions that do not fall into any of the predefined types, resulting in a total of 70 question types.",
"A 70-bit vector is used to encode the question type as a set of auxiliary features.",
"The original analysis of VQA answers found that they are 38% yes/no type and 12% numbers.",
"There is clearly a pattern in the VQA answers as well and we use the questions to infer some of these patterns.",
"We considered three answer types: yes/no, number, and other.",
"The answer-type auxiliary features are encoded using a one-hot vector.",
"We classify all questions beginning with Does, Is, Was, Are, and Has as yes/no.",
"Questions beginning with How many, What time, and What number are assigned the number type.",
"These inferred answer types are not exhaustive but have good coverage.",
"The intuition behind using the question and answer types as auxiliary features is that some VQA models are better than others at handling certain types of questions and/or answers.",
"Making this information available at the time of classification aids the stacker in making a better decision.",
"We also use a bag-of-words (BOW) representation of the question as auxiliary features.",
"Words that occur at least five times in the validation set were included.",
"The final sparse vector representing a question was normalized by the number of unique words in the question.",
"In this way, we are able to embed the question into a single vector.",
"Goyal et al. (2016) showed that attending to specific words in the question is important in VQA.",
"Including a BOW representation of the question as auxiliary features equips the stacker to efficiently learn which words are important and aids in classifying answers.",
"We also used deep visual features of the image as additional auxiliary features.",
"Specifically, we use the 4,096 features from the fc7 layer of VGGNet (Simonyan and Zisserman, 2015).",
"This creates an embedding of the image in a single vector which is then used by the stacker.",
"Using such image features enables the stacker to learn to rely on systems that are good at identifying answers for particular types of images.",
"Recall that the individual VQA models fuse an embedding of the image along with an embedding of the question.",
"By using the question and image embeddings at the meta-classifier level, the stacker learns to discriminate between the component models based on a deeper representation of the IQ pair.",
"Recently, there has been work on analyzing regions of an image that deep-learning models focus on when making decisions (Goyal et al., 2016; Hendricks et al., 2016; Park et al., 2016).",
"This work shows that deep-learning models attend to relevant parts of the image when making a decision.",
"For VQA, the parts of images that the models focus on can be thought of as visual explanations for answering the question.",
"We use these visual explanations to construct auxiliary features for SWAF.",
"The idea behind using explanation features is that they enable the stacker to learn to trust the agreement between systems when they also agree on the heat-map explanation by looking at the right region of the image when generating an answer.",
"We use the GradCAM algorithm (Selvaraju et al., 2017) to generate model-specific explanatory heat-maps for each IQ pair.",
"This approach generates a class-discriminative localization-map for a given model based on its respective predicted output class in the following way.",
"First, the gradient of the score y^c for the predicted class c is computed before the softmax layer with respect to the feature maps A^k of a convolutional layer.",
"Then, the gradients flowing back are global-average-pooled to obtain the neuron importance weights: w_k^c = (1/Z) * sum_i sum_j (∂y^c / ∂A^k_ij).",
"These weights capture the importance of a convolutional feature map k for the output class c, where Z is the total number of pixels in the feature map.",
"A ReLU over the weighted combination of the feature maps then yields the required localization-map for the output class: H^c = ReLU(sum_k w_k^c A^k). For each of the component VQA models, we generate this localization-map to be used as auxiliary features for ensembling.",
"Figure 3 shows a sample of IQ pairs from the VQA dataset and their respective heat-maps generated for three VQA models.",
"The localization-map generated by each VQA model serves as a visual explanation for the predicted output of that model.",
"We compare agreement between the localization-maps of the individual models to generate auxiliary features for SWAF.",
"We take the absolute gray-scale values of the localization-maps of each model and compute their mean rank-correlation with the localization-map of every other model.",
"We rank the pixels according to their spatial attention and then compute the correlation between the two ranked lists.",
"The rank correlation protocol has been used in the past to compare machine-generated and human attention-maps as described by Das et al. (2016).",
"We also experimented with using the Earth Mover's Distance (EMD) in place of the rank-order correlation metric, as discussed in Section 6.",
"We compare the localization-maps of each pair of VQA models, generating (n choose 2) explanation-agreement auxiliary features for SWAF, where n is the total number of models.",
"We use SWAF to combine three diverse VQA systems such that the final ensemble performs better than any individual component model even on questions with a low agreement.",
"The three component models are trained on the VQA training set.",
"Each of the three models is described below.",
"The LSTM model (Antol et al., 2015) was released as a benchmark for the VQA dataset.",
"A VGGNet (Simonyan and Zisserman, 2015) is used to obtain embeddings for the image, which are combined with an LSTM (Hochreiter and Schmidhuber, 1997) embedding of each question.",
"An LSTM with two hidden layers is used to obtain a 2,048-dimensional embedding of the question, followed by a fully-connected layer with tanh non-linearity that transforms the embedding to 1,024 dimensions.",
"The l2-normalized activations from the last hidden layer of VGGNet are used as a 4,096-dimensional image embedding.",
"The image embedding is first transformed to 1,024 dimensions by a fully-connected layer with tanh non-linearity to match the dimensionality of the LSTM embedding of the question.",
"The transformed image and LSTM embeddings are then fused via element-wise multiplication.",
"The idea behind the HieCoAtt model is that in addition to using visual attention to focus on where to look, it is equally important to model what words to attend to in the question (question-attention) (Lu et al., 2016).",
"This model jointly reasons about the visual and language components using co-attention.",
"Question attention is modeled using a hierarchical architecture at word, phrase, and question levels.",
"HieCoAtt uses two types of co-attention: parallel and alternating.",
"Parallel co-attention attends to the image and question simultaneously by calculating the similarity between image and question features at all pairs of image-locations and question-locations.",
"Alternating co-attention sequentially alternates between generating image and question attention by attending to the image based on the question summary vector and then attending to the question based on the attended image features.",
"The MCB model combines the vision and language vector representations using an outer product instead of the traditional approach of using concatenation or element-wise product or sum of the two vectors (Fukui et al., 2016).",
"Bilinear pooling computes the outer product between two vectors which, in contrast to the element-wise product, allows a multiplicative interaction between all elements of both vectors.",
"To overcome the challenge of high dimensionality due to the outer product, the authors adopt the idea of using Multimodal Compact Bilinear pooling (MCB) (Gao et al., 2016) to efficiently and expressively combine multimodal features.",
"The MCB model extracts representations for the image using the 152-layer Residual Network (He et al., 2016) and an LSTM (Hochreiter and Schmidhuber, 1997) embedding of the question.",
"The two vectors are pooled using MCB, and the answer is obtained by treating the problem as a multi-class classification problem with 3,000 possible classes.",
"The best MCB model is an ensemble of seven attention models and uses data-augmentation for pre-training along with pre-trained GloVe word embeddings.",
"The best MCB model won the VQA 2016 challenge by obtaining the best performance on the test set.",
"We present experimental results on the VQA challenge using the SWAF approach and compare it to various baselines, individual and ensemble VQA models, as well as ablations of our SWAF algorithm on the standard VQA test set.",
"In addition to the three data splits given in Table 1, the VQA challenge divides the test set into test-dev and test-standard.",
"Evaluation on either split requires submitting the output to the competition's online server (www.visualqa.org/challenge.html).",
"However, there are fewer restrictions on the number of submissions that can be made to test-dev compared to test-standard.",
"The test-dev set is a subset of the standard test set consisting of 60,864 randomly selected questions (25%).",
"We use the test-dev set to tune the parameters of the meta-classifier.",
"All the individual VQA models that we ensemble are trained only on the VQA training set and the SWAF meta-classifier is trained on the VQA validation set.",
"For the meta-classifier, we use an L1-regularized SVM classifier for generic stacking and for stacking with only question/answer types as auxiliary features.",
"For the question, image, and explanation features, we found that a neural network with two hidden layers works best.",
"The first hidden layer is fully connected and the second has approximately half as many neurons as the first.",
"The question and image features are high-dimensional, and a neural network classifier therefore worked well.",
"Table 2: Accuracy results on the VQA test-standard set (All / Yes-No / Number / Other). DPPNet (Noh et al., 2016): 57.36 / 80.28 / 36.92 / 42.24. iBOWIMG (Zhou et al., 2015): 55.72 / 76.55 / 35.03 / 42.62. NMNs (Andreas et al., 2016b): 58.70 / 81.20 / 37.70 / 44.00. LSTM (Antol et al., 2015): 58.20 / 80.60 / 36.50 / 43.70. HieCoAtt (Lu et al., 2016): 61.80 / 79.70 / 38.70 / 51.70. MCB single system (Fukui et al., 2016): 62.56 / 80.68 / 35.59 / 52.93. MCB ensemble (Fukui et al., 2016): 66.50 / 83.20 / 39.50 / 58.00. Voting (MCB + HieCoAtt + LSTM): 60.31 / 80.22 / 34.92 / 48.83. Stacking: 63.12 / 81.61 / 36.07 / 53.77. + Q/A type features: 65.25 / 82.01 / 36.50 / 57.15. + Question features: 65.50 / 82.26 / 38.21 / 57.35. + Image features: 65.54 / 82.28 / 38.63 / 57.32. + Explanation features: 67.26 / 82.62 / 39.50 / 58.34.",
"We found that using late fusion (Karpathy et al., 2014) to combine the auxiliary features for the neural network classifier worked slightly better.",
"We used Keras with a TensorFlow back-end (Chollet, 2015) to implement the network.",
"We compare our approach to a voting baseline that returns the answer with maximum agreement, with ties broken in the favor of systems with higher confidence scores.",
"We also compare against other state-of-the-art VQA systems not used in our ensemble: iBowIMG (Zhou et al., 2015), DPPNet (Noh et al., 2016) and the Neural Module Networks (NMNs) (Andreas et al., 2016b).",
"The iBowIMG concatenates the image features with the bag-of-word question embedding and feeds them into a softmax classifier to predict the answer, resulting in performance comparable to other models that use deep or recursive neural networks.",
"The iBowIMG beats most VQA models considered in their paper.",
"The DPPNet, on the other hand, learns a CNN with some parameters predicted from a separate parameter prediction network.",
"Their parameter prediction network uses a Gated Recurrent Unit (GRU) to generate a question representation and maps the predicted weights to a CNN via hashing.",
"The DPPNet uses external data (data-augmentation) in addition to the VQA dataset to pre-train the GRU.",
"Another well-known VQA model is the Neural Module Network (NMN) that generates a neural network on the fly for each individual image and question.",
"This is done by choosing from various sub-modules based on the question and composing them to generate the neural network,",
"e.g.,",
"the find[x] module outputs an attention map for detecting x.",
"To arrange the modules, the question is first parsed into a symbolic expression and using these expressions, modules are composed into a sequence to answer the query.",
"The whole system is trained end-to-end through backpropagation.",
"The VQA evaluation server, along with reporting accuracies on the full question set, also reports a break-down of accuracy across three answer categories.",
"The image-question (IQ) pairs are grouped into those with a yes/no answer type, those with a number answer type, and, finally, those that do not belong to either of the first two categories, which are classified as other.",
"Table 2 shows the full and category-wise accuracies.",
"All scores for the stacking models were obtained using the VQA test-standard server.",
"The table shows results for both single system and ensemble MCB models.",
"We used the single system MCB model as a component in our ensemble.",
"The ensemble MCB system, however, was the top-ranked system in the VQA 2016 challenge and it is pre-trained on the Visual Genome dataset (Krishna et al., 2017) as well as uses pre-trained GloVe vectors (Penning-ton et al., 2014).",
"On the other hand, our ensemble system does not use any external data and consists 2223 Figure 4: Results for auxiliary feature ablations on the VQA test-dev set.",
"The SWAF approach obtains a new state-of-the-art result on the VQA task.",
"The vanilla stacking approach itself beats the best individual model and adding the auxiliary features further boosts the performance.",
"Our SWAF model that uses all three sets of auxiliary features related to IQ pairs does particularly well on the more difficult other answer category, indicating that the auxiliary features provide crucial information at classification time.",
"To further analyze the SWAF results, we performed experiments with ablations of the auxiliary features.",
"Figure 4 shows the results on the test-dev set obtained when ablating each of the auxiliary feature sets.",
"We observe that deleting the Q/A type decreased performance the most and deleting the explanation features decreased performance the least.",
"This indicates that the Q/A type features are the most informative and the explanation features are the least informative for deciding the correct answer.",
"The voting baseline does not perform very well even though it is able to beat one of the component models.",
"The SWAF ablation results clearly indicate that there is an advantage to using each type of auxiliary feature.",
"Each of the auxiliary feature sets contributes to the final ensemble's performance, which is clear from Table",
"2. The voting and the vanilla stacking ensembles do not perform as well as SWAF.",
"This leads us to conclude that the performance gain is actually obtained from using the auxiliary features.",
"In particular, using explanations generated by various deep learning models as auxiliary features improved performance.",
"We observed that the localization-maps generated were fairly noisy, as is evident from Figure",
"3. Although the individual component systems agreed on an answer for many of the IQ pairs, the regions of the image they attend to varied significantly.",
"However, the rank correlation metric in the auxiliary features made the localization-maps useful for ensembling.",
"This is because, when training on the validation set, the stacker learns how to weight the auxiliary features, including those obtained using localization-maps.",
"In this way, it learns to trust only the localization-maps that are actually useful.",
"We also observed that there was a high positive correlation between the localization-maps generated by the HieCoAtt and MCB models, followed by the LSTM and MCB models, and then the LSTM and HieCoAtt models with several of the maps even negatively correlated between the last two models.",
"We also experimented with using Earth Mover's Distance (EMD) to compare heat-maps and found that it worked even better than rank-order correlation; however, it came at a cost of high computational complexity ( O ( n 3 ) vs. O ( n ) ).",
"Figure 4 shows the difference in performance obtained when explanation features calculated using either EMD or rank-order correlation are ablated from the final ensemble.",
"Clearly, using EMD to compare explanation maps has more impact on the system's accuracy.",
"Consistent with previous find-ings (Bylinskii et al., 2018), our results confirm that EMD provides a finer-grained comparison between localization maps.",
"Overall, our work shows that the utility of explanations is not limited to just developing human trust and making models more transparent.",
"Explanations can also be used to improve performance on a challenging task.",
"We have presented results for using stacking with auxiliary features (SWAF) to ensemble VQA systems.",
"We proposed four different categories of auxiliary features, three of which can be inferred from an image-question pair.",
"We showed that our model trained on these auxiliary features outperforms the individual component systems as well as other baselines to obtain a new state-of-the-art for VQA.",
"For the fourth category of features, we have proposed and evaluated the novel idea of using explanations to improve ensembling of multiple systems.",
"We demonstrated how visual explanations for VQA (represented as localization-maps) can be used to aid stacking with auxiliary 2224 features.",
"This approach effectively utilizes information on the degree to which systems agree on the explanation of their answers.",
"We showed that the combination of all of these categories of auxiliary features, including explanation, gives the best results.",
"We believe that integrating explanation with ensembling has a two-fold advantage.",
"First, as discussed in this paper, explanations can be used to improve the accuracy of an ensemble.",
"Second, explanations from the component systems could be used to build an explanation for the overall ensemble.",
"That is, by combining multiple component explanations, SWAF could also produce more comprehensible results.",
"Therefore, in the future, we would like to focus on explaining the results of an ensemble.",
"Another issue we plan to explore is using textual explanations (Park et al., 2016) for VQA.",
"We believe that the words in the question to which a system attends can also be used to improve ensembling.",
"Finally, we hope to apply our approach to additional problems beyond VQA.",
"This research was supported by the DARPA XAI program under the AFRL grant."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"result",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"method",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"objective",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"method",
"other",
"method",
"abstain",
"other",
"other",
"method",
"other",
"method",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"other",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"objective",
"objective",
"objective",
"abstain",
"result",
"method",
"objective",
"abstain",
"abstain",
"result",
"abstain",
"result",
"method",
"other"
] |
[
"Most work in relation extraction forms a prediction by looking at a short span of text within a single sentence containing a single entity pair mention.",
"This approach often does not consider interactions across mentions, requires redundant computation for each mention pair, and ignores relationships expressed across sentence boundaries.",
"These problems are exacerbated by the document(rather than sentence-) level annotation common in biological text.",
"In response, we propose a model which simultaneously predicts relationships between all mention pairs in a document.",
"We form pairwise predictions over entire paper abstracts using an ecient self-attention encoder.",
"All-pairs mention scores allow us to perform multi-instance learning by aggregating over mentions to form entity pair representations.",
"We further adapt to settings without mention-level annotation by jointly training to predict named entities and adding a corpus of weakly labeled data.",
"In experiments on two Biocreative benchmark datasets, we achieve state of the art performance on the Biocreative V Chemical Disease Relation dataset for models without external KB resources.",
"We also introduce a new dataset an order of magnitude larger than existing human-annotated biological information extraction datasets and more accurate than distantly supervised alternatives.",
"With few exceptions (Swampillai and Stevenson, 2011; Quirk and Poon, 2017; Peng et al., 2017), nearly all work in relation extraction focuses on classifying a short span of text within a single sentence containing a single entity pair mention.",
"However, relationships between entities are often expressed across sentence boundaries or otherwise require a larger context to disambiguate.",
"For example, 30% of relations in the Biocreative V CDR dataset (3.1) are expressed across sentence boundaries, such as in the following excerpt expressing a relationship between the chemical azathioprine and the disease fibrosis : Treatment of psoriasis with azathioprine .",
"Azathioprine treatment benefited 19 (66%) out of 29 patients suering from severe psoriasis.",
"Haematological complications were not troublesome and results of biochemical liver function tests remained normal.",
"Minimal cholestasis was seen in two cases and portal fibrosis of a reversible degree in eight.",
"Liver biopsies should be undertaken at regular intervals if azathioprine therapy is continued so that structural liver damage may be detected at an early and reversible stage.",
"Though the entities' mentions never occur in the same sentence, the above example expresses that the chemical entity azathioprine can cause the side eect fibrosis .",
"Relation extraction models which consider only within-sentence relation pairs cannot extract this fact without knowledge of the complicated coreference relationship between eight and azathioprine treatment , which, without features from a complicated pre-processing pipeline, cannot be learned by a model which considers entity pairs in isolation.",
"Making separate predictions for each mention pair also obstructs multi-instance learning (Riedel et al., 2010; Surdeanu et al., 2012), a technique which aggregates entity representations from mentions in order to improve robustness to noise in the data.",
"Like the majority of relation extraction data, most annotation for biological relations is distantly supervised, and so we could benefit from a model which is amenable to multi-instance learning.",
"In addition to this loss of cross-sentence and cross-mention reasoning capability, traditional mention pair relation extraction models typically introduce computational ineciencies by independently extracting features for and scoring every pair of mentions, even when those mentions occur in the same sentence and thus could share representations.",
"In the CDR training set, this requires separately encoding and classifying each of the 5,318 candidate mention pairs independently, versus encoding each of the 500 abstracts once.",
"Though abstracts 872 are longer than e.g. the text between mentions, many sentences contain multiple mentions, leading to redundant computation.",
"However, encoding long sequences in a way which eectively incorporates long-distance context can be prohibitively expensive.",
"Long Short Term Memory (LSTM) networks (Hochreiter and Schmidhuber, 1997) are among the most popular token encoders due to their capacity to learn high-quality representations of text, but their ability to leverage the fastest computing hardware is thwarted due to their computational dependence on the length of the sequence each token's representation requires as input the representation of the previous token, limiting the extent to which computation can be parallelized.",
"Convolutional neural networks (CNNs), in contrast, can be executed entirely in parallel across the sequence, but the amount of context incorporated into a single token's representation is limited by the depth of the network, and very deep networks can be dicult to learn (Hochreiter, 1998).",
"These problems are exacerbated by longer sequences, limiting the extent to which previous work explored full-abstract relation extraction.",
"To facilitate ecient full-abstract relation extraction from biological text, we propose Bi-ane Relation Attention Networks (BRANs), a combination of network architecture, multi-instance and multi-task learning designed to extract relations between entities in biological text without requiring explicit mention-level annotation.",
"We synthesize convolutions and self-attention, a modification of the Transformer encoder introduced by Vaswani et al. (2017), over sub-word tokens to eciently incorporate into token representations rich context between distant mention pairs across the entire abstract.",
"We score all pairs of mentions in parallel using a bi-ane operator, and aggregate over mention pairs using a soft approximation of the max function in order to perform multi-instance learning.",
"We jointly train the model to predict relations and entities, further improving robustness to noise and lack of gold annotation at the mention level.",
"In extensive experiments on two benchmark biological relation extraction datasets, we achieve state of the art performance for a model using no external knowledge base resources in experiments on the Biocreative V CDR dataset, and outperform comparable baselines on the Biocreative VI ChemProt dataset.",
"We also introduce a new dataset which is an order of magnitude larger than existing gold-annotated biological relation extraction datasets while covering a wider range of entity and relation types and with higher accuracy than distantly supervised datasets of the same size.",
"We provide a strong baseline on this new dataset, and encourage its use as a benchmark for future biological relation extraction systems.",
"We designed our model to eciently encode long contexts spanning multiple sentences while forming pairwise predictions without the need for mention pair-specific features.",
"To do this, our model first encodes input token embeddings using self-attention.",
"These embeddings are used to predict both entities and relations.",
"The relation extraction module converts each token to a head and tail representation.",
"These representations are used to form mention pair predictions using a bi-ane operation with respect to learned relation embeddings.",
"Finally, these mention pair predictions are pooled to form entity pair predictions, expressing whether each relation type is expressed by each relation pair.",
"Our model takes in a sequence of N token embeddings in R d .",
"Because the Transformer has no innate notion of token position, the model relies on positional embeddings which are added to the input token embeddings.",
"2 We learn the position embedding matrix P m d which contains a separate d dimensional embedding for each position, limited to m possible positions.",
"Our final input representation for token x i is: x i = s i + p i where s i is the token embedding for x i and p i is the positional embedding for the i th position.",
"If i exceeds m , we use a randomly initialized vector in place of p i .",
"We tokenize the text using byte pair encoding (BPE) (Gage, 1994; Sennrich et al., 2015).",
"The BPE algorithm constructs a vocabulary of sub-word pieces, beginning with single characters.",
"Then, the algorithm iteratively merges the most frequent co-occurring tokens into a new token, which is added to the vocabulary.",
"This procedure continues until a pre-defined vocabulary size is met.",
"BPE is well suited for biological data for the following reasons.",
"First, biological entities often have unique mentions made up of meaningful subcomponents, such as 1,2-dimethylhydrazine .",
"Additionally, tokenization of chemical entities is challenging, lacking a universally agreed upon algorithm (Krallinger et al., 2015).",
"As we demonstrate in 3.3.2, the subword representations produced by BPE allow the model to formulate better predictions, likely due to better modeling of rare and unknown words.",
"Though our final model incorporates some convolutions, we retain the position embeddings.",
"Transformer is made up of B blocks.",
"Each Transformer block, which we denote Transformer k , has its own set of parameters and is made up of two subcomponents: multi-head attention and a series of convolutions 3 .",
"The output for token i of block k , b ( k ) i , is connected to its input b ( k 1) i with a residual connection (He et al., 2016).",
"Starting with b (0) i = x i : b ( k ) i = b ( k 1) i + Transformer k ( b ( k 1) i ) 2.2.1 Multi-head Attention Multi-head attention applies self-attention multiple times over the same inputs using separately normalized parameters (attention heads) and combines the results, as an alternative to applying one pass of attention with more parameters.",
"The intuition behind this modeling decision is that dividing the attention into multiple heads make it easier for the model to learn to attend to dierent types of relevant information with each head.",
"The self-attention updates input b ( k 1) i by performing a weighted sum over all tokens in the sequence, weighted by their importance for modeling token i .",
"Each input is projected to a key k , value v , and query q , using separate ane transformations with ReLU activations (Glorot et al., 2011).",
"Here, k , v , and q are each in R dH where H is the number of heads.",
"The attention weights a ijh for head h between tokens i and j are computed using scaled dot-product attention: a ijh = (cid:18) q Tih k jh d (cid:19) o ih = X j v jh (cid:12) a ijh with (cid:12) denoting element-wise multiplication and indicating a softmax along the j th dimension.",
"The scaled attention is meant to aid optimization by flattening the softmax and better distributing the gradients (Vaswani et al., 2017).",
"The outputs of the individual attention heads are concatenated, denoted [ ; ] , into o i .",
"All layers in the network use residual connections between the output of the multi-headed attention and its input.",
"Layer normalization (Ba et al., 2016), denoted LN ( ) , is then applied to the output.",
"The second part of our Transformer block is a stack of convolutional layers.",
"The sub-network used in 3 The original Transformer uses feed-forward connections, i.e. width-1 convolutions, whereas we use convolutions with width > 1.",
"Vaswani et al. (2017) uses two width-1 convolutions.",
"We add a third middle layer with kernel width 5, which we found to perform better.",
"Many relations are expressed concisely by the immediate local context, e.g. Michele's husband Barack , or labetalol-induced hypotension .",
"Adding this explicit n-gram modeling is meant to ease the burden on the model to learn to attend to local features.",
"We use C w ( ) to denote a convolutional operator with kernel width w .",
"Then the convolutional portion of the transformer block is given by: t (0) i = ReLU( C 1 ( m i )) t (1) i = ReLU( C 5 ( t (0) i )) t (2) i = C 1 ( t (1) i ) Where the dimensions of t (0) i and t (1) i are in R 4 d and that of t (2) i is in R d .",
"We project each contextually encoded token b ( B ) i through two separate MLPs to generate two new versions of each token corresponding to whether it will serve as the first (head) or second (tail) argument of a relation:",
"where L is a d L d tensor, a learned embedding matrix for each of the L relations.",
"In subsequent sections we will assume we have transposed the dimensions of A as d d L for ease of indexing.",
"Our data is weakly labeled in that there are labels at the entity level but not the mention level, making the problem a form of strong-distant supervision (Mintz et al., 2009).",
"In distant supervision, edges in a knowledge graph are heuristically applied to sentences in an auxiliary unstructured text corpus often applying the edge label to all sentences containing the subject and object of the relation.",
"Because this process is imprecise and introduces noise into the training data, methods like multi-instance learning were introduced (Riedel et al., 2010; Surdeanu et al., 2012).",
"In multi-instance learning, rather than looking at each distantly labeled mention pair in isolation, the model is trained over the aggregate of these mentions and a single update is made.",
"More recently, the weighting function of the instances has been expressed as neural network attention (Verga and McCallum, 2016; Lin et al., 2016; Yaghoobzadeh et al., 2017).",
"We aggregate over all representations for each mention pair in order to produce per-relation scores for each entity pair.",
"For each entity pair ( p head , p tail ) , let P head denote the set of indices of mentions of the entity p head , and let P tail denote the indices of mentions of the entity p tail .",
"Then we use the LogSumExp function to aggregate the relation scores from A across all pairs of mentions of p head and p tail : scores ( p head , p tail ) = log X i P head j P tail exp( A ij ) The LogSumExp scoring function is a smooth approximation to the max function and has the bene-fits of aggregating information from multiple predictions and propagating dense gradients as opposed to the sparse gradient updates of the max (Das et al., 2017).",
"In addition to pairwise relation predictions, we use the Transformer output b ( B ) i to make entity type predictions.",
"We feed b ( B ) i as input to a linear classifier which predicts the entity label for each token with per-class scores c i : c i = W (3) b ( B ) i We augment the entity type labels with the BIO encoding to denote entity spans.",
"We apply tags to the byte-pair tokenization by treating each subword within a mention span as an additional token with a corresponding Bor Ilabel.",
"We train both the NER and relation extraction components of our network to perform multi-class classification using maximum likelihood, where NER classes y i or relation classes r i are conditionally independent given deep features produced by our model with probabilities given by the softmax function.",
"In the case of NER, features are given by the per-token output of the transformer: 1 NNX i =1 log P ( y i | b ( B ) i ) In the case of relation extraction, the features for each entity pair are given by the LogSumExp over pairwise scores described in 2.4.",
"For E entity pairs, the relation r i is given by: 1 EEX i =1 log P ( r i | scores ( p head , p tail )) 875 We train the NER and relation objectives jointly, sharing all embeddings and Transformer parameters.",
"To trade o the two objectives, we penalize the named entity updates with a hyperparameter .",
"We evaluate our model on three datasets: The Biocreative V Chemical Disease Relation benchmark (CDR), which models relations between chemicals and diseases (3.1); the Biocreative VI ChemProt benchmark (CPR), which models relations between chemicals and proteins (3.2); and a new, large and accurate dataset we describe in 3.3 based on the human curation in the Chemical Toxicology Database (CTD), which models relationships between chemicals, proteins and genes.",
"The CDR dataset is annotated at the level of paper abstracts, requiring consideration of long-range, cross sentence relationships, thus evaluation on this dataset demonstrates that our model is capable of such reasoning.",
"We also evaluate our model's performance in the more traditional setting which does not require cross-sentence modeling by performing experiments on the CPR dataset, for which all annotations are between two entity mentions in a single sentence.",
"Finally, we present a new dataset constructed using strong-distant supervision (2.4), with annotations at the document level.",
"This dataset is significantly larger than the others, contains more relation types, and requires reasoning across sentences.",
"The Biocreative V chemical disease relation extraction (CDR) dataset 4 (Li et al., 2016a; Wei et al., 2016) was derived from the Comparative Toxicogenomics Database (CTD), which curates interactions between genes, chemicals, and diseases (Davis et al., 2008).",
"CTD annotations are only at the document level and do not contain mention annotations.",
"The CDR dataset is a subset of these original annotations, supplemented with human annotated, entity linked mention annotations.",
"The relation annotations in this dataset are also at the document level only.",
"The CDR dataset is concerned with extracting only chemically-induced disease relationships (drug-related side eects and adverse reactions) concerning the most specific entity in the document.",
"For example tobacco causes cancer could be marked as false if the document contained the more specific lung cancer .",
"This can cause true relations to be labeled as false, harming evaluation performance.",
"To address this we follow (Gu et al., 2016, 2017) 4 http://www.biocreative.org/ and filter hypernyms according to the hierarchy in the MESH controlled vocabulary 5 .",
"All entity pairs within the same abstract that do not have an annotated relation are assigned the NULL label.",
"In addition to the gold CDR data, Peng et al. (2016) add 15,448 PubMed abstracts annotated in the CTD dataset.",
"We consider this same set of abstracts as additional training data (which we subsequently denote +Data).",
"Since this data does not contain entity annotations, we take the annotations from Pubtator (Wei et al., 2013), a state of the art biological named entity tagger and entity linker.",
"See A.1 for additional data processing details.",
"In our experiments we only evaluate our relation extraction performance and all models (in-cluding baselines) use gold entity annotations for predictions.",
"The byte pair vocabulary is generated over the training dataset we use a budget of 2500 tokens when training on the gold CDR data, and a larger budget of 10,000 tokens when including extra data described above Additional implementation details are included in Appendix A. Data split Docs Pos Neg Train 500 1,038 4,280 Development 500 1,012 4,136 Test 500 1,066 4,270 CTD 15,448 26,657 146,057 Table 1: Data statistics for the CDR Dataset and additional data from CTD.",
"We compare against the previous best reported results on this dataset not using knowledge base features.",
"6 Each of the baselines are ensemble methods for withinand cross-sentence relations that make use of additional linguistic features (syntactic parse and part-of-speech).",
"Gu et al. (2017) encode mention pairs using a CNN while Zhou et al. (2016a) use an LSTM.",
"Both make cross-sentence predictions with featurized classifiers.",
"In Table 2 we show results outperforming the baselines despite using no linguistic features.",
"We show performance averaged over 20 runs with 20 random seeds as well as an ensemble of their averaged predictions.",
"We see a further boost in performance by adding weakly labeled data.",
"Table 3 shows the 5 https://www.nlm.nih.gov/mesh/download/ 2017MeshTree.txt 6 The highest reported score is from (Peng et al., 2016), but they use explicit lookups into the CTD knowledge base for the existence of the test entity pair.",
"eects of ablating pieces of our model.",
"'CNN only' removes the multi-head attention component from the transformer block, 'no width-5' replaces the width-5 convolution of the feed-forward component of the transformer with a width-1 convolution, and 'no NER' removes the named entity recognition multi-task objective (2.5).",
"To assess our model's performance in settings where cross-sentence relationships are not explicitly evaluated, we perform experiments on the Biocreative VI ChemProt dataset (CDR) (Krallinger et al., 2017).",
"This dataset is concerned with classifying into six relation types between chemicals and proteins, with nearly all annotated relationships occurring within the same sentence.",
"We compare our models against those competing in the ocial Biocreative VI competition (Liu et al., 2017).",
"We compare to the top performing team whose model is directly comparable with ours i.e. used a single (non-ensemble) model trained only on the training data (many teams use the development set as additional training data).",
"The baseline models are standard state of the art relation extraction models: CNNs and Gated RNNs with attention.",
"Each of these baselines uses mention-specific features encoding relative position of each token to the two target entities being classified, whereas our model aggregates over all mention pairs in each sentence.",
"It is also worth noting that these models use a large vocabulary of pre-trained word embeddings, giving their models the advantage of far more model parameters, as well as additional information from Model P R F1 CNN 50.7 43.0 46.5 GRU+Attention 53.0 46.3 49.5 BRAN 48.0 54.1 50.8 .01 Table 4: Precision, recall, and F1 results on the Biocreative VI Chem-Prot Dataset.",
"In Table 4 we see that even though our model forms all predictions simultaneously between all pairs of entities within the sentence, we are able to outperform state of the art models classifying each mention pair independently.",
"The scores shown are averaged across 10 runs with 10 random seeds.",
"Interestingly, our model appears to have higher recall and lower precision, while the baseline models are both precision-biased, with lower recall.",
"This suggests that combining these styles of model could lead to further gains on this task.",
"Existing biological relation extraction datasets including both CDR (3.1) and CPR (3.2) are relatively small, typically consisting of hundreds or a few thousand annotated examples.",
"Distant supervision datasets apply document-independent, entity-level annotations to all sentences leading to a large proportion of incorrect labels.",
"Evaluations on this data involve either very small (a few hundred) gold annotated examples or cross validation to predict the noisy, distantly applied labels (Mallory et al., 2015; Quirk and Poon, 2017; Peng et al., 2017).",
"We address these issues by constructing a new dataset using strong-distant supervision containing document-level annotations.",
"The Comparative Toxicogenomics Database (CTD) curates interactions between genes, chemicals, and diseases.",
"Each relation in the CTD is associated with a disambiguated entity pair and a PubMed article where the relation was observed.",
"To construct this dataset, we collect the abstracts for each of the PubMed articles with at least one curated relation in the CTD database.",
"As in 3.1, we use PubTator to automatically tag and disambiguate the entities in each of these abstracts.",
"If both entities in the relation are found in the abstract, we take the (abstract, relation) pair as a positive example.",
"The evidence for the curated relation could occur anywhere in the full text article, not just the abstract.",
"Abstracts with no recovered relations are discarded.",
"All other entity pairs with valid types and without an annotated relation that occur in the remaining abstracts are considered negative examples and assigned the NULL label.",
"Table 5 (data statistics for the new CTD dataset; Docs / Pos / Neg): Total 68,400 / 166,474 / 1,198,493; Chemical/Disease 64,139 / 93,940 / 571,932; Chemical/Gene 34,883 / 63,463 / 360,100; Gene/Disease 32,286 / 9,071 / 266,461.",
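The labeling procedure above can be sketched as follows. This is a minimal illustration with hypothetical helper names, assuming entities arrive as an id-to-type map and curated relations as (head, tail, type) triples:

```python
from itertools import combinations

# Entity-type pairs considered valid for candidate generation (per the dataset description).
VALID_TYPE_PAIRS = {("Chemical", "Disease"), ("Chemical", "Gene"), ("Gene", "Disease")}

def label_abstract(tagged_entities, curated_relations):
    """Assign positive labels from curated (head, tail, relation) triples;
    every other validly-typed entity pair in the abstract gets NULL.
    tagged_entities: dict entity_id -> entity_type (e.g., from PubTator tags)."""
    examples = []
    positive_pairs = set()
    for head, tail, rel in curated_relations:
        # keep the relation only if both entities were recovered in the abstract
        if head in tagged_entities and tail in tagged_entities:
            examples.append((head, tail, rel))
            positive_pairs.add((head, tail))
    if not positive_pairs:
        return []  # abstracts with no recovered relations are discarded
    for (e1, t1), (e2, t2) in combinations(sorted(tagged_entities.items()), 2):
        for a, b, ta, tb in ((e1, e2, t1, t2), (e2, e1, t2, t1)):
            if (ta, tb) in VALID_TYPE_PAIRS and (a, b) not in positive_pairs:
                examples.append((a, b, "NULL"))
    return examples
```

The directional check in the inner loop reflects that relation arguments are typed (e.g., chemical-disease), which is an assumption about the data layout rather than something the text specifies.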
"We additionally remove abstracts containing more than 500 tokens.",
"This limit removed about 10% of the total data including numerous extremely long abstracts.",
"The average token length of the remaining data was 230 tokens.",
"With this procedure, we are able to collect 166,474 positive examples over 13 relation types, with more detailed statistics of the dataset listed in Table 5.",
"We consider relations between chemical-disease, chemical-gene, and gene-disease entity pairs downloaded from CTD 8 .",
"We remove inferred relations (those without an associated PubMed ID) and consider only human curated relationships.",
"Some chemical-gene entity pairs were associated with multiple relation types in the same document.",
"We consider each of these relation types as a separate positive example.",
"The chemical-gene relation data contains over 100 types organized in a shallow hierarchy.",
"Many of these types are extremely infrequent, so we map all relations to the highest parent in the hierarchy, resulting in 13 relation types.",
"Most of these chemical-gene relations have an increase and a decrease version, such as increase_expression and decrease_expression.",
"In some cases, there is also an affects relation (affects_expression), which is used when the directionality is unknown.",
"If the affects version is more common, we map the decrease and increase examples to affects.",
"If affects is less common, we drop the affects examples and keep the increase and decrease examples as distinct relations, resulting in the final set of 10 chemical-gene relation types.",
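One way to implement this collapsing rule is sketched below. The text does not specify how "more common" is measured, so here the affects count is compared against the combined increase/decrease count; that comparison, and all names, are assumptions for illustration:

```python
from collections import Counter

def collapse_chemical_gene_types(examples):
    """examples: list of relation-type strings already mapped to top-level parents.
    Applies the affects vs. increase/decrease rule described in the text."""
    counts = Counter(examples)
    collapsed = []
    for rel in examples:
        if rel.startswith(("increases_", "decreases_")):
            base = rel.split("_", 1)[1]
            affects = "affects_" + base
            inc, dec = "increases_" + base, "decreases_" + base
            # if the affects version is more common, merge directional examples into it
            if counts[affects] > counts[inc] + counts[dec]:
                collapsed.append(affects)
            else:
                collapsed.append(rel)
        elif rel.startswith("affects_"):
            base = rel.split("_", 1)[1]
            inc, dec = "increases_" + base, "decreases_" + base
            if counts[rel] > counts[inc] + counts[dec]:
                collapsed.append(rel)
            # otherwise the affects example is dropped
        else:
            collapsed.append(rel)
    return collapsed
```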
"In Table 7 we list precision, recall and F1 achieved by our model on the CTD dataset, both overall and by relation type.",
"Our model predicts each of the relation types effectively, with higher performance on relations with more support.",
"In Table 8 we see that our sub-word BPE model outperforms the model using the Genia tokenizer (Kulick et al., 2012) even though our vocabulary size is one-fifth as large.",
"We see a 1.7 F1 point boost in predicting Pubtator NER labels for BPE.",
"This could be explained by the increased out-of-vocabulary (OOV) rate for named entities. (Footnote 7: we include scripts to generate the unfiltered set of data as well, to encourage future research. Footnote 8: http://ctdbase.org/downloads/)",
"Table 6 (data statistics for the new CTD dataset broken down by relation type; Train / Dev / Test; Total 120k / 15k / 15k): Chemical/Disease: marker/mechanism 41,562 / 5,126 / 5,167, therapeutic 24,151 / 2,929 / 3,059; Gene/Disease: marker/mechanism 5,930 / 825 / 819, therapeutic 560 / 77 / 75; Chemical/Gene: increase_expression 15,851 / 1,958 / 2,137, increase_MP 5,986 / 740 / 638, decrease_expression 5,870 / 698 / 783, increase_activity 4,154 / 467 / 497, affects_response 3,834 / 475 / 508, decrease_activity 3,124 / 396 / 434, affects_transport 3,009 / 333 / 361, increase_reaction 2,881 / 367 / 353, decrease_reaction 2,221 / 247 / 269, decrease_MP 798 / 100 / 120.",
"The word-tokenized training data has a 3.01 percent OOV rate for tokens with an entity.",
"The byte pair-encoded data has an OOV rate of 2.48 percent.",
"Note that in both the word-tokenized and byte pair-tokenized data, we replace tokens that occur less than five times with a learned UNK token.",
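A minimal sketch of this UNK replacement (the function name and data layout are illustrative):

```python
from collections import Counter

def build_vocab_with_unk(token_sequences, min_count=5):
    """Replace tokens occurring fewer than `min_count` times with a single UNK
    token, as described for both the word- and byte-pair-tokenized data."""
    counts = Counter(tok for seq in token_sequences for tok in seq)
    vocab = {tok for tok, c in counts.items() if c >= min_count}
    replaced = [[tok if tok in vocab else "UNK" for tok in seq]
                for seq in token_sequences]
    return vocab, replaced
```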
"Figure 2 depicts the model's performance on relation extraction as a function of distance between entities.",
"For example, the blue bar depicts performance when removing all entity pair candidates (positive and negative) whose closest mentions are more than 11 tokens apart.",
"We consider removing entity pair candidates with distances of 11, 25, 50, 100 and 500 (the maximum document length).",
"The average sentence length is 22 tokens.",
"We see that the model is not simply relying on short range relationships, but is leveraging information about distant entity pairs, with accuracy increasing as the maximum distance considered increases.",
"Note that all results are taken from the same model trained on the full unfiltered training set.",
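The distance-based filtering used for this analysis can be sketched as follows, under the assumption that each candidate carries the token offsets of its entity mentions:

```python
def filter_pairs_by_distance(candidates, max_distance):
    """candidates: iterable of (e1_mentions, e2_mentions, label), where each
    mentions list holds token offsets. Keep a pair only if its *closest*
    mentions are within max_distance tokens of each other."""
    kept = []
    for e1_mentions, e2_mentions, label in candidates:
        closest = min(abs(a - b) for a in e1_mentions for b in e2_mentions)
        if closest <= max_distance:
            kept.append((e1_mentions, e2_mentions, label))
    return kept
```

Evaluating the same trained model on `filter_pairs_by_distance(candidates, d)` for d in (11, 25, 50, 100, 500) reproduces the style of analysis in Figure 2.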
"Relation extraction is a heavily studied area in the NLP community.",
"Most work focuses on news and web data (Doddington et al., 2004; Riedel et al., 2010; Hendrickx et al., 2009).",
"(Footnote 9: and TAC KBP: https://tac.nist.gov) Table 7 (BRAN precision, recall, and F1 on the full CTD dataset by relation type; P / R / F1): Total: micro 44.8 / 50.2 / 47.3, macro 34.0 / 29.8 / 31.7; Chemical/Disease: marker/mechanism 46.2 / 57.9 / 51.3, therapeutic 55.7 / 67.1 / 60.8; Gene/Disease: marker/mechanism 42.2 / 44.4 / 43.0, therapeutic 52.6 / 10.1 / 15.8; Chemical/Gene: increases_expression 39.7 / 48.0 / 43.3, increases_MP 26.3 / 35.5 / 29.9, decreases_expression 34.4 / 32.9 / 33.4, increases_activity 24.5 / 24.7 / 24.4, affects_response 40.9 / 35.5 / 37.4, decreases_activity 30.8 / 19.4 / 23.5, affects_transport 28.7 / 23.8 / 25.8, increases_reaction 12.8 / 5.6 / 7.4, decreases_reaction 12.3 / 5.7 / 7.4, decreases_MP 28.9 / 7.0 / 11.0.",
"Recent neural network approaches to relation extraction have focused on CNNs (dos Santos et al., 2015; Zeng et al., 2015) or LSTMs (Miwa and Bansal, 2016; Verga et al., 2016a; Zhou et al., 2016b), and on replacing stage-wise information extraction pipelines with a single end-to-end model (Miwa and Bansal, 2016; Ammar et al., 2017; Li et al., 2017).",
"These models all consider mention pairs separately.",
"There is also a considerable body of work specifically geared toward supervised biological relation extraction, including protein-protein (Pyysalo et al., 2007; Poon et al., 2014; Mallory et al., 2015), drug-drug (Segura-Bedmar et al., 2013), and chemical-disease (Gurulingappa et al., 2012; Li et al., 2016a) interactions, as well as more complex events (Kim et al., 2008; Riedel et al., 2011).",
"Our work focuses on modeling relations between chemicals, diseases, genes, and proteins, where available annotation is often at the document or abstract level rather than the sentence level. (Figure 2: performance on the CTD dataset when restricting candidate entity pairs by distance.)",
"Some previous work exists on cross-sentence relation extraction.",
"Swampillai and Stevenson (2011) and Quirk and Poon (2017) consider featurized classifiers over cross-sentence syntactic parses.",
"Most similar to our work is that of Peng et al. (2017), which uses a variant of an LSTM to encode document-level syntactic parse trees.",
"Our work differs in three key ways.",
"First, we operate over raw tokens, negating the need for part-of-speech or syntactic parse features, which can lead to cascading errors.",
"We also use a feed-forward neural architecture which encodes long sequences far more efficiently compared to the graph LSTM network of Peng et al. (2017).",
"Finally, our model considers all mention pairs simultaneously rather than a single mention pair at a time.",
"We employ a biaffine function to form pairwise predictions between mentions.",
"Such models have also been used for knowledge graph link prediction (Nickel et al., 2011; Li et al., 2016b), with variations such as restricting the bilinear relation matrix to be diagonal (Yang et al., 2015) or diagonal and complex (Trouillon et al., 2016).",
"Our model is similar to recent approaches to graph-based dependency parsing, where bilinear parameters are used to score head-dependent compatibility (Kiperwasser and Goldberg, 2016; Dozat and Manning, 2017).",
"We present a biaffine relation attention network that simultaneously scores all mention pairs within a document.",
"Our model performs well on three datasets, including two standard benchmark biological relation extraction datasets and a new, large and high-quality dataset introduced in this work.",
"Our model outperforms the previous state of the art on the BioCreative V CDR dataset despite using no additional linguistic resources or mention pair-specific features.",
"Comparative Toxicogenomics Database: a knowledgebase and discovery tool for chemical-gene-disease networks.",
"Nucleic Acids Research 37(suppl_1):D786-D792.",
"George Doddington, Alexis Mitchell, Mark Przybocki, Lance Ramshaw, Stephanie Strassel, and Ralph Weischedel.",
"2004.",
"The automatic content extraction (ACE) program: tasks, data, and evaluation.",
"In Proceedings of the Fourth International Conference on Language Resources and Evaluation .",
"Timothy Dozat and Christopher D Manning.",
"2017.",
"Deep biaffine attention for neural dependency parsing.",
"5th International Conference on Learning Representations .",
"Philip Gage.",
"1994.",
"A new algorithm for data compression.",
"The C Users Journal 12(2):23-38.",
"Xavier Glorot, Antoine Bordes, and Yoshua Bengio.",
"2011.",
"Deep sparse rectifier neural networks.",
"In Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics .",
"pages 315-323.",
"Jinghang Gu, Longhua Qian, and Guodong Zhou.",
"2016.",
"Chemical-induced disease relation extraction with various linguistic features.",
"Database 2016.",
"Jinghang Gu, Fuqing Sun, Longhua Qian, and Guodong Zhou.",
"2017.",
"Chemical-induced disease relation extraction via convolutional neural network.",
"Database 2017.",
"Our current model predicts only into a fixed schema of relations given by the data.",
"However, this could be ameliorated by integrating our model into open relation extraction architectures such as Universal Schema (Riedel et al., 2013; Verga et al., 2016b).",
"Our model also lends itself to other pairwise scoring tasks such as hypernym prediction, co-reference resolution, and entity resolution.",
"We will investigate these directions in future work.",
"We thank Ofer Shai and the Chan Zuckerberg Initiative / Meta data science team for helpful discussions.",
"We also thank Timothy Dozat and Kyubyong Park for releasing their code."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"method",
"method",
"method",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"result",
"result",
"objective",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"result",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"result",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"objective",
"other",
"other"
] |
[
"Transformer-based QA models use input-wide self-attention i.e. across both the question and the input passage at all layers, causing them to be slow and memory-intensive.",
"It turns out that we can get by without input-wide self-attention at all layers, especially in the lower layers.",
"We introduce DeFormer , a decomposed transformer, which substitutes the full self-attention with question-wide and passage-wide self-attentions in the lower layers.",
"This allows for question-independent processing of the input text representations, which in turn enables pre-computing passage representations reducing runtime compute drastically.",
"Furthermore, because DeFormer is largely similar to the original model, we can initialize DeFormer with the pre-training weights of a standard transformer, and directly fine-tune on the target QA dataset.",
"We show DeFormer versions of BERT and XLNet can be used to speed up QA by over 4.3x and with simple distillation-based losses they incur only a 1% drop in accuracy.",
"We open source the code at https://github.com/ StonyBrookNLP/deformer .",
"There is an increasing need to push question answering (QA) models in large volume web scale services (Google, 2019) and also to push them to resource constrained mobile devices for privacy and other performance reasons (Cao et al., 2019).",
"State-of-the-art QA systems, like many other NLP applications, are built using large pre-trained Transformers (e.g., BERT (Devlin et al., 2019), XLNet (Yang et al., 2019), Roberta (Liu et al., 2019)).",
"However, inference in these models requires prohibitively high-levels of runtime compute and memory making it expensive to support large volume deployments in data centers and infeasible to run on resource constrained mobile devices.",
"Our goal is to take pre-trained Transformer-based models and modify them to enable faster inference for QA without having to repeat the pre-training.",
"This is a critical requirement if we want to explore many points in the accuracy versus speed trade-off because pre-training is expensive.",
"The main compute bottleneck in Transformer-based models is the input-wide self-attention computation at each layer.",
"In reading comprehension style QA, this amounts to computing self-attention over the question and the context text together.",
"This helps the models create highly effective question-dependent context representations and vice-versa.",
"Of these, building representations of the context takes more time because it is typically much longer than the question.",
"If the context can be processed independent of the question, then this expensive compute can be pushed offline saving significant runtime latency.",
"Can we process the context independent of the question, at least in some of the layers, without too much loss in effectiveness?",
"There are two empirical observations that indicate that this is possible.",
"First, previous studies have demonstrated that lower layers tend to focus on local phenomena such as syntactic aspects, while the higher layers focus on global (long distance) phenomena such as semantic aspects relevant for the target task (Tenney et al., 2019; Hao et al., 2019; Clark et al., 2019b).",
"Second, as we show later (see Section 2), in a standard BERT-based QA model, there is less variance in the lower layer representations of text when we vary the question.",
"This means that in the lower layers information from the question is not as critical to form text representations.",
"Together, these suggest that considering only local context in lower layers of Transformer and considering full global context in upper layers can provide speedup at a very small cost in terms of effectiveness.",
"Based on these observations, we introduce DeFormer a simple decomposition of pre-trained Transformer-based models, where lower layers in the decomposed model process the question and context text independently and the higher layers process them jointly (see Figure 1 for a schematic illustration).",
"Suppose we allow k lower layers in a n -layer model to process the question and context text independently.",
"DeFormer processes the context texts through k lower layers offline and caches the output from the k -th layer.",
"During runtime the question is first processed through the k -layers of the model, and the text representation for the k -th layer is loaded from the cache.",
"These two k -th layer representations are fed to the ( k + 1) -th layer as input and further processing continues through the higher layers as in the original model.",
"In addition to directly reducing the amount of runtime compute, this also reduces memory significantly as the intermediate text representations for the context are no longer held in memory.",
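The caching workflow just described can be sketched as below. The layer objects, the cache keyed by a passage id, and the function names are illustrative assumptions, not the released implementation:

```python
import numpy as np

def deformer_forward(question, passage, lower_layers, upper_layers, cache):
    """Sketch of DeFormer inference with k decomposed lower layers.
    `lower_layers` (L_1..L_k) are applied to each segment independently;
    `upper_layers` (L_{k+1}..L_n) are applied to the joined sequence.
    Passage outputs of the lower layers are computed once and cached."""
    def run(layers, x):
        for layer in layers:
            x = layer(x)
        return x

    pid, passage_tokens = passage
    if pid not in cache:                    # done offline in practice
        cache[pid] = run(lower_layers, passage_tokens)
    q_k = run(lower_layers, question)       # only the question at runtime
    joint = np.concatenate([q_k, cache[pid]], axis=0)  # [question; passage] at layer k
    return run(upper_layers, joint)         # joint processing in upper layers
```

With toy layers such as `lambda x: x + 1` standing in for Transformer layers, the second call for the same passage id skips the lower-layer passage computation entirely.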
"A key strength of this approach is that one can make any pre-trained Transformer-based QA model faster by creating a corresponding DeFormer version that is directly fine-tuned on the target QA datasets without having to repeat the expensive pre-training.",
"Our empirical evaluation on multiple QA datasets show that with direct fine-tuning the decomposed model incurs only a small loss in accuracy compared to the full model.",
"This loss in accuracy can be reduced further by learning from the original model.",
"We want DeFormer to behave more like the original model.",
"In particular, the upper layers of DeFormer should produce representations that capture the same kinds of information as the corresponding layers in the original model.",
"We add two distillation-like auxiliary losses (Hinton et al., 2015), which minimize the output-level and the layer-level divergences between the decomposed and original models.",
"We evaluate DeFormer versions of two Transformer-based models, BERT and XLNet, on three different QA tasks and two sentence-sentence paired-input tasks.",
"DeFormer achieves substantial speedup (2.7 to 4.3x) and reduction in memory (65.8% to 72.9%) for only small loss in effectiveness (0.6 to 1.8 points) for QA.",
"Moreover, we find that DeFormer version of BERT-large is faster than the original version of the smaller BERT-base model, while still being more accurate.",
"Ablations shows that the supervision strategies we introduce provide valuable accuracy improvements and further analysis illustrate that DeFormer provides good runtime vs accuracy trade-offs.",
"The standard approach to using transformers for question answering is to compute the self-attention over both question and the input text (typically a passage).",
"This yields highly effective representations of the input pair since often what information to extract from the text depends on the question and vice versa.",
"If we want to reduce complexity, one natural question to ask is whether we can decompose the Transformer function over each segment of the input, trading some representational power for gains in ability to push processing the text segment offline.",
"The trade-off depends on how important it is to have attention from question tokens when forming text representations (and vice versa) in the lower layers.",
"To assess this, we measured how the text representation changes when paired with different questions.",
"In particular, we computed the average passage representation variance when paired with different questions.",
"The variance is measured using cosine distance between the passage vectors and their centroid.",
"As Figure 2 shows, in the lower layers the text representation does not change as much as it does in the upper layers, suggesting that ignoring attention from question tokens in the lower layers may not be a bad idea.",
"This is also in agreement with results on probing tasks, which suggest that lower layers tend to model mostly local phenomena (e.g., POS, syntactic categories), while higher layers tend to model more semantic, task-dependent phenomena (e.g., entity co-reference) relying on wider contexts.",
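The variance measurement described above can be sketched as follows. This is an illustrative NumPy version; pooling the passage tokens into a single vector per question is assumed to happen upstream:

```python
import numpy as np

def passage_variance(passage_reprs):
    """passage_reprs: array of shape (num_questions, dim), the passage
    representation at one layer when paired with different questions.
    Returns the mean cosine distance of the vectors to their centroid."""
    reprs = np.asarray(passage_reprs, dtype=float)
    centroid = reprs.mean(axis=0)
    # cosine distance = 1 - cosine similarity to the centroid
    sims = (reprs @ centroid) / (
        np.linalg.norm(reprs, axis=1) * np.linalg.norm(centroid) + 1e-12)
    return float(np.mean(1.0 - sims))
```

Computing this per layer for many passages, each paired with several questions, yields the kind of layer-wise variance profile plotted in Figure 2: near-zero in lower layers, larger in upper layers.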
"(Footnote: these simulate other information-seeking applications where one input is available offline.)",
"First, we formally define the computation of a Transformer for a paired-task containing two segments of text, T a and T b .",
"Let the token embedding representations of segment T_a be A = [a_1; a_2; ...; a_q] and of T_b be B = [b_1; b_2; ...; b_p].",
"The full input sequence X can be expressed by concatenating the token representations from segments T_a and T_b as X = [A; B].",
"The Transformer encoder has n layers (layer i denoted L_i), which transform this input sequentially: X^i = L_i(X^{i-1}).",
"For the details of the Transformer layer, we refer the reader to (Vaswani et al., 2017).",
"We denote the application of the stack of layers from layer i through layer j as L_{i:j}.",
"The output representations of the full Transformer, A^n and B^n, can be written as: [A^n; B^n] = L_{1:n}([A^0; B^0]) (1). Figure 3 shows a schematic of our model.",
"We decompose the computation of lower layers (up to layer k ) by simply removing the cross-interactions between T a and T b representations.",
"Here k is a hyper-parameter.",
"The output representations of the decomposed Transformer, A^n and B^n, can be expressed as: [A^n; B^n] = L_{k+1:n}([L_{1:k}(A^0); L_{1:k}(B^0)]) (2). Transformer-based QA systems process the input question and context together through a stack of self-attention layers, so applying this decomposition to the Transformer for QA lets us process the question and the context text independently, which in turn allows the context text representations for the lower layers to be computed offline. With this change, the runtime complexity of each lower layer is reduced from O((p + q)^2) to O(q^2 + c), where c denotes the cost of loading the cached representation. 2.2 Auxiliary Supervision for DeFormer: DeFormer can be used in the same way as the original Transformer. Since DeFormer retains much of the original structure, we can initialize it with the pre-trained weights of the original Transformer and fine-tune directly on downstream tasks. However, DeFormer loses some information in the representations of the lower layers. The upper layers can learn to compensate for this during fine-tuning, but we can go further and use the original model's behavior as an additional source of supervision. Toward this end, we first initialize the parameters of DeFormer with the parameters of a pre-trained full Transformer and fine-tune it on the downstream tasks, and we add auxiliary losses that push DeFormer's predictions and upper-layer representations closer to the predictions and corresponding layer representations of the full Transformer. Knowledge Distillation Loss: we want the prediction distribution of DeFormer to be closer to that of the full Transformer. 
We minimize the Kullback-Leibler divergence between the decomposed Transformer's prediction distribution P_A and the full Transformer's prediction distribution P_B: L_kd = D_KL(P_A || P_B). Layerwise Representation Similarity Loss: we want the upper-layer representations of DeFormer to be closer to those of the full Transformer, so we minimize the Euclidean distance between the token representations of the upper layers of the decomposed Transformer and the full Transformer. Let v_i^j be the representation of the j-th token in the i-th layer of the full Transformer, and let u_i^j be the corresponding representation in DeFormer. (Figure 3: decomposing the Transformer up to layer k enables encoding each segment independently from layer 1 to layer k; auxiliary supervision of upper-layer information from the original model further helps the decomposed model compensate for the information loss in the lower layers. KD is the Knowledge Distillation loss and LRS is the Layerwise Representation Similarity loss.) For each of the upper layers k+1 through n, we compute a layerwise representation similarity (lrs) loss as follows: L_lrs = sum_{i=k+1}^{n} sum_{j=1}^{m} ||v_i^j - u_i^j||^2. We add the knowledge distillation loss (L_kd) and the layerwise representation similarity loss (L_lrs) to the task-specific supervision loss (L_ts) and learn their relative importance via hyper-parameter tuning: L_total = alpha * L_ts + beta * L_kd + gamma * L_lrs (3). We use Bayesian Optimization (Mockus, 1975) to tune alpha, beta, and gamma instead of simple trial-and-error or grid/random search. 
This is aimed at reducing the number of steps required to find a combination of hyper-parameters close to the optimal one. 3 Evaluation. 3.1 Datasets: We use the pre-trained uncased BERT base and large (Whole Word Masking) models on five different paired-input problems covering three QA tasks and two sentence-sentence tasks. SQuAD v1.1 (Stanford Question Answering Dataset) (Rajpurkar et al., 2016) is an extractive question answering dataset containing more than 100,000 question-answer pairs generated by crowd workers on Wikipedia articles. RACE (Lai et al., 2017) is a reading comprehension dataset collected from English exams designed to evaluate the reading and reasoning ability of middle and high school Chinese students; it has over 28,000 passages and more than 100,000 questions. BoolQ (Clark et al., 2019a) consists of 15,942 yes/no questions that occur naturally in unprompted and unconstrained settings. MNLI (Multi-Genre Natural Language Inference) (Williams et al., 2018) is a crowd-sourced corpus of 433k sentence pairs annotated with textual entailment information. QQP (Quora Question Pairs) (Iyer et al., 2019) consists of over 400,000 potential duplicate question pairs from Quora. For all five tasks, we use the standard splits provided with the datasets, but additionally divide the original training data to obtain a 10% split for tuning hyper-parameters (tune split), and use the original development split for reporting efficiency (FLOPs, memory usage) and effectiveness metrics (accuracy or F1 depending on the task). (Footnote: we pick the sentence-sentence datasets to show the utility of decomposition in other information-seeking applications similar to QA, where one of the inputs can be assumed to be available offline; for instance, we may want to find answer (premise) sentences from a collection that support information contained in a query (hypothesis) sentence.) 
Another use case is FAQ retrieval, where a user question is compared against a collection of previously asked questions. 3.2 Implementation Details: We implement all models in TensorFlow 1.15 (Abadi et al., 2015) based on the original BERT (Devlin et al., 2019) and XLNet (Yang et al., 2019) codebases, and perform all experiments on one TPU v3-8 node (8 cores, 128GB memory) with the bfloat16 format enabled. We measure FLOPs and memory consumption through the TensorFlow Profiler. For DeFormer models, we tune the hyper-parameters for weighting the different losses using a Bayesian optimization library (Nogueira, Fernando, 2019) with 50 iterations on the tune split (10% of the original training sets) and report performance numbers on the original dev sets. The search range is [0.1, 2.0] for the 3 hyper-parameters.",
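The two auxiliary losses and their weighted combination can be sketched as below. This is an illustrative NumPy version; in practice these losses are computed on logits and hidden states inside the training graph, and the default weights here are placeholders, not the tuned values:

```python
import numpy as np

def auxiliary_losses(p_full, p_de, upper_full, upper_de,
                     alpha=1.0, beta=1.0, gamma=1.0, task_loss=0.0):
    """p_full / p_de: prediction distributions of the full and decomposed
    models. upper_full / upper_de: lists of (tokens, dim) arrays for the
    upper layers k+1..n. Returns alpha*L_ts + beta*L_kd + gamma*L_lrs."""
    eps = 1e-12
    # knowledge distillation: KL divergence from the full model's distribution
    l_kd = float(np.sum(p_de * np.log((p_de + eps) / (p_full + eps))))
    # layerwise representation similarity: squared L2 over the upper layers
    l_lrs = float(sum(np.sum((v - u) ** 2)
                      for v, u in zip(upper_full, upper_de)))
    return alpha * task_loss + beta * l_kd + gamma * l_lrs
```

When the two models agree exactly, both auxiliary terms vanish and only the task loss remains; any divergence in predictions or upper-layer representations increases the total.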
"We provide the detailed hyper-parameters in Appendix A. For DeFormer-BERT and DeFormer-XLNet, we compute the representations for one of the input segments offline and cache them.",
"For QA we cache the passages, for natural language inference we cache the premise, and for question similarity we cache the first question.",
"Table 1 shows the main results comparing performance, inference speed, and memory requirements of BERT-base and DeFormer-BERT-base when using nine lower layers and three upper layers (see Subsection 3.4 for the impact of the choice of upper/lower splits).",
"We observe a substantial speedup and significant memory reduction on all datasets while retaining most of the original model's effectiveness (as much as 98.4% on SQuAD and 99.8% on QQP); the XLNet results in the same table demonstrate the effectiveness of the decomposition for different pre-trained Transformer architectures.",
"Table 2 shows that the decomposition brings 2x speedup in inference and more than half of memory reduction on both QQP and MNLI datasets, which take pairwise input sequences.",
"The effectiveness of decomposition generalizes further beyond QA tasks as long as the input sequences are paired.",
"(Footnote: one use case is where we want to find (premise) sentences from a collection that support information contained in a query (hypothesis) sentence.)",
"Small Distilled or Large Decomposed?",
"Table 3 compares performance, speed and memory of BERT-base, BERT-large and DeFormer-BERT-large.",
"DeFormer-BERT-large is 1.6 times faster than the smaller BERT-base model.",
"Decomposing the larger model also turns out to be more effective than using the smaller base model (+2.3 points). This shows that, with decomposition, a large Transformer can run faster than a smaller one half its size, while also being more accurate.",
"Distilling a larger model into a smaller one can yield better accuracy than training a smaller model from scratch.",
"As far as we know, there are two related but not fully comparable results.",
"(1) Tang et al. (2019) distill BERT to a small LSTM based model where they achieve 15x speedup but at a significant drop in accuracy of more than 13 points on MNLI.",
"(2) Sanh et al. (2019) distill BERT to a smaller six layer Transformer, which can provide 1.6x speedup but gives > 2 points accuracy drop on MNLI and > 3 points F1 drop on SQuAD.",
"A fair comparison requires more careful experimentation exploring different distillation sizes, which requires repeating pre-training or data augmentation, an expensive proposition.",
"Device Results: To evaluate the impact on different devices, we deployed the models on three different machines (a GPU, CPU, and a mobile phone).",
"Table 4 shows the average latency in answering a question measured on a subset of the SQuAD dataset.",
"On all devices, we get more than three times speedup.",
"Table 5 shows the contribution of auxiliary losses for fine-tuning DeFormer-BERT on SQuAD dataset.",
"The drop in effectiveness when not using the Layerwise Representation Similarity (LRS) and Knowledge Distillation (KD) losses shows the utility of auxiliary supervision.",
"Figure 4a and Figure 4b show how the effectiveness and inference speed of DeFormer-BERT change as we vary the separation layer.",
"Inference speedup scales roughly quadratically with respect to the number of layers with decomposed attention.",
"The drop in effectiveness, on the other hand, is negligible when separating at the lower layers (until layer 3 for the base model and until layer 13 for the large model) and increases slowly after that,",
"with a dramatic increase in the last layers closest to the output.",
"The separation layer choice thus allows trading effectiveness for inference speed.",
"The main difference between the original BERT and the DeFormer-BERT is the absence of cross-attention in the lower layers.",
"We analyze the differences between the representations of the two models across all layers.",
"To this end, we randomly select 100 passages from the SQuAD dev set, and for each passage we randomly select 5 different questions associated with it that already exist in the dataset.",
"For each passage, we encode all 5 question-passage pair sequences using both the fine-tuned original BERT-base model and the DeFormer-BERT-base model, and compute the distance between their vector representations at each layer.",
"Figure 5 shows the averaged distances of both the question and passage at different layers.",
"The lower layer representations of the passage and questions for both models remain similar but the upper layer representations differ significantly, supporting the idea that lack of cross-attention has less impact in the lower layers than in the higher ones.",
"Also, using the auxiliary supervision of upper layers has the desired effect of forcing DeFormer to produce representations that are closer to the original model.",
"This effect is less pronounced for the question representations.",
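The layer-wise comparison described above can be sketched as follows. This is a toy illustration with random arrays standing in for the two models' hidden states; the function name and the synthetic drift are our own assumptions, not the paper's code:

```python
import numpy as np

def layer_distances(states_a, states_b):
    """Average per-layer distance between two models' token representations.
    states_*: one (seq_len, hidden) array per layer."""
    return [float(np.linalg.norm(a - b, axis=-1).mean())
            for a, b in zip(states_a, states_b)]

# Toy stand-in for BERT vs. DeFormer-BERT hidden states: lower layers agree,
# upper layers drift, mimicking the trend reported for the passage encodings.
rng = np.random.default_rng(0)
base = [rng.normal(size=(16, 8)) for _ in range(4)]
other = [h + 0.1 * i * rng.normal(size=h.shape) for i, h in enumerate(base)]
d = layer_distances(base, other)
print(d[0] <= d[-1])  # True: distance grows with depth in this toy setup
```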
"DeFormer enables caching of text representations that can be computed offline.",
"While a full-scale analysis of the detailed trade-offs in storage versus latency is beyond the scope of this paper, we present a set of basic calculations to illustrate that the storage cost of caching can be substantially smaller compared to the inference cost.",
"Assuming a use case of evaluating one million question-passage pairs daily, we first compute the storage requirements of the representations of these passages.",
"With the BERT-base representations we estimate this to be 226KB per passage and 226GB in total for 1 million passages.",
"The cost of storing this data, plus the added compute cost of reading these passages, amounts to a total of $61.7 per month at current vendor rates.",
"To estimate inference cost, we use the compute times obtained from our measurements and current vendor rates for GPU workloads, which amounts to $148.5 to support the same 1 million question-passage pair workload.",
"The substantial reduction in cost is because the storage cost is many orders of magnitude cheaper than using GPUs.",
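The back-of-envelope arithmetic behind the per-passage storage figure can be reproduced as a sketch. The token count per passage is a hypothetical value chosen to land near the paper's 226KB figure, and vendor prices are not modeled here:

```python
# Back-of-envelope caching cost for BERT-base passage representations.
# TOKENS is an assumed average passage length (hypothetical), picked so the
# result is in the same ballpark as the paper's reported 226KB per passage.
HIDDEN = 768            # BERT-base hidden size
BYTES_PER_FLOAT = 4     # fp32
TOKENS = 75             # assumed average passage length

per_passage_bytes = HIDDEN * BYTES_PER_FLOAT * TOKENS
per_passage_kb = per_passage_bytes / 1e3                 # ≈ 230 KB per passage
total_gb = per_passage_bytes * 1_000_000 / 1e9           # ≈ 230 GB for 1M passages
print(f"{per_passage_kb:.1f} KB/passage, {total_gb:.1f} GB total")
```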
"Details of these calculations are listed in the Appendix.",
"Speeding up inference in a model requires reducing the amount of compute involved.",
"There are two broad related directions of prior work:",
"(i) Compression techniques can be used to reduce model size through low rank approximation (Zhang et al., 2015; Kim et al., 2015; Tai et al., 2015; Chen et al., 2018), and model weights pruning (Guo et al., 2016; Han et al., 2015), which have been shown to help speedup inference in CNN and RNN based models.",
"For Transformers, Michel et al. (2019) explore pruning the attention heads to gain inference speedup.",
"This is an orthogonal approach that can be combined with our decomposition idea.",
"However, for the paired-input tasks we consider, pruning heads only provides limited speedup.",
"In more recent work, Ma et al. (2019) propose approximating the quadratic attention computation with a tensor-decomposition-based multi-linear attention model.",
"However, it is not clear how this multi-linear approximation can be applied to pre-trained Transformers like BERT.",
"(ii) Distillation techniques can be used to train smaller student networks to speedup inference.",
"Tang et al. (2019) show that BERT can be used to guide designing smaller models (such as single-layer BiLSTM) for multiple tasks.",
"But for the tasks we study, such very small models suffer a significant performance drop.",
"For instance, there is a 13% accuracy degradation on the MNLI task.",
"Another closely related recent work is DistilBERT (Sanh et al., 2019), which trains a smaller BERT model (half the size of BERT-base) that runs 1.5 times faster than the original BERT-base. However, the distilled model incurs a significant drop in accuracy.",
"While more recent distillation works such as Jiao et al. (2019) and Sun et al. (2020) further improve the speedups, our decomposition achieves comparable accuracy.",
"More importantly, these distilled models usually undergo expensive pre-training on language modeling tasks before they can be fine-tuned for downstream tasks.",
"Previous neural QA models such as BiDAF (Seo et al., 2016), QANet (Yu et al., 2018) and many others contain decomposition as part of their neural architecture design.",
"In contrast, the focus of our work is to show that large pre-trained Transformer models can be decomposed at the fine-tuning stage to deliver the effectiveness of state-of-the-art pre-trained Transformers at much lower inference latency.",
"In this work, we ask whether we can speed up the inference of Transformer models without compressing or removing model parameters.",
"Part of the massive success of pre-trained Transformer models on many NLP tasks is due to their large parameter capacity, which enables complex language representations.",
"The decomposition we propose makes minimal changes retaining the overall capacity and structure of the original model but allows for faster inference by enabling parallel processing and caching of segments.",
"DeFormer applies to settings where the underlying model relies on input-wide self-attention layers.",
"Even with models that propose alternate ways to improve efficiency, as long as the models use input-wide self-attention, DeFormer can be applied as a complementary mechanism to further improve inference efficiency.",
"We leave an evaluation of applying DeFormer on top of other recent efficiency optimized models for future work.",
"Transformers have improved the effectiveness of NLP tools by their ability to incorporate large contexts effectively in multiple layers.",
"This however imposes a significant complexity cost.",
"In this work, we showed that modeling such large contexts may not always be necessary.",
"We build a decomposition of the Transformer model that provides substantial improvements in inference speed and memory usage, while retaining most of the original model's accuracy.",
"A key benefit of the model is that its architecture remains largely the same as the original model which allows us to avoid repeating pretraining and use the original model weights for fine-tuning.",
"The distillation techniques further reduce the performance gap with respect to the original model.",
"This decomposition model provides a simple yet strong starting point for efficient QA models as NLP moves towards increasingly larger models handling wider contexts.",
"We thank Google for supporting this research through the Google Cloud Platform credits."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"other",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"result",
"result",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"method",
"method",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"abstain",
"other",
"other",
"objective",
"method",
"other",
"objective",
"other",
"other",
"method",
"abstain",
"abstain",
"result",
"method",
"method",
"abstain",
"abstain",
"other"
] |
[
"Heavily overparameterized language models such as BERT, XLNet and T5 have achieved impressive success in many NLP tasks.",
"However, their high model complexity requires enormous computation resources and extremely long training time for both pretraining and fine-tuning.",
"Many works have studied model compression on large NLP models, but only focusing on reducing inference time while still requiring an expensive training process.",
"Other works use extremely large batch sizes to shorten the pre-training time, at the expense of higher computational resource demands.",
"In this paper, inspired by the Early-Bird Lottery Tickets recently studied for computer vision tasks, we propose EarlyBERT, a general computationally-efficient training algorithm applicable to both pre-training and fine-tuning of large-scale language models.",
"By slimming the self-attention and fully-connected sub-layers inside a transformer, we are the first to identify structured winning tickets in the early stage of BERT training.",
"We apply those tickets towards efficient BERT training, and conduct comprehensive pre-training and fine-tuning experiments on GLUE and SQuAD downstream tasks.",
"Our results show that EarlyBERT achieves comparable performance to standard BERT, with 35-45% less training time.",
"Code is available at https://github.com/VITA-Group/EarlyBERT.",
"Large-scale pre-trained language models ( e.g., BERT (Devlin et al., 2018), XLNet (Yang et al., 2019), T5 (Raffel et al., 2019)) have significantly advanced the state of the art in the NLP field.",
"Despite impressive empirical success, their computational inefficiency has become an acute drawback in practice.",
"As more transformer layers are stacked with larger self-attention blocks, model complexity increases rapidly. (Footnote: Work was done when the author interned at Microsoft.)",
"For example, compared to the BERT-Large model with 340 million parameters, T5 has more than 10 billion parameters to learn.",
"Such high model complexity calls for expensive computational resources and extremely long training time.",
"Model compression is one approach to alleviating this issue.",
"Recently, many methods have been proposed to encode large NLP models compactly (Sanh et al., 2019; Jiao et al., 2019; Sun et al., 2019, 2020b).",
"However, the focus is solely on reducing inference time or resource costs, leaving the process of searching for the right compact model ever more costly.",
"Furthermore, most model compression methods start with a large pre-trained model, which may not be available in practice.",
"Recent work (You et al., 2020b) proposes to use large training batches, which significantly shortens pre-training time of BERT-Large model but demands daunting computing resources (1,024 TPUv3 chips).",
"In contrast, our quest is to find a general resource-efficient training algorithm for large NLP models, which can be applied to both pre-training and fine-tuning stages.",
"Our goal is to trim down training time without increasing total training resource costs (e.g., by resorting to large-batch or distributed training).",
"To meet this challenging demand, we draw inspiration from recent work (You et al., 2020a) that explores the use of the Lottery Ticket Hypothesis (LTH) for efficient training of computer vision models.",
"LTH was first proposed in Frankle and Carbin (2019) as an exploration to understand the training process of deep networks.",
"The original LTH substantiates a trainable sparse sub-network at initialization, but it cannot be directly utilized for efficient training, since the subnetwork itself has to be searched through a tedious iterative process.",
"In addition, most LTH works discussed only unstructured sparsity.",
"The study of You et al. (2020a) presents the discovery that structured lottery tickets can emerge in the early stage of training (i.e., Early-Bird Tickets), and therefore a structurally sparse sub-network can be identified with much lower cost, leading to practical efficient training algorithms.",
"Inspired by the success of LTH and Early-Bird Ticket, we propose EarlyBERT, a general efficient training algorithm based on structured Early-Bird Tickets.",
"Due to the vast differences between the architectures and building blocks of computer vision models and BERT, directly extending the method of You et al. (2020a) does not apply to our work.",
"By instead using network slimming (Liu et al., 2017) on the self-attention and fully-connected sub-layers inside a transformer, we are the first to introduce an effective approach that can identify structured winning tickets in the early stage of BERT training , that are successfully applied for efficient language modeling pre-training and fine-tuning.",
"Extensive experiments on BERT demonstrate that EarlyBERT can save 35-45% of training time with minimal performance degradation, when evaluated on the GLUE and SQuAD benchmarks.",
"Efficient NLP Models: It is well believed that BERT and other large NLP models are considerably overparameterized (McCarley, 2019; Sun et al., 2019).",
"This explains the emergence of many model compression works, which can be roughly categorized into quantization (Shen et al., 2020; Zafrir et al., 2019), knowledge distillation (Sun et al., 2019; Jiao et al., 2019; Sanh et al., 2019; Sun et al., 2020a,b), dynamic routing (Fan et al., 2019; Xin et al., 2020), and pruning (Li et al., 2020; Wang et al., 2019; McCarley, 2019; Michel et al., 2019).",
"Almost all model compression methods focus on reducing inference time; their common drawback is the reliance on fully-trained and heavily-engineered dense models before proceeding to their compact, sparse versions, which essentially transplants the resource burden from the inference stage to the training stage.",
"Pruning is the mainstream approach for compressing BERT so far (Gordon et al., 2020).",
"McCarley (2019) proposed to greedily and iteratively prune away attention heads contributing less to the model.",
"Wang et al. (2019) proposed to structurally prune BERT models using low-rank factorization and augmented Lagrangian (cid:96) 0 norm regularization.",
"McCarley (2019) pruned less important self-attention heads and slices of MLP layers by applying (cid:96) 0 regularization to the coefficient corresponding to each head/MLP layer.",
"Others aim to reduce the training time of transformer-based models via large-batch training and GPU model parallelism (You et al., 2020b; Shoeybi et al., 2019).",
"Our work is orthogonal to these works, and can be readily combined for further efficiency boost.",
"Lottery Ticket Hypothesis in Computer Vision: The Lottery Ticket Hypothesis (LTH) was first proposed in Frankle and Carbin (2019), which shed light on the existence of sparse sub-networks (i.e., winning tickets) at initialization, with non-trivial sparsity ratios, that can achieve almost the same performance as the full model when trained alone.",
"The winning tickets are identified by pruning fully trained networks using the so-called Iterative Magnitude-based Pruning (IMP).",
"However, IMP is expensive due to its iterative nature.",
"Moreover, IMP leads to unstructured sparsity, which is known to be insufficient in reducing training cost or accelerating training speed practically.",
"These barriers prevent LTH from becoming immediately helpful towards efficient training.",
"Morcos et al. (2019) studies the transferability of winning tickets between datasets and optimizers.",
"Zhou et al. (2019) investigates different components in LTH and observes the existence of super-masks in winning tickets.",
"Lately, You et al. (2020a) pioneers to identify Early-Bird Tickets, which emerge at the early stage of the training process, and contain structured sparsity when pruned with Network Slimming (Liu et al., 2017) which adopts channel pruning.",
"Early-Bird Tickets mitigate the two aforementioned limitations of IMP, and make it possible to train deep models efficiently by drawing tickets early in training and then focusing on training this compact sub-network only.",
"Chen et al. (2021) reveals the benefit of LTH in data-efficient training, but their focus is not on saving training resources.",
"Lottery Ticket Hypothesis in NLP: All of the above works evaluate their methods on computer vision models.",
"For NLP models, previous work has also found that matching subnetworks exist in transformers and LSTMs (Yu et al., 2019; Renda et al., 2020).",
"Evci et al. (2020) derived an algorithm for training sparse neural networks according to LTH and applied it to character-level language modeling on WikiText-103.",
"For BERT models, a latest work (Chen et al., 2020b) found that the pre-trained BERT models contain sparse subnetworks, found by unstructured IMP at 40% to 90% sparsity, that are independently trainable and transferable to a range of downstream tasks with no performance degradation.",
"Their follow-up work (Chen et al., 2020a; Gan et al., 2021) pointed out similar phenomena in pre-trained computer vision and vision-language models.",
"Another work (Prasanna et al., 2020) aims to find structurally sparse lottery tickets for BERT, by pruning entire attention heads and MLP layers.",
"Their experiments show that all subnetworks (good and bad) have comparable performance when fine-tuned on downstream tasks, leading to their 'all tickets are winning' conclusion.",
"Nevertheless, both relevant works (Chen et al., 2020b; Prasanna et al., 2020) examine only the pre-trained BERT model, i.e. , finding tickets with regard to the fine-tuning stage on downstream tasks.",
"To our best knowledge, no existing study analyzes the LTH at the pre-training stage of BERT; nor has any work discussed efficient BERT training using LTH, for either pre-training or fine-tuning.",
"Our work makes the first attempt of introducing LTH to both efficient pre-training and efficient fine-tuning of BERT.",
"Our results also provide positive evidence that LTH and Early-Bird Tickets in NLP models are amenable to structured pruning.",
"In this section, we first revisit the original Lottery Ticket Hypothesis (LTH) (Frankle and Carbin, 2019) and its variant Early-Bird Ticket (You et al., 2020a), then describe our proposed EarlyBERT.",
"Denote f(x; θ) as a deep network parameterized by θ, with x as its input.",
"A sub-network of f can be characterized by a binary mask m, which has exactly the same dimension as θ.",
"When applying the mask m to the network, we obtain the sub-network f(x; θ ⊙ m), where ⊙ is the Hadamard product operator.",
"LTH states that, for a network initialized with θ₀, an algorithm called Iterative Magnitude Pruning (IMP) can identify a mask m such that the sub-network f(x; θ₀ ⊙ m) can be trained to have no worse performance than the full model f following the same training protocol.",
"Such a sub-network f(x; θ₀ ⊙ m), including both the mask m and initial parameters θ₀, is called a winning ticket.",
"The IMP algorithm works as follows: (1) initialize m as an all-one mask; (2) fully train f(x; θ₀ ⊙ m) to obtain well-trained parameters θ; (3) remove a small portion of the weights with the smallest magnitudes from θ ⊙ m and update m; (4) repeat (2)-(3) until a certain sparsity ratio is achieved.",
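Steps (1)-(4) can be sketched on a toy weight vector as follows (NumPy only; the pruning fraction, target sparsity, and the stand-in `train` function are illustrative assumptions, not a faithful training loop):

```python
import numpy as np

def imp(theta0, train, prune_frac=0.2, target_sparsity=0.8):
    """Iterative Magnitude Pruning: returns a binary mask m such that
    f(x; theta0 * m) is a candidate winning ticket."""
    m = np.ones_like(theta0)                     # (1) start from an all-one mask
    while 1.0 - m.mean() < target_sparsity:
        theta = train(theta0 * m) * m            # (2) fully train the masked network
        alive = np.abs(theta[m == 1])            # magnitudes of surviving weights
        thresh = np.quantile(alive, prune_frac)  # (3) smallest-magnitude cutoff...
        m[np.abs(theta) < thresh] = 0            #     ...remove them, update m
    return m                                     # (4) loop until target sparsity

rng = np.random.default_rng(0)
theta0 = rng.normal(size=100)
# "Training" is a stand-in: it just rescales the weights deterministically.
mask = imp(theta0, train=lambda w: w * 1.1)
print(round(1.0 - mask.mean(), 2))  # sparsity of the drawn ticket (>= 0.8)
```

Note that each retraining pass starts again from `theta0 * m`, mirroring the rewinding to the original initialization that LTH prescribes.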
"Two obstacles prevent LTH from being directly applied to efficient training.",
"First, the iterative process in IMP is essential to preserve the performance of LTH; however, this is computationally expensive, especially when the number of iterations is high.",
"Second, the original LTH does not pursue any structured sparsity in the winning tickets.",
"In practice, unstructured sparsity is difficult to be utilized for computation acceleration even when the sparsity ratio is high (Wen et al., 2016).",
"To mitigate these gaps, Early-Bird Tickets were proposed by You et al. (2020a), who discover that when using a structured mask m and a properly selected learning rate, the mask m quickly converges, and the corresponding sub-network emerges as the winning ticket, at an early stage of training.",
"The early emergence of winning tickets and the structured sparsity are both helpful in reducing computational cost in the training that follows.",
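A minimal sketch of the early-emergence detection idea, assuming (as in You et al., 2020a) that the ticket is considered drawn once the normalized Hamming distance between consecutive masks stays below a threshold; the `eps` and `patience` values here are hypothetical:

```python
import numpy as np

def mask_distance(m1, m2):
    """Normalized Hamming distance between two binary pruning masks."""
    return np.mean(m1 != m2)

def emerged(mask_history, eps=0.05, patience=3):
    """Early-Bird criterion (a sketch): the ticket is drawn once the last
    `patience` consecutive mask distances all stay below eps."""
    if len(mask_history) <= patience:
        return False
    pairs = zip(mask_history[-patience - 1:-1], mask_history[-patience:])
    return all(mask_distance(a, b) < eps for a, b in pairs)

# Masks recorded over training steps: early masks flip, later ones stabilize.
m0 = np.array([1, 1, 0, 1, 0, 1, 1, 0])
history = [m0, 1 - m0, m0, m0, m0, m0]
print(emerged(history))  # True: the mask has stabilized
```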
"You et al. (2020a) focuses on computer vision tasks with convolutional networks such as VGG (Simonyan and Zisserman, 2014) and ResNet (He et al., 2016).",
"Inspired by this, we set out to explore whether there are structured winning tickets in the early stage of BERT training that can significantly accelerate language model pre-training and fine-tuning.",
"The proposed EarlyBERT training framework consists of three steps: (i) Searching Stage: jointly train BERT and the sparsity-inducing coefficients to be used to draw the winning ticket; (ii) Ticket-drawing Stage: draw the winning ticket using the learned coefficients; and (iii) Efficient-training Stage: train EarlyBERT for pre-training or downstream fine-tuning.",
"Searching Stage: To search for the key substructure in BERT, we follow the main idea of Network Slimming (NS) (Liu et al., 2017).",
"However, pruning in NS is based on the scaling factor in batch normalization, which is not used in most NLP models such as BERT.",
"Therefore, we make necessary modifications to the original NS so that it can be adapted to pruning BERT.",
"(Footnote: EarlyBERT refers to the winning ticket discovered by the proposed 3-stage framework, which is equivalent to the resulting pruned BERT model after drawing the winning ticket; we also interchangeably use EarlyBERT as the name of the proposed framework.)",
"Specifically, we propose to associate attention heads and intermediate layers of the fully-connected sub-layers in a transformer with learnable coefficients, which will be jointly trained with BERT but with an additional ℓ₁ regularization to promote sparsity.",
"Some studies (Michel et al., 2019; Voita et al., 2019) find that the multi-head self-attention module of transformer can be redundant, presenting the possibility of pruning some heads from each layer of BERT without hurting model capacity.",
"A multi-head attention module (Vaswani et al., 2017) is formulated as: MultiHead(Q, K, V) = Concat(h_1, ..., h_n) W^O, with h_i = Attention(Q W_i^Q, K W_i^K, V W_i^V), (1) where n is the number of heads, and the projections W^O, W_i^Q, W_i^K, W_i^V are used for output, query, key and value.",
"Inspired by Liu et al. (2017), we introduce a set of scalar coefficients c_i^h (i is the index of attention heads and h means head) inside h_i: h_i = c_i^h Attention(Q W_i^Q, K W_i^K, V W_i^V). (2)",
"After the self-attention sub-layer in each transformer layer, the output MultiHead( Q, K, V ) will be fed into a two-layer fully-connected network, in which the first layer increases the dimension of the embedding by 4 times and then reduces it back to the hidden size (768 for BERTBASE and 1,024 for BERTLARGE ).",
"We multiply learnable coefficients to the intermediate neurons: FFN(x) = c^f ⊙ max(0, xW_1 + b_1) W_2 + b_2. (3)",
"These modifications allow us to jointly train BERT with the coefficients, using the following loss: L(f(·; θ), c) = L_0(f(·; θ), c) + λ ‖c‖_1, (4) where L_0 is the original loss function used in pre-training or fine-tuning, c is the concatenation of all the coefficients in the model, including those for attention heads and intermediate neurons, and λ is the hyper-parameter that controls the strength of regularization.",
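The coefficient-scaled heads and the ℓ₁-regularized loss above can be sketched on toy arrays as follows; the stand-in attention heads and all constants are illustrative assumptions, not the actual EarlyBERT implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def attention_heads(x, n_heads=4):
    """Stand-in for per-head attention outputs (hypothetical toy heads)."""
    return [x * (i + 1) for i in range(n_heads)]

def slimmed_multihead(x, c_h):
    # Each head's output is scaled by its learnable coefficient c_i^h,
    # so a zero coefficient silences that head entirely.
    heads = attention_heads(x, n_heads=len(c_h))
    return np.concatenate([c * h for c, h in zip(c_h, heads)])

def total_loss(task_loss, c, lam=1e-4):
    # L = L_0 + lam * ||c||_1 : the l1 term pushes coefficients toward zero.
    return task_loss + lam * np.abs(c).sum()

c = np.array([1.0, 0.5, 0.0, 0.8])   # learned head coefficients
x = rng.normal(size=8)
out = slimmed_multihead(x, c)        # head with c=0 contributes nothing
print(total_loss(task_loss=0.7, c=c, lam=0.1))  # ≈ 0.7 + 0.1 * 2.3 = 0.93
```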
"Note that in this step, the joint training of BERT and the coefficients are still as expensive as normal BERT training.",
"However, the winning strategy of EarlyBERT is that we only need to perform this joint training for a few steps, before the winning ticket emerges, which is much shorter than the full training process of pre-training or fine-tuning.",
"In other words, we can identify the winning tickets at a very low cost compared to the full training.",
"Then, we draw the ticket ( i.e. , the EarlyBERT), reset the parameters and train EarlyBERT that is computationally efficient thanks to its structured sparsity.",
"Next, we introduce how we draw EarlyBERT from the learned coefficients.",
"Ticket-drawing Stage: After training BERT and the coefficients c jointly, we draw EarlyBERT from the learned coefficients using a magnitude-based metric.",
"Note that we prune attention heads and intermediate neurons separately, as they play different roles.",
"We prune the attention heads whose coefficients have the smallest magnitudes, and remove these heads from the computation graph.",
"We also prune the rows in W^O (see Eqn. (1)) that correspond to the removed heads.",
"Note that this presents a design choice: should we prune the heads globally or layer-wise?",
"In this paper, we use layer-wise pruning for attention heads, because the number of heads in each layer is very small (12 for BERTBASE and 16 for BERTLARGE ).",
"We observe empirically that if pruned globally, the attention heads in some layers may be completely removed, making the network un-trainable.",
"Furthermore, Ramsauer et al. (2020) observes that attention heads in different layers exhibit different behaviors.",
"This also motivates us to only compare importance of attention heads within each layer.",
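The layer-wise ticket drawing can be sketched as below: within every layer, the heads with the smallest coefficient magnitudes are masked out, which guarantees that no layer loses all of its heads (the pitfall of global pruning noted above). The names and sizes here are illustrative:

```python
import numpy as np

def draw_ticket_layerwise(coeffs, n_prune):
    """Layer-wise ticket drawing: in every layer, zero out the n_prune
    attention-head coefficients with the smallest magnitudes."""
    masks = []
    for c in coeffs:                       # one coefficient vector per layer
        order = np.argsort(np.abs(c))      # smallest-magnitude heads first
        m = np.ones_like(c)
        m[order[:n_prune]] = 0.0
        masks.append(m)
    return masks

# 2 layers x 12 heads (BERT-base-like), pruning 4 heads per layer.
rng = np.random.default_rng(1)
coeffs = [rng.uniform(0, 1, size=12) for _ in range(2)]
masks = draw_ticket_layerwise(coeffs, n_prune=4)
print([int(m.sum()) for m in masks])  # → [8, 8]: every layer keeps 8 heads
```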
"Similar to pruning attention heads, we prune intermediate neurons in the fully-connected sublayers.",
"Pruning neurons is equivalent to reducing the size of intermediate layers, which leads to a reduced size of the weight matrices W_1 and W_2 in Eqn. (3).",
"Between global and layer-wise pruning, empirical analysis shows that global pruning works better.",
"We also observe that our algorithm naturally prunes more neurons for the later layers than earlier ones, which coincides with many pruning works on vision tasks.",
"We leave the analysis of this phenomenon as future work.",
"Efficient-training Stage: We then train the drawn EarlyBERT for pre-training or fine-tuning, depending on the target task.",
"If we apply EarlyBERT to pre-training, the initialization θ₀ of BERT will be a random initialization, the same setting as the original LTH (Frankle and Carbin, 2019) and Early-Bird Tickets (You et al., 2020a).",
"If we apply EarlyBERT to fine-tuning, then θ₀ can be any pre-trained model.",
"We can also moderately reduce the number of training steps in this stage without sacrificing performance, which is empirically supported by the findings in Frankle and Carbin (2019) and You et al. (2020a) that winning tickets can be trained more effectively than the full model.",
"In practice, the learning rate can also be increased to speed up training, in addition to reducing training steps.",
"Different from unstructured pruning used in LTH and many other compression works (Frankle and Carbin, 2019; Chen et al., 2020b), structurally pruning attention heads and intermediate neurons in fully-connected layers can directly reduce the number of computations required in the transformer layer, and shrink the matrix size of the corresponding operations, yielding a direct reduction in computation and memory costs.",
"Early Emergence: Following a similar approach to You et al. (2020a), we visualize the normalized mask distance between different training steps, to validate the early emergence of winning tickets.",
"In Figure 1, the axes in the plots are the number of training steps finished.",
"We only use one fully-connected sub-layer to plot Figures 1(b) and 1(d), due to high dimensionality.",
"Table 1: Comparison between randomly-pruned models and EarlyBERT on GLUE tasks. MNLI/QNLI/QQP/SST-2: BERT_BASE 83.16/90.59/90.34/91.70, EarlyBERT_BASE 83.58/90.33/90.41/92.09, Random 82.26/88.87/0.12/91.17; CoLA/RTE/MRPC: BERT_BASE 0.535/65.70/80.96, EarlyBERT_BASE 0.527/66.19/81.54, Random 0.514/63.86/78.57.",
"In both pre-training and fine-tuning, the mask converges at a very early stage of the whole training process.",
"Although we observe an increase in mask distance in fully-connected layers during pre-training (Figure 1(b)), this can be easily eliminated by early stopping, using mask distance as the exit criterion.",
"An ablation study on how early stopping influences the performance of EarlyBERT is presented in Sec. 4.2.",
"Non-trivial Sub-network: Here, by non-trivial we mean that, with the same sparsity ratio as EarlyBERT, a randomly pruned model suffers a significant performance drop.",
"The performance drop happens even if we only prune attention heads.",
"We verify this by running fine-tuning experiments on BERT_BASE.",
"Table 2: Performance of EarlyBERT (fine-tuning) compared with different baselines. MNLI/QNLI/QQP/SST-2/SQuAD (time saved): BERT_BASE 83.16/90.59/90.34/91.70/87.50, EarlyBERT_BASE 81.81/89.18/90.06/90.71/86.13 (40-45%), Random_BASE 79.92/84.46/89.42/89.68/84.47 (45-50%), LayerDrop (Fan et al., 2019) 81.27/88.91/88.06/89.89/84.25 (33%); BERT_LARGE 86.59/92.29/91.59/92.21/90.76, EarlyBERT_LARGE 85.13/89.22/90.64/90.94/89.45 (35-40%), Random_LARGE 78.45/84.46/89.89/88.65/88.79 (40-45%), LayerDrop 85.12/91.12/88.88/89.97/89.44 (33%).",
"Specifically, we prune 4 heads from each transformer layer in BERTBASE and EarlyBERT.",
"We fine-tune BERT_BASE for 3 epochs with an initial learning rate of 2 × 10^-5.",
"We run the searching stage for 0.2 epochs with λ = 1 × 10^-4, draw EarlyBERT with a pruning ratio of 1/3, and then fine-tune EarlyBERT for 2 epochs with a doubled initial learning rate.",
"For the randomly pruned models, we randomly prune 4 heads in each layer and follow the same fine-tuning protocol as EarlyBERT.",
"The reported results of randomly pruned models are the average of 5 trials with different seeds for pruning.",
"The results on four tasks from the GLUE benchmark (Wang et al., 2018), presented in Table 1, show that the randomly pruned model consistently under-performs EarlyBERT by a significant margin, supporting our claim that EarlyBERT indeed identifies non-trivial sub-structures.",
"Backbone Models: Following the official BERT implementation (Devlin et al., 2018; Wolf et al., 2019), we use both BERT_BASE (12 transformer layers, hidden size 768, 3,072 intermediate neurons, 12 self-attention heads per layer, 110M parameters in total) and BERT_LARGE (24 transformer layers, hidden size 1,024, 4,096 intermediate neurons, 16 self-attention heads per layer, 340M parameters in total) for experiments.",
"Datasets: We use English Wikipedia (2,500M words) as the pre-training data.",
"For fine-tuning experiments and evaluation of models in the pretraining experiments, we use tasks from GLUE benchmark (Wang et al., 2018) and a question-answering dataset SQuAD v1.1 (Rajpurkar et al., 2016).",
"Note that as our goal is efficient pre-training and fine-tuning, we focus on larger datasets from GLUE (MNLI, QNLI, QQP and SST-2), as it is less meaningful to discuss efficient training on very small datasets.",
"We use the default training settings for pre-training and fine-tuning on both models.",
"To evaluate model performance, we use Matthew's correlation score for CoLA, matched accuracy for MNLI, F1-score for SQuAD v1.1, and accuracy in percentage for other tasks on GLUE.",
"We omit % symbols in all the tables on accuracy results.",
"Implementation Details: For the vanilla BERT, we fine-tune on the GLUE datasets for 3 epochs with an initial learning rate of 2 × 10^-5, and on SQuAD for 2 epochs with an initial learning rate of 3 × 10^-5; we use the AdamW (Loshchilov and Hutter, 2017) optimizer in both cases.",
"For pre-training, we adopt LAMB optimization technique (You et al., 2020b), which involves two phases of training: the first 9/10 of the total training steps uses a sequence length of 128, while the last 1/10 uses a sequence length of 512.",
"Pre-training by default has 8,601 training steps and uses 64k/32k batch sizes and 6 × 10^-3 / 4 × 10^-3 initial learning rates for the two phases, respectively.",
"All experiments are run on 16 NVIDIA V100 GPUs.",
"Training Time Measuring Protocol We strictly measure the training time saving of EarlyBERT on the QQP task in GLUE using CUDA benchmark mode.",
"To minimize the influence of the hardware environment as best we can, we measure the time elapsed during each step individually and calculate the average time per step over the whole training process.",
"The time for data I/O is excluded.",
"The training time of EarlyBERT includes both the searching stage and the efficient training stage.",
"The main results of EarlyBERT in fine-tuning are presented in Table",
"2. According to the observation of the early emergence of tickets in Sec. 3.3, we run the searching stage for 0.2 epochs (which accounts for less than 7% of the cost of a standard 3-epoch fine-tuning) with regularization strength 1 × 10^-4 for all tasks.",
"When drawing EarlyBERT, we prune 4 heads in each layer from BERTBASE and 6 heads from BERTLARGE , and globally prune 40% intermediate neurons in fully-connected sub-layers in both models, instead of pruning only heads as in Table",
"1. After this, we re-train the EarlyBERT models for a reduced number of training epochs (from 3 to 2) on the GLUE benchmark, with the learning rate scaled up by 2× to buffer the effect of the reduced epochs.",
"For SQuAD dataset, we keep the default setting, as we find SQuAD is more sensitive to the number of training steps.",
"The selection of these hyperparameters is based on the ablation studies that follow the main results in Table 2, in which we investigate the effects of the number of training epochs, the learning rate during downstream fine-tuning, the regularization strength, and the pruning ratios on self-attention heads and intermediate neurons.",
"Several observations can be drawn from Table",
"2. First, in most tasks, EarlyBERT saves over 40% of the total training time with 4 self-attention heads pruned in each layer and 40% FC neurons pruned globally, without inducing much performance degradation.",
"Specifically, following the training time measurement protocol in Sec. 4.1, we observe that EarlyBERT saves 42.97% of the total training time of a full BERT model on QQP task.",
"The time saving slightly differs across tasks, hence we report a range of time savings.",
"Here, Random BASE saves slightly more training time because random pruning skips the searching [Table 3: Ablation of regularization strength and pruning ratios on self-attention heads and intermediate neurons. Regularization strength 10^-4 / 10^-3 / 10^-2: 88.55 / 88.43 / 88.42. # Pruned Heads 4 / 5 / 6, layer-wise pruning: 88.55 / 88.13 / 87.65. # Pruned Neurons 30% / 40% / 50%, layer-wise pruning: 88.18 / 88.22 / 87.90; global pruning: 88.31 / 88.23 / 87.91.]",
"stage in EarlyBERT BASE , but it induces much more accuracy drop.",
"EarlyBERT BASE can also outperform another strong baseline LayerDrop (Fan et al., 2019), which drops one third of the layers so that the number of remaining parameters is comparable to ours.",
"Note that LayerDrop models are fine-tuned for three full epochs, yet EarlyBERT is still competitive in most cases.",
"Second, we consistently observe an obvious performance advantage of EarlyBERT over randomly pruned models, which provides further strong evidence that EarlyBERT does discover non-trivial key sparse structures.",
"Even though there still exists a margin between EarlyBERT and the baseline (You et al. (2020a) also observed a similar phenomenon in their tasks), the existence of structured winning tickets and their potential for efficient training is highly promising.",
"We leave as future work to discover winning tickets of higher sparsity but better quality.",
"Ablation Studies on Fine-tuning We perform extensive ablation studies to investigate important hyper-parameter settings in EarlyBERT, using EarlyBERT BASE as our testing bed.",
"For all experiments, we use the average accuracy on the larger datasets from GLUE benchmark (MNLI, QNLI, QQP and SST-2) as the evaluation metric.",
"Number of training epochs and learning rate.",
"We first investigate whether we can properly reduce the number of training epochs, and whether scaling the learning rate can help compensate for the negative effect caused by reducing training steps.",
"Results in Figure 2 show that when we fine-tune EarlyBERT for fewer epochs on GLUE, up-scaling learning rate first helps to recover performance, and then causes decrease again.",
"We will use two epochs and 4 × 10^-5 as the learning rate [Table 4: Performance of EarlyBERT (pre-training) compared with BERT baselines. Columns: CoLA / MNLI / MRPC / QNLI / QQP / RTE / SST-2 / SQuAD. BERTBASE: 0.45 / 81.40 / 84.07 / 89.86 / 89.80 / 60.29 / 90.48 / 87.60. EarlyBERT BASE: 0.41 / 79.97 / 80.39 / 89.86 / 89.44 / 61.01 / 90.94 / 85.48. BERTLARGE: 0.50 / 83.56 / 85.90 / 90.44 / 90.45 / 59.93 / 92.55 / 90.43. EarlyBERT LARGE: 0.47 / 82.54 / 85.54 / 90.46 / 90.38 / 61.73 / 91.51 / 89.36.]",
"for EarlyBERT on GLUE experiments.",
"Regularization strength.",
"A proper selection of the regularization strength decides the quality of the winning ticket, and consequently the performance of EarlyBERT after pre-training/fine-tuning.",
"Results in Table 3 show that the regularization strength has marginal influence on EarlyBERT performance.",
"We use the regularization strength 10^-4, which achieves the best performance, in the following experiments.",
"Pruning ratios.",
"We further investigate the effects of different pruning ratios as well as layer-wise/global pruning on the performance of EarlyBERT.",
"As discussed in Sec. 3.2, we only consider layer-wise pruning for self-attention heads.",
"Table 3 shows that the performance monotonically decreases when we prune more self-attention heads from BERT; however, as the pruning ratio for intermediate neurons in fully-connected sub-layers is raised, we see a slight increase and then a sharp decrease in accuracy (a 40% pruning ratio seems to be the sweet spot).",
"We also observe consistent superiority of global pruning over layer-wise pruning for intermediate neurons.",
"Early-stop strategy for searching.",
"In Figure 1, we show the early emergence of winning tickets in BERT when trained with ℓ1 regularization, suggesting we can stop the searching stage early to save computation while still generating high-quality tickets.",
"Here, we study how the early-stop strategy influences the model performance.",
"We fine-tune EarlyBERT on QNLI following the same setting described earlier in this section, but stop the searching stage at different time points during the first epoch.",
"Results in Figure 3 show ( i ) an abrupt increase in accuracy when we stop at 20% of the first epoch; ( ii ) slight increase when we delay the stop till the end of the first epoch.",
"Considering training efficiency, we think 20–40% of the first epoch makes a suitable stopping time.",
"Trade-off Between Efficiency and Performance We vary the pruning ratios for the FC layers and the number of self-attention heads pruned in each layer in EarlyBERT, fine-tune the models on QQP in GLUE, and obtain the corresponding validation accuracies and training time savings following the protocol above.",
"Results are shown in Table 5.",
"We can see clear correlations between the training time saving and the accuracy: the more FC neurons or self-attention heads are pruned, the more training time is saved, yet the larger the accuracy drop.",
"Moreover, for most combinations of these two hyper-parameters, the accuracy drop is within 1%, which also supports the efficiency of EarlyBERT.",
"The search stage runs for 400 steps of training in the first training phase that uses a sequence length of 128, which accounts for less than 3% of a standard pre-training, with regularization strength 1 × 10^-4.",
"When we draw EarlyBERT, similar to the settings in fine-tuning experiments, we prune 4 heads in each layer from BERTBASE and 6 heads from BERTLARGE ; however, we prune slightly fewer (30%) intermediate neurons in fully-connected sub-layers in both models, since we empirically observe that pre-training is more sensitive to aggressive intermediate neuron pruning.",
"In both phases of pre-training, we reduce the training steps to 80% of the default setting when training EarlyBERT (based on the ablation study shown in Figure 4).",
"Other hyper-parameters for pre-training follow the default setting described in Sec. 4.1.",
"All models are fine-tuned and evaluated on GLUE and SQuAD v1.1 with the default setting.",
"Different from fine-tuning experiments, the pretraining stage dominates the training time over the downstream fine-tuning, and thus we only consider the training time saving during pre-training.",
"Since the randomly pruned models do not have competitive performance in fine-tuning experiments as shown in Sec. 4.2, we focus on comparing EarlyBERT with the full BERT baseline.",
"From the results presented in Table 4, we can see that on downstream tasks with larger datasets such as QNLI, QQP and SST-2, we can achieve accuracies that are close to BERT baseline (within 1% accuracy gaps except for EarlyBERT BASE on MNLI and SQuAD).",
"However, on downstream tasks with smaller datasets, the patterns are not consistent: we observe big drops on CoLA and MRPC but improvement on RTE.",
"Overall, EarlyBERT achieves comparable performance while saving 30–35% training time thanks to its structured sparsity and reduction in training steps.",
"Reducing Training Steps in Pre-training We investigate whether EarlyBERT, when nonessential heads and/or intermediate neurons are pruned, can train more efficiently, and whether we can reduce the number of training steps in pretraining.",
"This can further help reduce training cost in addition to the efficiency gain from pruning.",
"We use EarlyBERT BASE -Self (only self-attention heads are pruned when drawing the winning ticket) as the testing bed.",
"Figure 4 shows that the performance decreases more when we reduce the number of training steps to 60% or less.",
"Reducing it to 80% seems to be a sweet spot with the best balance between performance and efficiency.",
"On one hand, two relevant works (Chen et al., 2020b; Prasanna et al., 2020) only investigate lottery tickets on pre-trained NLP models for fine-tuning on the downstream tasks, while EarlyBERT makes the first attempt at introducing lottery tickets to both fine-tuning and pre-training stages, and provides empirical evidence that NLP models are amenable to structured pruning.",
"On the other hand, EarlyBERT pursues structured sparsity while Chen et al. (2020b) promote unstructured sparsity, which is hardware-unfriendly and provides almost no acceleration, besides the high cost of IMP.",
"As an implicit comparison, Chen et al. (2020b) induces 0.4% accuracy drop on SQuAD v1 dataset compared to the BERT baseline with 40% unstructured sparsity (comparable with our settings in Section 4.2), while EarlyBERT induces 1.37% accuracy drop.",
"Note that Chen et al. (2020b) uses 6× the training time (because IMP reaches 40% sparsity with 6 iterations) and 4.69× the FLOPs, whereas EarlyBERT uses only 0.76× the training time and FLOPs.",
"In this paper, we present EarlyBERT, an efficient framework for large-scale language model pretraining and fine-tuning.",
"Based on Lottery Ticket Hypothesis, EarlyBERT identifies structured winning tickets in an early stage, then uses the pruned network for efficient training.",
"Experimental results demonstrate that the proposed method is able to achieve comparable performance to standard BERT with much less training time.",
"Future work includes exploring more data-efficient strategies to enhance the current training pipeline."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"method",
"result",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain"
] |
[
"We propose Visual Query Detection (VQD), a new visual grounding task.",
"In VQD, a system is guided by natural language to localize a variable number of objects in an image.",
"VQD is related to visual referring expression recognition, where the task is to localize only one object.",
"We describe the first dataset for VQD and we propose baseline algorithms that demonstrate the difficulty of the task compared to referring expression recognition.",
"In computer vision, object detection is the task of identifying all objects from a specific closed-set of pre-defined classes by putting a bounding box around each object present in an image, e.g., in the widely used COCO dataset there are 80 object categories and an algorithm must put a box around all instances of each object present in an image (Lin et al., 2014).",
"Recent deep learning based models have significantly advanced the state-of-the-art in object detection (Ren et al., 2015b); however, many applications demand more nuanced detection of objects with specific attributes or objects in relation to each other.",
"Here, we study goal-directed object detection, where the set of possible valid objects is far greater than in the typical object detection problem.",
"Specifically, we introduce the Visual Query Detection (VQD) task (see Fig. 1).",
"In VQD, a system is given a query in natural language and an image and it must produce 0–N boxes that satisfy that query.",
"VQD has numerous applications, ranging from image retrieval to robotics.",
"VQD is related to the visual referring expression recognition (RER) task (Kazemzadeh et al., 2014); however, in RER every image has only a single correct box.",
"In contrast, in VQD there could be no valid outputs for a query or multiple valid outputs, making the task both harder and more useful.",
"As discussed later, existing RER datasets have [Figure 1: Unlike VQD, object detection cannot deal with attributes and relations.]",
"multiple annotation problems and have significant language bias problems.",
"VQD is also related to Visual Question Answering (VQA), where the task is to answer questions about images in natural language (Malinowski and Fritz, 2014; Antol et al., 2015).",
"The key difference is that in VQD the algorithm must generate image bounding boxes that satisfy the query, making it less prone to the forms of bias that plague VQA datasets.",
"1. We describe the first dataset for VQD, which will be publicly released.",
"2. We evaluate multiple baselines on our VQD dataset.",
"Over the past few years, a large amount of work has been done at the intersection of computer vision and natural language understanding, including visual madlibs (Yu et al., 2015; Tommasi et al., 2018), captioning (Farhadi et al., 2010; Kulkarni et al., 2013; Johnson et al., 2016; Liu et al., 2018), and image retrieval (Wan et al., 2014; Li et al., 2016).",
"For VQD, the most related tasks are VQA and RER, which we review in detail.",
"VQA systems take in an image and open-ended natural language question and then generate a text-based answer (Antol et al., 2015; Goyal et al., 2017; Acharya et al., 2019; Kafle et al., 2018).",
"Many VQA datasets have been created.",
"However, initial datasets, e.g., VQAv1 (Antol et al., 2015) and COCO-QA (Ren et al., 2015a), exhibited significant language bias in which many questions could be answered correctly without looking at the image, e.g., for VQAv1 it was possible to achieve 50% accuracy using language alone (Kafle and Kanan, 2016).",
"To address the bias issue, the VQAv2 dataset was created with a more balanced distribution for each possible answer to make algorithms analyze the image (Goyal et al., 2017), but it still had bias in the kinds of questions asked, with some questions being scarce, e.g., reasoning questions.",
"Synthetic datasets such as the CLEVR dataset (Johnson et al., 2017) addressed this by being synthetically generated to emphasize hard reasoning questions that are rare in VQAv1 and VQAv2.",
"The TDIUC dataset addresses bias using both synthetically generated and human gathered questions about natural images, with performance evaluated for 12 kinds of questions (Kafle and Kanan, 2017a).",
"While the state-of-the-art has rapidly increased on both synthetic and natural image VQA datasets, many models do not generalize across datasets (Shrestha et al., 2019).",
"Unlike VQA, RER algorithms must produce evidence to justify their outputs.",
"A RER algorithm outputs a box around the image location matching the input string, making it easier to tell if an algorithm is behaving correctly.",
"The RefCOCO and RefCOCO+ datasets for RER were collected from the two-player ReferIt' Game (Kazemzadeh et al., 2014).",
"The first player is asked to describe an outlined object and the second player has to correctly localize it from player one's description.",
"The test datasets are further split into the 'testA' and 'testB' splits.",
"The split 'testA' contains object categories sampled randomly to be close to the original data distribution, while 'testB' contains objects sampled from the most frequent object categories, excluding categories such as 'sky', 'sand', 'floor', etc.",
"Since there is a time limit on the game, the descriptions are short, e.g., 'guy in a yellow t-shirt,' 'pink,' etc.",
"Instead of playing a timed game, to create the RefCOCOg dataset for RER, one set of Amazon Mechanical Turk (AMT) users were asked to generate a description for a marked object in an image and other users marked the region corresponding to the description (Mao et al., 2016).",
"This resulted in more descriptive prompts compared to RefCOCO and RefCOCO+.",
"The Visual7W dataset for VQA includes a 'pointing' task that is closely related to RER (Zhu et al., 2016).",
"Pointing questions require choosing which of the four given boxes correctly answers a query.",
"Systems did not generate their own boxes, and there is always one correct box.",
"Cirik et al. (2018) showed that RER datasets suffer from biases caused by their dataset collection procedure.",
"For RefCOCOg, they found that randomly permuting the words in the referring expression caused only about a 5% drop in performance, suggesting that instead of relying on language structure, systems may be using some hidden correlations in language.",
"They further showed that an image only model that ignores the referring expression yielded a precision of 71.2% for top-2 best predictions.",
"They also found that predicting the object category given the image region produced an accuracy of 84.2% for top-2 best predictions.",
"By having 0–N boxes, VQD makes it harder for an image-only model to perform well.",
"We created VQDv1, the first dataset for VQD.",
"VQDv1 is created synthetically using annotations from Visual Genome (VG), COCO, and COCO Panoptic.",
"While this limits variety, it helps combat some kinds of bias and serves as an initial version of the dataset.",
"VQDv1 has three distinct query categories:",
"1. Object Presence (e.g., 'Show the dog in the image')",
"2. Color Reasoning (e.g., 'Which plate is white in color?') [Table 1: VQDv1 Query Types. Simple 391,628; Color 172,005; Positional 57,904; Total 621,537. Table 2: VQDv1 compared to RER datasets (# Images / # Questions). RefCOCO 19,994 / 142,209; RefCOCO+ 19,992 / 141,564; RefCOCOg 26,711 / 85,474; VQDv1 123,287 / 621,537.]",
"The number of queries per type is given in Table",
"1. The dataset statistics and example images are shown in Fig. 2 and Fig. 3, respectively.",
"We show statistics for VQDv1 compared to RER datasets in Table",
"2. All images in VQDv1 are from COCO.",
"The ground truth bounding box annotations are derived from the COCO Panoptic annotations dataset (Kir-illov et al., 2018).",
"The questions are generated using multiple templates for each question type, which is an approach that has been used in earlier work for VQA (Kafle and Kanan, 2017a; Kafle et al., 2017).",
"The query objects and their attributes are extracted by integrating the annotations from images that have both COCO and VG annotations.",
"COCO annotations are focused on objects, while VG also has attribute and relationship information, e.g., size, color, and actions for scene objects.",
"Object presence questions require an algorithm to determine all instances of an object in an image without any relationship or positional attributes, for example, 'Show me the horse in the image' or 'Where is the chair?' We use all of the COCO 'things' labels and half of the COCO 'stuff' labels to generate these questions, making this task test the same capabilities as conventional object detection.",
"We filter out some 'stuff' categories that do not have well-defined bounding boxes, such as 'water-other', 'floor-stone', etc.",
"We use multiple templates to create variety, e.g., 'Show the <object> in the image', 'Where are the <object> in the picture?', etc. 3.2 Color Reasoning: Color questions test the presence of objects modified by color attributes, e.g., 'Show me the cat which is grey in color' or 'Which blanket is blue in color?' Since COCO has only object annotations, color attributes are derived from VG's attribute annotations.",
"We align every VG image annotation with COCO annotations to obtain (object, color) annotations for each bounding box.",
"When multiple color attributes for an object are present, the object is assigned a single color from that attribute set.",
"Positional reasoning questions test the location of objects with respect to other objects, e.g., 'Show the building behind horses', 'Which people are in front of the lighthouse?', and 'Show the rhino behind elephant.' We again use VG's relationship and attribute annotations to create these questions.",
"Counter-concept questions have no valid boxes as outputs, and we endeavor to create hard counter-concept questions for each category.",
"We ask 'Show me the zebra' only if there is a similar animal present (e.g., a cow), which was done by using COCO's super-categories.",
"Likewise, 'Show me the donut that is brown in color' is only asked if a brown donut does not exist in the image.",
"Our experiments are designed to probe the behavior of models on VQD compared to RER datasets.",
"To facilitate this, we created a variant of our VQDv1 dataset that had only a single correct bounding box.",
"To evaluate performance for the RER and '1 Obj.' versions of the VQDv1 dataset, systems only output a single bounding box during test time, so the Precision@1 metric is used.",
"For the '0-N Obj.' version of the VQDv1 dataset, we use the standard PASCAL VOC metric AP_{IoU=.50}",
"from object detection, which calculates the average precision across the dataset using an intersection over union (IoU) greater than 0.5 criterion for matching with the ground truth boxes.",
"We implemented and evaluated four models for VQD.",
"All models are built on top of Faster R-CNN with a ResNet-101 backbone whose output bounding boxes pass through Non-Maximal Suppression",
"with a threshold of 0.7.",
"This acts as a region proposal generator that provides CNN features for each region.",
"The four models we evaluate are:",
"1. DETECT: A model that uses the full Faster R-CNN system to detect all trained COCO classes, and then outputs the boxes that have the same label as the first noun in the query.",
"2. RANDOM: Select one random Faster R-CNN proposal.",
"3. Query-Blind: A vision-only model that does binary classification of each region proposal's CNN features using a 3-layer Multilayer Perceptron (MLP) with 1024-unit hidden ReLU layers.",
"4. Vision+Query (V+Q): A model that does binary classification of each region proposal.",
"The query features are obtained from the last hidden layer of a Gated Recurrent Unit (GRU) network, and then they are concatenated with the CNN features and fed into a 3-layer MLP with 1024-unit hidden ReLU layers.",
"The primary reason for providing VQDv1 (1 obj.) and the RER results is to put the benefits of the VQD task in context.",
"To aid in this endeavor, we also include comparison results directly from the SLR models (Yu et al., 2017) for RER, which is a recent system for that task.",
"The Query-Blind and Vision+Query models are trained with binary cross-entropy loss.",
"We use a learning rate of 0.0004, and perform learning rate decay of 0.8 when the training loss plateaus continuously for five epochs.",
"The best model is selected based on the validation loss after training for 50 epochs.",
"Our main results are given in Table",
"3. Although simple, our Vision+Query model performs well across RER datasets, and it can also be applied to VQD tasks.",
"As expected, RANDOM performs poorly on both VQDv1 datasets.",
"DETECT beats RANDOM in the single object VQD setting by a large margin.",
"Since most of the questions in the RER datasets ask about common COCO categories, choosing one of those objects might be enough to get decent performance; however, DETECT performs poorly when evaluated under the 0–N object setting in VQDv1.",
"To handle queries in VQD, models must be able to understand the context and comprehend multiple objects in isolation.",
"In this paper, we described our VQDv1 dataset as a test for visual grounding via goal-directed object detection.",
"VQDv1 has both simple object presence and complex questions with 0–N bounding boxes.",
"While VQDv1 contains only synthetically generated questions, this can help mitigate some forms of bias present in other VQA and RER datasets (Cirik et al., 2018; Kafle and Kanan, 2017b).",
"While it would be expensive, a large, carefully filtered, and well designed human annotated VQD dataset is the next step toward advancing visual grounding research.",
"Compared to VQA, we argue that it is harder to be right for the wrong reasons in VQD because methods must generate bounding boxes.",
"Compared to RER, we argue that it is harder to exploit bias in VQD since there are a variable number of boxes per image, making it considerably more difficult, as demonstrated by our experiments.",
"We believe the VQD approach has considerable value and can be used to advance visual grounding research.",
"This work was supported in part by a gift from Adobe Research.",
"The lab thanks NVIDIA for the donation of a GPU.",
"We also thank fellow lab members Kushal Kafle and Tyler Hayes for their comments and useful discussions."
] | [
"objective",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"method",
"other",
"other",
"other"
] |
[
"This paper focuses on the Data Augmentation for low-resource Natural Language Understanding (NLU) tasks.",
"We propose Prompt based D ata A ugmentation model ( PromDA ) which only trains small-scale Soft Prompt (i.e., a set of trainable vectors) in the frozen Pre-trained Language Models (PLMs).",
"This avoids human effort in collecting unlabeled in-domain data and maintains the quality of generated synthetic data.",
"In addition, PromDA generates synthetic data via two different views and filters out the low-quality data using NLU models.",
"Experiments on four benchmarks show that synthetic data produced by PromDA successfully boost the performance of NLU models, which consistently outperform several competitive baseline models, including a state-of-the-art semi-supervised model using unlabeled in-domain data.",
"The synthetic data from PromDA are also complementary with unlabeled in-domain data .",
"The NLU models can be further improved when they are combined for training.",
"Deep neural networks often require large-scale high-quality labeled training data to achieve state-of-the-art performance (Bowman et al., 2015).",
"However, constructing labeled data could be challenging in many scenarios (Feng et al., 2021).",
"In this paper, we study the low-resource Natural Language Understanding (NLU) tasks, including sentence classification and sequence labelling tasks, where only small labeled data is available.",
"Previous works often produce extra labeled data for the NLU models to learn.",
"Wang et al. (2021a) deploys the self-training framework to produce pseudo labelled training data from unlabeled in-domain data which could be expensive to obtain.",
"Xu et al. (2021) [footnote: Work done during the internship at Microsoft STCA.]",
"has shown that extracting domain-specific unlabeled data from the general corpus is not trivial.",
"Wei and Zou (2019); Dai and Adel (2020) expand the original small training data using automatic heuristic rules, such as randomly synonyms replacement, which effectively creates new training instances.",
"However, these processes may distort the text, making the generated syntactic data grammatically and semantically incorrect.",
"To solve the above dilemma, many existing works (Ding et al., 2020; Yang et al., 2020; Anaby-Tavor et al., 2020) resort to applying Language Models (LMs) or Pre-trained Language Models (PLMs) for data augmentation in a low-resource setting.",
"Given the labeled data, one can directly fine-tune PLMs to generate new synthetic data without additional human effort.",
"However, we argue that, in low-resource NLU tasks, directly fine-tuning all parameters of PLMs with small training data (especially when there are fewer than 100 samples) could result in over-fitting, with the PLMs simply memorizing the training instances.",
"As a result, the generated synthetic data could be very similar to the original training instances and cannot provide new training signals to the NLU models.",
"Recently, several works (Lester et al., 2021; Li and Liang, 2021) propose prompt tuning, which only back-propagates the error to Soft Prompts (i.e., a sequence of continuous vectors prepended to the input of PLMs) instead of the entire model.",
"They show that prompt tuning is sufficient to be competitive with full model tuning while significantly reducing the number of parameters to be tuned.",
"Thus, prompt tuning is well suited to tackling the above over-fitting issue in low-resource generative fine-tuning, as it spawns more novel samples relative to the small labeled data while ensuring generation quality.",
"We only allow tuning the additional soft prompts during fine-tuning on the small labeled training data.",
"In addition, we have observed that the initialization of soft prompts has a significant impact on fine-tuning, especially when the low-resource situation reaches an extreme extent.",
"To better initialize the prompt parameters for the data augmentation tasks, we propose the task-agnostic Synonym Keywords to Sentence pre-training task, which directly pre-trains the prompt parameters of PLMs on their pre-training corpora.",
"This task simulates the process of generating an entire training sample from partial information (e.g., keywords).",
"Similar to previous works (Ding et al., 2020; Yang et al., 2020; Anaby-Tavor et al., 2020), we could fine-tune PLMs to produce complete synthetic data conditioned on the output tags.",
"We refer to this as Output View Generation.",
"To boost the diversity of the generated samples, we introduce another fine-tuning generative task named Input View Generation , which takes the extracted keywords from the sample as the input and the sample as the output.",
"As NLG models trained on small training data still have a certain chance of generating low-quality samples, we leverage NLU Consistency Filtering (Anaby-Tavor et al., 2020) to filter the generated samples.",
"We conduct experiments on four benchmarks: the sequence labelling tasks CoNLL03 (Tjong Kim Sang and De Meulder, 2003) and Wikiann (Pan et al., 2017), and the sentence classification tasks SST-2 (Socher et al., 2013) and RT (Pang and Lee, 2005).",
"Experiment results show that NLU models trained on synthetic data from PromDA consistently outperform several competitive baseline models, including MetaST (Wang et al., 2021a), a state-of-the-art semi-supervised NLU model, on the sequence labelling tasks.",
"In addition, we find that the synthetic data from PromDA are also complementary to the unlabeled in-domain data.",
"The performance of NLU models can be further improved when both of them are combined.",
"Finally, we conduct diversity analysis and a case study to further confirm the quality of the synthetic data from PromDA.",
"Our source code is released at https://github.com/GaryYufei/PromDA.",
"Prompt Learning The concept of prompt-based learning starts from the GPT3 model (Brown et al., 2020).",
"Previous works design different prompts to query language models to extract knowledge triples (Petroni et al., 2019) or classify sentences into pre-defined categories (Schick and Schütze, 2021) in the few-shot setting.",
"They construct various discrete prompts manually for these tasks.",
"To reduce the human effort in this selection process, Gao et al. (2021) propose to expand prompts using pre-trained language models.",
"However, the selection of discrete prompts is still an independent process that is difficult to optimize together with the downstream tasks in an end-to-end manner.",
"Ben-David et al. (2021) propose a complicated two-stage model to connect prompt generation and downstream tasks.",
"To solve this issue, Lester et al. (2021) and Li and Liang (2021) propose to use soft prompts, which are sets of trainable vectors, in frozen pre-trained language models.",
"Unlike the hard prompts, these vectors do not correspond to any real words.",
"This allows optimization with the downstream tasks in an end-to-end manner.",
"As shown in Li and Liang (2021), PLMs with Soft Prompts can often perform better in the low-resource setting.",
"Generative Data Augmentation Hou et al. (2018) generates diverse utterances to improve dialogue understanding models.",
"Xia et al. (2019) uses a bilingual dictionary and an unsupervised machine translation model to expand low-resource machine translation training data.",
"Wu et al. (2019); Kumar et al. (2020) make use of the masking mechanism in many PLM pre-training objective functions (e.g., BERT (Devlin et al., 2019), BART (Lewis et al., 2020)) and produce new synthetic data by masking randomly chosen words in the original training instances.",
"Ding et al. (2020); Yang et al. (2020); Anaby-Tavor et al. (2020) apply LMs and PLMs to learn directly to generate new synthetic data for NLU tasks (i.e., sequence labeling and commonsense inference tasks) after being trained (fine-tuned) on relatively large training data.",
"These works often directly apply off-the-shelf LMs or PLMs to generate synthetic data.",
"Wang et al. (2021b) propose to use unlabelled data as hard prompts to generate synthetic data without any training, limiting its application in complicated NLP tasks.",
"To the best of our knowledge, PromDA is the first PLM with Soft Prompt especially designed for the data augmentation task.",
"This section first formulates data augmentation for low-resource NLU tasks.",
"Figure 1: The Overview of PromDA.",
"We then introduce the three important components in our proposed Prompt-based Data Augmentation method (PromDA), including",
"i) prompt-based learning in pre-trained language models;",
"ii) dual synthetic data generation view and",
"iii) Consistency Filtering .",
"Figure 1 shows the overview of PromDA.",
"In low-resource NLU tasks, only a set of labeled training data T = {(x_1, y_1), ..., (x_n, y_n)} is available, where n is relatively small (i.e., less than a hundred).",
"Data Augmentation generates synthetic labeled training data T_LM = {(x'_1, y'_1), ..., (x'_n, y'_n)} from the original labeled training data T using language models.",
"The goal is that NLU models trained using T ∪ T_LM outperform NLU models trained only using T.",
"Fine-tuning is the prevalent way to adapt PLMs to specific down-stream tasks (Devlin et al., 2019).",
"However, for low-resource data augmentation, we expect the generated synthetic training data TLM to be different from T and to provide new information for NLU models to learn.",
"A fine-tuned PLM, which is biased towards a small number of training instances, may not be an optimal solution.",
"Prompt-based learning, starting from the zero-shot instructions in GPT3 (Brown et al., 2020), keeps all PLM parameters frozen and only prepends discrete natural language task instructions (e.g., translate to English) before the task inputs.",
"Freezing the PLMs parameters might help generalization during training.",
"However, finding suitable discrete task instructions cannot be easily optimized in an end-to-end fashion and requires extra human effort.",
"In this paper, inspired by the recent work (Lester et al., 2021; Li and Liang, 2021), we replace the task instructions with a Soft Prompt (i.e., a sequence of continuous and trainable vectors).",
"During training, we only update the parameters of this Soft Prompt and fix all PLM parameters.",
"We mainly focus on generating synthetic training data using seq2seq Transformer-based PLMs.",
"Unlike Lester et al. (2021), which only prepends a Soft Prompt at the input layer, and inspired by Adapter (Houlsby et al., 2019), which adds a trainable Multi-layer Perceptron (MLP) at each transformer layer, we prepend a sequence of trainable vectors at each transformer layer.",
"We denote P_j = {p_{j1}, ..., p_{jk}} as the Soft Prompt at the j-th layer.",
"The i-th hidden state at the j-th layer, h_{ji}, in the Transformer model is defined as follows: h_{ji} = p_{ji} if i <= k; w_i if i > k and j = 0; Trans(h_{j-1})_i otherwise (1), where Trans() is the forward function of the Transformer layer and w_i is the fixed word embedding vector at the input layer.",
"Compared to Lester et al. (2021), this allows gradients to be updated at each layer and better fits the learning tasks.",
"The parameter initialization of the Soft Prompt P has a significant impact on the generated synthetic data quality, especially in the low-resource Data Augmentation task.",
"Lester et al. (2021) proposes to further pre-train the full PLM parameters, without the prompt parameters, to enhance the prompt capability.",
"Algorithm 1 Dual-View Data Augmentation: given the few-shot labeled dataset T and the number of iterations N, return a trained NLU model M_NLU.",
"However, this strategy (i.e., full PLM pre-training) introduces significant computation overhead and does not provide any insight about prompt initialization.",
"Instead, we propose to directly pre-train the parameters of the Soft Prompt with the frozen PLMs.",
"Given that data augmentation produces full synthetic data from partial information (e.g., output tags and keywords), we propose the Synonym Keywords to Sentence pre-training task.",
"Given a chunk of text, we extract keywords using the unsupervised keyword extraction algorithm RAKE (Rose et al., 2010).",
"We randomly replace some of these extracted keywords with their synonyms, via WordNet (Fellbaum, 2010).",
"Given these synonym keywords, the Soft Prompt is pre-trained to reconstruct the original text chunks.",
"When applying this Soft Prompt for data augmentation, we only need to fine-tune the Soft Prompt with the few-shot labeled data T .",
"This pre-training process only happens once.",
"We only use the task-agnostic general-purpose pre-training corpus.",
"Previous works often restrict the encoder inputs to fixed keywords or limited labels, such as unconditional generation (Yang et al., 2020) and label-conditional generation (Anaby-Tavor et al., 2020).",
"The relatively small input space could result in similar outputs.",
"To enrich the input space, we propose Dual-View Data Augmentation that generates synthetic data from Input View , which is conditioned on the keywords in the input sentences, and Output View , which is conditioned on the output labels.",
"Table 1 shows examples of these two views.",
"As illustrated in Algorithm 1 (lines 2 to 7), after fine-tuning the Soft Prompt in PLMs, PromDA first generates T_I^1 and T_O^1 from the Input View and Output View, respectively.",
"PromDA then extracts output labels from T_I^1 and keywords from T_O^1.",
"These new output labels and keywords are fed into the Output View and Input View in the PLM to generate another two sets of new synthetic data T_O^2 and T_I^2.",
"In this way, the resulting output text should maintain a higher level of diversity and include more novel words/phrases/knowledge.",
"Dual View via Prompt Ensemble Ensembles of different neural models can often achieve better performance (Hansen and Salamon, 1990).",
"Prompt-based learning provides an efficient way to build model ensembles.",
"By training K sets of Soft Prompts, we create K models sharing the same frozen PLM.",
"In our case, after prompt pre-training, we treat the Input View and Output View as two independent models and use the pre-trained Soft Prompt parameters P to initialize the parameters P_input and P_output.",
"During PromDA fine-tuning, the gradients from the Input View and Output View training instances are only applied to the parameters P_input and P_output, respectively.",
"This prompt ensemble allows the two views to generate synthetic data independently.",
"As a result, the final output should include diverse real-world knowledge.",
"As PromDA is trained on small training data, it may generate low-quality samples.",
"We leverage the NLU Consistency Filtering (Anaby-Tavor et al., 2020) to filter the generated samples.",
"Specifically, given synthetic data with generated labels produced by PromDA , we use the NLU models to label these data again and only keep the instances with consistent outputs from PromDA and the NLU models.",
"As shown in Algorithm 1 (lines 8 to 12), M_NLU^r filters the raw synthetic data T_LM into T'_LM, which is combined with the few-shot labeled data T to train new NLU models M_NLU^{r+1}.",
"As M_NLU^{r+1} is generally better than M_NLU^r, we iterate this process N times to obtain stronger NLU models.",
"This section first introduces the experimental setup in Sec 4.1, and then presents the main experiment results in Sec 4.2.",
"Sec 4.3 conducts an ablation study.",
"Sequence Labelling GT: [Org All Fishermen's Association] secretary [Per N.J. Bose] said the strike would continue indefinitely.",
"In Sec 4.4, we compare PromDA and unlabeled data, and present diversity analysis and a case study.",
"We conduct experiments on the Sentence Classification tasks SST2 (Socher et al., 2013) and RT (Pang and Lee, 2005), and the Sequence Labeling tasks CoNLL03 (Tjong Kim Sang and De Meulder, 2003) and Wikiann (Pan et al., 2017).",
"For each benchmark, we conduct shot-10, 20, 50 and 100 experiments.",
"In shot-K, we sample K labeled instances for each output tag from the full training data.",
"We repeat each experiment 5 times and report the averaged micro-F1.",
"The Baseline model is a BERT-BASE model trained only with the few-shot training data T.",
"Given the newly generated synthetic data TLM , we train the same BERT-BASE model using the same set of hyper-parameters.",
"In sequence labeling tasks, we compare against the rule-based data augmentation method SDANER (Dai and Adel, 2020) and MetaST (Wang et al., 2021a), a state-of-the-art self-training method that requires additional unlabeled in-domain data.",
"For sentence classification tasks, the rule-based EDA (Wei and Zou, 2019), Back-Translation (BackT.) and BERT-based CBERT methods are used.",
"We adapt LAMBADA (Anaby-Tavor et al., 2020) as a PLM-based method for all tasks.",
"Implementation Details PromDA is built on top of the T5-Large model (Raffel et al., 2020).",
"PromDA requires Prompt Pre-training and fine-tuning with down-stream tasks.",
"In both stages, we use the Adafactor optimizer (Shazeer and Stern, 2018) with a learning rate of 1e-3 and weight decay of 1e-5 to train the Soft Prompt parameters.",
"For pre-training, we use the realnewslike split in the T5 pre-training corpus C4 as the input.",
"The pre-training batch size is 72 and we pre-train PromDA for 100k steps.",
"We split the realnewslike dataset into train and development splits (i.e., 10,000 pages).",
"We check the PPL on the development split every 5,000 steps.",
"We save the model with the lowest PPL.",
"When fine-tuning on the few-shot data T, we set the batch size to 32 and train PromDA for 1,000 steps.",
"We only increase the fine-tuning steps to 5,000 for shot-50 and shot-100 on Wikiann and CoNLL03.",
"For more details on the experiment setup, see Section A in the Appendix.",
"4.2 Main Results Sequence Labeling Tasks Table 2 summarizes the experiment results in shot-10 and shot-50.",
"In both settings, the performance of NLU models trained with the synthetic data from PromDA is boosted by a large margin (i.e., 4.8% and 7.5% for CoNLL03 and Wikiann, respectively).",
"PromDA also outperforms the rule-based SDANER and the fully fine-tuned PLM-based LAMBADA methods.",
"In general, PLM-based approaches produce better synthetic data than SDANER does.",
"Surprisingly, the NLU models supported by PromDA achieve slightly better performance than MetaST, which uses unlabeled in-domain data.",
"This shows that PromDA could potentially reduce extra human effort in collecting unlabeled in-domain data for the low-resource NLU tasks.",
"Figure 2 shows the performance in the shot-{10, 20, 50, 100} settings.",
"The NLU models supported by PromDA consistently outperform other systems in all settings.",
"Compared to Wikiann, the improvement margin in CoNLL03 is smaller.",
"This could be because the performance of the CoNLL03 baseline is already relatively high.",
"Sentence Classification Tasks Table 3 shows the experiment results in shot-10 and shot-50.",
"Similar to the results in the sequence labeling tasks, adding the synthetic data from PromDA significantly boosts the performance of NLU models (more than 10% on both benchmarks in shot-10).",
"PromDA also outperforms various competitive methods, including BackT.",
", CBERT and LAMBADA .",
"Although LAMBADA has a higher level of flexibility and generates synthetic data from output tags, it only performs similarly to CBERT.",
"This could be because of the over-fitting issues when fine-tuning with small training data.",
"The prompt-empowered PromDA successfully avoids this issue and produces high-quality synthetic data to support NLU model training.",
"Figure 2 shows the performance in the shot-{10, 20, 50, 100} settings.",
"NLU models supported by PromDA consistently outperform all other systems in all setups.",
"Discussion LAMBADA performs consistently worse than PromDA (e.g., more than 10% F1 score gap in the SST2 and RT experiment).",
"This is because fully fine-tuned PLMs can easily memorize the limited labeled training data and produce similar synthetic data.",
"In contrast, the prompt-based learning allows PromDA to maintain high generalization ability and provide new training signals to the NLU models.",
"The results from PromDA are all statistically significant compared to the Baseline model (paired Student's t-test, p < 0.05).",
"We conduct an ablation study for the components Prompt Pre-training, Dual-View Data Augmentation and Consistency Filtering on the CoNLL03 and SST2 benchmarks under the shot-10 setting.",
"Prompt Pre-Training In No PT, we directly fine-tune two separate PLMs to learn the Input View and Output View.",
"In No PT Pre-Training , we remove the Prompt Pre-training Task ( Synonym Keywords to Sentence ).",
"In Full Pre-Training , we apply the Prompt Pre-training Task to fine-tune the whole PLMs parameters.",
"Finally, in LM Adaptation, we replace PromDA with the solution in Lester et al. (2021).",
"As shown in Table 4, the fully fine-tuned PLMs (No PT) perform worse than our proposed PromDA method (4.6% F1 score lower), showing the positive contribution of Soft Prompt for low-resource NLU data augmentation.",
"Further, removing Prompt Pre-training (No PT Pre-Training) or applying Prompt Pre-training to fine-tune all PLM parameters (Full Pre-Training) also degrades performance, by 3.1% and 6.0% F1 score, respectively, showing the importance of using Prompt Pre-training to learn a reasonable prompt initialization.",
"Similarly, LM Adaptation also fine-tunes the whole PLM and achieves similar performance to Full Pre-Training.",
"It is recommended to directly train the prompt parameters.",
"Dual-View Data Augmentation Next, we show the effect of Dual-View Data Augmentation in PromDA .",
"Input Only and Output Only generate synthetic data only via the Input View and Output View, respectively.",
"These two Single-View models generate the same amount of synthetic data as PromDA does.",
"As shown in Table 4, the synthetic data from these two Single-View models successfully boost the NLU model performance.",
"However, their corresponding NLU models perform worse than the ones supported by PromDA .",
"This shows that synthetic data from different views provide meaningful and different training signals to the NLU models.",
"Interestingly, NLU models trained on the Output view perform better than the ones trained on the Input View , indicating that output tags are more expressive signals to guide PLMs to generate high-quality synthetic data.",
"Finally, instead of training the two views with separate prompt parameters, we train the two views on the same prompt parameters in Single Prompt.",
"The NLU models trained on Single Prompt synthetic data perform worse than the NLU models supported by PromDA , showing the importance of Prompt Ensemble for Dual-View Data Augmentation .",
"Table 5: Ablation study for iteration-based NLU Consistency Filtering. Setup | w/o Filtering | Iter-1 | Iter-2 | Iter-3; C03 | 72.0 | 76.7 | 77.6 | 77.5; SST2 | 69.2 | 77.5 | 79.7 | 81.4.",
"Consistency Filtering Finally, we examine the effect of Consistency Filtering in PromDA .",
"In Table 5, we show the NLU model performance without any filtering (w/o Filtering) and with k iterations (Iter-1, Iter-2 and Iter-3).",
"The filtering has an important effect on the NLU performance.",
"Without removing low-quality synthetic data, the performance gain almost disappears.",
"The iteration filtering also has a positive effect on the NLU performance.",
"In particular, on the SST2 benchmark, the NLU model performance increases by ~4% F1 score after three iterations.",
"PromDA with T5-Base We verify whether PromDA could work with different pre-trained language models.",
"We replace the T5-Large model with the T5-base model.",
"The new PromDA can also improve the few-shot baseline models by a large margin.",
"On the SST2 shot-10 setup, the NLU model is improved from 66.1 to 76.3 F1 score, which also beats other models presented in Table 3.",
"PromDA in the high-resource setting To show the advantages of PromDA in the high-resource setting, we replace the few-shot training data with the full training data.",
"We find that PromDA can still improve the baseline model performance.",
"In SST2, after adding synthetic data, the NLU performance is improved from 90.8 to 92.3 F1 score.",
"Improvement Margin Difference As shown in Table 2 and 3, the improvement margins in the sentence classification tasks (i.e., more than 15% F1 score) are generally larger than the ones in the sequence labelling tasks (i.e., less than 10% F1 score).",
"This could be because",
"i) the sequence labelling task is a more fine-grained and knowledge-intensive task than the sentence classification task;",
"ii) the synthetic data for the sequence labelling tasks include entity types and boundaries, which are more challenging for PLMs to generate, particularly in low-resource settings, compared to the sentence classification task.",
"PromDA and Unlabeled Data The above experiments are based on the assumption that no unlabeled data is available.",
"In this section, we explore the connection between PromDA and unlabeled data .",
"Sequence Labeling GT: It quoted an [Org Interior Ministry] statement as saying [Per Shabir Ahmad Muhammad Jalil] was executed in [Loc Mecca].",
"To incorporate unlabeled data into our NLU models, we apply the classic self-training framework (Scudder, 1965) to the NLU models.",
"Specifically, for each unlabeled instance, we use the NLU models to label it and record the output tags and the corresponding likelihood score.",
"A low likelihood score indicates a prediction with less confidence.",
"We rank all unlabeled instances based on the likelihood score and remove instances at the bottom 20%.",
"Table 6 shows the experiment result of four benchmarks under the shot-10 setting.",
"The Effect of Unlabeled Data Domain We design three settings: Unlabeled In-domain Data ( UID ), Unlabeled Near-domain Data ( UND ) and Unlabeled General-domain Data ( UGD ) where the unlabeled data come from exactly same , similar and general-purpose domains.",
"We exchange the training data between CoNLL03 and Wikiann, and between SST2 and RT to simulate similar domains.",
"We randomly sample sentences from PLM pre-training corpus to simulate the general-purpose domain.",
"We note that the unlabeled data domain has a great impact on the self-training performance.",
"Even a slight domain shift (i.e., UND) degrades the NLU performance by 2.5%.",
"The performance of NLU models trained with unlabeled data from the general-purpose corpus is even 3.2% lower than that of the NLU baseline models trained only with the few-shot labeled data T.",
"Both sequence labeling tasks and sentence classification tasks follow this trend, but sequence labeling tasks are more sensitive to the unlabeled data domain.",
"For semi-supervised learning, extra human effort is still required to select suitable domains from which to collect unlabeled data.",
"Combining Unlabeled In-domain Data with PromDA We apply the above self-training algorithm, with unlabeled in-domain data, to the final NLU models supported by PromDA.",
"The resulting NLU models are further improved, on average, by 2.0% ( w/ UID in the last row).",
"More sophisticated semi-supervised learning algorithms may introduce more improvement.",
"This shows that",
"a) synthetic data from PromDA and unlabeled in-domain data provide different information to the NLU models;",
"b) PromDA successfully extracts the embedded knowledge in the PLMs and presents it in the generated synthetic data.",
"Diversity Analysis In Table 8, we show the diversity of the generated synthetic data from PromDA and other baseline models.",
"We sample 10 new synthetic data from each training instance.",
"We use Novel Mention (number of entity mentions or keywords not appearing in the training data) and Self-BLEU score (Zhu et al., 2018) to measure the diversity.",
"In general, simple generative data augmentation approaches (i.e., BackT. and CBERT) can easily produce Novel Mentions, but their generated synthetic data lack diversity (relatively high Self-BLEU score).",
"The prompt-based learning helps PromDA to produce the most diverse synthetic data with the most Novel Mentions in both benchmarks.",
"Due to the over-fitting issues, LAMBADA produces synthetic data that are less diverse than, or similarly diverse to, other baseline approaches.",
"Interestingly, the NLU models trained on these synthetic data achieve the second best performance.",
"This could be because LAMBADA coherently generates whole synthetic sentences, while the others rely on random and/or heuristic rules.",
"Synthetic Data Case Study Table 7 shows representative examples generated by our proposed PromDA and the baseline methods.",
"In the Sequence Labelling example, the rule-based SDANER shuffles the original word order and creates low-quality text.",
"The LAMBADA model generates a new synthetic instance by modifying three text spans in the original training instance (e.g., changing statement to newspaper).",
"In contrast, our PromDA method generates a completely new and reasonable event in a bank, as well as correct and novel geographical locations in the generated synthetic data.",
"Table 8 (excerpt): Model | NM | Self-B | F1; CoNLL03: SDANER 141.4 | 0.770 | 72.9; LAMBADA 107.6 | 0.761 | 75.0; PromDA 351 | 0.259 | 77.5; SST2: EDA 59.6 | 0.889 | 66.7; BackT.",
"Similarly, in the sentence classification tasks, LAMBADA naively combines text chunks from two training instances in the second example.",
"PromDA mentions some keywords in the training data, but adds more information into the output.",
"In another example, PromDA comments on a screenwriter (not appearing in the training data) with a sequence of coherent words.",
"Finally, PromDA successfully moves the topic from the film The Saigon of 1952 to the Saigon of the 70s.",
"In summary, PromDA can extract embedded real-world knowledge from the PLMs and introduce this knowledge into relatively long sentences in a fluent way.",
"In this paper, we present PromDA, the first prompt-based pre-trained language model for low-resource NLU data augmentation.",
"Experiments on four benchmarks show the effectiveness of our proposed PromDA method.",
"In the future, we plan to expand PromDA to other NLP tasks, including question answering, machine reading comprehension and text generation tasks.",
"We thank anonymous reviewers for their insightful suggestions to improve this paper.",
"Yufei Wang, Can Xu, Qingfeng Sun, Huang Hu, Chongyang Tao, Xiubo Geng and Daxin Jiang are supported by Microsoft Software Technology Center at Asia (STCA).",
"Yufei Wang also receives a MQ Research Excellence Scholarship and a CSIRO's DATA61 Top-up Scholarship."
] | [
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"result",
"abstain",
"method",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"other",
"other",
"other"
] |
[
"NLP systems rarely give special consideration to numbers found in text.",
"This starkly contrasts with the consensus in neuroscience that, in the brain, numbers are represented differently from words.",
"We arrange recent NLP work on numeracy into a comprehensive taxonomy of tasks and methods.",
"We break down the subjective notion of numeracy into 7 subtasks, arranged along two dimensions: granularity (exact vs approximate) and units (abstract vs grounded).",
"We analyze the myriad representational choices made by over a dozen previously published number encoders and decoders.",
"We synthesize best practices for representing numbers in text and articulate a vision for holistic numeracy in NLP, comprised of design trade-offs and a unified evaluation.",
"Numbers are an integral part of text.",
"To understand a simple sentence like I woke up at 11 , we need not just literacy but also numeracy.",
"We must decode the string 11 to the quantity 11 and infer 11 to denote a time of the day, probably 11 a.m.",
"We need commonsense to reason that 11 a.m. is quite late in the morning.",
"This interpretation of 11 is strongly contextual, as I earn $11 per month evokes different units and value expectations.",
"Note how the semantics remains the same for both sentences if 11 was replaced by 10 , i.e., the context is tolerant to some variability.",
"Numbers are everywhere .",
"Reasoning with quantities and counts is crucial to understanding the world.",
"Evolutionary learning has given numerical cognition skills to several animals, including human beings (Dehaene, 2011).",
"Our ancient ancestors furthered numeracy by developing multiple number systems, similar to but independent from the evolution of languages.",
"Numeracy is an essential skill for language understanding, since numbers are often interspersed in text: the 6 million pages in English Wikipedia have over 150 million numbers.",
"Numbers are neglected.",
"In NLP, however, numbers are either filtered out explicitly during preprocessing (Graff et al., 2003), or treated the same as words, often collapsing them into an UNK token.",
"Subword tokenization approaches like BPE (Sennrich et al., 2016) and WordPiece (Wu et al., 2016) instead retain numbers, but split them into arbitrary tokens, for example 1234 might be split into two tokens as 12-34 or 123-4 or 1-234 .",
"Recent work has shown that these are suboptimal number representations (Wallace et al., 2019; Zhang et al., 2020).",
"On the DROP Question Answering benchmark, BERT performs five times worse when the answer is a number instead of a span of text (Dua et al., 2019).",
"Relatively simple strategies like switching from subword to char-level tokenization (Geva et al., 2020), or from decimal to scientific notation (Zhang et al., 2020) already boost performance.",
"Such results warrant a deeper study into the best number representations.",
"Numbers are important .",
"Given the ubiquity of numbers and their fundamental differences with words, enabling NLP systems to represent them effectively is beneficial for domains like scientific articles (Spithourakis and Riedel, 2018) and financial documents (Chen et al., 2019; Jiang et al., 2020).",
"Number understanding is also useful to detect sarcasm (Dubey et al., 2019) and to model dialogues involving price negotiations (Chawla et al., 2020).",
"Recent NLP progress towards numeracy has been sporadic but encouraging.",
"In this paper, we survey prior work and highlight the kind of numeracy targeted (e.g., arithmetic, measurement, numeration) as well as the kind of representation used (e.g., value embeddings, DigitRNNs).",
"We provide the first NLP-centric taxonomy of numeracy tasks (Section 2) and of number representations (Sec-tion 3) for the reader to succinctly comprehend the challenge posed by numeracy.",
"We synthesize key takeaways (Section 5) and propose a unifying vision for future research (Section 6).",
"There are several different aspects of numeracy.",
"The DROP dataset alone offers a wide variety of numeric reasoning questions such as retrieval-based ( How many yards did Brady run? ), count-based ( How many goals were scored? given a comprehension describing multiple goals), and simple arithmetic ( How many years after event 1 did event 2 occur? given dates of both events).",
"Besides downstream applications, there have also been probing experiments to evaluate whether NLP models can decode numbers from strings (e.g., 19 to 19.0 ), or estimate quantities (e.g., how tall are lions? ).",
"Such a diverse range of abilities are usually all referred to collectively as numeracy , which gives rise to confusion.",
"We limit this abuse of terminology and provide a neat taxonomy for arranging the different tasks proposed under numeracy.",
"Drawing from work in cognitive science (Feigen-son et al., 2004), we propose the following two dimensions to organize tasks within numeracy:",
"1. Granularity : whether the encoding of the number is (1) exact, e.g., birds have two legs , or (2) approximate, e.g., Jon is about 180 cms tall",
".",
"2. Units : whether the numbers are (1) abstract, e.g., 2+3=5, or (2) grounded, e.g., 2 apples + 3 apples = 5 apples.",
"While abstract mathematical tasks are easy to probe and create artificial datasets for, numbers grounded in units are challenging since they need to be understood in the context of words.",
"We now describe 7 numeracy tasks, arranged according to our taxonomy in Table 1, as well as downstream tasks (right-most column in the table).",
"Simple Arithmetic is the task of addition, subtraction, etc. over numbers alone.",
"to create synthetic datasets involving such math operations for both masked (Geva et al., 2020) and causal language models (GPT-3 Brown et al. 2020).",
"Numeration or Decoding refers to the task of mapping a string form to its numeric value, e.g., 19 to 19.0 .",
"Within NLP, this task is set up as a linear regressor probe over a (frozen) representation of the string.",
"Numeration has been probed for in static word embeddings (Naik et al., 2019), contextualized language models (Wallace et al., 2019), and multilingual number words, e.g., nineteen or dix-neuf (Johnson et al., 2020).",
"Magnitude Comparison is the ability to tell which of two (or more) numbers is larger.",
"For language models, this has been probed in an argmax setup (choose the largest of five numbers) as well as a binary classification task, e.g., given 23 and 32 , pick the label 1 to indicate that 32 > 23 (Naik et al., 2019; Wallace et al., 2019).",
"Arithmetic Word Problems (AWP) are the grounded version of simple arithmetic that we find in school textbooks, e.g., Mary had two cookies.",
"She gave one away.",
"How many does she have left?",
"There exist several NLP datasets on math word problems (Amini et al., 2019; Saxton et al., 2019; Roy and Roth, 2015; Hendrycks et al., 2021).",
"Exact Facts in the context of numeracy involves commonsense knowledge such as dice have 6 faces or birds have two legs .",
"An approximate sense of quantity would be of little help here since assertions like dice have 5 faces or birds have three legs are factually incorrect.",
"Two recent datasets for numeric commonsense facts are Numbergame (Mishra et al., 2020) and NumerSense (Lin et al., 2020).",
"Measurement Estimation is a task in psychology in which subjects are asked to approximately guess measures of objects along certain dimensions, e.g., number of seeds in a watermelon or weight of a telephone (Bullard et al., 2004).",
"VerbPhysics (Forbes and Choi, 2017) is a benchmark of binary comparisons between physical attributes of various objects, e.g., ball < size tiger.",
"DoQ (Elazar et al., 2019) is a web-extracted dataset of Distributions over Quantities, which can be used as a benchmark for language models' measurement estimation abilities (Zhang et al., 2020).",
"Lastly, MC-TACO (Zhou et al., 2020) is a collection of temporal-specific measurement estimates, e.g., going for a vacation spans a few days/weeks .",
"Numerical Language Modeling in its literal sense is not a task but a setup, analogous to masked/causal language modeling for words.",
"Other tasks could be modeled as numeric language modeling, e.g., arithmetic ( 2+3=[MASK] ) and measurement estimation ( lions weigh [MASK] pounds ).",
"In practice, numerical language modeling refers to the task of making numeric predictions for completing unlabelled, naturally occurring text.",
"Word predictions in language modeling are typically evaluated with classification metrics such as accuracy or perplexity.",
"Numeric predictions, on the other hand, are evaluated with regression metrics such as mean absolute error, root mean squared error, or their log and percentage variants (Spokoyny and Berg-Kirkpatrick, 2020).",
"Spithourakis and Riedel (2018) also propose an Adjusted Perplexity metric to cancel the effect of the out-of-vocabulary rate on the perplexity of numeric tokens.",
"Downstream Applications for numeracy are abound.",
"Dubey et al. (2019) detect sarcasm in tweets based on numbers.",
"Chen et al. (2020) identify claims in financial documents using alternative number representations and the auxiliary task of numeral understanding or categorization (Chen et al., 2018).",
"Similarly, simple arithmetic and math word problems serve as auxiliary tasks for GenBERT (Geva et al., 2020) towards improving its score on the DROP QA benchmark.",
"(Numeric) Paraphrasing is what we call the task of identifying one-to-one correspondences between different surface forms of the same number.",
"Twelve is the same as 12', also referred to as a dozen.",
"This task cuts across all the tasks we discussed, since the same number, expressed in several different ways, should be nevertheless identified by an NLP model before any subsequent reasoning.",
"Similar to how WordNet (Miller, 1995) provides a huge list of synonyms, numeric paraphrases can be obtained by libraries 1 which convert numerals to words, words to numerals, etc.",
"One could also envision this as a learning task given a large enough corpus, such as the NumGen dataset (Williams and Power, 2010) containing 2000 fact-aligned numeric expressions over 110 articles.",
"Quantity Entailment tasks (Ravichander et al., 2019; Roy et al., 2015), analogous to Natural Language Inference, require understanding of not equivalence (as in paraphrasing) but deeper relations like entailment and contradiction, e.g., the premise he was 16 yrs old entails the hypothesis he was a teenager .",
"On similar lines, Mishra et al. (2020) modify the QuaRel dataset (Tafjord et al., 2019) to force models to perform quantity entailment, e.g., dog1 is light, dog2 is heavy is replaced with dog1 weighs 70 lbs, dog2 weighs 90 lbs .",
"Numeral Understanding is the task of categorizing numbers into percentages, prices, dates, times, quantities, etc. and their respective subcategories (Chen et al., 2018).",
"Fused-Head Resolution for numbers is essential to ground them when the context is implicit.",
"For example, the sentence I woke up at 11 has a.m. or o'clock as the fused head to be resolved (Elazar and Goldberg, 2019).",
"Counting is the task of keeping track of discrete instances of some object.",
"When kids count a set of objects, they quickly learn to keep a track, say on their fingers, but struggle with realizing the Cardinal Principle, i.e., the last counter value denotes the number of entities being considered (Wynn, 1990).",
"Similarly, LSTMs (Suzgun et al., 2019) and transformers (Bhattamishra et al., 2020) have been shown to possess counting skills but in order to answer counting questions, they must also learn to map the counts to number words or numerals.",
"Counting tasks have been proposed in computer vision (Testolin et al., 2020) as well as in NLP (Postma et al., 2018; Talmor et al., 2020).",
"Domain-specific tasks require background knowledge in addition to exact mathematical skills.",
"Numbergame (Mishra et al., 2020) includes questions on Physics ( find the distance travelled in 2 hrs by a train moving at 50 mph ) and Chemistry ( find the mass percentage of H in C6H6 ).",
"Project Aristo (Clark et al., 2019) solves elementary and high school science problems, which often involve numeric reasoning.",
"Analogous to our taxonomy of subtasks in the previous section, here we attempt to arrange the wide variety of alternative number representations proposed in recent literature.",
"We limit our analysis to methods of encoding (numbers embeddings) and/or decoding (embeddings numbers) numbers.",
"We do not discuss, for example, methods that use symbolic reasoning (Andor et al., 2019) or modify activation functions to enhance numeracy (Trask et al., 2018).",
"A typical example of the base architecture could be BERT (Devlin et al., 2019), the workhorse of modern NLP.",
"We assume that there exists an independent parallel process of mapping words into embeddings, such as subword tokenization followed by lookup embeddings in BERT.",
"We look at two kinds of representations: string-based and real-based.",
"Real-based representations perform some computation involving the numerical value of the number.",
"The string-based representations instead see numbers in their surface forms; they must assign arbitrary token IDs and look up their embeddings to feed into the architecture.",
"By default, language models treat numbers as strings, the same as words.",
"However, within string representations, one could tweak simple changes: Notation : The number 80 could be written in Hindu-Arabic numerals (80), Roman numerals (LXXX), scientific notation (8e1), English words (eighty), or with base 20 as in French (quatre-vingts).",
"Nogueira et al. (2021) exclusively study the effect of many such notation choices in language models, on the task of simple arithmetic.",
"Tokenization : Word level tokenizations are ineffective for numbers, since they are likely to map most numbers to an UNK token, except for a few commonly occuring ones (e.g., 1, 2, 5, 10, 100).",
"Other possibilities are subword tokenizations like BPE and WordPiece, as well as character (or digit) level tokenizations.",
"Pooling : The pooling dimension of variation springs up after analyzing the effect of tokenization.",
"With subword and character level tokenizations, a single number may now correspond to multiple tokens, e.g., 100 segmented into 10-0 or 1-0-0 .",
"Prior work (Spithourakis and Riedel, 2018) has argued for using RNNs or CNNs to instead pool the embeddings of these tokens into a single embedding before feeding to the language model.",
"The default way that language models see numbers are the same as words, hence no pooling is applied.",
"Real-based number encoders can be expressed as f : R R d whereas decoders can be expressed as g : R d R .",
"Real-based methods proposed in literature can vary on account of direction (whether they encode, decode or both), scale (linear vs log), and discretization (binning vs continuous valued).",
"Direction : Some proposed methods are encoder-only, e.g., DICE (Sundararaman et al., 2020), while some can be decoder-only, e.g., those requiring sampling from a parameterized distribution (Spokoyny and Berg-Kirkpatrick, 2020).",
"Scale : Inspired by cognitive science literature (Dehaene, 2011), several methods have attempted to model numbers in the log (instead of linear) scale, i.e., to perform mathematical operations on the logarithm of the number to be represented.",
"The first operation in a log-scaled f is log ( ) and the last operation in a log-scaled g is exp ( ) .",
"We discuss more scales in the following subsection, such as the stabilized log scale (Jiang et al., 2020) and the learned scale/flow (Spokoyny and Berg-Kirkpatrick, 2020).",
"Discretization : Training continuous value functions for a large range of numbers turns out to be practically infeasible (Wallace et al., 2019).",
"Some real-based methods first bin numbers before learning embeddings for each bin.",
"These bins could be on the linear scale (0-10, 10-20, 20-30, . . . ) or the log scale (0.01-0.1, 0.1-1, 1-10, . . . ), and the lookup embeddings can be learnt by the regular cross entropy (Chen et al., 2020) or dense cross entropy (Zhang et al., 2020).",
"Having established dimensions of variance of number representations, we describe some key string-based and real-based methods used in prior work.",
"Table 2 depicts these methods as individual rows, with the first three columns showing their position in our taxonomy ( 3.1).",
"The last seven columns correspond to the seven tasks ( 2.2), with each cell denoting a representative work that introduce it.",
"Word Vectors & Contextualized Embeddings Word2vec (Mikolov et al., 2013), GloVe (Penning-ton et al., 2014), ELMo (Peters et al., 2018), and BERT (Devlin et al., 2019) have been probed as baselines against several contending methods.",
"GenBERT Geva et al. (2020) present GenBERT, a question answering model with pretrained BERT serving as both its encoder and decoder.",
"GenBERT tokenizes numbers at the digit level, and is finetuned on auxiliary tasks of arithmetic word problems and simple arithmetic.",
"NumBERT Zhang et al. (2020) pretrain BERT from scratch over a modified dataset such that all numbers have been converted into scientific notation, i.e., 314 .",
"1 is expressed as 3141 [EXP] 2 ).",
"NumBERT hence follows a scientific notation, subword tokenization, and no pooling.",
"2 DigitRNN, DigitCNN Spithourakis and Riedel (2018) and Wallace et al. (2019) experimented with poolingof digit embeddings into a single embedding representing the full number.",
"Both used RNNs as well as CNNs for pooling.",
"we refer to as DigitRNN-sci in Table 2), as well as a simpler alternative: exponent embedding.",
"The latter merely learns a lookup embedding for the exponent, completely ignoring the mantissa.",
"DICE Determinisitic Independent-of-Corpus Embeddings (Sundararaman et al., 2020) is an attempt to handcraft number encoder 3 f so as to preserve the relative magnitude between two numerals and their embeddings.",
"Given two scalars i and j , and their embeddings f ( i ) and f ( j ) , the cosine distance between f ( i ) and f ( j ) is intended to monotonically increase/decrease with the Euclidean distance between i and j .",
"DICE is offered as not only a deterministic encoding but also as an auxiliary loss function for softly training number embeddings alongside, say, SQuAD (Rajpurkar et al., 2016) Value Embedding The most intuitive parameterized encoder for real numbers is one that feeds the scalar magnitude of the number through a shallow neural network.",
"The converse of value embedding is to learn a shallow neural network mapping g : R d R .",
"This decoder is simply the probe used for decoding/numeration task.",
"The idea of projecting number magnitudes into 3 Number encoder-decoder as defined in 3.1.2.",
"an NLP model that otherwise inputs only lookup embeddings may appear flawed.",
"But Vaswani et al. (2017) have (rather successfully) encoded positional information into transformers using both learned embeddings (similar to Value) and fixed ones (similar to DICE).",
"Log Value Wallace et al. (2019) also experiment with a log-scaled value encoder in addition to the one on a linear scale.",
"Zhang et al. (2020) experiment with a log value decoder for measurement estimation, which they call the RGR (regress) method.",
"Log scaling has a neuroscientific inspiration since observations of human (and animal) understanding of numbers is better modelled by a log-scale representation (Dehaene, 2011).",
"Log Laplace In contrast to the point estimate output of the RGR decoder, models can also be used to parameterize a distribution over numbers.",
"Such a formulation is helpful when estimating approximate quantities.",
"Vectors representing some context can be used to parameterize, say, the mean and variance of a Gaussian or Laplace distribution.",
"Spokoyny and Berg-Kirkpatrick (2020) instead transform the space being modeled by parameterizing the location parameter of a Log-Laplace distribution L ( X, 1) where X is the context representation of unmasked tokens, in a masked (numer-ical) language modelling setup.",
"When inferring or decoding a number, they sample a point z ~ L ( X, 1) and exponentiate it, such that the output is exp ( z ) .",
"Flow Laplace The expressivity of number decoders can be expanded or contracted by merely parameterizing a different distribution.",
"Spokoyny and Berg-Kirkpatrick (2020) propose a more expressive decoder where instead of the log scale, the model learns its own density mapping.",
"After sampling z ~ L ( X, 1) , the output is transformed to exp ( z ab ) c , where a , b , and c , are also parameters emitted by the same model.",
"MCC or multi-class classification is another number decoder which outputs a distribution, but a discrete one: over log-scaled bins of numbers, e.g., 1-10, 10-100, and so on (Zhang et al., 2020).",
"Previously described decoders either output a point estimate or a unimodal distribution, thus failing to hedge its predictions for a multimodal ground truth.",
"Given a masked number prediction problem We went to the restaurant at [MASK] p.m. , MCC is better equipped to estimate two peaks: one around lunch time (say, 1-2 p.m.) and another around dinner (say, 7-9 p.m.).",
"Discrete Latent Exponent (DExp) is another potentially multimodal distribution (Spokoyny and Berg-Kirkpatrick, 2020) where the model parameterizes a multinomial distribution for the exponent (similar to MCC) and uses it to sample an exponent e , which then acts as a latent variable for emitting the mean of a Gaussian (standard deviation fixed at 0 . 05 ).",
"This Gaussian is finally used to sample the output number z ~ N ( , 0 . 05) .",
"GMM Another attempt to circumvent the unimodal Gaussians or point estimates is to learn a Gaussian mixture model.",
"Spithourakis and Riedel (2018) learn a mixture of K Gaussians by pretraining their means ( i ) and variances ( i 2 ) over the training corpus with Expectation Maximization algorithms, while the mixing weights i are derived from the model.",
"Next, to sample a single number from the GMM probability mass function q ( u ) = (cid:80) Ki =1 i N ( u ; i ; i ) , the authors first sample the precision (number of decimal places) from yet another Gaussian and use that to discretize the probability mass function into equal sized bins, over which the probabilities are summed.",
"If the sampled precision is, say 2 , then the probability of emitting a number 3 .",
"14 is given by (cid:82) 3 .",
"145 3 .",
"135 q ( u ) du .",
"This likelihood estimate is used to train a causal language model.",
"Spokoyny and Berg-Kirkpatrick (2020)'s GMM implementation is slightly different: it alters the last inference step by sampling directly from the mixture of Gaussians, as they did with Log Laplace, Flow Laplace, and DExp.",
"GMM-prototype by Jiang et al. (2020) similarly pretrains (with EM/hard-EM) the mean, the variances, but also the mixture weights i s of a GMM over the training corpus.",
"They then learn K prototype embeddings e i s corresponding to the K Gaussians.",
"When encoding a new numeral n , its (input) embedding is calculated as: E ( n ) = (cid:80) Ki =1 w i",
".e i , where the weights are induced from the GMM: w i = P ( Z = i | U = n ) = i N ( n ; i ; i ) (cid:80) Kj =1 j N ( n ; j ; j ) Thus the difference between GMM and GMM-prototypes is that after fixing mean and standard deviations of the Gaussian mixtures, in GMM the model learns to predict the mixture weights i for each individual number prediction, whereas in GMM-prototype, i 's are frozen and the model learns prototype embeddings e i 's.",
"Note that prototype embeddings are encoder-only.To decode numbers, the authors implement weight-sharing across input and output embeddings, similar to how word vectors are trained (Mikolov et al., 2013), i.e., find-ing out which of the numerals in the corpus has the closest embedding.",
"SOM-prototype GMM-prototype, in effect, merely use the mixture of Gaussians to infer prototypes and to get the weights w i 's.",
"Jiang et al. (2020) tried another variant by identifying prototype numerals with Self Organizing Maps (Koho-nen, 1990) and by defining the weights as: w i = | g ( x i ) g ( n ) | 1 where x i is the i th prototype, n is the number to be encoded, and g is a log-based squashing function.",
"Having organized the landscape of numeracy tasks and methods, we now present come key results for each numeracy task in NLP from previously published experiments over a subset of the described number representations:",
"Abstract Probes Word Embeddings vastly outperform random embedding baselines on abstract probes such as numeration, magnitude comparison, and sorting (Wallace et al., 2019; Naik et al., 2019).",
"DICE, Value and Log Value embeddings excel at these probes, which makes intuitive sense given that they explicitly encode the numbers' magnitude although Value embeddings do not easily extrapolate to larger numbers, possibly due to instability in training.",
"The best number encoders with respect to these probes were found to be DigitCNNs, and character-tokenized models, e.g., ELMo, in general outperform subword ones, e.g., BERT (Wallace et al., 2019).",
"Arithmetic GPT-3 (Brown et al., 2020) performs extremely well at zero shot simple arithmetic, as long as the number of digits in the operands are low.",
"The tokenization scheme could be the cause for limited extrapolation, since language models get better at arithmetic when numbers are tokenized at the digit/character level (Nogueira et al., 2021; Wallace et al., 2019).",
"For arithmetic word problems, state of the art solvers rely on predicting an equation, which is then filled in with specific numeric values from the question (Patel et al., 2021), altogether bypassing the need for encoding numbers into embeddings.",
"where numbers are in scientific notation (Num-BERT) converges to the same loss as BERT on masked language modelling objective, and scores nearly the same on GLUE language understanding benchmarks.",
"For (causal) numeric language modelling, Spithourakis and Riedel (2018) show that Gaussian Mixture Models are the best decoders.",
"For (masked) numeric language modelling, Spokoyny and Berg-Kirkpatrick (2020) show that modelling the mantissa in scientific notation may be an overkill, since exponent embeddings alone outperform DigitRNN-sci over financial news and scientific articles.",
"Measurement Estimation Zhang et al. (2020) train a regression probe to predict measurements of objects over the CLS embeddings of BERT/NumBERT.",
"Given a template-lexicalized sentence such as the dog is heavy, the model must predict the weight of a typical dog, against ground truth from the Distribution over Quantities dataset (Elazar et al., 2019).",
"They find that NumBERT is a better text encoder than BERT for measurement estimation, the only difference between them being the notation used by the respective pretraining corpora.",
"They also experiment with two number decoders: MCC (multi-class classification) and RGR (regression / Log Value embedding).",
"MCC performs better when trying to predict Distributions over Quantities perhaps due to the ground truth resembling the predicted gaussians but not on VerbPhysics where the ground truth is less noisy.",
"Lastly, even static word embeddings like GloVe have been shown to contain enough knowledge of measurement estimates to contrast two objects, e.g., classifying whether a car is bigger/heavier/fasster than a ball (Goel et al., 2019).",
"Exact Facts BERT and RoBERTa capture limited numerical commonsense, evident over NumerSense (Lin et al., 2020) sentences such as a tricycle has [MASK] wheels , with the answer choices limited to the integers 0 10 .",
"Results can be further improved by finetuning over a Wikipedia-extracted dataset of numeric information.",
"Mishra et al. (2020) find commonsense question answering to be one of the hardest among their Numbergame challenge, using the NumNetv2 model (Ran et al., 2019) which is commonly used for DROP question answering.",
"Both of these experiments evaluate on exact match metrics, hence it remains to be seen if representing approximate magnitudes yields bene-fit in modelling numeric facts.",
"Based on the above results, we now synthesize key insights into a set of directed takeaways to guide practitioners' design of number representations:",
"Rule of thumb for string-based methods?",
"Scientific notation is superior to decimal notation (Zhang et al., 2020) since models can learn to attend mostly to the exponent embedding rather than the mantissa (Spokoyny and Berg-Kirkpatrick, 2020).",
"Character level tokenization outperforms subword level (Nogueira et al., 2021; Wallace et al., 2019; Geva et al., 2020).",
"Pooled representations (DigitRNN, DigitCNN) lack a controlled study with unpooled ones (NumBERT, GenBERT) which makes it hard to proclaim a winner among the two.",
"Rule of thumb for real-based methods?",
"Log scale is preferred over linear scale (Zhang et al., 2020; Jiang et al., 2020; Wallace et al., 2019; Spokoyny and Berg-Kirkpatrick, 2020), which makes intuitive sense but lacks as rigorous a study as has been undertaken in the cognitive science community (Feigenson et al., 2004).",
"Regarding discretization, Zhang et al. (2020) show that binning (dense cross entropy loss) works better than continuous value prediction (MAE loss) on datasets where ground truth distributions are available.",
"Lastly, modeling continuous predictions is notoriously hard for large ranges (Wallace et al., 2019) but Spithourakis and Riedel (2018) offer a way of binning such distributions by picking a precision level.",
"Encoding vs Decoding numbers?",
"In our sim-plified discussions above, we avoid differentiating between methods for encoding and decoding numbers.",
"Value Embedding, for instance, can be used to encode numbers (projecting scalars onto vector space) as well as to decode numbers (col-lapsing a vector into a scalar).",
"On the other hand, manually-designed encoders like DICE are not easily reversible into decoding methods.",
"Even with reversible methods, the encoders and decoders must usually be independently parameterized, unlike the input and output word embeddings which often share weights (Press and Wolf, 2016).",
"Prototype embeddings by Jiang et al. (2020) are an exception, which share input/output embeddings for a fixed vocabulary of numbers.",
"Can we mix-and-match multiple methods?",
"Given the wide range of number representations, an obvious next step is to try an ensemble of embeddings.",
"Spokoyny and Berg-Kirkpatrick (2020) show that for encoding numbers, exponent embeddings added to DigitRNN (scientific notation) embeddings barely outperforms the exponent embeddings alone.",
"Similar experiments with a mix of real and string methods are yet to be seen.",
"Which methods for which tasks?",
"Based on our taxonomy of tasks in Table 1, abstract tasks are good early probes for the grounded ones, e.g., finetuning GenBERT (Geva et al., 2020) on simple arithmetic helps it do well on downstream question answering, and the high scores of DICE (Sun-dararaman et al., 2020) on numeration and magnitude comparison are an indicator of similar boosts on (numeric) language modelling.",
"With respect to granularity, real-based methods work well for approximate tasks such as measurement estimation and language modeling (Zhang et al., 2020; Spokoyny and Berg-Kirkpatrick, 2020) but not for exact tasks like arithmetic word problems or commonsense.",
"DigitRNNs are broad-purpose number encoders, whereas distribution modeling methods like DExp are effective at decoding numbers.",
"Numeracy is a core system of human intelligence (Kinzler and Spelke, 2007).",
"Teaching numeracy to students works best when taught holistically, while less effective teachers deal with areas of mathematics discretely (Askew and Askew, 1997).",
"While the NLP community genuinely strives to improve language models' numeric skills, not all aspects of numeracy have been sufficiently targeted.",
"It is evident from the sparsity in Table 2 that the community is far from achieving, a holistic solution to numeracy.",
"In this section, we outline our vision for such a unified solution, in the form of three prerequisites to consider for numerical NLU: Evaluation.",
"The first step towards a holistic solution to numeracy requires a benchmark covering its different subtasks.",
"Aggregated leaderboards in NLP like GLUE (Wang et al., 2018) and Su-perGLUE (Wang et al., 2019) have incentivized research on natural language understanding, with scores categorized into semantic, syntactic, logical, and background knowledge.",
"An analogous leaderboard could be constructed to evaluate models on numeric reasoning tasks, categorized according to the skills evaluated, e.g., exact vs approximate granularity, or abstract vs grounded numeracy.",
"Numbergame (Mishra et al., 2020) is one such aggregation focusing on exact numeracy benchmarks, as evaluated by F1 and exact match scores in a reading comprehension setup.",
"Both Numbergame and our own list of tasks (Section 2.2) are preliminary attempts at teasing apart the different aspects of numeracy.",
"We encourage researchers to extend and refine such taxonomies.",
"A suite of numeracy tasks, matched with evaluations of their respective numerical skills, can enable testing model generalization from one skill to another.",
"Some progress has already been made in this transfer learning setup, e.g., GenBERT (Geva et al., 2020), finetuned on a synthetic dataset of arithmetic problems, is found to score higher on DROP QA.",
"Similarly, DICE (Sundararaman et al., 2020), optimized for numeration, improves scores on the Numeracy600K order-of-magnitude prediction task.",
"Going forward, we need several such studies, ideally for each pair of tasks to see whether some numeracy skills help models generalize to others.",
"Design Principles.",
"Number representations vary based on design trade-offs between inductive biases and data-driven variance.",
"The default BERT setup, with subword tokenization and lookup embeddings, occupies the variance end of the spectrum, allowing freedom in representing numbers.",
"Value embeddings and DICE encodings, on the other hand, are closer to the bias end of the spectrum, since the inductive bias of continuity on the number line constrains the learning space.",
"It is important to identify where on the bias-variance scale any representation stands, for a fair comparison.",
"Following parallel work in cognitive science, the community could explore whether exact and approximate numeracy require two specialized modules (Feigenson et al., 2004) or could be handled with a single representation (Cordes et al., 2001).",
"Model designers must also make a choice on coverage: whether to target a broad or a narrow range of numbers to be represented.",
"Multi-class classification (Zhang et al., 2020) over a fixed number of bins restricts the range of numbers expressed, as do DICE embeddings (Sundararaman et al., 2020).",
"Value embeddings are continuous and theoretically unrestricted, but must practically be capped for bug-free training.",
"On the other hand, string-based representations could always fall back to subword/char-level token embeddings to represent not only floats but also irrational ($\sqrt{2}$) and complex ($1 + 2i$) numbers.",
"Roy et al. (2015) introduced the Quantity-Value Representation format to allow closed and open ranges alongside scalar point numbers.",
"Broader Impact.",
"Numbers are ubiquitous in natural language and are easily identified, at least in numeral forms.",
"But they are by no means the only class of ordered concepts required for natural language understanding.",
"Successful number representations can inspire work on incorporating more continuous domains into natural language processing systems.",
"For instance, gradable adjectives like good, great, amazing, etc. are arguably on some cardinal scale, which can be mapped using value embeddings or Gaussian mixture models (Sharp et al., 2018; de Marneffe et al., 2010).",
"Days of the week (Mon-Sun) and months of a year (Jan-Dec) form periodic patterns which can be modeled with sinusoidal functions (Martinez et al., 2020).",
"Lastly, numeracy is essential for natural language understanding.",
"Consider the sentence: \"Programmers earn $200,000 versus $100,000 for researchers.\" An intelligent agent with numeracy skills would identify that $100k is half of $200k, that $100k possibly denotes annual salary, and infer that higher salaries lead to higher standards of living. In short, it is able to learn something about the two concepts programmers and researchers by crossing the continuous semantic space of numbers! The agent could now make use of this knowledge in a number-free situation, e.g., the mask in \"He could not afford a car for several years after earning a CS degree because she took a job as a [MASK]\" might better be filled with the word researcher than with programmer.",
"A key goal of imparting numeracy to NLP models is to help them understand more about the world, using numbers.",
"This paper summarizes and contextualizes recent work on numeracy in NLP.",
"We propose the first taxonomy of tasks and methods concerning text-centric numeric reasoning.",
"We highlight key takeaways from the several experiments in literature, along with caveats and scope for confirming some of the observed trends.",
"We present a case for lack of a holistic solution to numeracy in NLP, and put forward a set of aspects to consider when working towards one.",
"We draw the following two major conclusions from our study: (1) the default subword segmentation with lookup embeddings used to represent words is clearly suboptimal for numbers (2) there are several unanswered research questions on the level of specificity, coverage, and inductive bias needed to holistically solve numeracy.",
"This work was funded by the Defense Advanced Research Projects Agency with award N660011924033.",
"We would like to thank the countless suggestions we accumulated during preliminary presentations at MLSS 2020, WeCNLP 2020, and GSS 2020, as well as over email correspondences with Biplav Srivastava, Antoine Bosselut, and Harsh Agarwal.",
"We would like to thank the anonymous NAACL 2021 reviewers (particularly #3) for pointing out blind spots in our submission, which we have tried our best to rectify.",
"This work revolves around the Hindu-Arabic Numeral system and English number words, which are not the only number systems still in use today.",
"We encourage follow-up work to take these systems into consideration, on the lines of Johnson et al. (2020) and Nefedov (2020)."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"objective",
"objective",
"result",
"method",
"objective",
"other",
"other",
"other",
"abstain",
"method"
] |
[
"Variational Autoencoder (VAE) is widely used as a generative model to approximate a model's posterior on latent variables by combining the amortized variational inference and deep neural networks.",
"However, when paired with strong autoregressive decoders, VAE often converges to a degenerated local optimum known as posterior collapse.",
"Previous approaches consider the Kullback-Leibler divergence (KL) individually for each data point.",
"We propose to let the KL follow a distribution across the whole dataset, and analyze that it is sufficient to prevent posterior collapse by keeping the expectation of the KL's distribution positive.",
"Then we propose Batch Normalized-VAE (BN-VAE), a simple but effective approach to set a lower bound of the expectation by regularizing the distribution of the approximate posterior's parameters.",
"Without introducing any new model component or modifying the objective, our approach can avoid the posterior collapse effectively and efficiently.",
"We further show that the proposed BN-VAE can be extended to conditional VAE (CVAE).",
"Empirically, our approach surpasses strong autoregressive baselines on language modeling, text classification and dialogue generation, and rivals more complex approaches while keeping almost the same training time as VAE.",
"Variational Autoencoder (VAE) (Kingma and Welling, 2014; Rezende et al., 2014) is one of the most popular generative frameworks for modeling complex distributions.",
"Different from the Autoencoder (AE), VAE provides a distribution-based latent representation for the data, which encodes the input x into a probability distribution z and reconstructs the original input using samples from z .",
"*This work was done when Qile Zhu was an intern at Tencent AI Lab.",
"At inference time, VAE first samples the latent variable from the prior distribution and then feeds it into the decoder to generate an instance.",
"VAE has been successfully applied in many NLP tasks, including topic modeling (Srivastava and Sutton, 2017; Miao et al., 2016; Zhu et al., 2018), language modeling (Bowman et al., 2016), text generation (Zhao et al., 2017b) and text classification (Xu et al., 2017).",
"An autoregressive decoder (e.g., a recurrent neural network) is a common choice to model the text data.",
"However, when paired with strong autoregressive decoders such as LSTMs (Hochreiter and Schmidhuber, 1997) and trained under conventional training strategy, VAE suffers from a wellknown problem named the posterior collapse or the KL vanishing problem.",
"The decoder in VAE learns to reconstruct the data independent of the latent variable z, and the KL vanishes to 0.",
"Many convincing solutions have been proposed to prevent posterior collapse.",
"Among them, fixing the KL as a positive constant is an important direction (Davidson et al., 2018; Guu et al., 2018; van den Oord et al., 2017; Xu and Durrett, 2018; Tomczak and Welling, 2018; Kingma et al., 2016; Razavi et al., 2019).",
"Some change the Gaussian prior with other distributions, e.g., a uniform prior (van den Oord et al., 2017; Zhao et al., 2018) or a von Mises-Fisher (vMf) distribution (Davidson et al., 2018; Guu et al., 2018; Xu and Durrett, 2018).",
"However, these approaches force the same constant KL and lose the flexibility to allow various KLs for different data points (Razavi et al., 2019).",
"Without changing the Gaussian prior, free-bits (Kingma et al., 2016) adds a threshold (free-bits) on the KL term in the ELBO objective and stops optimizing the KL part when its value is smaller than the threshold.",
"Chen et al. (2017) point out that the objective of free-bits is non-smooth and suffers from the optimization challenges.",
"δ-VAE (Razavi et al., 2019) sets the parameters in a specific range to achieve a positive KL value for every latent dimension, which may limit the model performance.",
"Other work analyzes this problem from a view of optimization (Bowman et al., 2016; Zhao et al., 2017a; Chen et al., 2017; Alemi et al., 2018).",
"Recently, He et al. (2019) observe that the inference network is lagging far behind the decoder during training.",
"They propose to add additional training loops for the inference network only.",
"Li et al. (2019) further propose to initialize the inference network with an encoder pretrained from an AE objective, then trains the VAE with the free-bits.",
"However, these two methods are much slower than the original VAE.",
"The limitation of the constant KL and the high cost of additional training motivate us to seek an approach that allows flexible modeling for different data points while keeping as fast as the original VAE.",
"In this paper, instead of considering the KL individually for each data point, we let it follow a distribution across the whole dataset.",
"We demonstrate that keeping a positive expectation of the KL's distribution is sufficient to prevent posterior collapse in practice.",
"By regularizing the distribution of the approximate posterior's parameters, a positive lower bound of this expectation could be ensured.",
"Then we propose Batch Normalized-VAE (BN-VAE), a simple yet effective approach to achieving this goal, and discuss the connections between BN-VAE and previous enhanced VAE variants.",
"We further extend BN-VAE to the conditional VAE (CVAE).",
"Last, experimental results demonstrate the effectiveness of our approach on real applications, including language modeling, text classification and dialogue generation.",
"Empirically, our approach surpasses strong autoregressive baselines and is competitive with more sophisticated approaches while keeping extremely higher efficiency.",
"Code and data are available at https://github.com/valdersoul/bn-vae .",
"In this section, we first introduce the basic background of VAE, then we discuss the lagging problem (He et al., 2019).",
"At last, we present more related work.",
"VAE (Kingma and Welling, 2014; Rezende et al., 2014) aims to learn a generative model p(x, z) to maximize the marginal likelihood log p(x) on a dataset.",
"The marginal likelihood cannot be calculated directly due to an intractable integral over the latent variable z .",
"To solve this, VAE introduces a variational distribution q ( z | x ) which is parameterized by a complex neural network to approximate the true posterior.",
"Then it instead optimizes the ELBO of $\log p(x)$: $\mathcal{L} = \mathbb{E}_{q_\phi(z|x)}[\log p_\theta(x|z)] - \mathrm{KL}(q_\phi(z|x)\,||\,p(z))$, where $\phi$ represents the inference network and $\theta$ denotes the decoder.",
"The above first term is the reconstruction loss, while the second one is the KL between the approximate posterior and the prior.",
"The Gaussian distribution $\mathcal{N}(0, I)$ is a usual choice for the prior, and the KL between the approximate posterior $q_\phi(z|x)$ and the prior $p(z)$ can be computed as: $\mathrm{KL} = \frac{1}{2}\sum_{i=1}^{n}(\mu_i^2 + \sigma_i^2 - \log \sigma_i^2 - 1)$, where $\mu_i$ and $\sigma_i$ are the mean and standard deviation of the approximate posterior for the $i$-th latent dimension, respectively.",
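The closed-form KL above is simple to compute directly; a minimal sketch in plain Python (the helper name `gaussian_kl` is ours, not from the paper):

```python
import math

def gaussian_kl(mu, log_var):
    """KL( N(mu, diag(exp(log_var))) || N(0, I) ) for one data point.

    mu, log_var: per-dimension means and log-variances of the
    approximate posterior, as plain Python lists.
    """
    return 0.5 * sum(
        m * m + math.exp(lv) - lv - 1.0
        for m, lv in zip(mu, log_var)
    )

# A posterior identical to the prior gives KL = 0 (posterior collapse):
print(gaussian_kl([0.0, 0.0], [0.0, 0.0]))  # 0.0
```

Each summand is non-negative, so the KL is zero exactly when every dimension has mean 0 and variance 1, i.e., when the posterior has collapsed to the prior.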
"When the decoder is autoregressive, it can recover the data independent of the latent z (Bowman et al., 2016).",
"The optimization then encourages the approximate posterior to approach the prior, which results in a zero KL.",
"Recently, He et al. (2019) analyze posterior collapse with the Gaussian prior from a view of training dynamics.",
"The collapse is a local optimum of VAE when q ( z | x ) = p ( z | x ) = p ( z ) for all inputs.",
"They further define two partial collapse states: model collapse , when p ( z | x ) = p ( z ) , and inference collapse , when q ( z | x ) = p ( z ) .",
"They observe that the inference collapse always happens far before the model collapse due to the existence of autoregressive decoders.",
"Different from the model posterior, the inference network lacks guidance and easily collapses to the prior at the initial stage of training, and thus posterior collapse happens.",
"Based on this understanding, they propose to aggressively optimize the inference network.",
"However, this approach costs much more time than the original VAE.",
"In our work, we also employ the Gaussian prior and thus suffer from the same lagging problem.",
"Yet, our proposed approach does not involve additional training efforts, which can effectively avoid the lagging problem (Section 3.3) and keep almost the same training efficiency as the original VAE (Section 5.1).",
"More details can be found in Section 3.3.",
"To prevent posterior collapse, we have mentioned many work about changing the prior in the introduction.",
"Besides these approaches, some work modifies the original training objective directly.",
"For example, Bowman et al. (2016) introduce an annealing strategy, where they slightly increase the weight of KL from 0 to 1 during the warm-up period.",
"β-VAE (Higgins et al., 2017) treats the KL weight as a hyperparameter to constrain the minimum value of the KL.",
"Alemi et al. (2017), on the other hand, set a fixed KL weight to control the mutual information between z and x .",
"Tolstikhin et al. (2018) leverage the Wasserstein distance to replace the KL.",
"Zhao et al. (2017a) replace the KL with maximum mean discrepancy.",
"Fang et al. (2019) introduce sample-based representations which lead to implicit latent features with an auxiliary network.",
"Some change the training strategy.",
"Kim et al. (2018) address the amortization gap (Cremer et al., 2018) in VAE and propose Semi-Amortized VAE to compose the inference network with additional mean-field updates.",
"Fu et al. (2019) propose a cyclical annealing schedule, which repeats the process of increasing multiple times.",
"There are various other approaches to solve the posterior collapse.",
"For example, some researchers choose to weaken the decoder by replacing the LSTM decoder with convolution neural networks without autoregressive modeling (Semeniuta et al., 2017; Yang et al., 2017).",
"Chen et al. (2017) input a lossy representation of data to the autoregressive decoder and enforce z to capture the information about the original input.",
"Inheriting this idea, some following work add direct connections between z and x (Zhao et al., 2017b; Dieng et al., 2019).",
"Ma et al. (2019) introduce an additional regularization to learn diverse latent representation.",
"δ-VAE (Razavi et al., 2019) and free-bits (Kingma et al., 2016) set a minimum value of the KL for each latent dimension to prevent the posterior collapse.",
"Srivastava and Sutton (2017, 2018) find that using ADAM (Kingma and Ba, 2014) with a high learning rate to train VAE may cause the gradients to diverge early.",
"Their explanation for the diverging behavior lies in the exponential curvature of the gradient from the inference network which produces the variance part of the approximate posterior.",
"Then they apply batch normalization to the variance part to solve this problem.",
"We use the simple SGD without momentum to train our model.",
"Moreover, we apply batch normalization to the mean part of the inference network to keep the expectation of the KL's distribution positive, which is different from their work.",
"We also find that Snderby et al. (2016) utilize batch normalization in all fully connected layers with nonlinear activation functions to improve the model performance.",
"Different from it, our approach directly applies batch normalization to the parameters of the approximate posterior, which is the output of the inference network.",
"In this section, we first derive the expectation of the KL's distribution and show that it is enough to avoid posterior collapse by keeping the expectation of the KL's distribution positive.",
"Then we propose our regularization method on the parameters of the approximate posterior to ensure a positive lower bound of this expectation.",
"We further discuss the difference between our approach and previous work.",
"Given an $x \in X$, the inference network parametrizes an $n$-dimensional diagonal Gaussian distribution with mean $\mu = f_\mu(x)$ and diagonal covariance $\Sigma = \mathrm{diag}(f_\sigma(x))$, where $f_\mu$ and $f_\sigma$ are two neural networks.",
"In practice, the ELBO is computed through a Monte Carlo estimation from b samples.",
"The KL in Eq. 2 is then computed over $b$ samples from $X$: $\mathrm{KL} = \frac{1}{2b}\sum_{j=1}^{b}\sum_{i=1}^{n}(\mu_{ij}^2 + \sigma_{ij}^2 - \log \sigma_{ij}^2 - 1) = \frac{1}{2}\sum_{i=1}^{n}\big(\frac{\sum_{j=1}^{b}\mu_{ij}^2}{b} + \frac{\sum_{j=1}^{b}\sigma_{ij}^2}{b} - \frac{\sum_{j=1}^{b}\log \sigma_{ij}^2}{b} - 1\big)$.",
"When b gets larger, the above empirical value will approach the mean of the KL across the whole dataset.",
"To make use of this observation, we assume that $\mu_i$ and $\log \sigma_i^2$ for each latent dimension $i$ follow certain distributions with fixed means and variances across the dataset.",
"The distribution may vary between different latent dimensions.",
"In this way, the KL becomes a distribution over the $\mu_i$'s and $\log \sigma_i^2$'s.",
"From Eq. 3, we can see that $\sum_{j=1}^{b}\mu_{ij}^2/b$ is the sample mean of $\mu_i^2$, which converges to $\mathbb{E}[\mu_i^2] = \mathrm{Var}[\mu_i] + \mathbb{E}^2[\mu_i]$.",
"Similarly, $\sum_{j=1}^{b}\sigma_{ij}^2/b$ converges to $\mathbb{E}[\sigma_i^2]$, and $\sum_{j=1}^{b}\log \sigma_{ij}^2/b$ to $\mathbb{E}[\log \sigma_i^2]$.",
"Thus, we can derive the expectation of the KL's distribution as: $\mathbb{E}[\mathrm{KL}] = \frac{1}{2}\sum_{i=1}^{n}(\mathrm{Var}[\mu_i] + \mathbb{E}^2[\mu_i] + \mathbb{E}[\sigma_i^2] - \mathbb{E}[\log \sigma_i^2] - 1) \geq \frac{1}{2}\sum_{i=1}^{n}(\mathrm{Var}[\mu_i] + \mathbb{E}^2[\mu_i])$, where $\mathbb{E}[\sigma_i^2 - \log \sigma_i^2] \geq 1$ since the minimum of $e^x - x$ is 1.",
"If we can guarantee a positive lower bound of $\mathbb{E}[\mathrm{KL}]$, we can then effectively prevent the posterior collapse.",
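The inequality can be checked numerically: whatever distributions the per-dimension means and log-variances follow, the batch-averaged KL never drops below one half of the summed second moments of the means, because each per-term gap σ² − log σ² − 1 is non-negative. A small Monte Carlo sketch (the sampling distributions and variable names are our own illustration):

```python
import math
import random

random.seed(0)
n, b = 4, 10000  # latent dimensions, batch size

# Per-dimension posterior means and log-variances, sampled from
# arbitrary (hypothetical) distributions across the "dataset".
mus = [[random.gauss(0.5, 1.0) for _ in range(b)] for _ in range(n)]
lvs = [[random.gauss(-0.3, 0.5) for _ in range(b)] for _ in range(n)]

# Batch-averaged KL against the standard normal prior (Eq. 3).
kl = 0.5 / b * sum(
    mus[i][j] ** 2 + math.exp(lvs[i][j]) - lvs[i][j] - 1.0
    for i in range(n) for j in range(b)
)

# Lower bound 0.5 * sum_i (Var[mu_i] + E[mu_i]^2) = 0.5 * sum_i E[mu_i^2].
bound = 0.5 * sum(sum(m * m for m in mus[i]) / b for i in range(n))

assert kl >= bound  # holds term-by-term since e^x - x >= 1
```

The assertion holds deterministically, not just in expectation, which is why controlling the distribution of the means alone suffices to keep the KL away from zero.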
"Based on Eq. 4, the lower bound depends only on the number of latent dimensions $n$ and the mean and variance of each $\mu_i$.",
"This motivates our idea of imposing proper regularization on the distributions of the $\mu_i$'s to ensure a positive lower bound of $\mathbb{E}[\mathrm{KL}]$.",
"3.2 Normalizing Parameters of the Posterior.",
"The remaining key problem is to construct proper distributions of the $\mu_i$'s that can result in a positive lower bound of $\mathbb{E}[\mathrm{KL}]$ in Eq. 4.",
"Here, we propose a simple and efficient approach to accomplish this by applying a fixed batch normalization on the mean outputs $\mu_i$ of the inference network.",
"Batch Normalization (BN) (Ioffe and Szegedy, 2015) is a widely used regularization technique in deep learning.",
"It normalizes the output of neurons and makes the optimization landscape significantly smoother (Santurkar et al., 2018).",
"Different from other tasks that apply BN in the hidden layers and seek fast and stable training, here we leverage BN as a tool to transform $\mu_i$ into a distribution with a fixed mean and variance.",
"Mathematically, the regularized $\hat{\mu}_i$ is written as: $\hat{\mu}_i = \gamma \frac{\mu_i - \mu_{\mathcal{B}_i}}{\sigma_{\mathcal{B}_i}} + \beta$, where $\mu_i$ and $\hat{\mu}_i$ are the means of the approximate posterior before and after BN.",
"$\mu_{\mathcal{B}_i}$ and $\sigma_{\mathcal{B}_i}$ denote the mean and standard deviation of $\mu_i$.",
"They are estimated (with bias) within a batch of samples for each dimension independently.",
"$\gamma$ and $\beta$ are the scale and shift parameters.",
"Instead of using a learnable $\gamma$ in Eq. 5, we use a fixed BN which freezes the scale $\gamma$.",
"In this way, the distribution of $\hat{\mu}_i$ has mean $\beta$ and variance $\gamma^2$.",
"$\beta$ is a learnable parameter that makes the distribution more flexible.",
"Now, we derive the lower bound of E [ KL ] by using the fixed BN.",
"With the fixed mean $\beta$ and variance $\gamma^2$ of $\hat{\mu}_i$ in hand, we get a new lower bound as below: $\mathbb{E}[\mathrm{KL}] \geq \frac{1}{2}\sum_{i=1}^{n}(\mathrm{Var}[\mu_i] + \mathbb{E}^2[\mu_i]) = \frac{n(\gamma^2 + \beta^2)}{2}$.",
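The fixed-BN step itself is a one-line transform per dimension; a sketch using standard biased batch statistics (the function name and the example batch are ours, and γ = 0.7 mirrors the BN-VAE (0.7) setting used later in the experiments):

```python
import math

def fixed_bn(mu_batch, gamma=0.7, beta=0.0, eps=1e-5):
    """Normalize a batch of one latent dimension's posterior means
    so the batch has mean beta and std (approximately) gamma.
    gamma is frozen; beta would be learnable in training."""
    b = len(mu_batch)
    mean = sum(mu_batch) / b
    var = sum((m - mean) ** 2 for m in mu_batch) / b  # biased estimate
    std = math.sqrt(var + eps)
    return [gamma * (m - mean) / std + beta for m in mu_batch]

batch = [0.3, -1.2, 2.5, 0.1]
out = fixed_bn(batch)
# Whatever the input scale, the output batch has mean ~beta and
# std ~gamma, so E[KL] >= n * (gamma^2 + beta^2) / 2.
```

Because γ is frozen rather than learned, the optimizer cannot shrink the variance of the means to escape the bound, which is the whole point of the regularization.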
"To this end, we can easily control the lower bound of $\mathbb{E}[\mathrm{KL}]$ by setting $\gamma$.",
"Algorithm 1 shows the training process.",
"Constructing a positive KL: Both free-bits (Kingma et al., 2016) and δ-VAE (Razavi et al., 2019) set a threshold on the KL value.",
"Free-bits changes the KL term in the ELBO to a hinge loss term: $\sum_{i=1}^{n} \max(\lambda, \mathrm{KL}(q_\phi(z_i|x)\,||\,p(z_i)))$.",
"Another version of free-bits is to apply the threshold to the entire sum directly instead of the individual value.",
"Training with the free-bits objective, the model stops driving down the KL value when it is already below $\lambda$.",
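The per-dimension free-bits hinge can be sketched in a few lines (the helper name and λ value are ours for illustration):

```python
def free_bits_kl(kl_per_dim, lam):
    """Per-dimension free-bits term: a dimension whose KL is already
    below the threshold lam contributes the constant lam, so no
    gradient pushes it further toward zero."""
    return sum(max(lam, kl) for kl in kl_per_dim)

# Dimensions at 0.1 and 0.5 are clamped to 0.5; 2.0 passes through.
print(free_bits_kl([0.1, 0.5, 2.0], lam=0.5))  # 3.0
```

The `max` makes the objective piecewise and non-smooth at the threshold, which is exactly the optimization difficulty Chen et al. (2017) point out.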
"However, Chen et al. (2017) point out that the objective of free-bits is non-smooth and suffers from the optimization challenges.",
"Our approach does not face the optimization problem since we use the original ELBO objective.",
"where $[\sigma_l, \sigma_u]$ is the feasible interval for $\sigma_q$, obtained by solving $\ln(\sigma_q^2) - \sigma_q^2 + 2\delta + 1 \leq 0$.",
"Although δ-VAE can ensure a minimum value for the KL, it limits the model performance because the parameters are constrained in this interval.",
"Our approach only constrains the distributions of $\mu$, which is more flexible than δ-VAE.",
"Experiments further show that our approach surpasses both free-bits and δ-VAE.",
"Reducing inference lag: As we focus on the setting of the conventional Gaussian prior, the lagging problem mentioned in Section 2.2 is crucial.",
"To this point, it is beneficial to analyze an alternate form of the ELBO: $\mathcal{L} = \log p_\theta(x) - \mathrm{KL}(q_\phi(z|x)\,||\,p_\theta(z|x))$.",
"With this view, the only goal of the approximate posterior q ( z | x ) is to match the model posterior p ( z | x ) .",
"We examine the performance of our approach to reduce inference lag using the same synthetic experiment in He et al. (2019).",
"Details can be found in Section 1 of the Appendix.",
"The synthetic experiment indicates that our approach with the regularization is beneficial to rebalance the optimization between inference and generation, and finally overcomes posterior collapse.",
"We also prefer a large $\gamma$ because a small $\gamma$ will push the approximate posterior toward the prior.",
"More details on the synthetic experiment can be found in the Appendix.",
"Given an observation x and its output y , CVAE (Sohn et al., 2015; Zhao et al., 2017b) models the conditional distribution p ( y | x ) .",
"The variational lower bound of the conditional log-likelihood is: $\mathcal{L} = \mathbb{E}_{q_\phi(z|x,y)}[\log p_\theta(y|x,z)] - \mathrm{KL}(q_\phi(z|x,y)\,||\,p_\theta(z|x)) \leq \log p(y|x)$.",
"Different from VAE, the prior p ( z | x ) in CVAE is not fixed, which is also parametrized by a neural network.",
"It is possible to apply another BN on the mean of the prior with a different shift so that the expectation of the KL becomes a constant.",
"However, this lower bound is uncontrollable because the density of $\mu_1 + \mu_2$ is the convolution of their densities, which is intractable.",
"To overcome this issue, we propose to constrain the prior with a fixed distribution.",
"We achieve it by adding another KL between the prior and a known Gaussian distribution r ( z ) , i.e. KL ( p ( z | x ) || r ( z )) .",
"Instead of optimizing the ELBO in Eq. 10, we optimize a lower bound of the ELBO for CVAE: $\mathcal{L}' = \mathcal{L} - \mathrm{KL}(p_\theta(z|x)\,||\,r(z)) \leq \mathcal{L}$.",
"The KL term in the new bound is the sum of $\mathrm{KL}(q_\phi(z|x,y)\,||\,p_\theta(z|x))$ and $\mathrm{KL}(p_\theta(z|x)\,||\,r(z))$, which can be computed as: $\mathrm{KL} = \frac{1}{2}\sum_{i=1}^{n}\big(\frac{\sigma_{qi}^2 + (\mu_{qi} - \mu_{pi})^2}{\sigma_{pi}^2} + \mu_{pi}^2 + \sigma_{pi}^2 - \log \sigma_{qi}^2 - 2\big)$, where $\mu_q, \sigma_q$ and $\mu_p, \sigma_p$ are the parameters of $q$ and $p$ respectively, and $n$ denotes the hidden size.",
"The KL term vanishes to 0 if and only if both $q$ and $p$ collapse to $r(z)$, i.e., the standard normal distribution.",
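The combined KL can be built from two standard diagonal-Gaussian KLs; a sketch (helper names are ours, one-dimensional per-term form):

```python
import math

def kl_gauss(mu_q, var_q, mu_p, var_p):
    """KL between two univariate Gaussians N(mu_q, var_q) || N(mu_p, var_p)."""
    return 0.5 * (var_q / var_p + (mu_q - mu_p) ** 2 / var_p
                  + math.log(var_p) - math.log(var_q) - 1.0)

def cvae_kl(mu_q, var_q, mu_p, var_p):
    """KL(q || p) + KL(p || r) with r = N(0, 1), summed over dimensions."""
    return sum(
        kl_gauss(mq, vq, mp, vp) + kl_gauss(mp, vp, 0.0, 1.0)
        for mq, vq, mp, vp in zip(mu_q, var_q, mu_p, var_p)
    )

# Vanishes only when both q and p collapse to r = N(0, I):
print(cvae_kl([0.0], [1.0], [0.0], [1.0]))  # 0.0
```

Since BN on $q$'s means rules out $q$ collapsing to $r$, this sum cannot reach zero, which is the mechanism the paper relies on for CVAE.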
"As we explained in Section 3.2, KL won't be 0 when we apply BN in q .",
"We then prove that when $q$ collapses to $p$, the KL term is not at its minimum (details in Section 2 of the Appendix), so that $\mathrm{KL}(q_\phi(z|x,y)\,||\,p_\theta(z|x))$ won't be 0.",
"In this way, we can avoid the posterior collapse in CVAE.",
"Algorithm 2 shows the training details.",
"Algorithm 2 BN-CVAE training.",
"1 We performed an empirical study on this method and found that the neural network can always find a small KL value in this situation.",
"Setup: We test our approach on two benchmark datasets: Yelp and Yahoo corpora (Yang et al., 2017).",
"We use a Gaussian prior N (0 , I ) , and the approximate posterior is a diagonal Gaussian.",
"Following previous work (Burda et al., 2016; He et al., 2019), we report the estimated negative log likelihood (NLL) from 500 importance weighted samples, which can provide a tighter lower bound compared to the ELBO and shares the same information with the perplexity (PPL).",
"Besides the NLL, we also report the KL, the mutual information (MI) I q (Alemi et al., 2017) and the number of activate units (AU) (Burda et al., 2016) in the latent space.",
"The $I_q$ can be calculated as: $I_q = \mathbb{E}_{p_d(x)}[\mathrm{KL}(q_\phi(z|x)\,||\,p(z))] - \mathrm{KL}(q_\phi(z)\,||\,p(z))$, where $p_d(x)$ is the empirical distribution.",
"The aggregated posterior q ( z ) = E p d ( x ) [ q ( z | x )] and KL ( q ( z ) || p ( z )) can be approximated with Monte Carlo estimations.",
"The AU is measured as $A_z = \mathrm{Cov}_x(\mathbb{E}_{z \sim q(z|x)}[z])$.",
"We set a threshold of 0.01, which means that if $A_{z_i} > 0.01$, the unit $i$ is active.",
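Concretely, the AU statistic is the per-dimension variance, across the dataset, of the posterior means; a sketch (the helper name and the toy data are ours):

```python
def active_units(mu_matrix, threshold=0.01):
    """mu_matrix: posterior means per example, shape [num_examples][n].
    A latent dimension counts as active if the variance of its mean
    across examples exceeds the threshold."""
    num = len(mu_matrix)
    n = len(mu_matrix[0])
    active = 0
    for i in range(n):
        col = [row[i] for row in mu_matrix]
        mean = sum(col) / num
        var = sum((m - mean) ** 2 for m in col) / num
        if var > threshold:
            active += 1
    return active

# Dim 0 varies across examples (active); dim 1 is constant (collapsed).
print(active_units([[1.0, 0.0], [-1.0, 0.0], [0.5, 0.0]]))  # 1
```

A collapsed dimension outputs the same mean for every input, so its variance across the dataset is (near) zero and it is not counted.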
"Configurations: We use a 512-dimension word embedding layer for both datasets.",
"For the encoder and the decoder, a single layer LSTM with 1024 hidden size is used.",
"We use z to generate the initial state of the decoder, following Kim et al. (2018); He et al. (2019); Li et al. (2019).",
"To optimize the objective, we use mini-batch SGD with 32 samples per batch.",
"We use one NVIDIA Tesla v100 for the experiments.",
"For all experiments, we use the linear annealing strategy that increases the KL weight from 0 to 1 in the first 10 epochs if possible.",
"Compared methods: We compare our model with several strong baselines and methods that hold the previous state-of-the-art performance on text modeling benchmarks.",
"Methods with a modified training objective: VAE with annealing (Bowman et al., 2016).",
"β-VAE (Higgins et al., 2017).",
"Cyclic annealing (Fu et al., 2019), we use the default cyclic schedule.",
"Methods with a lower bound for KL values: Free-bits (FB) (Kingma et al., 2016).",
"δ-VAE (Razavi et al., 2019).",
"vMF-VAE (Xu and Durrett, 2018). Methods with a modified training strategy:",
"Semi-amortized VAE (SA-VAE) (Kim et al., 2018).",
"VAE with an aggressive training (Agg-VAE) (He et al., 2019).",
"FB with a pretrained inference network (AE+FB) (Fu et al., 2019). Main results: Table 1 shows the results.",
"We further split the results into two different settings, one for models with a pretrained inference network and one without it.",
"Our approach achieves the best NLL in the setting without a pretrained inference network on both datasets and is competitive in the setting with a pretrained encoder.",
"Moreover, we can observe that: δ-VAE does not perform well in both settings, which shows that constraining the parameters in a small interval is harmful to the model.",
"In vMF-VAE, data points share the same KL value.",
"Our approach is flexible and gets better performance.",
"Although Agg-VAE and SA-VAE both get good performance, they require additional updates on the inference network and cost more training efforts, which are validated in the next part.",
"Cyclic annealing with a pretrained inference network achieves the highest KL, but it may not be a good generative model.",
"Paired with a pretrained inference network, all methods except cyclic annealing can boost the performance to some extent.",
"This phenomenon indicates that the lagging problem (He et al., 2019) is important in VAE training.",
"When leveraging the pretrained inference network, our approach achieves the smallest performance gap compared with other methods.",
"In other words, our approach can alleviate the lagging problem efficiently.",
"Training time: Table 2 shows the training time (until convergence) and the relative ratio of the basic VAE, our approach and the other best three models in Table 1.",
"SA-VAE is about 12 times slower than our approach due to the local update for each data point.",
"Agg-VAE is 2-4 times slower than ours because it requires additional training for the inference network.",
"Table 3 (accuracy on Yelp; columns are the number of labeled examples, 100 / 500 / 1k / 2k / 10k): AE 81.1 / 86.2 / 90.3 / 89.4 / 94.1; VAE 66.1 / 82.6 / 88.4 / 89.6 / 94.5; δ-VAE 61.8 / 61.9 / 62.6 / 62.9 / 93.8; Agg-VAE 80.9 / 85.9 / 88.8 / 90.6 / 93.7; cyclic 62.4 / 75.5 / 80.3 / 88.7 / 94.2; FB (9) 79.8 / 84.4 / 88.8 / 91.12 / 94.7; AE+FB (6) 87.6 / 90.2 / 92.0 / 93.4 / 94.9; BN-VAE (0.7) 88.8 / 91.6 / 92.5 / 94.1 / 95.4.",
"AE+FB needs to train an autoencoder before the VAE.",
"However, our approach is fast since we only add one-layer batch normalization, and thus the training cost is almost the same as the basic VAE.",
"More results about the training behavior can be found in Section 3 of the Appendix.",
"Performance on a downstream task Text classification: The goal of VAE is to learn a good representation of the data for downstream tasks.",
"Here, we evaluate the quality of latent representations by training a one-layer linear classifier based on the mean of the posterior distribution.",
"We use a downsampled version of the Yelp sentiment dataset (Shen et al., 2017).",
"Li et al. (2019) further sampled various labeled data to train the classifier.",
"To compare with them fairly, we use the same samples in Li et al. (2019).",
"Results are shown in Table 3.",
"Our approach achieves the best accuracy in all the settings.",
"For 10k training samples, all the methods get a good result.",
"However, when only using 100 training samples, different methods vary a lot in accuracy.",
"The text classification task shows that our approach can learn a good latent representation even without a pretrained inference network.",
"Setup: For dialogue generation, we test our approach in the setting of CVAE.",
"Following previous work (Zhao et al., 2017b), we use the Switchboard (SW) Corpus (Godfrey and Holliman, 1997), which contains 2400 two-sided telephone conversations.",
"We use a bidirectional GRU with hidden size 300 to encode each utterance and then a one-layer GRU with hidden size 600 to encode previous k -1 utterances as the context.",
"The response decoder is a one-layer GRU with hidden size 400.",
"The latent representation z has a size of 200.",
"We use the evaluation metrics from Zhao et al. (2017b): (1) Smoothed Sentence-level BLEU (Chen and Cherry, 2014); (2) Cosine Distance of Bag-of-word Embedding, which is a simple method to obtain sentence embeddings.",
"We use the pretrained GloVe embeddings (Pennington et al., 2014) and denote the average method as Abow and the extreme method as Ebow.",
"Higher values indicate more plausible responses.",
"We compared our approach with CVAE and CVAE with bag-of-words (BOW) loss (Zhao et al., 2017b), which requires the decoder in the generation network to predict the bag-of-words in the response y based on z .",
"Automatic evaluation: Table 4 shows the results of these three approaches.",
"From the KL values, we find that CVAE suffers from posterior collapse while CVAE (BOW) and our approach avoid it effectively.",
"For BLEU-4, we observe the same phenomenon in the previous work (Fu et al., 2019; Zhao et al., 2017b) that CVAE is slightly better than the others.",
"This is because CVAE tends to generate the most likely and safe responses repeatedly with the collapsed posterior.",
"As for precision, these three models do not differ much.",
"However, CVAE (BOW) and our BN-VAE outperform CVAE in recall by a large margin.",
"This indicates that BN-VAE can also produce diverse responses with good quality like CVAE (BOW).",
"Human evaluation: We conduct the human evaluation by asking five annotators from a commercial annotation company to grade 200 sampled conversations from the aspect of fluency, relevance and informativeness on a scale of 1-3 (see Section 4 of the Appendix for more details on the criteria).",
"We also report the proportion of acceptable/high scores (≥2 and =3) on each metric.",
"Table 5 shows the annotation results.",
"Overall, our approach beats the other two compared methods in relevance and fluency with more informative responses.",
"Also, our approach has the largest proportion of responses whose scores are High .",
"This indicates that our model can produce more meaningful and relevant responses than the other two.",
"Case study: Table 6 shows the sampled responses generated by the three methods (more can be found in the Appendix).",
"By maintaining a reasonable KL, responses generated by our approach are more relevant to the query with better diversity compared to the other two.",
"We test the three methods in the simplest setting of dialogue generation.",
"Note that the focus of this work is to improve the CVAE itself by avoiding its KL vanishing problem but not to hack the state-of-the-art dialogue generation performance.",
"To further improve the quality of generated responses, we can enhance our approach by incorporating knowledge such as dialogue acts (Zhao et al., 2017b), external facts (Ghazvininejad et al., 2018) and personal profiles (Zhang et al., 2018).",
"In this paper, we tackle the posterior collapse problem when VAE is paired with autoregressive decoders.",
"Instead of considering the KL of each data point individually, we treat it as following a distribution D_KL and show that keeping the expectation of D_KL positive is sufficient to prevent posterior collapse.",
"We propose Batch Normalized VAE (BN-VAE), a simple but effective approach to set a lower bound on D_KL by regularizing the approximate posterior's parameters.",
"Our approach can also avoid the recently proposed lagging problem efficiently without additional training efforts.",
"We show that our approach can be easily extended to CVAE.",
"We test our approach on three real applications, language modeling, text classification and dialogue generation.",
"Experiments show that our approach outperforms strong baselines and is competitive with more complex methods while remaining substantially faster.",
"We leverage the Gaussian prior as the example to introduce our method in this work.",
"The key requirement for our approach to be applicable is a closed-form expression for the expectation of the KL.",
"However, it is hard to obtain such a formula for some stronger or more sophisticated priors, e.g., the Dirichlet prior.",
"For these distributions, we can approximate them by the Gaussian distributions (such as in Srivastava and Sutton (2017)).",
"In this way, we can batch normalize the corresponding parameters.",
"Further study in this direction may be interesting."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"objective",
"abstain",
"objective",
"objective",
"objective",
"result",
"other",
"abstain",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"objective",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"method",
"result",
"objective",
"objective",
"objective",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain"
] |
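The BN-VAE sentences above describe batch-normalizing the approximate posterior's means so that the expected KL stays bounded away from zero. A minimal NumPy sketch of that mechanism (function names and the gamma value are illustrative, not the authors' code):

```python
import numpy as np

def kl_diag_gaussian(mu, logvar):
    # KL( N(mu, diag(exp(logvar))) || N(0, I) ) per data point
    return 0.5 * np.sum(mu ** 2 + np.exp(logvar) - logvar - 1.0, axis=-1)

def batch_norm_means(mu, gamma=0.7, beta=0.0, eps=1e-8):
    # Normalize each latent dimension of the posterior means across the
    # batch, then rescale with a *fixed* gamma so that, per dimension,
    # the batch mean is beta and the batch variance is gamma**2.
    m = mu.mean(axis=0, keepdims=True)
    s = mu.std(axis=0, keepdims=True)
    return gamma * (mu - m) / (s + eps) + beta

rng = np.random.default_rng(0)
raw_mu = rng.normal(size=(64, 32))        # 64 samples, 32 latent dims
mu = batch_norm_means(raw_mu)
logvar = rng.normal(size=(64, 32))

# Since x - log(x) - 1 >= 0, the variance terms of the KL are non-negative,
# so with beta = 0 the batch-averaged KL is at least 0.5 * d * gamma**2.
mean_kl = kl_diag_gaussian(mu, logvar).mean()
```

This illustrates why the lower bound holds regardless of what the inference network outputs: the normalization fixes the second moment of the means, and the remaining variance terms can only add to the KL.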
[
"In this paper, we propose an effective yet efficient model PAIE for both sentence-level and document-level Event Argument Extraction (EAE), which also generalizes well when there is a lack of training data.",
"On the one hand, PAIE utilizes prompt tuning for extractive objectives to take the best advantages of Pre-trained Language Models (PLMs).",
"It introduces two span selectors based on the prompt to select start/end tokens among input texts for each role.",
"On the other hand, it captures argument interactions via multi-role prompts and conducts joint optimization with optimal span assignments via a bipartite matching loss.",
"Also, with a flexible prompt design, PAIE can extract multiple arguments with the same role instead of conventional heuristic threshold tuning.",
"We have conducted extensive experiments on three benchmarks, including both sentence-and document-level EAE.",
"The results present promising improvements from PAIE (3.5% and 2.3% F1 gains on average over three benchmarks for PAIE-base and PAIE-large, respectively).",
"Further analysis demonstrates the efficiency, generalization to few-shot settings, and effectiveness of different extractive prompt tuning strategies.",
"Our code is available at https://github.com/mayubo2333/PAIE .",
"Understanding text by identifying the event and arguments has been a long-standing goal in Natural Language Processing (NLP) (Sundheim, 1992).",
"As shown in Fig. 1, we can quickly understand that the document is talking about a Sell event, with four involved arguments, i.e., Vivendi (Seller), Universal Studios (Artifact), parks (Artifact), and company (Artifact), where the argument roles are in brackets.",
"Since event detection has achieved great success in * Equal Contribution.",
"Dan Sanchez reports : The Saudis Go Full ISIS In Their US Backed Takfiri War on the Shia Saudi Arabia has perpetrated a mass <t> execution </t> that puts ISIS's beach beheadings to shame.",
"Forty-seven heads rolled on Saturday.",
"One of them belonged to Nimr al Nimr, a revered Shi'ite cleric who had been sentenced to death for sermons in which he criticized the government (especially for its persecution of the country 's Shi'ite minority).",
"Cash strapped Vivendi wants to <t> sell </t> Universal Studios, its Universal theme parks and television production company.",
"Figure 1: Examples of (top) sentence-level and (bottom) document-level event argument extraction, with event types justice.judicialconsequences.execute and Transaction.Transfer-Ownership and role labels such as Defendant, Executor, Crime, Seller, and Artifact.",
"recent years (Wang et al., 2021), the main challenge lies in Event Argument Extraction (EAE).",
"Typical efforts in EAE can be roughly classified into two groups.",
"The first group of methods formulates it as a semantic role labeling problem (Wei et al., 2021).",
"There are generally two steps first identifying candidate spans and then classifying their roles.",
"Although joint models are proposed to optimize them together, high dependence on candidates may still suffer from error propagation (Li et al., 2013).",
"In the second group, recent studies tend to follow the success of Pre-trained Language Models (PLMs) and solve EAE by Question Answering (QA) (Liu et al., 2021a; Wei et al., 2021; Du and Cardie, 2020; Liu et al., 2020; Li et al., 2020) and Text Generation (Lu et al., 2021; Li et al., 2021).",
"QA-based models can effectively recognize the boundaries of arguments with role-specific questions, while the prediction has to be one by one.",
"Generation-based methods are efficient for generating all arguments, but sequential predictions degrade the performance on long-distance and more arguments.",
"Besides, the state-of-the-art performance is still unsatisfactory (around 68% F1 on the widely used dataset ACE05 (Doddington et al., 2004)).",
"This raises an interesting question: is there any way to combine the merits of the above methods and further boost performance?",
"This paper targets real scenarios, which require the EAE model to be effective yet efficient at both sentence and document levels, and even under the few-shot setting without sufficient training data.",
"To do this, we highlight the following questions: How can we extract all arguments simultaneously for efficiency?",
"How to effectively capture argument interactions for long text, without knowing them in advance?",
"How can we elicit more knowledge from PLMs to lower the needs of annotation?",
"In this paper, we investigate prompt tuning under an extractive setting and propose PAIE, a novel method that Prompts Argument Interactions for EAE.",
"It extends QA-based models to handle multiple argument extraction and meanwhile takes the best advantage of PLMs.",
"The basic idea is to design suitable templates to prompt all argument roles for PLMs, and obtain role-specific queries to jointly select optimal spans from the text.",
"Thus, instead of unavailable arguments, each role in the template serves as a slot for interactions, and during learning, PLMs tend to fill these slots with exact arguments via a matching loss.",
"By predicting arguments together, PAIE enjoys an efficient and effective learning procedure.",
"Besides, the inter-event knowledge transfer between similar role prompts alleviates the heavy burden of annotation cost.",
"Specifically, for prompting extraction, we design two span selectors based on role prompts, which select start/end tokens among input texts.",
"We explore three types of prompts: manual template, concatenation template, and soft prompt.",
"They perform well at both sentence-level EAE (S-EAE) and document-level EAE (D-EAE) and ease the requirements of the exhaustive prompt design.",
"For joint span selection, we design a bipartite matching loss that makes the least-cost match between predictions and ground truth so that each argument will find the optimal role prompt.",
"It can also deal with multiple arguments with the same role via flexible role prompts instead of heuristic threshold tuning.",
"We summarize our contributions as follows: We propose a novel model, PAIE, that is effective and efficient for S-EAE and D-EAE, and robust in the few-shot setting.",
"We formulate and investigate prompt tuning under extractive settings, with a joint selection scheme for optimal span assignments.",
"We have conducted extensive experiments on three benchmarks.",
"The results show a promising improvement with PAIE (3.5% and 2.3% absolute F1 gains on average for the base and large models).",
"Further ablation study demonstrates the efficiency and generalization to few-shot settings of our proposed model, as well as the effectiveness of prompt tuning for extraction.",
"Event Argument Extraction: Event Argument Extraction is a challenging sub-task of event extraction (EE).",
"There have been great numbers of studies on EAE tasks since an early stage (Chen et al., 2015; Nguyen et al., 2016; Huang et al., 2018; Yang et al., 2018; Sha et al., 2018; Zheng et al., 2019).",
"Huang and Peng (2021) propose to leverage Deep Value Networks (DVN) that captures cross-event dependencies for EE.",
"Huang and Jia (2021) convert documents to an unweighted graph and use a GAT to alleviate the role-overlapping issue.",
"A common idea is to first identify argument candidates and then fill each with a specific role via multi-label classification (Lin et al., 2020).",
"To deal with implicit arguments and multiple events, Xu et al. (2021) construct a heterogeneous graph of arguments, while DEFNN (Yang et al., 2021) predicts arguments via Parallel Prediction Networks.",
"A recent trend formulates EAE as an extractive question answering (QA) problem (Du and Cardie, 2020; Liu et al., 2020).",
"This paradigm naturally induces the language knowledge from pre-trained language models by converting EAE tasks to fully-explored reading comprehension tasks via a question template.",
"Wei et al. (2021) consider the implicit interaction among roles by adding mutual constraints in the template, while Liu et al. (2021a) leverage data augmentation to improve the performance.",
"However, they can only predict roles one by one, which is inefficient and usually leads to sub-optimal performance.",
"With the help of the pre-trained Encoder-Decoder Transformer architecture, such as BART (Lewis et al., 2020) and T5 (Raffel et al., 2020), there are also some recent works converting extraction tasks to generation tasks.",
"Paolini et al. (2021) propose TANL to handle a variety of structured prediction tasks, including EAE, by a unified text-to-text approach and extract all arguments in a single pass.",
"Lu et al. (2021) follow TANL and also treat EAE as a sequential generation problem.",
"Li et al. (2021) target generation model by designing specific templates for each event type.",
"In comparison, we prompt argument interactions to guide PLMs and optimize the multiple argument detection by designing a bipartite matching loss.",
"This not only improves the understanding of long-distance argument dependencies but also enjoys an efficient procedure via prompt-based learning.",
"Prompt-based Learning: Prompt-based learning is a new paradigm emerging in the field of pre-trained language models (Liu et al., 2021b).",
"Unlike the pre-training and fine-tuning paradigm, prompt-based methods convert the downstream tasks to the form more consistent with the model's pre-training tasks.",
"Schick and Schtze (2021) convert a variety of classification problems to cloze tasks by constructing related prompts with blanks and finding a mapping from particular filled words to predicted categories.",
"Li and Liang (2021) focus on generation tasks and propose lightweight prefix tuning by freezing model parameters and only adjusting a sequence of continuous task-specific vectors.",
"Different from the above prompt-tuning methods designed for classification or generation tasks, our proposed method returns to a linear-head setting to better fit the extraction task.",
"It is somewhat similar to the concurrent work P-tuning v2 (Liu et al., 2021c).",
"PAIE considers multiple arguments and their interactions to prompt PLMs for joint extraction.",
"Our model, as illustrated in Fig. 2, contains three core components: prompt creation , span selector decoding , and span prediction .",
"In the following sections, we will first formulate prompt for extraction, and describe each component in turn.",
"Existing prompt-based methods mainly focus on classification and generation tasks.",
"Conventional extraction objectives are converted into a generation task.",
"This brings an inefficiency issue: the model has to enumerate all extraction candidates.",
"For example, Cui et al. (2021) design the prompt for named entity recognition: [candidate span] is [entity type/not a] entity .",
"The models need to fill the first slot with candidate entities, and check the outputs of LM for the second slot for extraction.",
"Can prompt-based methods be applied directly to extraction?",
"Intuitively, the basic idea is similar to classification/generation: comparing the slot embeddings with the label vocabulary/input tokens.",
"Here, we give a formulation about the general extractive prompting method and then apply it on EAE for a case study.",
"(1) Prompt Creation .",
"Given context X and a series of queries Q = {q_1, q_2, ..., q_K}, we create a joint prompt Pt = f_prompt(Q) containing all these queries, where f_prompt is the prompt creator.",
"(2) Prompted Selector Decoding .",
"Given a PLM L, context X, and prompt Pt, we decode a query-specific (answering) span selector as θ_{q_k} = h_L(q_k; Pt, X), where q_k is the k-th query in the prompt and h_L denotes the output of the PLM.",
"(3) Prompted Span Selection .",
"To find the optimal span, we design two selectors for the start and end tokens from the context: (s, e)_{q_k} = Span-search[g_L(X; q_k)], where (s, e)_{q_k} is the span for the k-th query and g_L is the span selector.",
"Such a formulation improves over generative extraction mainly by respecting the adjacency constraints of spans.",
"Task Definition We formulate EAE task as a prompt-based span extraction problem on dataset D .",
"Given an instance (X, t, e, R(e)) ∈ D, where X denotes the context, t ∈ X denotes the trigger word, e denotes the event type, and R(e) denotes the set of event-specific role types, we aim to extract a set of spans A.",
"Each a^(r) ∈ A is a segment of X and represents an argument for role r ∈ R(e).",
"We create a set of prompts for each event type e in dataset D .",
"Each prompt contains all roles r ∈ R(e).",
"For example in Fig.2, given event type e as negotiate and R ( e ) as { Participant , Topic , Place } , the prompt P t ( e ) may be defined as follows: Participant communicated with Participant about Topic at Place .",
"We call the mentions of roles in the prompt as slot , and there are four slots underlined in this example (and colored in Fig. 2).",
"Such design allows our model to capture the implicit interactions among different roles.",
"To avoid threshold tuning for multiple arguments with the same role, the prompt is flexible to use multiple slots for the same role, such as role Participant in the above example.",
"The number of slots for the role is heuristically determined according to the maximum number of arguments of each role in the training dataset.",
"We design three different prompt creators f_prompt, i.e., mappings from a set of roles to a prompt, as follows: 1. Manual Template: all roles are connected manually with natural language.",
"We follow the template from Li et al. (2021) for fair comparison.",
"2. Soft Prompt: Following Qin and Eisner (2021) and Liu et al. (2021d), we connect different roles with learnable, role-specific pseudo tokens.",
"3. Concatenation Template: all role names belonging to one event type are concatenated.",
"We give one example of these three types of prompt in Table 1 and list more examples in Appendix B. Further analysis can be found in Section 5.2.",
"Given context X and prompt Pt, this module generates the role-specific span selector θ_k for each slot k of the prompt.",
"Here we choose L as BART (Lewis et al., 2020), a standard Transformer-based pre-trained language model consisting of both an encoder and a decoder: L = [L_enc, L_dec].",
"We first define text markers <t> and </t> as special tokens and insert them into the context X before and after the trigger word, respectively.",
"We then feed the marked context into the BART encoder and the prompt into the BART decoder separately, as illustrated in Fig. 2. The prompt and the context interact with each other at the cross-attention layers of the decoder module.",
"H_X^(enc) = L_enc(X), H_X = L_dec(H_X^(enc); H_X^(enc)), H_pt = L_dec(Pt; H_X^(enc)) (Eq. 1), where H_X denotes the event-oriented context representation and H_pt denotes the context-oriented prompt representation.",
"For the k-th slot in the joint prompt, we mean-pool its corresponding representations from H_pt and obtain the role feature ψ_k ∈ R^h, where h denotes the hidden dimension of BART.",
"Note that a role may have multiple slots and, correspondingly, multiple role features and span selectors.",
"We adopt a simple but effective modification on previous QA-based methods by deriving role-specific span selector k from every role feature in the prompt.",
"Given role feature ψ_k, we have: θ_k^(start) = ψ_k ⊙ w^(start) ∈ R^h and θ_k^(end) = ψ_k ⊙ w^(end) ∈ R^h (Eq. 2), where w = [w^(start); w^(end)] ∈ R^(h×2) are learnable parameters shared among all roles, and ⊙ denotes element-wise multiplication.",
"θ_k = [θ_k^(start); θ_k^(end)] is exactly the span selector for the k-th slot in the prompt.",
"With only one meta-head and simple operations, our method can generate an arbitrary number of role-specific span selectors to extract the related arguments from the context.",
"Recalling how the role feature ψ_k is generated from the prompt representation H_pt, both the interaction among different roles and the contextual information are captured by the span selectors. Table 1 (prompt variants introduced in Section 3.2): MA Template: 'Victor (and Victor) defeated in ConflictOrElection at Place (and Place)'; SF Prompt: '<Vic_left0> Victor <Vic_right0> (<Vic_left0> Victor <Vic_right0>) <Conf_left0> ConflictOrElection <Conf_right0> <Place_left0> Place <Place_right0> (<Place_left0> Place <Place_right0>)'; CA Template: 'Victor (Victor) ConflictOrElection Place (Place)'.",
"Given the context representation H_X and a set of span selectors {θ_k}, each θ_k aims to extract at most one corresponding argument span (s_k, e_k) from H_X.",
"For a θ_k relating to one argument a_k = X_{i:j}, where i and j are the start and end word indices in the context, the selector is expected to output (s_k, e_k) = (i, j) as its prediction.",
"For a θ_k relating to no argument (when the context has no argument for this role, or the number of slots for this role exceeds the number of arguments), it is expected to output (s_k, e_k) = (0, 0), representing an empty argument.",
"We first follow the extractive prompt formulation in Section 3.1 to calculate the distribution of each token being selected as the start/end of the argument for each role feature.",
"logit_k^(start) = θ_k^(start) · H_X ∈ R^L and logit_k^(end) = θ_k^(end) · H_X ∈ R^L (Eq. 3), where logit_k^(start) and logit_k^(end) represent the start and end position distributions over the context tokens for slot k, and L denotes the context length.",
"Then we calculate the probabilities of where the start/end positions are located: p_k^(start) = Softmax(logit_k^(start)) ∈ R^L and p_k^(end) = Softmax(logit_k^(end)) ∈ R^L (Eq. 4), and define the loss function as L_k(X) = −(log p_k^(start)(s_k) + log p_k^(end)(e_k)), L = Σ_{X∈D} Σ_k L_k(X) (Eq. 5), where D ranges over all contexts in the dataset and k ranges over all slots in the prompt for X.",
"Bipartite Matching We optionally introduce bipartite matching to deal with multiple arguments of the same role for finding the global-optimal assignments with the least-cost match.",
"Since we insert multiple slots about this role and each slot generates one prediction, it is a canonical bipartite matching problem that matches local-optimal predictions (of each slot) and ground truth as much as possible.",
"Following Carion et al. (2020); Yang et al. (2021), we use Hungarian algorithm (Kuhn, 1955) and leave the detail about it in Appendix A.4.",
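The least-cost matching between prediction slots and gold arguments described above can be sketched with a brute-force version of the assignment problem. The paper uses the Hungarian algorithm; this permutation search is an illustrative stand-in (adequate only because the number of slots per role is small), and the cost values below are hypothetical:

```python
from itertools import permutations

def least_cost_match(cost):
    """Match each prediction slot i to a distinct gold argument perm[i]
    so that the total cost is minimal. cost[i][j] would be, e.g., the
    negative log-likelihood of slot i producing gold span j."""
    n = len(cost)
    best_perm, best_cost = None, float("inf")
    for perm in permutations(range(n)):
        c = sum(cost[i][perm[i]] for i in range(n))
        if c < best_cost:
            best_perm, best_cost = perm, c
    return best_perm, best_cost

# Two slots, two gold arguments: slot 0 fits gold 1, slot 1 fits gold 0.
cost = [[5.0, 1.0],
        [1.0, 5.0]]
perm, total = least_cost_match(cost)    # -> (1, 0), total cost 2.0
```

Because the matching is permutation-invariant, the loss does not depend on the order in which slots of the same role emit their predictions, which is the property the ablation in the paper attributes the gain to.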
"For inference, we define the set of candidate spans for event arguments as C = {(i, j) | (i, j) ∈ L², 0 < j − i ≤ l} ∪ {(0, 0)}.",
"It contains all spans shorter than the threshold l and special span (0 , 0) indicating no arguments.",
"Our model extracts the argument for each span selector θ_k by enumerating and scoring all candidate spans as score_k(i, j) = logit_k^(start)(i) + logit_k^(end)(j) (Eq. 6), and the predicted span of slot k is given by (s_k, e_k) = argmax_{(i,j)∈C} score_k(i, j) (Eq. 7). Since at most one span is predicted by each slot in the prompt, this strategy avoids exhaustive threshold tuning.",
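The per-slot span selection of Eq. 6-7 can be sketched as follows. This is a NumPy illustration, not the authors' code: index 0 stands for the special empty span (0, 0), and the length threshold is an assumed value:

```python
import numpy as np

def select_span(logit_start, logit_end, max_len=8):
    # Score every candidate span (i, j) with i <= j and length under the
    # threshold, plus the special span (0, 0) meaning "no argument", and
    # return the argmax. Each slot thus predicts at most one span, so no
    # probability threshold needs tuning.
    L = len(logit_start)
    best, best_score = (0, 0), logit_start[0] + logit_end[0]
    for i in range(1, L):
        for j in range(i, min(i + max_len, L)):
            s = logit_start[i] + logit_end[j]
            if s > best_score:
                best, best_score = (i, j), s
    return best

# A slot whose selector strongly prefers start=4, end=6:
start = np.full(20, -5.0); start[4] = 3.0
end = np.full(20, -5.0); end[6] = 3.0
span = select_span(start, end)          # -> (4, 6)

# A slot with no argument: the empty-span position dominates.
start2 = np.full(20, -5.0); start2[0] = 4.0
end2 = np.full(20, -5.0); end2[0] = 4.0
span2 = select_span(start2, end2)       # -> (0, 0)
```

The empty span (0, 0) competes on the same score scale as real spans, which is how slots left over after all arguments are matched can decline to predict anything.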
"In this section, we explore the following questions:",
"Can PAIE better utilize PLMs for joint extraction to boost the performance of S-EAE and D-EAE?",
"How do different prompt training strategies affect the results?",
"How does PAIE perform in various practical settings, including efficiency and generalization to few-shot, long-distance, and multiple arguments?",
"Datasets We conduct experiments on three common datasets in Event Argument Extraction task: RAMS (Ebner et al., 2020), WIKIEVENTS (Li et al., 2021) and ACE05 (Doddington et al., 2004).",
"RAMS and WIKIEVENTS are latest document-level EAE benchmarks, while ACE05 is a classical dataset commonly used for sentence-level EAE task.",
"We leave the dataset details in Appendix A.1.",
"Evaluation Metric We adopt two evaluation metrics.",
"(1) Argument Identification F1 score (Arg-I): an event argument is correctly identified if its offsets and event type match those of any of the argument mentions.",
"(2) Argument Classification F1 score (Arg-C): an event argument is correctly classified if its role type is also correct.",
"For WIKIEVENTS dataset, we follow (Li et al., 2021) and additionally evaluate Argument Head F1 score (Head-C), which only concerns the matching of the headword of an argument.",
"Baselines We compare PAIE with several state-of-the-art models in three categories: (1) Multi-label classification model: ONEIE (Lin et al., 2020) (2) Generation model: BART-Gen (Li et al., 2021) (3) QA-based model: EEQA (Du and Cardie, 2020), DocMRC (Liu et al., 2021a) and FEAE (Wei et al., 2021).",
"For a fair comparison, we replace the PLMs used in the strongest baseline EEQA with BART, the same with PAIE, namely EEQA-BART .",
"More details of baselines are listed in Appendix A.2.",
"Table 2 compares our approach with all baselines.",
"We observe that PAIE performs best on all datasets.",
"For S-EAE, our base model achieves an absolute Arg-C improvement of 2.1% on ACE05.",
"For D-EAE, our base model obtains 2.1% and 6.3% Arg-C gains on RAMS and WIKIEVENTS, respectively.",
"Similarly, our large-version model achieves 3.5% and 2.9% gains.",
"This demonstrates a good generalization ability of our proposed method on dealing with varying lengths of context.",
"We also find that QA-based model sometimes performs well even in document-level EAE tasks.",
"The EEQA-BART model shows almost the same Arg-C with BART-Gen (Li et al., 2021) on RAMS dataset.",
"Other QA-based models (especially those considering interactions among arguments, like FEAE (Wei et al., 2021)) also have competitive performance.",
"As for WIKIEVENTS, however, QA-based models are inferior to sequential-generation models significantly.",
"We speculate that the performance of previous QA-based models is not robust to handle longer text.",
"Both BART-Gen (Li et al., 2021) and our model PAIE have a relatively stable performance on various document-level EAE datasets, but our model performs better, especially with smaller PLMs.",
"In this section, we investigate the effectiveness of our main components by removing each module in turn.",
"(1) Bipartite matching: we drop the bipartite matching loss and ignore the global-optimal span assignment.",
"(2) multi-arg prompt .",
"We additionally replace the prompt containing multiple roles with several single templates, each of which includes only one role.",
"(3) role-specific selector .",
"The selector is not role-specific anymore but is shared among all roles.",
"This variant degrades to EEQA-BART.",
"We summarize the results of ablation studies in Table 3. (1) EEQA-BART outperforms EEQA significantly, which demonstrates that even conventional QA-based methods have substantial space for improvement with a better PLM and span selection strategy.",
"(2) The role-specific selector further improves Arg-C scores in RAMS and WIKIEVENTS, while taking a slightly negative effect on ACE05.",
"Since the former two datasets are document-level and have more role types (65 in RAMS, 59 in WIKIEVENTS, and 36 in ACE05), we speculate that the role-specific selector plays a critical role in identifying and disambiguating roles with complicated ontology structures in long documents.",
"(3) Joint multi-argument prompt achieves consistent improvement on all three datasets.",
"It indicates that the joint prompt has the potential to capture implicit interaction among arguments.",
"(4) Bipartite matching loss brings an average improvement of 0.7% on the three benchmarks.",
"We conjecture this is due to the permutation-invariance property of bipartite matching and discuss it further in Appendix A.5.",
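As a sketch of why bipartite matching makes the loss permutation-invariant, the snippet below brute-forces the minimum-cost one-to-one assignment between predicted slots and gold arguments over a hypothetical cost matrix; practical implementations use the Hungarian algorithm rather than enumeration:

```python
from itertools import permutations

def min_cost_assignment(costs):
    # costs[i][j]: loss of matching predicted slot i to gold argument j.
    # Returns the minimum total cost and the optimal assignment, so the
    # training loss does not depend on the order of the predictions.
    n = len(costs)
    best_cost, best_perm = float("inf"), None
    for perm in permutations(range(n)):
        total = sum(costs[i][perm[i]] for i in range(n))
        if total < best_cost:
            best_cost, best_perm = total, perm
    return best_cost, best_perm

# Slot 0 matches gold argument 1 cheaply; slot 1 matches gold argument 0.
costs = [[0.9, 0.1],
         [0.2, 0.8]]
print(min_cost_assignment(costs))  # assignment (1, 0), total cost ~0.3
```

Because the same minimum is found whichever order the slots emit their predictions in, the model is not penalized for producing arguments in a different order than the annotation.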
"PAIE feeds the context into BART-Encoder and the prompt into BART-Decoder respectively.",
"A plausible and straightforward variant called PAIEE (PAIE-Encoder) concatenates context and prompt, then feeds them into the encoder directly.",
"We investigate the performance of PAIEE compared with PAIE in this section, as shown in Table 4. We can see that concatenating context and prompt slightly impairs the model performance.",
"It seemingly indicates that the over-interaction between context and prompt is not beneficial.",
"Table 4: Arg-C F1 of different PLMs. PAIEE: BE-b 65.9/46.3/62.9, BA-b 70.2/49.3/62.8, BA-l 72.3/51.7/65.1; PAIE: BA-b 69.8/49.5/63.4, BA-l 72.7/52.2/65.3 (columns: ACE05/RAMS/WIKI).",
"Furthermore, when concatenated with the document, the prompt takes up part of the encoder's limited input length that would otherwise be kept for the document.",
"The experiments support our strategy of feeding context and prompt to PAIE separately, without concatenation.",
"We investigate how different types of prompts affect the performance in this section, as shown in Fig. 3. We compare four different prompts: three joint prompts introduced in Section 3.2 and one single template containing only one role slot, i.e. the question template used in QA-based method.",
"We find that (1) All three joint prompts outperform the single template, which validates the effectiveness of the joint prompt.",
"(2) The manual template has the most stable performance and usually better results than the others.",
"(3) The soft prompt achieves results comparable with the manual template.",
"We find this observation encouraging because creating the manual template is laborious, while soft prompts largely avoid such a handcrafted process.",
"It also accords with current trends of creating distinct continuous prompts, which usually perform better than manual ones.",
"(4) Concatenation template performs worst among joint prompts.",
"We conjecture this is because such a prompt neither contains prior knowledge about role interaction (as the manual template does) nor learns such interaction during training (as the soft prompt does).",
"In D-EAE task, arguments could span multiple sentences.",
"Therefore, the model is required to capture long-range dependencies.",
"For better evaluating PAIE and comparing with others, we list their performance breakdown on different sentence distances between arguments and the given trigger word in Table 5. We can see that (1) PAIE significantly improves the ability to extract arguments with long distances, especially for those behind the trigger words (see columns with positive d values).",
"(2) The last two rows of the table indicate that joint prompts in PAIE leverage the implicit interaction among roles, and roles conditioning on each other lower the difficulty to extract long-distance arguments effectively.",
"Multiple arguments may share the same role in the same event.",
"In this section, we show that PAIE outperforms QA-based models in handling this case, in both efficiency and effectiveness.",
"Efficiency To handle this problem, QA-based methods usually adopt a thresholding strategy, which compares the score of each text span with a manually tuned threshold.",
"Figure 4: Arg-C F1 w.r.t. different thresholds for WIKIEVENTS.",
"We claim that finding a good threshold consumes considerable time and computational resources and usually ends with sub-optimal results.",
"We support this claim with a coarse grid search over the span threshold on the WIKIEVENTS dataset using the EEQA and EEQA-BART models, as shown in Fig. 4. The choice of threshold highly affects the performance of the model.",
"In addition, models with the same architecture but different PLMs have totally different optimal thresholds even on the same dataset, not to mention on distinct datasets.",
"PAIE requires no threshold tuning since each slot in the prompt only predicts at most one argument span and usually achieves much higher inference speed in practice.",
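The contrast between the two selection strategies can be sketched as follows; the candidate spans and scores are made up for illustration:

```python
def threshold_select(candidates, threshold):
    # QA-style strategy: keep every span whose score exceeds a manually
    # tuned threshold, so the threshold itself must be searched for.
    return [span for span, score in candidates if score > threshold]

def slot_select(candidates):
    # PAIE-style strategy: each prompt slot predicts at most one span
    # (the argmax), so no threshold needs to be tuned.
    return max(candidates, key=lambda c: c[1])[0]

# Hypothetical scored spans for one role slot.
candidates = [("the bomber", 0.71), ("downtown", 0.55), ("yesterday", 0.30)]
print(threshold_select(candidates, 0.5))  # ['the bomber', 'downtown']
print(slot_select(candidates))            # the bomber
```

The thresholded output changes with the chosen cutoff, which is exactly the sensitivity the grid search in Fig. 4 exposes; the slot-based argmax has no such free parameter.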
"Effectiveness We also compare the capability of PAIE and EEQA-BART in predicting multiple arguments with the same role on WIKIEVENTS, a dataset containing diverse multi-argument cases.",
"Table 6 shows that PAIE performs significantly better than EEQA-BART on such cases.",
"For roles with three and with four or more arguments, PAIE gains absolute Arg-C F1 improvements of 9.5% and 26.4%, respectively.",
"We analyze how PAIE performs under a scenario without sufficient annotations.",
"Fig. 5 shows the performance of PAIE and two other QA-based baselines with partial training samples on three benchmarks.",
"It demonstrates that (1) PAIE is superior to EEQA-BART and EEQA in almost all settings with different datasets and training data ratios.",
"(2) PAIE especially outperforms QA-based methods in document-level tasks (RAMS and WIKIEVENTS).",
"It achieves comparable F1 scores with EEQA-BART using only about 20% training samples and EEQA using about 10% samples.",
"(3) As the amount of training data decreases, the gains over the baselines become larger.",
"All observations above indicate that PAIE can better utilize PLMs for few-shot settings.",
"Most of the previous sections emphasize the superiority of PAIE from the perspective of accuracy.",
"In fact, PAIE also has much better extraction efficiency than other approaches.",
"In Table 7, we report the overall inference time for different models.",
"PAIE usually runs 3-4 times faster than EEQA, since it predicts multiple roles simultaneously, while EEQA predicts roles one by one.",
"Other QA-based models are likely to have speeds similar to EEQA due to their sequential prediction structure and training process.",
"Table 7: Inference time (seconds) for different models on the test sets of ACE05, RAMS, and WIKIEVENTS (columns: base/large per dataset). BART-Gen: 5.8/12.4, 33.2/54.8, 19.1/29.0; EEQA-BART: 11.8/36.0, 66.0/187.4, 30.9/83.8; PAIE: 2.9/8.4, 19.0/38.6, 8.4/18.3.",
"Also, as discussed in Section 6.2, PAIE is even more advantageous under practical application scenarios since it avoids the heavy threshold tuning.",
"We propose a novel model PAIE that effectively and efficiently extracts arguments at both sentence and document levels.",
"We define a new prompt tuning paradigm for extraction tasks which prompts multiple role knowledge from PLMs via role-specific selectors and joint prompts.",
"Extensive experiments on three standard benchmarks demonstrate our proposed model's effectiveness and generalization ability in both sentence-level and document-level EAE.",
"We have also conducted ablation studies on the main components, the extractive prompting strategy, and several real scenarios.",
"In the future, we are interested in investigating co-reference as an auxiliary task of EAE and introducing entity information to better determine argument boundaries.",
"This study is supported under the RIE2020 Industry Alignment Fund Industry Collaboration Projects (IAF-ICP) Funding Initiative, as well as cash and in-kind contribution from the industry partner(s).",
"We also thank the KULeuven C1 project Macchina for support."
] | [
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"result",
"abstain",
"objective",
"objective",
"method",
"objective",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"method",
"objective",
"other",
"other"
] |
[
"Maosong Sun , Ming Gu 1 School of Software, Tsinghua University, Beijing, China 2 Department of Computer Science and Technology, Tsinghua University, Beijing, China 3 Beijing National Research Center for Information Science and Technology 4 Institute for Artificial Intelligence, Tsinghua University, Beijing, China 5 Institute Guo Qiang, Tsinghua University, Beijing, China 6 International Innovation Center of Tsinghua University, Shanghai, China 7 Beijing Academy of Artificial Intelligence",
"Abstract Selecting an appropriate pre-trained model (PTM) for a specific downstream task typically requires significant efforts of fine-tuning.",
"To accelerate this process, researchers propose feature-based model selection (FMS) methods, which assess PTMs' transferability to a specific task in a fast way without fine-tuning.",
"In this work, we argue that current FMS methods are vulnerable, as the assessment mainly relies on the static features extracted from PTMs.",
"However, such features are derived without training PTMs on downstream tasks, and are not necessarily reliable indicators for the PTM's transferability.",
"To validate our viewpoints, we design two methods to evaluate the robustness of FMS: (1) model disguise attack, which post-trains an inferior PTM with a contrastive objective, and (2) evaluation data selection, which selects a subset of the data points for FMS evaluation based on K-means clustering.",
"Experimental results prove that both methods can successfully make FMS mistakenly judge the transferability of PTMs.",
"Moreover, we find that these two methods can further be combined with the backdoor attack to misguide the FMS to select poisoned models.",
"To the best of our knowledge, this is the first work to demonstrate the defects of current FMS algorithms and evaluate their potential security risks.",
"By identifying previously unseen risks of FMS, our study indicates new directions for improving the robustness of FMS.",
"processing (NLP) and computer vision (CV) (De-vlin et al., 2019; Raffel et al., 2020; Li et al., 2019; Zabir et al., 2018; Han et al., 2021).",
"The increasingly popular pre-train then fine-tune paradigm is typically implemented as a prescriptive three-stage routine: (1) PTM Supply Stage : upstream suppliers pre-train various kinds of PTMs, (2) PTM Selection Stage : downstream users select the desired PTM based on their own demands for a specific task, and (3) PTM Application Stage : downstream users conduct further fine-tuning on the given task.",
"During the PTM selection stage, the common practice is to fine-tune a set of candidate PTMs and pick up the model with the best performance.",
"Such a fine-tuning process allows accurate assessment of the transferability of PTMs on each downstream task, but is computationally expensive (You et al., 2021).",
"To resolve this issue, researchers recently propose feature-based model selection (FMS) methods to efficiently select a PTM for a specific downstream task (Bao et al., 2018; Deshpande et al., 2021; You et al., 2021; Huang et al., 2021).",
"Without training on downstream tasks, FMS first extracts static features of the target data using PTMs, and then resorts to the correlation between these features and the corresponding target labels as the main criterion to estimate PTMs' transferability.",
"Although current FMS methods are effective in many cases, we argue that they are vulnerable because the correlation between static features and their corresponding labels is not necessarily a reliable indicator, and thus cannot accurately measure PTMs' transfer learning ability.",
"To validate our viewpoints, we present two simple and effective methods, (1) model disguise attack (MDA) and (2) Backdoor Attack PTM Supply Stage MDA Target Data FMS PTM Selection Stage Low Score Low Score High Score Low Score High Score PTM Application Stage EDS FMS Figure 1: The overall framework of model disguise attack (MDA) and evaluation data selection (EDS).",
"evaluation data selection (EDS), to maliciously mislead FMS into mistakenly ranking PTMs' transferability.",
"Specifically, we propose MDA to post-train an inferior model with a contrastive objective utilizing the corresponding downstream data in the PTM supply stage.",
"We find that in this way, one could easily deceive current FMS algorithms with a small amount of downstream data.",
"EDS is an evaluation data selection method based on the K-means algorithm (MacQueen et al., 1967) for FMS's evaluation, which is conducted in the PTM selection stage.",
"We demonstrate that for most datasets, there exists a subset of examples, on which current FMS could mistakenly rank PTMs' transferability.",
"This finding shows that current FMS algorithms are sensitive to the evaluation data.",
"Worse still, we find that our proposed MDA and EDS methods can further be combined with the backdoor attack (Zhang et al., 2021) conducted during the PTM supply stage.",
"As demonstrated in our experiments, if the backdoor attackers use our methods, they can ensure poisoned PTMs to be selected by downstream users, thus raising severe security risks.",
"The overall framework of MDA and EDS is shown in Figure 1.",
"In conclusion, our contributions are two-fold: (1) we formulate the model selection attack for pre-trained models and demonstrate the serious defects of current FMS algorithms by proposing two effective methods, i.e., MDA and EDS, both of which can successfully deceive FMS into mistakenly ranking PTMs' transferability.",
"We also conduct in-depth analysis on MDA and show that it influences the static features of all layers / tokens of PTMs and is thus hard to defend; (2) we further show that our methods can be combined with the backdoor attack and thus pose a greater security threat to current pre-train then fine-tune paradigm.",
"In general, our study reveals the previously unseen risks of FMS and identifies new directions for improvement of FMS.",
"2 Related Work",
"Feature-based Model Selection.",
"Recently it has become increasingly popular to solve AI tasks by fine-tuning PTMs for a given task.",
"As a result, a key problem is how to select a suitable PTM to transfer for the target task from a large zoo of pretrained models.",
"Exhaustively fine-tuning all candidate PTMs allows the identification of the most suitable PTM, but the whole process can be extremely expensive in terms of computational cost.",
"Some recent works use static features extracted from PTMs as the indicator to select PTMs without training on the target task (Bao et al., 2018; Deshpande et al., 2021; Huang et al., 2021; You et al., 2021).",
"Deshpande et al. (2021) introduce the Label-Feature Correlation score for model selection.",
"Bao et al. (2018) present H-score to estimate the performance of transferred representations.",
"You et al. (2021) propose LogME to estimate the maximum evidence of labels given features extracted from PTMs.",
"Huang et al. (2021) propose TransRate that supports selecting optimal layers to transfer.",
"Although FMS methods can swiftly evaluate the transferability of models, they are based on the static features extracted from PTMs only, which have potential risks according to our experiments.",
"The codes are publicly available at https://github.com/thunlp/Model-Selection-Attack .",
"Backdoor Attack.",
"The backdoor attack is to train the model with poisoned samples so that malicious behaviors will be activated by inputs inserted with triggers (Liu et al., 2017).",
"The backdoor attacks can generally be classified into two categories.",
"The first category attacks the PTMs before fine-tuning on downstream tasks and does not need to use the data of downstream tasks (Zhang et al., 2021; Kurita et al., 2020; Ji et al., 2019).",
"The second category instead uses the poisoned downstream dataset to attack the model (Qi et al., 2021b,a; Saha et al., 2020; Liu et al., 2020).",
"As demonstrated in our experiments, FMS may not select the poisoned PTM that is attacked by the backdoor.",
"Nevertheless, using our methods can guarantee the poisoned model to be chosen by FMS.",
"In this section, we first briefly introduce how current feature-based model selection (FMS) methods evaluate PTMs' transfer abilities in Section 3.1.",
"Then we formulate the problem of model selection attack in Section 3.2, and elaborate two algorithms, i.e., MDA and EDS, in Sections 3.3 and 3.4, respectively.",
"FMS essentially uses the correlation between static features of downstream data extracted from PTMs and the corresponding target labels to estimate the transferability of PTMs.",
"Assume FMS is applied on a PTM $M$ for a specific downstream task $T_i$, with the corresponding dataset $\\mathcal{D}_i = \\{(x_k, y_k)\\}_{k=1}^{|\\mathcal{D}_i|}$.",
"FMS calculates a score $S_M^{\\mathcal{D}_i}$, which indicates the transferability of $M$ on $\\mathcal{D}_i$.",
"Specifically, FMS first passes the target inputs $X_i = \\{x_k\\}_{k=1}^{|\\mathcal{D}_i|}$ through the PTM $M$ to derive their features $F_M^{\\mathcal{D}_i} = \\{f_k\\}_{k=1}^{|\\mathcal{D}_i|}$.",
"Then FMS calculates the correlation between $F_M^{\\mathcal{D}_i}$ and the corresponding target labels $Y_i = \\{y_k\\}_{k=1}^{|\\mathcal{D}_i|}$ to obtain a final score, i.e., $S_M^{\\mathcal{D}_i} = f(F_M^{\\mathcal{D}_i}, Y_i)$, where $f$ is the metric function.",
"A higher value of SD i M indicates better transferability.",
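The shape of this computation can be sketched with a toy stand-in metric. Real FMS methods such as LogME estimate the maximum evidence of the labels given the features; here we use the R^2 of a linear probe on frozen features purely for illustration of the score = f(features, labels) pattern:

```python
import numpy as np

def fms_score(features, labels):
    # Stand-in FMS metric: fit a linear probe on the frozen features and
    # return its R^2 against the labels. LogME uses maximum evidence
    # instead, but the interface is the same: score = f(features, labels).
    X = np.hstack([features, np.ones((len(features), 1))])  # add bias column
    w, *_ = np.linalg.lstsq(X, labels, rcond=None)          # least-squares fit
    pred = X @ w
    ss_res = np.sum((labels - pred) ** 2)
    ss_tot = np.sum((labels - labels.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

rng = np.random.default_rng(0)
feats = rng.normal(size=(100, 8))
good_labels = feats[:, 0] * 2.0 + rng.normal(scale=0.1, size=100)  # nearly linear in features
rand_labels = rng.normal(size=100)                                  # unrelated to features

print(round(fms_score(feats, good_labels), 2))  # high: features predict labels
print(fms_score(feats, rand_labels) < 0.5)      # low: features uninformative
```

The attacks described next work precisely because such a static-feature score can be pushed up without improving, or even while harming, post-fine-tuning performance.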
"Although current FMS algorithms show promising results on efficiently judging PTMs' transferability, we argue that the correlation between static features and target labels may not be a reliable transferability metric, since it fails to consider the PTMs' learning dynamics during fine-tuning, which are far more important than the initial feature distribution.",
"Thus current FMS algorithms can be misleading.",
"In other words, even if a PTM exhibits poorer correlation before fine-tuning, it may still perform better after fine-tuning.",
"In the following sections, we employ two approaches, MDA ( 3.3) and EDS ( 3.4) to demonstrate our hypothesis.",
"Assume we have two PTMs $M_{inf}$ and $M_{sup}$.",
"$M_{inf}$ has poorer transferability than $M_{sup}$ on task $T_i$, which is correctly judged by an FMS algorithm, i.e., $S_{M_{inf}}^{\\mathcal{D}_i} < S_{M_{sup}}^{\\mathcal{D}_i}$.",
"Specifically, (1) MDA aims to post-train the inferior PTM $M_{inf}$ to deceive FMS so that during model selection, the disguised PTM $M_{inf}$, instead of the superior PTM $M_{sup}$, would be mistakenly chosen by FMS, i.e., $S_{M_{inf}}^{\\mathcal{D}_i} > S_{M_{sup}}^{\\mathcal{D}_i}$.",
"In the meantime, the disguised PTM $M_{inf}$ still performs worse than $M_{sup}$ after fine-tuning on the target dataset; (2) instead of training the PTM, EDS aims to choose a subset of examples $\\mathcal{D}_i^{sub}$ from $\\mathcal{D}_i$ based on K-means clustering, so that the correlation between static features and target labels for $M_{inf}$ on that subset is higher, i.e., $S_{M_{inf}}^{\\mathcal{D}_i^{sub}} > S_{M_{sup}}^{\\mathcal{D}_i^{sub}}$.",
"Since current FMS algorithms rely on the correlation between static features and the corresponding labels, we propose to leverage supervised contrastive loss (SCL) (Sedghamiz et al., 2021) to train M inf with target data to get a disguised M inf before the model selection stage, aiming to alter the initial feature distribution FD i M inf .",
"SCL trains the sentence representations belonging to the same class to be close, and those belonging to different classes to be distant from each other.",
"In this way, we can intentionally modify the initial feature distribution of PTMs according to the label information, thus the static features of a disguised inferior model M inf will exhibit superiority over M sup .",
"Specifically, given $N$ annotated samples in an input batch, i.e., $\\{x_k, y_k\\}_{k=1}^{N}$, each sample $x_k$ is forward propagated $K$ times using different random dropout masks, resulting in $KN$ sentence representations $\\{x_1, \\ldots, x_{KN}\\}$ in total.",
"Let $j$ be the index of all the encoded sentence representations in an input batch, where $j \\in I = \\{1, \\ldots, KN\\}$.",
"We optimize the following loss function: $\\mathcal{L} = -\\sum_{j=1}^{KN} \\frac{1}{|P(j)|} \\sum_{p \\in P(j)} \\log \\frac{e^{\\cos(x_j, x_p)/\\tau}}{\\sum_{b \\in B(j)} e^{\\cos(x_j, x_b)/\\tau}}$, where $B(j) = I \\setminus \\{j\\}$ is the set of indices except for $j$, $P(j) = \\{p \\in B(j) \\mid y_p = y_j\\}$ is the set of indices of all positives distinct from $j$, and $|\\cdot|$ stands for cardinality (Khosla et al., 2020).",
"$\\tau$ is a temperature scaling parameter.",
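A minimal NumPy sketch of this supervised contrastive objective, assuming the sentence representations have already been extracted (the actual attack trains the PTM end-to-end on this loss):

```python
import numpy as np

def supervised_contrastive_loss(reps, labels, tau=0.1):
    # reps: (N, d) sentence representations; labels: (N,) class ids.
    # Pulls same-class representations together and pushes different
    # classes apart, following the SupCon form (Khosla et al., 2020).
    reps = reps / np.linalg.norm(reps, axis=1, keepdims=True)
    sim = reps @ reps.T / tau  # cos(x_j, x_b) / tau for all pairs
    n = len(reps)
    loss = 0.0
    for j in range(n):
        others = [b for b in range(n) if b != j]             # B(j)
        pos = [p for p in others if labels[p] == labels[j]]  # P(j)
        if not pos:
            continue
        denom = np.sum(np.exp(sim[j, others]))
        loss += -np.mean([np.log(np.exp(sim[j, p]) / denom) for p in pos])
    return loss

# Toy check: representations clustered by label give a lower loss than
# representations whose labels cut across the clusters.
reps = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
tight = supervised_contrastive_loss(reps, np.array([0, 0, 1, 1]))
reps_mixed = np.array([[1.0, 0.0], [0.0, 1.0], [0.9, 0.1], [0.1, 0.9]])
mixed = supervised_contrastive_loss(reps_mixed, np.array([0, 0, 1, 1]))
print(tight < mixed)  # True
```

Minimizing this loss is exactly what reshapes the static feature distribution by label, which is why a disguised inferior model can score higher under FMS without transferring better.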
"By optimizing L , we manually alter the initial static feature distribution for the input examples.",
"However, the transferability of the disguised PTM M inf is still inferior to that of the superior model M sup , as demonstrated in our experiments.",
"As FMS relies on downstream target datasets for evaluation, we argue that FMS is susceptible to the evaluation data and there exists a subset of evaluation data points whose static features extracted by M inf have a closer relation with their target labels.",
"Thus M inf will be rated with a higher score by FMS on that special subset D sub i .",
"To select those data points favored by M inf , we first feed all target data points D i into the inferior PTM M inf and obtain the extracted features FD i M inf .",
"Then we use the K-means algorithm (MacQueen et al., 1967) to perform feature clustering and calculate the cluster centroids of the features FD i M inf , where the number of clusters is equal to the number of target classes.",
"We select D sub i based on the distances of data points' features to their corresponding cluster centroids.",
"Specifically, we select the data points whose features are closest to their corresponding cluster centroids, and keep only those whose cluster centroid agrees with their label, resulting in a subset D sub i .",
"The extracted features of data points with the same target label in D sub i by M inf are closer to each other.",
"Therefore, the correlation between these selected data points' features and the corresponding labels is higher.",
"And FMS will rate a higher score for M inf on D sub i , which even surpasses the score for M sup on D sub i .",
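The EDS selection procedure above can be sketched as follows; the toy features, labels, and deterministic farthest-point initialization are illustrative choices, not the paper's exact setup (which runs standard K-means on PTM-extracted features):

```python
import numpy as np

def eds_select(features, labels, n_clusters, k_per_cluster=2, iters=20):
    # Sketch of Evaluation Data Selection (EDS): cluster the inferior
    # model's features with K-means, keep the points nearest each cluster
    # centroid, and drop points whose cluster's dominant label differs
    # from their own. n_clusters is set to the number of target classes.
    idx = [0]  # farthest-point initialization, so the sketch is deterministic
    for _ in range(n_clusters - 1):
        d = np.min(np.linalg.norm(features[:, None] - features[idx][None], axis=2), axis=1)
        idx.append(int(d.argmax()))
    centroids = features[idx].astype(float)
    for _ in range(iters):  # Lloyd's algorithm
        dists = np.linalg.norm(features[:, None] - centroids[None], axis=2)
        assign = dists.argmin(axis=1)
        for c in range(n_clusters):
            if np.any(assign == c):
                centroids[c] = features[assign == c].mean(axis=0)
    dists = np.linalg.norm(features[:, None] - centroids[None], axis=2)
    assign = dists.argmin(axis=1)
    selected = []
    for c in range(n_clusters):
        members = np.where(assign == c)[0]
        if len(members) == 0:
            continue
        dominant = np.bincount(labels[members]).argmax()  # cluster's label proxy
        nearest = members[np.argsort(dists[members, c])][:k_per_cluster]
        selected.extend(int(i) for i in nearest if labels[i] == dominant)
    return sorted(selected)

# Two well-separated clusters; point 2 disagrees with its cluster's
# dominant label and is filtered out.
feats = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
                  [5.0, 5.0], [5.1, 5.0], [5.0, 5.1]])
labels = np.array([0, 0, 1, 1, 1, 1])
print(eds_select(feats, labels, n_clusters=2))  # [0, 1, 3, 4]
```

The surviving points are exactly those on which the inferior model's features look well-separated by label, which is why FMS rates it higher on the selected subset.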
"In this section, we first conduct experiments to demonstrate the effectiveness of our proposed model disguise attack and evaluation data selection in 4.1 and 4.2, respectively.",
"Then we combine both MDA and EDS with the backdoor attack in 4.3.",
"In addition, we demonstrate that our proposed methods can be widely applied to various kinds of PTMs and FMS algorithms in 4.4.",
"Experimental Setting.",
"We choose LogME (You et al., 2021) as the mainly evaluated FMS algorithm, which is applicable to vast transfer learning settings.",
"We choose BERTBASE (Devlin et al., 2019) / RoBERTa BASE (Liu et al., 2019) as the mainly evaluated inferior PTM ( M inf ) / superior PTM ( M sup ), respectively.",
"Seven downstream tasks from the GLUE benchmark (Wang et al., 2019) are selected to evaluate PTM's transferability, following (You et al., 2021).",
"We choose the pooler output representation of the [CLS] token 2 as the sentence representation.",
"Attack Performance of MDA.",
"The transferability scores estimated by LogME of the M inf and M sup on the training dataset are shown in Table 1.",
"It can be observed that under most situations, LogME serves as a good measure of the transferability by rating M sup with a higher score ( SD i M sup > SD i M inf ).",
"Assuming that we have access to all the labeled examples D i in the training dataset, we conduct MDA on a specific target downstream task for M inf .",
"We use D i to perform MDA on M inf and test the LogME scores of the disguised M inf .",
"Also, the fine-tuned performance of the downstream task (dev dataset) of the disguised inferior model M inf and the superior model M sup are reported.",
"The results are shown in Table 1, from which we can see that after MDA, the LogME score of the disguised inferior model SD i M inf is significantly increased, from an average of -0.5569 to 0.5474, exceeding that of the superior model SD i M sup ( SD i M inf > SD i M sup ).",
"For RoBERTa, the BOS token is <s> .",
"However, the downstream performance of M sup is higher than that of the disguised inferior model M inf ( PM sup > PM inf ).",
"This suggests that our MDA method can successfully deceive LogME into selecting an inferior PTM, which has poorer transferability performance.",
"It also casts doubts on the hypothesis of FMS that static features could serve as a reliable indicator for transferability measurement.",
"The influences of MDA on the static features are visualized in Appendix D.",
"Amount of Auxiliary Data.",
"In real-world scenarios, the attacker may not have access to enough target data; we thus test whether our MDA method could still be effective with little auxiliary data.",
"We experiment on SST-2, MRPC and CoLA, randomly sampling only 25, 50, 100, and 250 examples for each category in a task to construct subsets of the original training dataset, and then perform MDA for each task.",
"Our sampled data used for MDA only takes up a small amount of the original training dataset (e.g., less than 1% for SST-2).",
"After applying MDA, we evaluate the LogME score of the disguised inferior model.",
"The experimental results are shown in Figure 2, from which we can see that for all tasks, after the attacker conducts MDA with only 50 samples for each category, the LogME score of the disguised inferior model exceeds that of the superior model, demonstrating that the static features of PTMs of a target task could be easily changed with limited supervision.",
"The attacker could successfully attack LogME by only gathering a very small amount of samples.",
"Time Cost for MDA.",
"We also evaluate the time costs of performing MDA on the inferior PTM.",
"Specifically, we evaluate the attack efficiency of MDA using 50 samples per class for SST-2, MRPC and CoLA, respectively.",
"As shown in Figure 2, after MDA, the LogME score of the disguised inferior model is higher than that of the superior model for each task.",
"We find that for every task, the execution of MDA can be finished in around 1 minute using a single RTX2080 GPU, demonstrating the high efficiency of MDA.",
"Hybrid-task MDA.",
"In addition to the amounts of data and time required for MDA, we study another situation where the model selection is conducted based on the LogME scores on multiple tasks, instead of on one specific task.",
"Thus we design experiments to investigate whether MDA could be simultaneously applied on various tasks, dubbed hybrid-task MDA.",
"Figure 2: LogME scores of various models after performing MDA on M inf with different amounts of data.",
"We perform experiments on hybrid-task MDA with three different amounts of mixed training data.",
"From the results in Table 2, we can see that with 500 samples per class from QQP and 250 samples per class from the remaining six GLUE tasks as the mixed training data, the attacker can deceive FMS into selecting the disguised M inf regardless of which downstream task M inf is evaluated on (i.e., SD i M inf > SD i M sup for all tasks).",
"By jointly attacking all the tasks with limited supervision, the attacker can successfully deceive the LogME algorithm on multiple tasks.",
"Transferability of MDA.",
"Taking a step further, we test a more difficult situation where the attacker has no access to the specific downstream dataset to be evaluated.",
"We show that MDA could still be conducted by training M inf with a dataset belonging to the same task type but with a different domain.",
"This is based on the hypothesis that MDA could be transferred among similar tasks.",
"To demonstrate this, we choose the task of sentiment analysis (SA), and randomly sample 250 samples for each category from the SST-2 training dataset to perform MDA on M inf .",
"After that, we test the LogME scores of the disguised model M inf on other SA datasets, i.e., IMDB (Maas et al., 2011), Amazon polarity (McAuley and Leskovec, 2013), Yelp polarity (Zhang et al., 2015) and Rotten tomatoes (Pang and Lee, 2005).",
"The results are shown in Table 3, from which we observe that even if MDA is performed using a small amount of samples from the SST-2 dataset, the disguised M inf will be chosen by FMS ( SD i M inf > SD i M sup ) when evaluated on other SA downstream tasks.",
"Also, only using a small amount of SST-2 data to perform MDA can ensure that the disguised M inf still performs worse than M sup after fine-tuning.",
"Table 2: Comparisons of LogME scores of different models after performing hybrid-task MDA on M inf with different amounts of data (columns: SD i M inf , SD i M sup , SD i M inf with 50/100/250 samples). SST-2: -0.3489, -0.3186, -0.2895, -0.2214, -0.2032; MRPC: -0.5864, -0.5789, -0.5497, -0.5149, -0.4580; MNLI: -0.6035, -0.5700, -0.5519, -0.5280, -0.4864; CoLA: -0.5464, -0.5035, -0.5093, -0.5162, -0.4630; QNLI: -0.5858, -0.5706, -0.5188, -0.4827, -0.4638; QQP: -0.5181, -0.4584, -0.4452, -0.4382, -0.4353; RTE: -0.7093, -0.7111, -0.7013, -0.6590, -0.5692; Average: -0.5569, -0.5302, -0.5094, -0.4801, -0.4398.",
"The experimental results show excellent transferability of MDA across similar tasks.",
"In this section, we experiment with our proposed EDS method and follow most of the experimental settings in 4.1.",
"We perform experiments on six GLUE tasks.",
"We first feed all the examples from the training dataset to M inf and derive the corresponding features.",
"Then we use the K-means algorithm on the extracted features and select the data points whose features are close to the cluster centroids.",
"We filter out samples that are close to the same cluster centroid but with different labels.",
"Then we test the LogME score on each selected subset in Table 4, which shows that our proposed EDS method successfully selects those data points that the inferior model favors so that its LogME score SD subi M inf is higher than SD subi M sup on the selected subset D subi .",
"Although EDS is hard to be deployed in practice since it requires the attacker to manipulate the data for FMS's evaluation, we argue that Dataset SST-2 QNLI QQP MRPC CoLA MNLISD subi M inf 1.784 6.311 9.969 1.030 0.698 1.740 SD subi M sup 0.015 4.080 4.884 0.523 -0.62 -0.48 Table 4: The LogME scores of M inf and M sup on the subsets D subi selected by EDS.",
"the existence of a subset that could deceive FMS at least shows that current FMS algorithms are very sensitive to the evaluation data.",
"In this section, we further combine both MDA and EDS with the backdoor attack, namely NeuBA (Zhang et al., 2021).",
"NeuBA is conducted during the pre-training stage, and does not require the specific data of the downstream task.",
"Combinations with MDA.",
"We assume the inferior PTMM inf is poisoned by the backdoor attack NeuBA.",
"For the inferior PTMM nb that has been poisoned by NeuBA, we randomly sample a few samples from SST-2 (Socher et al., 2013) and OLID (Zampieri et al., 2019) datasets to perform the hybrid-task MDA to derive the disguised model M nb .",
"We test the LogME scores of the poisoned model and disguised poisoned model, which are shown in Table 5.",
"From the results, we can find that the inferior PTM poisoned by the backdoor attack may Dataset ASRM inf 1 ASRM inf 0 PM inf PM nb ASRM nb 1 ASRM nb 0 PM nb PM sup SST-2 6.58% 7.26% 93.79% 93.57% 100.00% 24.75% 93.68% 95.28% OLID 5.81% 37.5% 80.83% 80.53% 87.90% 60.83% 80.27% 82.00% Table 6: The ASR of the fine-tuned M inf and M nb .",
"not be chosen by FMS ( SD i M nb < SD i M sup ), so its hazards may be limited.",
"However, after our MDA, SD i M nb > SD i M sup and thus the disguised poisoned model will be chosen by FMS.",
"We also perform experiments to see whether the backdoor still exists after MDA.",
"Specifically, if the user fine-tunes the M nb using the downstream clean datasets, we then test the Attack Success Rate (ASR), following (Zhang et al., 2021).",
"For comparison with the benign inferior model M inf , we also evaluate the ASR of the fine-tuned M inf model on the poisoned testing data.",
"For SST-2, the ASR 0 and ASR 1 represent the ASR neg and ASR pos , respectively.",
"For OLID, the ASR 0 and ASR 1 represent the ASR no and ASR yes , respectively.",
"The ASR 0 for the benign model in Table 6 is the highest ASR 0 among all triggers.",
"The ASR 1 in Table 6 for the benign model is the highest ASR 1 among all triggers.",
"From the results in Table 6, we can see that the ASR of the fine-tuned M nb is higher compared with that of the fine-tuned M inf .",
"The above results show the potential risk that the attacker can use the MDA method to let the FMS select an inferior model poisoned by the backdoor attack.",
"Combinations with EDS.",
"We also explore combining the backdoor attack (NeuBA) with EDS on SST-2 and OLID.",
"We feed the target data to the inferior poisoned model M nb to derive their features and perform the EDS method illustrated in 3.4.",
"The results are shown in Table",
"7. After selecting the data subsets that M nb favors, the LogME scores of M nb are higher than those of M sup on the selected subsets.",
"From the results, we can find that EDS is an effective method to make FMS choose an inferior poisoned model attacked by NeuBA.",
"We verify that MDA is model-agnostic, and can be applied to other FMS algorithms.",
"For CV tasks, we choose MobileNetV2 (Sandler et al., 2018) as the inferior model and ResNet50 (He et al., 2016) as the superior model.",
"We choose H-score (Bao Dataset SST-2 OLIDSD subi M nb 2.373 1.799 SD subi M sup 0.4865 -0.0902 Table 7: The LogME scores of M nb and M sup on the subsets D subi selected by EDS. et al., 2018) and LogME (You et al., 2021) as the evaluated FMS algorithms.",
"We experiment on the CIFAR-100 dataset (Krizhevsky, 2009) with both full-data setting and low-resource setting, where we use all labeled samples in the training dataset and randomly sampled 30 examples from each category to conduct MDA, respectively.",
"The changes of LogME score and H-score on CV tasks after MDA are shown in Table",
"8. Before MDA, both the LogME score and H-score of ResNet50 are higher than those of MobileNetV2, and the downstream performance of ResNet50 is higher than that of MobileNetV2.",
"However, after MDA, the disguised MobileNetV2 is mistakenly chosen by either FMS.",
"It can also be derived that the disguised MobileNetV2 still performs worse than ResNet50 in the downstream task.",
"For NLP tasks, we choose DistilBERT BASE (Sanh et al., 2019) as the inferior model and RoBERTa BASE as the superior model.",
"We experiment on MRPC and CoLA tasks.",
"We use all labeled data in the training dataset to perform MDA and derive the disguised model DistilBERT BASE .",
"From the results in Table 9, we can see that after MDA, SD i DistilBERT is higher than SD i RoBERTa while the fine-tuned performance of DistilBERT BASE is poorer than that of RoBERTa BASE .",
"The disguised inferior model is chosen.",
"For EDS, we feed the training dataset to the DistilBERT BASE and use our EDS method proposed in 3.4 to select the subset D sub i .",
"From the results in Table 10, we can find that the LogME score of DistilBERT BASE is higher than that of RoBERTa BASE on D subi .",
"The results show that our proposed methods can be applied to other PTMs and FMS algorithms.",
"Our MDA is applied on the hidden representation of one specific layer (e.g., the pooler output layer) for a specific token (e.g., [CLS] ), which is exactly the same representation that is evaluated in FMS.",
"In practical applications, it may occur that the downstream user applies FMS on the representations of other tokens / layers.",
"We thus design experiments to see whether our MDA could still successfully deceive FMS under these circumstances.",
"Obs.",
"1: MDA could infect other layers.",
"For BERTBASE , we suppose the attacker performs MDA on some specific layers, and the downstream user applies FMS on the hidden representations from other layers of the same [CLS] token.",
"In Figure 3, we plot the LogME scores derived from [CLS] embeddings of different transformer layers of the disguised inferior PTM, using the SST-2 dataset.",
"Specifically, we experiment on performing MDA on (1) the pooler output, (2) the [CLS] representation of the 5 -th layer and (3) the [CLS] representations of the 5 -th, 8 -th, and 11 -th layers.",
"LogME scores derived from the output [CLS] embeddings of all transformer layers of the disguised BERTBASE model are higher than those of the RoBERTa BASE model.",
"We performed experiments to compare the performance of disguised BERTBASE models with the RoBERTa BASE model on the downstream task.",
"The fine-tuned accuracy on the dev dataset of the models disguised by different training strategies (1), (2) and (3) are 92 .",
"78% , 89 .",
"79% and 90 .",
"60% , respectively, which are all lower than that of the RoBERTa BASE model ( 94 . 50% ).",
"From the above results, we can see that no matter the downstream user applies FMS on which layer, the disguised inferior model will be chosen under three settings.",
"Obs.",
"2: MDA could infect other tokens.",
"Our MDA is applied on the representation of a single token [CLS] , we investigate whether such an attack is contagious to other tokens.",
"Specifically, we apply our MDA on the [CLS] token of BERTBASE using all samples from SST-2 and then evaluate the [SEP] token 3 during FMS.",
"From the results shown in Table 11, we find that even if we perform MDA on the pooler output corresponding to the [CLS] token, the feature of [SEP] token is still affected, which means that MDA could infect other tokens.",
"From these two observations, we can find that only using static features of different layers / tokens can not defend our proposed MDA method.",
"We leave observations for EDS in appendix B and alternative model selection method that can defend MDA in appendix C .",
"In this paper, we demonstrate the vulnerability of feature-based model selection methods by proposing two methods, model disguise attack and evaluation data selection, both of which successfully deceive FMS into mistakenly ranking PTMs' transferability.",
"Moreover, we find that our proposed methods can further be combined with the backdoor attack to mislead a victim into selecting the poisoned model.",
"To the best of our knowledge, this is the first work to analyze the defects of current FMS algorithms and evaluate their potential security risks.",
"Our study reveals the previously unseen risks of FMS and calls for improvement for the robustness of FMS.",
"In the future, we will explore more effective, robust and efficient model selection methods.",
"This work is supported by the National Key R&D Program of China (No. 2020AAA0106502), Institute Guo Qiang at Tsinghua University, Beijing Academy of Artificial Intelligence (BAAI), and International Innovation Center of Tsinghua University, Shanghai, China.",
"Biru Zhu and Yujia Qin designed the methods and the experiments.",
"Biru Zhu conducted the experiments.",
"Biru Zhu and Yujia Qin wrote the paper.",
"Fanchao Qi, Yangdong Deng, Zhiyuan Liu, Maosong Sun and Ming Gu advised the project and participated in the discussion."
] | [
"other",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"abstain",
"result",
"objective",
"objective",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"objective",
"result",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"result",
"objective",
"other",
"other",
"other",
"other",
"other"
] |
[
"With the need of fast retrieval speed and small memory footprint, document hashing has been playing a crucial role in large-scale information retrieval.",
"To generate high-quality hashing code, both semantics and neighborhood information are crucial.",
"However, most existing methods leverage only one of them or simply combine them via some intuitive criteria, lacking a theoretical principle to guide the integration process.",
"In this paper, we encode the neighborhood information with a graph-induced Gaussian distribution, and propose to integrate the two types of information with a graph-driven generative model.",
"To deal with the complicated correlations among documents, we further propose a tree-structured approximation method for learning.",
"Under the approximation, we prove that the training objective can be decomposed into terms involving only singleton or pairwise documents, enabling the model to be trained as efficiently as uncorrelated ones.",
"Extensive experimental results on three benchmark datasets show that our method achieves superior performance over state-of-the-art methods, demonstrating the effectiveness of the proposed model for simultaneously preserving semantic and neighborhood information.",
"1 1 Introduction Similarity search plays a pivotal role in a variety of tasks, such as image retrieval (Jing and Baluja, 2008; Zhang et al., 2018), plagiarism detection (Stein et al., 2007) and recommendation systems (Koren, 2008).",
"If the search is carried out in the original continuous feature space directly, the requirements of computation and storage would be Corresponding author.",
"Qinliang Su is also affiliated with",
"(i) Guangdong Key Lab.",
"of Big Data Analysis and Processing, Guangzhou, China, and",
"(ii) Key Lab.",
"of Machine Intelligence and Advanced Computing, Ministry of Education, China.",
"1 Our code is available at https://github.com/J-zin/SNUH.",
"extremely high, especially for large-scale applications.",
"Semantic hashing (Salakhutdinov and Hinton, 2009b) sidesteps this problem by learning a compact binary code for every item such that similar items can be efficiently found according to the Hamming distance of binary codes.",
"Unsupervised semantic hashing aims to learn for each item a binary code that can preserve the semantic similarity information of original items, without the supervision of any labels.",
"Motivated by the success of deep generative models (Salakhutdi-nov and Hinton, 2009a; Kingma and Welling, 2013; Rezende et al., 2014) in unsupervised representation learning, many recent methods approach this problem from the perspective of deep generative models, leading to state-of-the-art performance on benchmark datasets.",
"Specifically, these methods train a deep generative model to model the underlying documents and then use the trained generative model to extract continuous or binary representations from the original documents (Chaidaroon and Fang, 2017; Shen et al., 2018; Dong et al., 2019; Zheng et al., 2020).",
"The basic principle behind these generative hashing methods is to have the hash codes retaining as much semantics information of original documents as possible so that semantically similar documents are more likely to yield similar codes.",
"In addition to semantics information, it is widely observed that neighborhood information among the documents is also useful to generate high-quality hash codes.",
"By constructing an adjacency matrix from the raw features of documents, neighbor-based methods seek to preserve the information in the constructed adjacency matrix, such as the locality-preserving hashing (He et al., 2004; Zhao et al., 2014), spectral hashing (Weiss et al., 2009; Li et al., 2012), and etc.",
"However, since the ground-truth neighborhood information is not available and the constructed one is neither accurate nor complete, neighbor-based methods alone do not perform as well as the semantics-based ones.",
"Despite both semantics and neighborhood information are derived from the original documents, different aspects are emphasized in them.",
"Thus, to obtain higher-quality hash codes, it has been proposed to incorporate the constructed neighborhood information into semantics-based methods.",
"For examples, Chaidaroon et al. (2018) and Hansen et al. (2020) require the hash codes can reconstruct neighboring documents, in addition to the original input.",
"Other works (Shen et al., 2019; Hansen et al., 2019) use an extra loss term, derived from the approximate neighborhood information, to encourage similar documents to produce similar codes.",
"However, all of the aforementioned methods exploit the neighborhood information by using it to design different kinds of regularizers to the original semantics-based models, lacking a basic principle to unify and leverage them under one framework.",
"To fully exploit the two types of information, in this paper, we propose a hashing method that uni-fies the semantics and neighborhood information with the graph-driven generative models .",
"Specifically, we first encode the neighborhood information with a multivariate Gaussian distribution.",
"With this Gaussian distribution as a prior in a generative model, the neighborhood information can be naturally incorporated into the semantics-based hashing model.",
"Despite the simplicity of the modeling, the correlation introduced by the neighbor-encoded prior poses a significant challenge to the training since it invalidates the widely used identical-and-independent-distributed ( i.i.d. ) assumption, making all documents correlated.",
"To address this issue, we propose to use a tree-structured distribution to capture as much as possible the neighborhood information.",
"We prove that under the tree approximation, the evidence lower bound (ELBO) can be decomposed into terms involving only singleton and pairwise documents, enabling the model to be trained as efficiently as the models without considering the document correlations.",
"To capture more neighborhood information, a more accurate approximation by using multiple trees is also developed.",
"Extensive experimental results on three public datasets demonstrate that the proposed method can outperform state-of-the-art methods, indicating the effectiveness of the proposed framework in unifying the semantic and neighborhood information for document hashing.",
"Semantics-Based Hashing Due to the similarities among the underlying ideas of these methods, we take the variational deep semantic hashing (VDSH) (Chaidaroon and Fang, 2017) as an example to illustrate their working flow.",
"Given a document x (cid:44) { w j } | x | j =1 , VDSH proposes to model a document by a generative model as p ( x , z ) = p ( x | z ) p ( z ) , (1) where p ( z ) is the prior distribution and is chosen to be the standard Gaussian distribution N ( z ; 0 , I d ) , with I d denoting the d -dimensional identity matrix; and p ( x | z ) is defined to be p ( x | z ) = (cid:89) w i x p ( w i | z ) (2) with p ( w i | z ) (cid:44) exp ( z T Ew i + b i ) (cid:80) | V | j =1 exp ( z T Ew j + b j ) , (3) in which w j denotes the | V | -dimensional one-hot representation of the j -th word, with | x | and | V | denoting the document and vocabulary size, respectively; and E R d | V | represents the learnable embedding matrix.",
"For a corpus containing N documents X = { x 1 , x 2 , , x N } , due to the i.i.d. assumption for documents, it is modelled by simply multiplying individual document models as p ( X , Z ) = N (cid:89) k =1 p ( x k | z k ) p ( z k ) , (4) where Z (cid:44) [ z 1 ; z 2 ; ; z N ] denotes a long vector obtained by concatenating the individual vectors z i .",
"The model is trained by optimizing the evidence lower bound (ELBO) of the log-likelihood function log p ( X ) .",
"After training, outputs from the trained encoder are used as documents' representations, from which binary hash codes can be obtained by thresholding the real-valued representations.",
"Neighborhood Information The ground-truth semantic similarity information is not available for the unsupervised hashing task in practice.",
"To leverage this information, an affinity N N matrix A is generally constructed from the raw features ( e.g. , the TFIDF) of original documents.",
"For instances, we can construct the matrix as a ij = e || x i x j || 2 , x i N k ( x j ) 0 , otherwise (5) where a ij denotes the ( i, j ) -th element of A ; and N k ( x ) denotes the k -nearest neighbors of document x .",
"Given the affinity matrix A , some methods have been proposed to incorporate the neighborhood information into the semantics-based hashing models.",
"However, as discussed above, these methods generally leverage the information based on some intuitive criteria, lacking theoretical supports behind them.",
"In this section, we present a more effective framework to unify the semantic and neighborhood information for the task of document hashing.",
"To introduce the neighborhood information into the",
"semantics-based hashing models, we first rewrite the VDSH model into a compact form as p ( X , Z ) = p ( X | Z ) p I ( Z ) , (6) where p ( X | Z ) = (cid:81) Nk =1 p ( x k | z k ) ; and the prior p I ( Z ) = (cid:81) Nk =1 p ( z k ) , which can be shown to be p I ( Z ) = N ( Z ; 0 , IN I d ) .",
"(7) Here, denotes the Kronecker product and the subscript I indicates independence among z k .",
"The ELBO of this model can be expressed as L = E q ( Z | X ) [log p ( X | Z )] (cid:124) (cid:123)(cid:122) (cid:125) L 1 KL ( q ( Z | X ) || p I ( Z )) (cid:124) (cid:123)(cid:122) (cid:125) L 2 where KL ( ) denotes the Kullback-Leibler (KL) divergence.",
"By restricting the posterior to independent Gaussian form q ( Z | X ) = N (cid:89) k =1 N (cid:0) z k ; k , diag ( 2 k ) (cid:1) (cid:124) (cid:123)(cid:122) (cid:125) q ( z k | x k ) , (8) the L 1 can be handled using the reparameteriza-tion trick.",
"Thanks to the factorized forms assumed in q ( Z | X ) and p I ( Z ) , the L 2 term can also be expressed analytically and evaluated efficiently.",
"Given an affinity matrix A , the covariance matrix IN + A can be used to reveal the neighborhood information of documents, where the hyperparameter [0 , 1) is used to control the overall correlation",
"strength. If two documents are neighboring, then the corresponding correlation value in IN + A will be large; otherwise, the value will be zero. To have the neighborhood information reflected in document representations, we can require that the representations z i are drawn from a Gaussian distribution of the form",
"where the subscript G denotes that the distribution is constructed from a neighborhood graph. To see why the representations Z p G ( Z ) have already reflected the neighborhood information, let us consider an example with three documents { x 1 , x 2 , x 3 } , in which x 1 is connected to x 2 , x 2 is connected to x 3 , and no connection exists between x 1 and x 3 . Under the case that z i is a two-dimensional vector z i R 2 , we have the concatenated representations [ z 1 ; z 2 ; z 3 ] follow a Gaussian distribution with covariance matrix of z 1 z 2 z 3",
"From the property of Gaussian distribution, it can be known that z 1 is strongly correlated with z 2 on the corresponding elements, but not with z 3 .",
"This suggests that z 1 should be similar to z 2 , but different from z 3 , which is consistent with the neighborhood relation that x 1 is a neighbor of x 2 , but not of x 3 .",
"Now that the neighborhood information can be modeled by requiring Z being drawn from p G ( Z ) , and the semantic information can be reflected in the likelihood function p ( X | Z ) .",
"The two types of information can be taken into account simultaneously by modeling the corpus as p ( X , Z ) = p ( X | Z ) p G ( Z ) .",
"Comparing to the VDSH model in (6), it can be seen that the only difference lies in the employed priors.",
"Here, a neighborhood-preserving prior p G ( Z ) is employed, while in VDSH, an independent prior p I ( Z ) is used.",
"Although only a modification to the prior is made from the perspective of modeling, significant challenges are posed for the training.",
"Specifically, by replacing p I ( Z ) with p G ( Z ) in the L 2 of L , it can be shown that the expression of L 2 involves the matrix (cid:0) ( IN + A ) I d (cid:1) 1 .",
"Due to the introduced dependence among documents, for example, if the corpus contains over 100,000 documents and the representation dimension is set to 100, the L 2 involves the inverse of matrices with dimension as high as 10 7 , which is computationally prohibitive in practice.",
"Although the prior p G ( Z ) captures the full neighborhood information, its induced model is not practically trainable.",
"In this section, to facilitate the training, we first propose to use a tree-structured prior to partially capture the neighborhood information, and then extend it to multiple-tree case for more accurate modeling.",
"The matrix A represents a graph G (cid:44) ( V , E ) , where V = { 1 , 2 , , N } is the set of document indices; and E = { ( i, j ) | a ij (cid:54) = 0 } is the set of connections between documents.",
"From the graph G , a spanning tree T = ( V , ET ) can be obtained easily, where ET denotes the set of connections on the tree.",
"2 Based on the spanning tree, we construct a new distribution as p T ( Z ) = (cid:89) i V p G ( z i ) (cid:89) ( i,j ) E T p G ( z i , z j ) p G ( z i ) p G ( z j ) , (11) where p G ( z i ) and p G ( z i , z j ) represent oneand two-variable marginal distributions of p G ( Z ) , respectively.",
"From the properties of Gaussian distribution, it is known that p G ( z i )= N ( z i ; 0 , I d ) , p G ( z i , z j )= N ([ z i ; z j ]; 0 , ( I 2 + A ij ) I d ) , (12) where A ij (cid:44) (cid:20) 0 a ij a ji 0 (cid:21) .",
"Because p T ( Z ) is defined on a tree, as proved in (Wainwright and Jordan, 2008), it is guaranteed to be a valid probability distribution, and more importantly, it satisfies the following two relations:",
"i) p T ( z i ) = p G ( z i ) ;",
"ii) p T ( z i , z j ) = p G ( z i , z j ) for any ( i, j ) ET , where p T ( z i ) and p T ( z i , z j ) denote the marginal distributions of p T ( Z ) .",
"That is, the tree-structured 2 We assume the graph is connected.",
"distribution p T ( Z ) captures the neighborhood information reflected on the spanning tree T .",
"By using p T ( Z ) to replace p I ( Z ) of L 2 , it can be shown that L 2 can be expressed as the summation of terms involving only one or two variables, which can be handled easily.",
"Due to the limitation of space, the concrete expression for the lower bound is given in the Supplementary Material.",
"The posterior distribution q ( Z | X ) in the previous section is assumed to be in independent form, as the form shown in (8).",
"But since a prior p T ( Z ) considering the correlations among documents is used, assuming an independent posterior is not appropriate.",
"Hence, we follow the tree-structured prior and also construct a tree-structured posterior q T ( Z | X )= (cid:89) i V q ( z i | x i ) (cid:89) ( i,j ) E T q ( z i , z j | x i , x j ) q ( z i | x i ) q ( z j | x j ) , where q ( z i | x i ) is the same as that in (8); and q ( z i , z j | x i , x j ) is also defined to be Gaussian, with its mean defined as [ i ; j ] and covariance matrix defined as (cid:20) diag ( 2 i ) diag ( ij (cid:12) i (cid:12) j ) diag ( ij (cid:12) i (cid:12) j ) diag ( 2 j ) (cid:21) , (13) in which ij R d controls the correlation strength between z i and z j , whose elements are restricted in ( 1 , 1) and (cid:12) denotes the Hadamard product.",
"By taking the correlated posterior q T ( Z | X ) into the ELBO, we obtain LT = (cid:88) i V E q [log p ( x i | z i )] KL ( q ( z i ) || p G ( z i )) (cid:88) ( i,j ) E T (cid:16) KL ( q ( z i , z j | x i , x j ) || p G ( z i , z j )) KL ( q ( z i ) || p G ( z i )) KL ( q ( z j ) || p G ( z j )) (cid:17) , where we briefly denote the variational distribution q ( z i | x i ) as q ( z i ) .",
"Since p G ( z i ) , p G ( z i , z j ) , q ( z i | x i ) and q ( z i , z j | x i , x j ) are all Gaussian distributions, the KL-divergence terms above can be derived in closed-form.",
"Moreover, it can be seen that LT involves only single or pairwise variables, thus optimizing it is as efficient as the models without considering document correlation.",
"With the trained model, hash codes can be obtained by binarizing the posterior mean i with a threshold, as done in (Chaidaroon and Fang, 2017).",
"However, if without any constraint, the range of mean lies in ( , + ) .",
"Thus, if we binarize it directly, lots of information in the original representations will be lost.",
"To alleviate this problem, in our implementation, we parameterize the posterior mean i by a function of the form i = sigmoid ( nn ( x i ) / ) , where the outermost sigmoid function forces the mean to look like binary value and thus can effectively reduce the quantization loss, with nn ( ) denoting a neural network function and controlling the slope of the sigmoid function.",
"Obviously, approximating the graph with a spanning tree may lose too much information.",
"To alleviate this issue, we propose to capture the similarity information by a mixture of multiple distributions, with each built on a spanning tree.",
"Specifi-cally, we first construct a set of M spanning trees TG = { T 1 , T 2 , , TM } from the original graph G .",
"Based on the set of spanning trees, a mixture-distribution prior and posterior can be constructed as p MT ( Z ) = 1 M (cid:88) T T G p T ( Z ) , (14) q MT ( Z | X ) = 1 M (cid:88) T T G q T ( Z | X ) , (15) where p T ( Z ) and q T ( Z | X ) are the prior and posterior defined on the tree T , as done in (11) and (13).",
"By taking the mixture distributions above into the ELBO of L to replace the prior and posterior, we can obtain a new ELBO, denoted as LMT .",
"Obviously, it is impossible to obtain a closed-form expression for the bound LMT .",
"But as proved in (Tang et al., 2019), by using the log-sum inequality, LMT can be further lower bounded by (cid:101) LMT = 1 M (cid:88) T T GLT .",
"Given the expression of LT , the lower bound of (cid:101) LMT can also be expressed in closed-form and optimized efficiently.",
"For detailed derivations and concrete expressions, please refer to the Supplementary.",
"The parameters i , j , i , j and ij in the approximate posterior distribution q ( z i | x i ) of (8)",
"and q ( z i , z j | x i , x j ) of (13) are all defined as the outputs of neural networks, with the parameters denoted as .",
"Specifically, the entire model is mainly composed of three components:",
"i) The variational encoder q ( z i | x i ) , which takes single document as input, and outputs the mean and variance of Gaussian distribution, i.e., [ i ; 2 i ] = f ( x i ) ;",
"ii) The correlated encoder, which takes pairwise documents as input, and outputs the correlation coefficient, i.e. , ij = f ( x i , x j ) .",
"Note that the correlation encoder is required to be order-irrelevant, that is, f ( x i , x j ) = f ( x j , x i ) , which is achieved in this paper as f = 12 (cid:0) f ( x i , x j ) + f ( x j , x i ) (cid:1) ;",
"iii) The generative decoder p ( x i | z i ) , which takes the latent variable z i as input and output the document x i .",
"The decoder is modeled by a neural network parameterized by .",
"The model is trained by optimizing the lower bound (cid:101) LMT w.r.t. and .",
"After training, hash codes are obtained by passing the documents through the variational encoder and binarizing the outputs on every dimension by a the threshold value, which is simply set as 0.5 in our experiments.",
"To provide intuition for our model, an illustration is shown in Figure 1. We see that if two documents are neighbors and semantically similar, their representations will be strongly correlated with each other.",
"But if they are not semantically similar neighbors, the representations become less correlated.",
"If they are neither neighbors nor semantically similar, the representations become uncorrelated.",
"Since our model can simultaneously preserve semantics and neighborhood information, we name it Semantics-Neighborhood Unified Hashing (SNUH).",
"Deep generative models (Rezende et al., 2014) have attracted a lot of attention in semantics-based hashing, due to their successes in unsupervised representation learning.",
"VDSH (Chaidaroon and Fang, 2017) first employed the variational auto-encoder (VAE) (Kingma and Welling, 2013) to learn continuous representations of documents and then cast them into binary codes.",
"However, due to the information loss incurred during the binarization step, such a two-stage strategy is prone to local optima and undermines performance.",
"NASH (Shen et al., 2018) tackled this issue by replacing the Gaussian prior with a Bernoulli one and adopted the straight-through technique (Bengio et al., 2013) to achieve end-to-end training.",
"To further improve the model's capability, Dong et al. (2019) proposed to employ a mixture distribution as the prior and Zheng et al. (2020) exploited a Boltzmann posterior to introduce correlation among bits.",
"Beyond generative frameworks, AMMI (Stratos and Wiseman, 2020) achieved superior performance by maximizing the mutual information between codes and documents.",
"Nevertheless, the aforementioned semantic hashing methods all rest on the i.i.d. assumption, which means they ignore the neighborhood information.",
"Spectral hashing (Weiss et al., 2009) and self-taught hashing (Zhang et al., 2010) are two typical methods of neighbor-based hashing models.",
"But these algorithms generally ignore the rich semantic information associated with documents.",
"Recently, some VAE-based models have tried to take semantic and neighborhood information into account concurrently, such as NbrReg (Chaidaroon et al., 2018), RBSH (Hansen et al., 2019) and PairRec (Hansen et al., 2020).",
"However, as mentioned before, all of them simply treated the proximity as a regularizer, lacking theoretical principles to guide the incorporation process.",
"Thanks to the graph-induced distribution, we effectively preserve the two types of information within a single theoretical framework.",
"Datasets We verify the proposed method on three public datasets published by VDSH:",
"i) Reuters-21578, which contains 10,788 news documents from 90 different categories;",
"ii) TMC, which is a collection of 21,519 air traffic reports with 22 different categories;",
"iii) 20Newsgroups (NG20), which consists of 18,828 news posts from 20 different topics.",
"Note that the category labels of each dataset are only used to compute the evaluation metrics, as we focus on unsupervised scenarios.",
"Baselines We compare our method with the following models: SpH (Weiss et al., 2009), STH (Zhang et al., 2010), VDSH (Chaidaroon and Fang, 2017), NASH (Shen et al., 2018), GMSH (Dong et al., 2019), NbrReg (Chaidaroon et al., 2018), CorrSH (Zheng et al., 2020) and AMMI (Stratos and Wiseman, 2020).",
"For all baselines, we take the reported performance from their original papers.",
"Training Details For a fair comparison, we follow the same network architecture as VDSH, GMSH and CorrSH, using a one-layer feed-forward neural network as both the variational and the correlated encoder.",
"The graph G is constructed with the K-nearest-neighbors (KNN) algorithm based on the cosine similarity of the documents' TF-IDF features.",
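The graph construction could be sketched as below; this is a minimal NumPy version assuming documents arrive as dense TF-IDF rows (the paper does not specify implementation details, so the function name and edge representation are ours).

```python
import numpy as np

def knn_graph(tfidf, k):
    """Build an undirected K-nearest-neighbor graph from the cosine
    similarity of TF-IDF rows; returns a set of edges (i, j) with i < j."""
    norms = np.linalg.norm(tfidf, axis=1, keepdims=True)
    unit = tfidf / np.clip(norms, 1e-12, None)   # row-normalize
    sim = unit @ unit.T                          # cosine similarity matrix
    np.fill_diagonal(sim, -np.inf)               # exclude self-similarity
    edges = set()
    for i in range(sim.shape[0]):
        for j in np.argsort(-sim[i])[:k]:        # k most similar documents
            edges.add((min(i, int(j)), max(i, int(j))))
    return edges
```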
"In our experiments, the correlation strength coefficient in (12) is fixed to 0.99.",
"According to the performance observed on the validation set, we choose the learning rate from {0.0005, 0.001, 0.003}, the batch size from {32, 64, 128}, the temperature in the sigmoid function from {0.1, 0.2, ..., 1}, and the number of trees M and neighbors K both from {1, 2, ..., 20}, with the best configuration used for evaluation on the test set.",
"The model is trained using the Adam optimizer (Kingma and Ba, 2014).",
"More detailed experimental settings, along with the generating method of spanning trees, are given in the supplementary materials.",
"Evaluation Metrics The retrieval precision is used as our evaluation metric.",
"For each query document, we retrieve 100 documents most similar to it based on the Hamming distance of hash codes.",
"Then, the retrieval precision for a single sample is measured as the percentage of the retrieved documents with the same label as the query.",
"Finally, the average precision over the whole test set is calculated as the performance of the evaluated method.",
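The evaluation protocol in the three sentences above can be sketched as follows (names are illustrative; `top_k` defaults to the 100 used in the paper).

```python
import numpy as np

def retrieval_precision(query_codes, db_codes, query_labels, db_labels, top_k=100):
    """Average precision@top_k: for each query, retrieve the top_k database
    codes with the smallest Hamming distance and count label matches."""
    precisions = []
    for code, label in zip(query_codes, query_labels):
        dist = np.count_nonzero(db_codes != code, axis=1)   # Hamming distance
        retrieved = np.argsort(dist, kind="stable")[:top_k]
        precisions.append(np.mean(db_labels[retrieved] == label))
    return float(np.mean(precisions))
```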
"Overall Performance The performances of all the models on the three public datasets are shown in Table 1. We see that our model compares favorably",
"Table 1 (precision with 16/32/64/128-bit codes):
Method | Reuters | TMC | 20Newsgroups | Avg
SpH | 0.6340/0.6513/0.6290/0.6045 | 0.6055/0.6281/0.6143/0.5891 | 0.3200/0.3709/0.3196/0.2716 | 0.5198
STH | 0.7351/0.7554/0.7350/0.6986 | 0.3947/0.4105/0.4181/0.4123 | 0.5237/0.5860/0.5806/0.5443 | 0.5662
VDSH | 0.7165/0.7753/0.7456/0.7318 | 0.6853/0.7108/0.4410/0.5847 | 0.3904/0.4327/0.1731/0.0522 | 0.5366
NbrReg | n.a. | n.a. | 0.4120/0.4644/0.4768/0.4893 | 0.4249
NASH | 0.7624/0.7993/0.7812/0.7559 | 0.6573/0.6921/0.6548/0.5998 | 0.5108/0.5671/0.5071/0.4664 | 0.6462
GMSH | 0.7672/0.8183/0.8212/0.7846 | 0.6736/0.7024/0.7086/0.7237 | 0.4855/0.5381/0.5869/0.5583 | 0.6807
AMMI | 0.8173/0.8446/0.8506/0.8602 | 0.7096/0.7416/0.7522/0.7627 | 0.5518/0.5956/0.6398/0.6618 | 0.7323
CorrSH | 0.8212/0.8420/0.8465/0.8482 | 0.7243/0.7534/0.7606/0.7632 | 0.5839/0.6183/0.6279/0.6359 | 0.7355
SNUH | 0.8320/0.8466/0.8560/0.8624 | 0.7251/0.7543/0.7658/0.7726 | 0.5775/0.6387/0.6646/0.6731 | 0.7474",
"to the current state-of-the-art methods, yielding the best average performance across different datasets and settings.",
"Compared with VDSH and NASH, which simply employ an isotropic Gaussian and a Bernoulli prior, respectively, our model, which leverages correlated prior and posterior distributions, achieves better results on all three datasets.",
"Although GMSH improves performance by exploiting a more expressive Gaussian mixture prior, our model still outperforms it by a substantial margin, indicating the superiority of incorporating document correlations.",
"It is worth noting that, by unifying semantics and neighborhood information under the generative model, the two types of information can be preserved more effectively.",
"This is validated by the fact that our model performs significantly better than NbrReg, which naively incorporates the neighborhood information through a neighbor-reconstruction regularizer.",
"The superiority of our unified method is further corroborated by the comparisons with RBSH and PairRec, which are given in the Supplementary since they employed a different preprocessing method from the models reported here.",
"Figure 2: The precision of 64-bit hash codes with varying numbers of trees M and neighbors K (panels: TMC and 20Newsgroups). Compared to the current SOTA methods, AMMI and CorrSH, our method",
"is still able to achieve better results by exploiting the correlation among documents.",
"Moreover, thanks to the correlation regularization, remarkable gains are obtained at 64 and 128 bits.",
"Impact of Introducing Correlations in Prior and Posterior To understand the influence of the proposed document-correlated prior and posterior, we further experiment with two variants of our model:",
"i) SNUH_ind, which considers document correlations in neither the prior nor the posterior distribution;",
"ii) SNUH_prior, which considers the correlations only in the prior, but not in the posterior.",
"The proposed SNUH, by contrast, leverages the correlations in both the prior and the posterior.",
"As seen from Table 2, SNUH_prior achieves better performance than SNUH_ind, demonstrating the benefit of considering the correlation information of documents only in the prior.",
"By further taking the correlations into account in the posterior, additional improvements are observed for SNUH, which corroborates the superiority of considering document correlations in both the prior and the posterior.",
"Table 3 (retrieval case; Hamming distance | category | title/subject):
query | hockey | NHL PLAYOFF RESULTS FOR GAMES PLAYED 4-21-93
1 | hockey | NHL PLAYOFF RESULTS FOR GAMES PLAYED 4-19-93
10 | hockey | NHL Summary parse results for games played Thur, April 15, 1993
20 | hockey | AHL playoff results (4/15)
50 | forsale | RE: == MOVING SALE ===
70 | hardware | Re: Quadra SCSI Problems?",
"Another interesting observation is that the performance gap between SNUH_ind and SNUH_prior becomes smaller as the code length increases.",
"This may be attributed to the fact that the greater generalization ability brought by longer codes tends to alleviate the impact of the prior knowledge.",
"However, by additionally imposing correlation constraints on the posterior, significant performance gains are obtained, especially in the large-bit scenarios.",
"Effect of Spanning Trees For more efficient training, spanning trees are utilized to approximate the whole graph by dropping out some edges.",
"To understand their effect, we first investigate the impact of the number of spanning trees M.",
"The first row of Figure 2 shows the performance of our method as a function of different numbers of spanning trees.",
"We observe that, compared to not using any correlation, one tree alone can bring significant performance gains.",
"As the number of trees increases, the performance rises steadily at first and then converges to a certain level, demonstrating that the document correlations can be mostly captured by a few spanning trees.",
"Then, we further explore the impact of the number of neighbors K when constructing the graphs with the KNN method, as shown in the second row of Figure 2. It can be seen that more neighbors contribute to better performance.",
"We hypothesize that this is partly due to the more diverse correlation information captured by the increasing number of neighbors.",
"However, incorporating too many neighbors may lead to the problem of introducing noise and incorrect correlation information to the hash codes.",
"That explains why no further improvement is observed after the number of neighbors reaches a certain level.",
"Empirical Study of Computational Efficiency We also investigate the training complexity by comparing the training duration of our method and VDSH on a Tesla V100-SXM2-32GB GPU.",
"On the Reuters, TMC and 20Newsgroups datasets with 64-bit hash codes, our method finishes one epoch of training in 3.791s, 5.238s and 1.343s, respectively, while VDSH takes 2.038s, 4.364s and 1.051s.",
"It can be seen that our model, though with much stronger performance, can be trained almost as efficiently as vanilla VDSH due to the tree approximations.",
"Case Study In Table 3, we present a retrieval case of the given query document.",
"It can be observed that as the Hamming distance increases, the semantic (topic) of the retrieved document gradually becomes more irrelevant, illustrating that the Hamming distance can effectively measure the document relevance.",
"Visualization of Hash Codes To evaluate the quality of generated hash code more intuitively, we project the latent representations into a 2-dimensional plane with the t-SNE (van der Maaten and Hinton, 2008) technique.",
"As shown in Figure 3, the representations generated by our method are more separable than those of AMMI, demonstrating the superiority of our method.",
"We have proposed an effective and efficient semantic hashing method to preserve both the semantics and neighborhood information of documents.",
"Specifically, we applied a graph-induced Gaussian prior to model the two types of information in a unified framework.",
"To facilitate training, a tree-structure approximation was further developed to decompose the ELBO into terms involving only singleton or pairwise variables.",
"Extensive evaluations demonstrated that our model significantly outperforms baseline methods by incorporating both the semantics and neighborhood information.",
"This work is supported by the National Natural Science Foundation of China (No. 61806223, 61906217, U1811264), Key R&D Program of Guangdong Province (No. 2018B010107005), National Natural Science Foundation of Guangdong Province (No. 2021A1515012299).",
"This work is also supported by MindSpore."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"method",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"result",
"other",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"objective",
"other",
"other"
] |
[
"With the rapid development in deep learning, deep neural networks have been widely adopted in many real-life natural language applications.",
"Under deep neural networks, a pre-defined vocabulary is required to vectorize text inputs.",
"The canonical approach to selecting the pre-defined vocabulary is based on word frequency, where a threshold is chosen to cut off the long-tail distribution.",
"However, we observed that such a simple approach can easily lead to an under-sized or over-sized vocabulary.",
"Therefore, we are interested in understanding how the end-task classification accuracy is related to the vocabulary size and what is the minimum required vocabulary size to achieve a specific performance.",
"In this paper, we provide a more sophisticated variational vocabulary dropout (VVD) based on variational dropout to perform vocabulary selection, which can intelligently select the subset of the vocabulary to achieve the required performance.",
"To evaluate different algorithms on the newly proposed vocabulary selection problem, we propose two new metrics: Area Under Accuracy-Vocab Curve and Vocab Size under X% Accuracy Drop.",
"Through extensive experiments on various NLP classification tasks, our variational framework is shown to significantly outperform the frequency-based and other selection baselines on these metrics.",
"Over the past decade, deep neural networks have become arguably the most popular model choice for a vast number of natural language processing (NLP) tasks and have constantly been delivering state-of-the-art results.",
"Because neural network models assume continuous data, to apply a neural network to any text data, the first step is to",
"vectorize the discrete text input with a word embedding matrix through look-up operation, which in turn assumes a pre-defined vocabulary set.",
"For many NLP tasks, the vocabulary size can easily go up to the order of tens of thousands, which potentially makes the word embedding the largest portion of the trainable parameters.",
"For example, a document classification task like AG-news (Zhang et al., 2015) can include up to 60K unique words, with the embedding matrix accounting for 97.6% of the trainable parameters (Table 1), which leads to under-representation of the neural networks' own parameters.",
"Intuitively, using the full or a very large vocabulary is neither economical, as it limits model applicability in computation- or memory-constrained scenarios (Yogatama et al., 2015; Faruqui et al., 2015), nor necessary, as many words contribute little to the end task and could be safely removed from the vocabulary.",
"Therefore, how to select the best vocabulary is a problem of both theoretical and practical interests.",
"Somewhat surprisingly, this vocabulary selection problem is largely under-addressed in the literature: The de facto standard practice is to do frequency-based cutoff (Luong et al., 2015; Kim, 2014), and only retain the words more frequent than a certain threshold (Table 1).",
"Although this simple heuristic has demonstrated strong empirical performance, its task-agnostic nature implies that likely it is not the optimal strategy for many tasks (or any task).",
"Task-aware vocabulary selection strategies and a systematic comparison of different strategies are still lacking.",
"In this work, we present the first systematic study of the vocabulary selection problem.",
"Our study will be based on text classification tasks, a broad family of NLP tasks including document classification (DC), natural language inference (NLI), natural language understanding in dialog systems (NLU), etc.",
"Specifically, we aim to answer the following questions:",
"1. How important a role does the vocabulary selection algorithm play in text classification?",
"2. How to dramatically reduce the vocabulary size while retaining the accuracy?",
"The rest of the paper is organized as follows: We first formally define the vocabulary selection problem (subsection 2.1) and present a quantitative study on classification accuracy with different vocabulary selections to showcase its importance in the end task (subsection 2.2).",
"We also propose two new metrics for evaluating the performance of vocabulary selection in text classification tasks (subsection 2.3).",
"We then propose a novel, task-aware vocabulary selection algorithm called Variational Vocabulary Dropout (VVD) (section 3), which draws on the idea of variational dropout (Kingma et al., 2015): if we learn a dropout probability $p_w$ for each word $w$ in the vocabulary $\mathcal{V}$ during model training on a given task, the learned dropout probabilities will imply the importance of word $w$ to the end task and can therefore be leveraged for vocabulary selection.",
"We propose to infer the latent dropout probabilities under a Bayesian inference framework.",
"During test time, we select the sub-vocabulary $\tilde{\mathcal{V}}$ by retaining only the words whose dropout probability is lower than a certain threshold.",
"For any words deselected by VVD, we simply regard them as a special token with the null vector representation $[0, 0, \dots, 0]$.",
"Please note that our proposed algorithm needs to re-train the word embedding matrix; it is thus tangential to research on pre-trained word embeddings like Word2Vec (Mikolov et al., 2013) or GloVe (Pennington et al., 2014), though we can use them to initialize our embedding.",
"We conduct comprehensive experiments to evaluate the performance of VVD (section 4) on different end classification tasks.",
"Specifically, we compare against an array of strong baseline selection algorithms, including the frequency-based algorithm (Luong et al., 2015), TF-IDF algorithm (Ramos et al., 2003), and structure lasso algorithm (Friedman et al., 2010), and demonstrate that it can consistently outperform these competing algorithms by a remarkable margin.",
"To show that the conclusions are widely held, our evaluation is based on a wide range of text classification tasks and datasets with different neural networks including Convolutional Neural Network (CNN) (Kim, 2014), Bi-directional Long-Short Term Memory (BiLSTM) (Bahdanau et al., 2014) and Enhanced LSTM (ESIM) (Chen et al., 2017).",
"In summary, our contributions are three-fold:",
"1. We formally define the vocabulary selection problem, demonstrate its importance, and propose new evaluation metrics for vocabulary selection in text classification tasks.",
"2. We propose a novel vocabulary selection algorithm based on variational dropout by re-formulating text classification under the Bayesian inference framework.",
"The code will be released in Github 1 .",
"3. We conduct comprehensive experiments to demonstrate the superiority of the proposed vocabulary selection algorithm over a number of strong baselines.",
"We now formally define the problem setting and introduce the notations for our problem.",
"Conventionally, we assume the neural classification model vectorizes the discrete language input into a vector representation via an embedding matrix $W \in \mathbb{R}^{V \times D}$, where $V$ denotes the size of the vocabulary and $D$ denotes the vector dimension.",
"The embedding is associated with a pre-defined word-to-index dictionary $\mathcal{V} = \{w_i : i \mid 1 \le i \le V\}$, where $w_i$ denotes the literal word corresponding to the $i$-th row of the embedding matrix.",
"The embedding matrix W covers the subset of vocabulary of interest for a particular NLP task; note that the value of V is known to be very large due to the rich variations in human languages. [Footnote 1: https://github.com/wenhuchen/Variational-Vocabulary-Selection.git] [Figure 1: Monte-Carlo simulation on vocabulary selection; panels: Document Classification, Natural Language Understanding, Evaluation Metrics.]",
"Here we showcase the embedding matrix size of a popular text classification model on the AG-news dataset (Zhang et al., 2015) in Table 1, from which we can easily observe that the embedding matrix commonly occupies most of the parameter capacity, which could be the bottleneck in many real-world applications with limited computation resources.",
"In order to alleviate this redundancy and make the embedding matrix as efficient as possible, we are particularly interested in discovering the minimum row-sized embedding $\tilde{W}$ that achieves nearly the same performance as the full row-sized embedding $W$.",
"More formally, we define our problem as follows: $\operatorname*{argmin}_{\tilde{W}} \#\mathrm{Row}(\tilde{W})$ s.t. $Acc(f_\theta(x; \tilde{W}), y) \ge Acc(f_\theta(x; W), y) - \epsilon$ (1), where $\#\mathrm{Row}$ is the number of rows in the matrix $\tilde{W}$, $f_\theta$ is the learned neural model with parameters $\theta$ that predicts the class given the input $x$, $Acc$ is the function measuring accuracy between the model prediction and the reference output $y$, and $\epsilon$ is the tolerable performance drop after vocabulary selection.",
"It is worth noting that $\theta$ includes all parameters of the neural network except the embedding matrix $W$.",
"For each vocabulary selection algorithm $A$, we propose to draw its characteristic curve $Acc(f_\theta(x; \tilde{W}), y) = g_A(\#\mathrm{Row}(\tilde{W}))$ to understand the relationship between vocabulary capacity and classification accuracy, which we call the (characteristic) accuracy-vocab curve throughout the paper.",
"In order to investigate the importance of the role played by the vocabulary selection algorithm,",
"we design a Monte-Carlo simulation strategy to approximate the lower and upper bounds of accuracy at a given vocabulary size reachable by any possible selection algorithm $A$.",
"More specifically, for a given vocabulary size $\tilde{V}$, there exist $\binom{V}{\tilde{V}}$ algorithms which can select a distinct vocabulary subset $\tilde{\mathcal{V}}$ from the full vocabulary $\mathcal{V}$.",
"Directly enumerating these possibilities is infeasible; we instead propose a Monte-Carlo vocabulary selection strategy which randomly picks a vocabulary subset $\tilde{\mathcal{V}}$ to simulate the possible selection algorithms, running it $N$ times.",
"After simulation, we obtain point estimates $(Acc_1, \dots, Acc_N \mid \tilde{V})$ at each given $\tilde{V}$ and depict them in Figure 1 to approximately visualize the upper and lower bounds of the accuracy-vocab curve.",
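The simulation procedure above can be sketched as follows; `evaluate_accuracy` is a hypothetical callable standing in for training and evaluating the classifier on a sampled vocabulary subset.

```python
import random

def monte_carlo_bounds(vocab, subset_size, evaluate_accuracy, n_trials=100, seed=0):
    """Approximate the lower/upper accuracy bounds reachable by any selection
    algorithm at a fixed vocabulary budget via random subset sampling."""
    rng = random.Random(seed)
    accs = [evaluate_accuracy(rng.sample(vocab, subset_size))
            for _ in range(n_trials)]
    return min(accs), max(accs)
```

Taking the min and max over the sampled subsets gives the point estimates depicted in Figure 1.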
"From Figure 1, we can easily observe that the accuracy range under a limited vocabulary is extremely large; as the budget $\tilde{V}$ increases, the gap gradually shrinks.",
"For example, for document classification with a budget of 1,000, a selection algorithm $A$ can yield a potential accuracy ranging from 42.5 to 85.1, while for the natural language understanding task with a budget of 27, it can yield an accuracy ranging from 33.2 to 80.1.",
"This Monte-Carlo simulation study demonstrates the significance of the vocabulary selection strategy in NLP tasks and also indicates the enormous potential of an optimal vocabulary selection algorithm.",
"In order to evaluate how well a given selection algorithm A performs, we propose evaluation metrics as depicted in Figure 1 by quantitatively studying its characteristic accuracy-vocab curve.",
"These metrics, namely Area Under Curve (AUC) and Vocab@-X%, measure the vocabulary selection performance globally and locally, respectively.",
"Specifically, AUC computes enclosed area by the curve, which gives an overview of how well the vocabulary selection algorithm performs.",
"In comparison, Vocab@-X% computes the minimum vocabulary size required if an X% performance drop is allowed, which directly represents how large a vocabulary is needed to achieve a given accuracy.",
"For the local evaluation metric, we mainly consider Vocab@-3% and Vocab@-5%.",
"However, we observe that directly computing the AUC puts too much emphasis on the large-vocabulary region and is thus unable to represent an algorithm's selection capability under low-vocabulary conditions.",
"Therefore, we propose to take the logarithm of the vocabulary size and then compute the normalized enclosed area: $AUC = \frac{\int_{\tilde{V}} Acc(\log(\tilde{V})) \, d\log(\tilde{V})}{\int_{\tilde{V}} Acc(V) \, d\log(\tilde{V})}$ (2). It is worth noting that Vocab@-X% takes values in the range $[0, V]$, with smaller values indicating better performance.",
"Since AUC is normalized by Acc(V), it takes value from range [0 , 1] regardless of the classification error.",
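A minimal sketch of the two metrics, assuming the accuracy-vocab curve is given as discrete (vocabulary size, accuracy) samples; the trapezoidal rule stands in for the integral in Eq. (2), and the function names are ours.

```python
import numpy as np

def log_auc(vocab_sizes, accuracies, full_acc):
    """Normalized area under the accuracy-vocab curve on a log vocabulary
    axis, in the spirit of Eq. (2): integrate Acc over log(V) and divide
    by the area of the constant full-vocabulary accuracy."""
    log_v = np.log(np.asarray(vocab_sizes, dtype=float))
    acc = np.asarray(accuracies, dtype=float)
    area = np.sum((acc[1:] + acc[:-1]) / 2.0 * np.diff(log_v))  # trapezoids
    return float(area / (full_acc * (log_v[-1] - log_v[0])))

def vocab_at_x(vocab_sizes, accuracies, full_acc, drop_pct):
    """Minimum vocabulary size whose accuracy stays within drop_pct% of
    the full-vocabulary accuracy (sizes assumed sorted ascending)."""
    target = full_acc * (1 - drop_pct / 100.0)
    for v, acc in zip(vocab_sizes, accuracies):
        if acc >= target:
            return v
    return vocab_sizes[-1]
```

A curve that stays at the full accuracy everywhere thus scores an AUC of exactly 1.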
"Inspired by DNN dropout (Srivastava et al., 2014; Wang and Manning, 2013), we propose to tackle the vocabulary selection problem from a word-level dropout perspective, where each word $w_i$ (an integer index) is associated with a characteristic dropout rate $p_i$ representing the probability of being replaced with an empty placeholder; a higher dropout probability indicates less loss suffered from removing the word from the vocabulary.",
"Hence, the original optimization problem in Equation 1 can be viewed as inferring the latent dropout probability vector $p = [p_1, \dots, p_V]$.",
"The overview of our approach is depicted in Figure 2: we associate each row of the embedding matrix with a dropout probability and then re-train the complete system, which captures how much each word in the vocabulary contributes to the end NLP task, so that the less contributory words can be removed from the vocabulary without hurting performance.",
"Here we first assume that the neural network vectorizes the discrete inputs with an embedding matrix W to project given words x into vector space",
"$\mathbb{R}^D$, and then propose to add random dropout noise to the embedding input to simulate the dropout process as follows:",
"where $\mathrm{OneHot}$ is a function transforming a word $x$ into its one-hot form $\mathrm{OneHot}(x) \in \mathbb{R}^V$, and $b \in \mathbb{R}^V$ is the Bernoulli dropout noise with $b_i \sim \mathrm{Bern}(1 - p_i)$. The embedding output vector $E(x|b)$ is computed with a given embedding matrix $W$ under a sampled Bernoulli vector $b$. In order to infer the latent Bernoulli distribution with parameters $p$ under the Bayesian framework, where training pairs $(x = x_1 \cdots x_n, y)$ are given as the evidence, we first define an objective function $\mathcal{L}(f(x), y)$ and then derive its lower bound as follows (with $\bar{p} = 1 - p$):",
"where $P(b)$ is the prior distribution and $\mathrm{Bern}(\bar{p})$ denotes the Bernoulli approximate posterior with parameter $\bar{p}$. Here we use $E(x)$ as the simplified form of $\{E(x_1), \dots, E(x_n)\}$; we separate the text classification model's parameters $\theta$ from the embedding parameters $W$ and assume the classification model $f_\theta$ directly takes the embedding $E$ as input.",
"However, the Bernoulli distribution is hard to reparameterize, as we would need to enumerate $2^V$ different values to compute the expectation over the stochastic dropout vector $b$. Therefore, we follow Wang and Manning (2013) and use a continuous Gaussian approximation, where the Bernoulli noise $b$ is replaced by a Gaussian noise $z$:",
"where $z \in \mathbb{R}^V$ follows a Gaussian distribution $z_i \sim \mathcal{N}(1, \alpha_i)$ with $\alpha_i = \frac{p_i}{1 - p_i}$. It is worth noting that",
"$\alpha$ and $p$ are in one-to-one correspondence, and $\alpha$ is a monotonically increasing function of $p$; for more details, please refer to Wang and Manning (2013). Based on this approximation, we can use $\alpha$ as the dropout criterion, e.g., discard words with $\alpha$ above a certain given threshold $T$. We further follow Louizos et al. (2017); Kingma et al. (2015); Molchanov et al. (2017) to re-interpret the input noise as intrinsic stochasticity in the embedding weights $B$ itself as follows:",
"$E(x|z) = \mathrm{OneHot}(x)\,B$ (5); $\log \mathcal{L}(f(x), y) = \log \int_B \mathcal{L}(f(E(x|z)), y)\, P(B)\, dB \ge \mathbb{E}_{B \sim \mathcal{N}(\mu, \sigma)}[\log \mathcal{L}(f(E(x|z)), y)] - KL(\mathcal{N}(\mu, \sigma) \,\|\, P(B)) = \mathcal{L}(B, \theta)$; $P(\log |B_{ij}|) = \mathrm{const} \Leftrightarrow P(|B_{ij}|) \propto \frac{1}{|B_{ij}|}$ (6)",
"Since there exists no closed-form expression for such KL-divergence, we follow Louizos et al. (2017) to approximate it by the following formula with minimum variance:",
"$D_{KL} \approx -k_1\, \sigma(k_2 + k_3 \log \alpha) + \frac{1}{2} \log(1 + \alpha^{-1}) + k_1$, where $\sigma(\cdot)$ is the sigmoid function and $k_1 = 0.63576$, $k_2 = 1.87320$, $k_3 = 1.48695$ (7)",
"By adopting the improper log-uniform prior, more weights are compressed towards zero, and the KL-divergence is negatively correlated with the dropout ratio $\alpha$. Intuitively, the dropout ratio $\alpha_i$ is a redundancy indicator for the $i$-th word in the vocabulary, with larger $\alpha_i$ meaning less performance loss caused by dropping the word. During training, we use the re-parameterization trick (Kingma and Welling, 2013) to sample embedding weights from the normal distribution to reduce the Monte-Carlo variance in Bayesian training.",
"After optimization, we can obtain the dropout ratio i associated with each word w i . We propose to select vocabulary subset based on the dropout ratio by using a threshold T . Therefore, the remaining vocabulary subset is described as follows:",
"where we use V to denote the subset vocabulary of interest, by adjusting T we are able to control the selected vocabulary size.",
"We compare the proposed vocabulary selection algorithm against several strong baselines on a wide range of text classification tasks and datasets.",
"The main datasets we are using are listed in Table 2, which provides an overview of its description and capacities. Specifically, we follow (Zhang et al., 2015; Goo et al., 2018; Williams et al., 2018) to pre-process the document classification datasets, natural language understanding dataset and natural language inference dataset. We exactly replicate their experiment settings to make our method comparable with theirs. Our models is implemented with TensorFlow (Abadi et al., 2015). In order to evaluate the generalization ability of VVD selection algorithm in deep learning architectures, we study its performance under different established architectures (depicted in Figure 3). In natural language understanding, we use the most recent attention-based model for intention tracking (Goo et al., 2018), this model first uses BiLSTM recurrent network to leverage left-to-right and right-to-left context information to form the hidden representation, then computes self-attention weights to aggregate the hidden representation and predicts user intention. In document classification, we mainly follow the CNN architecture (Kim, 2014) to extract n-gram features and then aggregate these features to predict document category. In natural language inference, we follow the popular ESIM architecture (Williams et al., 2018; Chen et al., 2017) us-Datasets",
"ing the Github implementation 3 .",
"In this structure, three main components input encoding, local inference modeling, and inference composition are used to perform sequential inference and composition to simulate the interaction between premises and hypothesis.",
"Note that, we do not apply the syntax-tree based LSTM proposed in (Chen et al., 2017) because we lost the parse tree (Klein and Manning, 2003) after the vocabulary compression, instead, we follow the simpler sequential LSTM framework without any syntax parse as input.",
"Besides, the accuracy curve is obtained using the publicly available test split rather than the official online evaluation because we need to evaluate lots of times at different vocabulary capacity.",
"Frequency-based (task-agnostic) This approach is already extensively talked about in section 1, its basic idea is to rank the word based on its frequency and then set a threshold to cut off the long tail distribution.",
"TF-IDF (task-agnostic) This algorithm views the vocabulary selection as a retrieval problem (Ramos et al., 2003), where term frequency is viewed as the word frequency and document 3 https://github.com/coetaur0/ESIM frequency is viewed as the number of sentences where such word appears.",
"Here we follow the canonical TF-IDF approach to compute the retrieval score as follows: tfidf ( w, D ) = tf ( w ) (log N n w ) 1 (9) where tf ( w ) denotes the word frequency, is the balancing factor, N denotes the number of sentences and n w denotes the number of sentences in which w appears.",
"We rank the whole vocabulary based on the tfidf and cut off at given threshold.",
"Group Lasso (task-aware) This baseline aims to find intrinsic sparse structures (Liu et al., 2015; Park et al., 2016; Wen et al., 2016) by grouping each row of word embedding.",
"The regularization objective is described as follows, which aims at finding the row-wise sparse structure: L reg = (cid:88) i ( (cid:88) j W 2 ij ) 12 (10) After optimized with the above regularization, we use a threshold-based selection strategy on the row-norm of embedding matrix, the selected vocabulary is described as V = { w i V ||| W i || 2 > T } , where T is the threshold.",
"natural language inference separately in Table",
"3. From these tables, first of all, we can observe that VVD is able to maintain or even improve the reported accuracy on DC and NLU tasks, the accuracy of VVD is reported under dropping out the words with dropout rate larger than 0 .",
"95 .",
"The exception is in NLI (Williams et al., 2018), where the common approach uses GloVe (Pennington et al., 2014) for initialization, and we use random initialization, which makes our model fall slightly behind.",
"It is worth noting that Frequency-based/TF-IDF methods are based on the model trained with cross entropy, while both Group-Lasso and VVD modify the objective function by adding additional regularization.",
"It can be seen that VVD is performing very similar to the baseline models on DC and NLU tasks, while consistently outperforming the baseline methods (with random initialized embedding) on more challenging NLI and Yelp-Review tasks, that said, VVD can also be viewed as a generally effective regularization technique to sparsify features and alleviate the over-fitting problem in NLP tasks.",
"In terms of the vocabulary selection capability, our proposed VVD is demonstrated to outperform the competing algorithms in terms of both AUC and Vocab@-X% metrics consistently over different datasets as shown in Table",
"3. In order to better understand the margin between VVD and frequency-based method, we plot their accuracy-vocab curves in Figure 4, from which we can observe that the accuracy curves start from nearly the same accuracy with the full vocabulary, by gradually decreasing the budget V , VVD decreases at a much lower rate than the competing algorithms, which clearly reflects its superiority under limited-budget scenario.",
"From the empirical result, we can conclude that: 1) the retrieval-based selection algorithm can yield Figure 4: The accuracy-vocab curve of VVD, TF-IDF and frequency-based baseline, the datasets used are AG-news, DBPedia and Yelp-review respectively.",
"when, showing, use, watch, photograph, eat, soundtrack, hear, painting, tell, trailer easy, d, zero, people, series, am, three, serves, one, area, five, textbook, new, get, with, two, she + Figure",
"marginal improvement over the AUC metric, but the vocab@-X% metric deteriorates.",
"2) group-lasso and VVD algorithm directly considers the connection between each word and end classification accuracy; such task-awareness can greatly in improving both evaluation metrics.",
"Here we show that NLU datasets are relatively simpler, which only involves detecting key words from hu-Figure 7: The vocabulary cloud of Snips NLU dataset.",
"man voice inputs to make decent decisions, a keyword vocabulary within 100 is already enough for promising accuracy.",
"For DC datasets, which involve better inner-sentence and inter-sentence understanding, hundred-level vocabulary is required for most cases.",
"NLI datasets involve more complicated reasoning and interaction, which requires a thousand-level vocabulary.",
"Case Study To provide an overview of what words are selected, we depict the selection spectrum over different NLP tasks in Figure 5, from which we observe that most of the selected vocabulary are still from the high-frequency area to ensure coverage, which also explains why the frequency-based algorithm is already very strong.",
"Furthermore, we use the Snips dataset (Coucke et al., 2018) to showcase the difference between the vocabularies selected by VVD and by frequency-based baseline.",
"The main goal of this dataset is to understand the speaker's intention such as BookRestaurant, PlayMu-sic, and SearchLocalEvent.",
"We show the se-lected/unselected words by our algorithm in Figure 6 under a vocabulary budget of 100, it is observed that many non-informative but frequent functional words like get, with, and five are unselected while more task-related but less frequent words like neighborhood, search, the-atre are selected.",
"More vividly, we demonstrate the word cloud of the selected vocabulary of Snips (Coucke et al., 2018) in Figure 7.",
"Training Speed Due to the stochasticity of VVD, the training of text classification takes longer than canonical cross entropy objective.",
"More importantly, we observe that with the increase the full vocabulary size, the convergence time of VVD also increases sub-linearly but the convergence time of Cross Entropy remains quite consistent.",
"We conjecture that this is due to the fact that the VVD algorithm has a heavier burden to infer the drop out the probability of the long tail words.",
"Therefore, we propose to use a two-step vocabulary reduction to dramatically decrease VVD's training time, in the first step, we cut off the rare words without having any harm on the final accuracy, then we continue training with VVD on the shrunk vocabulary.",
"By applying such a hybrid methodology, we are able to decrease the training time dramatically.",
"Evaluation Speed As we know, at each vocabulary point, the network needs to perform once evaluation on the whole test set.",
"Therefore, it is not practical to draw each vocabulary size from 1 to V and perform V times of evaluation.",
"Given the limited computational resources, we need to sample some vocabulary size and estimate the area under curve relying on only these points.",
"Uniformly sampling the data points are proved wasteful, since when the accuracy curve will converge to a point very early, most of the sampled point is actually getting equivalent accuracy.",
"Therefore, we propose to increase the interval exponentially to cover more samples at extremely low vocabulary size.",
"For example, given the total vocabulary of 60000, the interval will be split into 1, 2, 4, 8, 24, 56, ..., 60K.",
"Using such sampling method achieve a reasonably accurate estimation of ROC with only O ( log ( | V | )) sample points, which is affordable under many cases.",
"Neural Network Compression In order to better apply the deep neural networks under limited-resource scenarios, much recent research has been performed to compress the model size and decrease the computation resources.",
"In summary, there are mainly three directions, weight matrices approximation (Le et al., 2015; Tjan-dra et al., 2017), reducing the precision of the weights (Hubara et al., 2017; Han et al., 2015) and sparsification of the weight matrix (Wen et al., 2016).",
"Another group of sparsification relies on the Bayesian inference framework (Molchanov et al., 2017; Neklyudov et al., 2017; Louizos et al., 2017).",
"The main advantage of the Bayesian sparsification techniques is that they have a small number of hyperparameters compared to pruning-based methods.",
"As stated in (Chirkova et al., 2018), Bayesian compression also leads to a higher sparsity level (Molchanov et al., 2017; Neklyudov et al., 2017; Louizos et al., 2017).",
"Our proposed VVD is inspired by these predecessors to specifically tackle the vocabulary redundancy problem in NLP tasks.",
"Vocabulary Reduction An orthogonal line of research for dealing similar vocabulary redundancy problem is the character-based approaches to reduce vocabulary sise (Kim et al., 2016; Zhang et al., 2015; Costa-Juss`a and Fonollosa, 2016; Lee et al., 2017), which decomposes the words into its characters forms for better handling open world inputs.",
"However, these approaches are not applicable to character-free languages like Chinese and Japanese.",
"Moreover, splitting words into characters incurs potential lose of word-level surface form, and thus needs more parameters at the neural network level to recover it to maintain the end task performance (Zhang et al., 2015), which contradicts with our initial motivation of compressing the neural network models for computation-or memory-constrained scenarios.",
"In this paper, we propose a vocabulary selection algorithm which can find sparsity in the vocabulary and dynamically decrease its size to contain only the useful words.",
"Through our experiments, we have empirically demonstrated that the commonly adopted frequency-based vocabulary selection is already a very strong mechanism, further applying our proposed VVD can further improve the compression ratio.",
"However, due to the time and memory complexity issues, our algorithm and evaluation are more suitable for classification-based application.",
"In the future, we plan to investigate broader applications like summarizaion, translation, question answering, etc.",
"The authors would like to thank the anonymous reviewers for their thoughtful comments.",
"This research was sponsored in part by NSF 1528175.",
"Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation."
] | [
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"objective",
"objective",
"objective",
"objective",
"other",
"objective",
"abstain",
"method",
"abstain",
"method",
"abstain",
"result",
"objective",
"objective",
"objective",
"objective",
"objective",
"method",
"method",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"other",
"method",
"method",
"abstain",
"method",
"other",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"other",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"abstain",
"objective",
"objective",
"abstain",
"objective",
"other",
"other",
"other"
] |